
Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might, to the uninitiated, look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending to more precision than they actually have, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT UP functioned for many as a trust proxy of this sort: it had the skills and resources to do this kind of monitoring, and it developed relevant competence while keeping its interests closely aligned with those of the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: the recordings seem to show you what is going on, but they are in fact likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal prompted a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) examined the expert practice in question more closely.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, Steven. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, John, Robert C. Richards, and Katherine R. Knobloch. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication 8 (2014): 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014): 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms.” In Jon Elster (ed.), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren. “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.), Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961): 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author information: Moti Mizrahi, Florida Institute of Technology, mmizrahi@fit.edu

Mizrahi, Moti. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 7-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W1

Image by eltpics via Flickr / Creative Commons

 

In my (2017a), I defend a view I call Weak Scientism, which is the view that knowledge produced by scientific disciplines is better than knowledge produced by non-scientific disciplines.[1] Scientific knowledge can be said to be quantitatively better than non-scientific knowledge insofar as scientific disciplines produce more impactful knowledge–in the form of scholarly publications–than non-scientific disciplines (as measured by research output and research impact). Scientific knowledge can be said to be qualitatively better than non-scientific knowledge insofar as such knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge.

Brown (2017a) raises several objections against my defense of Weak Scientism and I have replied to his objections (Mizrahi 2017b), thereby showing again that Weak Scientism is a defensible view. Since then, Brown (2017b) has reiterated his objections in another reply on SERRC. Almost unchanged from his previous attack on Weak Scientism (Brown 2017a), Brown’s (2017b) objections are the following:

  1. Weak Scientism is not strong enough to count as scientism.
  2. Advocates of Strong Scientism should not endorse Weak Scientism.
  3. Weak Scientism does not show that philosophy is useless.
  4. My defense of Weak Scientism appeals to controversial philosophical assumptions.
  5. My defense of Weak Scientism is a philosophical argument.
  6. There is nothing wrong with persuasive definitions of scientism.

In what follows, I will respond to these objections, thereby showing once more that Weak Scientism is a defensible view. Since I have been asked to keep this as short as possible, however, I will try to focus on what I take to be new in Brown’s (2017b) latest attack on Weak Scientism.

Is Weak Scientism Strong Enough to Count as Scientism?

Brown (2017b) argues for (1) on the grounds that, on Weak Scientism, “philosophical knowledge may be nearly as valuable as scientific knowledge.” Brown (2017b, 4) goes on to characterize a view he labels “Scientism2,” which he admits is the same view as Strong Scientism, and says that “there is a huge logical gap between Strong Scientism (Scientism2) and Weak Scientism.”

As was the case the first time Brown raised this objection, it is not clear how it is supposed to show that Weak Scientism is not “really” a (weaker) version of scientism (Mizrahi 2017b, 10-11). Of course there is a logical gap between Strong Scientism and Weak Scientism; that is why I distinguish between these two epistemological views. If I am right, Strong Scientism is too strong to be a defensible version of scientism, whereas Weak Scientism is a defensible (weaker) version of scientism (Mizrahi 2017a, 353-354).

Of course Weak Scientism “leaves open the possibility that there is philosophical knowledge” (Brown 2017b, 5). If I am right, such philosophical knowledge would be inferior to scientific knowledge both quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) (Mizrahi 2017a, 358).

Brown (2017b, 5) does try to offer a reason “for thinking it strange that Weak Scientism counts as a species of scientism” in his latest attack on Weak Scientism, which does not appear in his previous attack. He invites us to imagine a theist who believes that “modern science is the greatest new intellectual achievement since the fifteenth century” (emphasis in original). Brown then claims that this theist would be an advocate of Weak Scientism because Brown (2017b, 6) takes “modern science is the greatest new intellectual achievement since the fifteenth century” to be “(roughly) equivalent to Weak Scientism.” For Brown (2017b, 6), however, “it seems odd, to say the least, that [this theist] should count as an advocate (even roughly) of scientism.”

Unfortunately, Brown’s appeal to intuition is rather difficult to evaluate because his hypothetical case is under-described.[2] First, the key phrase, namely, “modern science is the greatest new intellectual achievement since the fifteenth century,” is vague in more ways than one. I have no idea what “greatest” is supposed to mean here. Greatest in what respects? What are the other “intellectual achievements” relative to which science is said to be “the greatest”?

Also, what does “intellectual achievement” mean here? There are multiple accounts and literary traditions in history and philosophy of science, science studies, and the like on what counts as “intellectual achievements” or progress in science (Mizrahi 2013b). Without a clear understanding of what these key phrases mean here, it is difficult to tell how Brown’s intuition about this hypothetical case is supposed to be a reason to think that Weak Scientism is not “really” a (weaker) version of scientism.

Toward the end of his discussion of (1), Brown says something that suggests he actually has an issue with the word ‘scientism’. Brown (2017b, 6) writes, “perhaps Mizrahi should coin a new word for the position with respect to scientific knowledge and non-scientific forms of academic knowledge he wants to talk about” (emphasis in original). It should be clear, of course, that it does not matter what label I use for the view that “Of all the knowledge we have, scientific knowledge is the best knowledge” (Mizrahi 2017a, 354; emphasis in original). What matters is the content of the view, not the label.

Whether Brown likes the label or not, Weak Scientism is a (weaker) version of scientism because it is the view that scientific ways of knowing are superior (in certain relevant respects) to non-scientific ways of knowing, whereas Strong Scientism is the view that scientific ways of knowing are the only ways of knowing. As I have pointed out in my previous reply to Brown, whether scientific ways of knowing are superior to non-scientific ways of knowing is essentially what the scientism debate is all about (Mizrahi 2017b, 13).

Before I conclude this discussion of (1), I would like to point out that Brown seems to have misunderstood Weak Scientism. He (2017b, 3) claims that “Weak Scientism is a normative and not a descriptive claim.” This is a mistake. As a thesis (Peels 2017, 11), Weak Scientism is a descriptive claim about scientific knowledge in comparison to non-scientific knowledge. This should be clear provided that we keep in mind what it means to say that scientific knowledge is better than non-scientific knowledge. As I have argued in my (2017a), to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and that the impact of scientific knowledge is greater than that of non-scientific knowledge (as measured by research impact).

To say that scientific knowledge is qualitatively better than non-scientific knowledge is to say that scientific knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge. All these claims about the superiority of scientific knowledge to non-scientific knowledge are descriptive, not normative, claims. That is to say, Weak Scientism is the view that, as a matter of fact, knowledge produced by scientific fields of study is quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success) better than knowledge produced by non-scientific fields of study.

Of course, Weak Scientism does have some normative implications. For instance, if scientific knowledge is indeed better than non-scientific knowledge, then, other things being equal, we should give more evidential weight to scientific knowledge than to non-scientific knowledge. For example, suppose that I am considering whether to vaccinate my child or not. On the one hand, I have scientific knowledge in the form of results from clinical trials according to which MMR vaccines are generally safe and effective.

On the other hand, I have knowledge in the form of stories about children who were vaccinated and then began to display symptoms of autism. If Weak Scientism is true, and I want to make a decision based on the best available information, then I should give more evidential weight to the scientific knowledge about MMR vaccines than to the anecdotal knowledge about MMR vaccines simply because the former is scientific (i.e., knowledge obtained by means of the methods of science, such as clinical trials) and the latter is not.

Should Advocates of Strong Scientism Endorse Weak Scientism?

Brown (2017b, 7) argues for (2) on the grounds that “once the advocate of Strong Scientism sees that an advocate of Weak Scientism admits the possibility that there is real knowledge other than what is produced by the natural sciences […] the advocate of Strong Scientism, at least given their philosophical presuppositions, will reject Weak Scientism out of hand.” It is not clear which “philosophical presuppositions” Brown is talking about here. Brown quotes Rosenberg (2011, 20), who claims that physics tells us what reality is like, presumably as an example of a proponent of Strong Scientism who would not endorse Weak Scientism. But it is not clear why Brown thinks that Rosenberg would “reject Weak Scientism out of hand” (Brown 2017b, 7).

Like other proponents of scientism, Rosenberg should endorse Weak Scientism because, unlike Strong Scientism, Weak Scientism is a defensible view. Insofar as we should endorse the view that has the most evidence in its favor, Weak Scientism has more going for it than Strong Scientism does. For to show that Strong Scientism is true, one would have to show that no field of study other than scientific ones can produce knowledge. Of course, that is not easy to show. To show that Weak Scientism is true, one only needs to show that the knowledge produced in scientific fields of study is better (in certain relevant respects) than the knowledge produced in non-scientific fields.

That is precisely what I show in my (2017a). I argue that the knowledge produced in scientific fields is quantitatively better than the knowledge produced in non-scientific fields because there is a lot more scientific knowledge than non-scientific knowledge (as measured by research output) and the former has a greater impact than the latter (as measured by research impact). I also argue that the knowledge produced in scientific fields is qualitatively better than knowledge produced in non-scientific fields because it is more explanatorily, instrumentally, and predictively successful.

Contrary to what Brown (2017b, 7) seems to think, I do not have to show “that there is real knowledge other than scientific knowledge.” To defend Weak Scientism, all I have to show is that scientific knowledge is better (in certain relevant respects) than non-scientific knowledge. If anyone must argue for the claim that there is real knowledge other than scientific knowledge, it is Brown, for he wants to defend the value or usefulness of non-scientific knowledge, specifically, philosophical knowledge.

It is important to emphasize the point about the ways in which scientific knowledge is quantitatively and qualitatively better than non-scientific knowledge because it looks like Brown has confused the two. For he thinks that I justify my quantitative analysis of scholarly publications in scientific and non-scientific fields by “citing the precedent of epistemologists who often treat all items of knowledge as qualitatively the same” (Brown 2017b, 22; emphasis added).

Here Brown fails to carefully distinguish between my claim that scientific knowledge is quantitatively better than non-scientific knowledge and my claim that scientific knowledge is qualitatively better than non-scientific knowledge. For the purposes of a quantitative study of knowledge, information and data scientists can do precisely what epistemologists do and “abstract from various circumstances (by employing variables)” (Brown 2017b, 22) in order to determine which knowledge is quantitatively better.

How Is Weak Scientism Relevant to the Claim that Philosophy Is Useless?

Brown (2017b, 7-8) argues for (3) on the grounds that “Weak Scientism itself implies nothing about the degree to which philosophical knowledge is valuable or useful other than stating scientific knowledge is better than philosophical knowledge” (emphasis in original).

Strictly speaking, Brown is wrong about this because Weak Scientism does imply something about the degree to which scientific knowledge is better than philosophical knowledge. Recall that to say that scientific knowledge is quantitatively better than non-scientific knowledge is to say that scientific fields of study publish more research and that scientific research has greater impact than the research published in non-scientific fields of study.

Contrary to what Brown seems to think, we can say to what degree scientific research is superior to non-scientific research in terms of output and impact. That is precisely what bibliometric indicators like h-index and other metrics are for (Rousseau et al. 2018). Such bibliometric indicators allow us to say how many articles are published in a given field, how many of those published articles are cited, and how many times they are cited. For instance, according to Scimago Journal & Country Rank (2018), which contains data from the Scopus database, of the 3,815 Philosophy articles published in the United States in 2016-2017, approximately 14% are cited, and their h-index is approximately 160.

On the other hand, of the 24,378 Psychology articles published in the United States in 2016-2017, approximately 40% are cited, and their h-index is approximately 640. Contrary to what Brown seems to think, then, we can say to what degree research in Psychology is better than research in Philosophy in terms of research output (i.e., number of publications) and research impact (i.e., number of citations). We can use the same bibliometric indicators and metrics to compare research in other scientific and non-scientific fields of study.
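To make the bibliometric comparison concrete, here is a minimal sketch, in Python, of how an h-index can be computed from a list of per-article citation counts; the citation counts used here are invented for illustration and are not taken from Scopus, Scimago, or Web of Science.

# A minimal, illustrative sketch of the h-index calculation behind
# bibliometric comparisons like the one above. The numbers are invented
# examples, not data from Scopus, Scimago, or Web of Science.

def h_index(citations):
    """Return the largest h such that h articles each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

field_a = [40, 22, 10, 9, 5, 1, 0, 0]    # hypothetical citation counts for field A
field_b = [120, 85, 60, 33, 20, 12, 7]   # hypothetical citation counts for field B

print(h_index(field_a))  # 5: five articles with at least 5 citations each
print(h_index(field_b))  # 7: seven articles with at least 7 citations each

The same kind of calculation, run over a field’s actual citation records, underlies the figures quoted above.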

As I have already said in my previous reply to Brown, “Weak Scientism does not entail that philosophy is useless” and “I have no interest in defending the charge that philosophy is useless” (Mizrahi 2017b, 11-12). So, I am not sure why Brown brings up (3) again. Since he insists, however, let me explain why philosophers who are concerned about the charge that philosophy is useless should engage with Weak Scientism as well.

Suppose that a foundation or agency is considering whether to give a substantial grant to one of two projects. The first project is that of a philosopher who will sit in her armchair and contemplate the nature of friendship.[3] The second project is that of a team of social scientists who will conduct a longitudinal study of the effects of friendship on human well-being (e.g., Yang et al. 2016).

If Weak Scientism is true, and the foundation or agency wants to fund the project that is likely to yield better results, then it should give the grant to the team of social scientists rather than to the armchair philosopher simply because the former’s project is scientific, whereas the latter’s is not. This is because the scientific project will more likely yield better knowledge than the non-scientific project will. In other words, unlike the project of the armchair philosopher, the scientific project will probably produce more research (i.e., more publications) that will have a greater impact (i.e., more citations) and the knowledge produced will be explanatorily, instrumentally, and predictively more successful than any knowledge that the philosopher’s project might produce.

This example should really hit home for Brown, since reading his latest attack on Weak Scientism gives one the impression that he thinks of philosophy as a personal, “self-improvement” kind of enterprise, rather than an academic discipline or field of study. For instance, he seems to be saying that philosophy is not in the business of producing “new knowledge” or making “discoveries” (Brown 2017b, 17).

Rather, Brown (2017b, 18) suggests that philosophy “is more about individual intellectual progress rather than collective intellectual progress.” Individual progress or self-improvement is great, of course, but I am not sure that it helps Brown’s case in defense of philosophy against what he sees as “the menace of scientism.” For this line of thinking simply adds fuel to the fire set by those who want to see philosophy burn. As I point out in my (2017a), scientists who dismiss philosophy do so because they find it academically useless.

For instance, Hawking and Mlodinow (2010, 5) write that ‘philosophy is dead’ because it ‘has not kept up with developments in science, particularly physics’ (emphasis added). Similarly, Weinberg (1994, 168) says that, as a working scientist, he ‘finds no help in professional philosophy’ (emphasis added). (Mizrahi 2017a, 356)

Likewise, Richard Feynman is rumored to have said that “philosophy of science is about as useful to scientists as ornithology is to birds” (Kitcher 1998, 32). It is clear, then, that what these scientists complain about is professional or academic philosophy. Accordingly, they would have no problem with anyone who wants to pursue philosophy for the sake of “individual intellectual progress.” But that is not the issue here. Rather, the issue is academic knowledge or research.

Does My Defense of Weak Scientism Appeal to Controversial Philosophical Assumptions?

Brown (2017b, 9) argues for (4) on the grounds that I assume that “we are supposed to privilege empirical (I read Mizrahi’s ‘empirical’ here as ‘experimental/scientific’) evidence over non-empirical evidence.” But that is question-begging, Brown claims, since he takes me to be assuming something like the following: “If the question of whether scientific knowledge is superior to [academic] non-scientific knowledge is a question that one can answer empirically, then, in order to pose a serious challenge to my [Mizrahi’s] defense of Weak Scientism, Brown must come up with more than mere ‘what ifs’” (Mizrahi 2017b, 10; quoted in Brown 2017b, 8).

This objection seems to involve a confusion about how defeasible reasoning and defeating evidence are supposed to work. Given that “a rebutting defeater is evidence which prevents E from justifying belief in H by supporting not-H in a more direct way” (Kelly 2016), claims about what is actual cannot be defeated by mere possibilities, since claims of the form “Possibly, p” do not prevent a piece of evidence from justifying belief in “Actually, p” by supporting “Actually, not-p” directly.

For example, the claim “Hillary Clinton could have been the 45th President of the United States” does not prevent my perceptual and testimonial evidence from justifying my belief in “Donald Trump is the 45th President of the United States,” since the former does not support “It is not the case that Donald Trump is the 45th President of the United States” in a direct way. In general, claims of the form “Possibly, p” are not rebutting defeaters against claims of the form “Actually, p.” Defeating evidence against claims of the form “Actually, p” must be about what is actual (or at least probable), not what is merely possible, in order to support “Actually, not-p” directly.

For this reason, although “the production of some sorts of non-scientific knowledge work may be harder than the production of scientific knowledge” (Brown 2017b, 19), Brown gives no reasons to think that it is actually or probably harder, which is why this possibility does nothing to undermine the claim that scientific knowledge is actually better than non-scientific knowledge. Just as it is possible that philosophical knowledge is harder to produce than scientific knowledge, it is also possible that scientific knowledge is harder to produce than philosophical knowledge. It is also possible that scientific and non-scientific knowledge are equally hard to produce.

Similarly, the possibility that “a little knowledge about the noblest things is more desirable than a lot of knowledge about less noble things” (Brown 2017b, 19), whatever “noble” is supposed to mean here, does not prevent my bibliometric evidence (in terms of research output and research impact) from justifying the belief that scientific knowledge is better than non-scientific knowledge. Just as it is possible that philosophical knowledge is “nobler” (whatever that means) than scientific knowledge, it is also possible that scientific knowledge is “nobler” than philosophical knowledge or that they are equally “noble” (Mizrahi 2017b, 9-10).

In fact, even if Brown (2017a, 47) is right that “philosophy is harder than science” and that “knowing something about human persons–particularly qua embodied rational being–is a nobler piece of knowledge than knowing something about any non-rational object” (Brown 2017b, 21), whatever “noble” is supposed to mean here, it would still be the case that scientific fields produce more knowledge (as measured by research output), and more impactful knowledge (as measured by research impact), than non-scientific disciplines.

So, I am not sure why Brown keeps insisting on mentioning these mere possibilities. He also seems to forget that the natural and social sciences study human persons as well. Even if knowledge about human persons is “nobler” (whatever that means), there is a lot of scientific knowledge about human persons coming from scientific fields, such as anthropology, biology, genetics, medical science, neuroscience, physiology, psychology, and sociology, to name just a few.

One of the alleged “controversial philosophical assumptions” that my defense of Weak Scientism rests on, and that Brown (2017a) complains about the most in his previous attack on Weak Scientism, is my characterization of philosophy as the scholarly work that professional philosophers do. In my previous reply, I argue that Brown is not in a position to complain that this is a “controversial philosophical assumption,” since he rejects my characterization of philosophy as the scholarly work that professional philosophers produce, but he does not tell us what counts as philosophical (Mizrahi 2017b, 13). Well, it turns out that Brown does not reject my characterization of philosophy after all. For, after he was challenged to say what counts as philosophical, he came up with the following “sufficient condition for pieces of writing and discourse that count as philosophy” (Brown 2017b, 11):

(P) Those articles published in philosophical journals and what academics with a Ph.D. in philosophy teach in courses at public universities with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science (Brown 2017b, 11; emphasis added).

Clearly, this is my characterization of philosophy in terms of the scholarly work that professional philosophers produce. Brown simply adds teaching to it. Since he admits that “scientists teach students too” (Brown 2017b, 18), however, it is not clear how adding teaching to my characterization of philosophy is supposed to support his attack on Weak Scientism. In fact, it may actually undermine his attack on Weak Scientism, since there is a lot more teaching going on in STEM fields than in non-STEM fields.

According to data from the National Center for Education Statistics (2017), in the 2015-16 academic year, post-secondary institutions in the United States conferred only 10,157 Bachelor’s degrees in philosophy and religious studies compared to 113,749 Bachelor’s degrees in biological and biomedical sciences, 106,850 Bachelor’s degrees in engineering, and 117,440 in psychology. In general, in the 2015-2016 academic year, 53.3% of the Bachelor’s degrees conferred by post-secondary institutions in the United States were degrees in STEM fields, whereas only 5.5% of conferred Bachelor’s degrees were in the humanities (Figure 1).

Figure 1. Bachelor’s degrees conferred by post-secondary institutions in the US, by field of study, 2015-2016 (Source: NCES)

 

Clearly, then, there is a lot more teaching going on in science than in philosophy (or even in the humanities in general), since a lot more students take science courses and graduate with degrees in scientific fields of study. So, even if Brown is right that we should include teaching in what counts as philosophy, it is still the case that scientific fields are quantitatively better than non-scientific fields.

Since Brown (2017b, 13) seems to agree that philosophy (at least in part) is the scholarly work that academic philosophers produce, it is peculiar that he complains, without argument, that “an understanding of philosophy and knowledge as operational is […] shallow insofar as philosophy and knowledge can’t fit into the narrow parameters of another empirical study.” Once Brown (2017b, 11) grants that “Those articles published in philosophical journals” count as philosophy, he thereby also grants that these journal articles can be studied empirically using the methods of bibliometrics, information science, or data science.

That is, Brown (2017b, 11) concedes that philosophy consists (at least in part) of “articles published in philosophical journals,” and so these articles can be compared to other articles published in science journals to determine research output, and they can also be compared to articles published in science journals in terms of citation counts to determine research impact. What exactly is “shallow” about that? Brown does not say.

A perhaps unintended consequence of Brown’s (P) is that the “great thinkers from the past” (Brown 2017b, 18), those that Brown (2017b, 13) likes to remind us “were not professional philosophers,” did not do philosophy, by Brown’s own lights. For “Socrates, Plato, Augustine, Descartes, Locke, and Hume” (Brown 2017b, 13) did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach at public universities courses “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Another peculiar thing about Brown’s (P) is the restriction of the philosophical to what is being taught in public universities. What about community colleges and private universities? Is Brown suggesting that philosophy courses taught at private universities do not count as philosophy courses? This is peculiar, especially in light of the fact that, at least according to The Philosophical Gourmet Report (Brogaard and Pynes 2018), the top ranked philosophy programs in the United States are mostly located in private universities, such as New York University and Princeton University.

Is My Defense of Weak Scientism a Scientific or a Philosophical Argument?

Brown argues for (5) on the grounds that my (2017a) is published in a philosophy journal, namely, Social Epistemology, and so it is a piece of philosophical knowledge by my lights, since I count as philosophy the research articles that are published in philosophy journals.

Brown would be correct about this if Social Epistemology were a philosophy journal. But it is not. Social Epistemology: A Journal of Knowledge, Culture and Policy is an interdisciplinary journal. The journal’s “aim and scope” statement makes it clear that Social Epistemology is an interdisciplinary journal:

Social Epistemology provides a forum for philosophical and social scientific enquiry that incorporates the work of scholars from a variety of disciplines who share a concern with the production, assessment and validation of knowledge. The journal covers both empirical research into the origination and transmission of knowledge and normative considerations which arise as such research is implemented, serving as a guide for directing contemporary knowledge enterprises (Social Epistemology 2018).

The fact that Social Epistemology is an interdisciplinary journal, with contributions from “Philosophers, sociologists, psychologists, cultural historians, social studies of science researchers, [and] educators” (Social Epistemology 2018) would not surprise anyone who is familiar with the history of the journal. The founding editor of the journal is Steve Fuller, who was trained in an interdisciplinary field, namely, History and Philosophy of Science (HPS), and is currently the Auguste Comte Chair in Social Epistemology in the Department of Sociology at Warwick University. Brown (2017b, 15) would surely agree that sociology is not philosophy, given that, for him, “cataloguing what a certain group of people believes is sociology and not philosophy.” The current executive editor of the journal is James H. Collier, who is a professor of Science and Technology in Society at Virginia Tech, and who was trained in Science and Technology Studies (STS), which is an interdisciplinary field as well.

Brown asserts without argument that the methods of a scientific field of study, such as sociology, are different in kind from those of philosophy: “What I contend is that […] philosophical methods are different in kind from those of the experimental scientists [sciences?]” (Brown 2017b, 24). He then goes on to speculate about what it means to say that an explanation is testable (Brown 2017b, 25). What Brown comes up with is rather unclear to me. For instance, I have no idea what it means to evaluate an explanation by inductive generalization (Brown 2017b, 25).

Instead, Brown should have consulted any one of the logic and reasoning textbooks I keep referring to in my (2017a) and (2017b) to find out that it is generally accepted among philosophers that the good-making properties of explanations, philosophical and otherwise, include testability among other good-making properties (see, e.g., Sinnott-Armstrong and Fogelin 2010, 257). As far as testability is concerned, to test an explanation or hypothesis is to determine “whether predictions that follow from it are true” (Salmon 2013, 255). In other words, “To say that a hypothesis is testable is at least to say that some prediction made on the basis of that hypothesis may confirm or disconfirm it” (Copi et al. 2011, 515).

For this reason, Feser’s analogy according to which “to compare the epistemic values of science and philosophy and fault philosophy for not being good at making testable predications [sic] is like comparing metal detectors and gardening tools and concluding gardening tools are not as good as metal detectors because gardening tools do not allow us to successfully detect for metal” (Brown 2017b, 25), which Brown likes to refer to (Brown 2017a, 48), is inapt.

It is not an apt analogy because, unlike metal detectors and gardening tools, which serve different purposes, both science and philosophy are in the business of explaining things. Indeed, Brown admits that, like good scientific explanations, “good philosophical theories explain things” (emphasis in original). In other words, Brown admits that both scientific and philosophical theories are instruments of explanation (unlike gardening and metal-detecting instruments). To provide good explanations, then, both scientific and philosophical theories must be testable (Mizrahi 2017b, 19-20).

What Is Wrong with Persuasive Definitions of Scientism?

Brown (2017b, 31) argues for (6) on the grounds that “persuasive definitions are [not] always dialectically pernicious.” He offers an argument whose conclusion is “abortion is murder” as an example of an argument for a persuasive definition of abortion. He then outlines an argument for a persuasive definition of scientism according to which “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32).

The problem, however, is that Brown is confounding arguments for a definition with the definition itself. Having an argument for a persuasive definition does not change the fact that it is a persuasive definition. To illustrate this point, let me give an example that I think Brown will appreciate. Suppose I define theism as an irrational belief in the existence of God. That is, “theism” means “an irrational belief in the existence of God.” I can also provide an argument for this definition:

P1: If it is irrational to have paradoxical beliefs and God is a paradoxical being, then theism is an irrational belief in the existence of God.

P2: It is irrational to have paradoxical beliefs and God is a paradoxical being (e.g., the omnipotence paradox).[4]

Therefore,

C: Theism is an irrational belief in the existence of God.

But surely, theists will complain that my definition of theism is a “dialectically pernicious” persuasive definition. For it stacks the deck against theists. It states that theists are already making a mistake, by definition, simply by believing in the existence of God. Even though I have provided an argument for this persuasive definition of theism, my definition is still a persuasive definition of theism, and my argument is unlikely to convince anyone who doesn’t already think that theism is irrational. Indeed, Brown (2017b, 30) himself admits that much when he says “good luck with that project!” about trying to construct a sound argument for “abortion is murder.” I take this to mean that pro-choice advocates would find his argument for “abortion is murder” dialectically inert precisely because it defines abortion in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept.

Likewise, theists would find the argument above dialectically inert precisely because it defines theism in a manner that transfers “emotive force” (Salmon 2013, 65), which they cannot accept. In other words, Brown seems to agree that there are good dialectical reasons to avoid appealing to persuasive definitions. Therefore, like “abortion is murder,” “theism is an irrational belief in the existence of God,” and “‘Homosexual’ means ‘one who has an unnatural desire for those of the same sex’” (Salmon 2013, 65), “Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32) is a “dialectically pernicious” persuasive definition (cf. Williams 2015, 14).

Like persuasive definitions in general, it “masquerades as an honest assignment of meaning to a term while condemning or blessing with approval the subject matter of the definiendum” (Hurley 2015, 101). As I have pointed out in my (2017a), the problem with such definitions is that they “are strategies consisting in presupposing an unaccepted definition, taking a new unknowable description of meaning as if it were commonly shared” (Macagno and Walton 2014, 205).

As for Brown’s argument for the persuasive definition of Weak Scientism, according to which it “is a view that has its advocates putting too high a value on scientific knowledge” (Brown 2017b, 32), a key premise in this argument is the claim that there is a piece of philosophical knowledge that is better than scientific knowledge. This is premise 36 in Brown’s argument:

Some philosophers qua philosophers know that (a) true friendship is a necessary condition for human flourishing and (b) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for true friendship and (c) (therefore) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for human flourishing (see, e.g., the arguments in Plato’s Gorgias) and knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge (see, e.g., St. Augustine’s Confessions, book five, chapters iii and iv) [assumption]

There is a lot to unpack here, but I will focus on what I take to be the points most relevant to the scientism debate. First, Brown assumes 36 without argument, but why think it is true? In particular, why think that (a), (b), and (c) count as philosophical knowledge? Brown says that philosophers know (a), (b), and (c) in virtue of being philosophers, but he does not tell us why that is the case.

After all, accounts of friendship, with lessons about the significance of friendship, predate philosophy (see, e.g., the friendship of Gilgamesh and Enkidu in The Epic of Gilgamesh). Did it really take Plato and Augustine to tell us about the significance of friendship? In fact, on Brown’s characterization of philosophy, namely, (P), (a), (b), and (c) do not count as philosophical knowledge at all, since Plato and Augustine did not publish in philosophy journals, were not academics with a Ph.D. in philosophy, and did not teach at public universities courses “with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science” (Brown 2017b, 11).

Second, some philosophers, like Epicurus, need (and think that others need) friends to flourish, whereas others, like Diogenes of Sinope, need no one. For Diogenes, friends will only interrupt his sunbathing (Arrian VII.2). My point is not simply that philosophers disagree about the value of friendship and human flourishing. Of course they disagree.[5]

Rather, my point is that, in order to establish general truths about human beings, such as “Human beings need friends to flourish,” one must employ the methods of science, such as randomization and sampling procedures, blinding protocols, methods of statistical analysis, and the like; otherwise, one would simply commit the fallacies of cherry-picking anecdotal evidence and hasty generalization (Salmon 2013, 149-151). After all, the claim “Some need friends to flourish” does not necessitate, or even make more probable, the truth of “Human beings need friends to flourish.”[6]
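
For concreteness, here is a minimal Python sketch of the sort of randomized sampling and statistical estimation just mentioned; the “population” and every number in it are entirely hypothetical and serve only to contrast interval estimation from a random sample with generalizing from a few anecdotes.

import math
import random

random.seed(0)

# Entirely hypothetical population: 1 = "flourishes with close friends", 0 = "does not".
population = [1] * 7000 + [0] * 3000

sample = random.sample(population, 400)            # simple random sample, no cherry-picking
p_hat = sum(sample) / len(sample)                  # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))  # standard error of the proportion
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se   # approximate 95% confidence interval
print(f"estimated proportion: {p_hat:.3f}; approximate 95% CI: ({low:.3f}, {high:.3f})")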

Third, why think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge” (Brown 2017b, 32)? Better in what sense? Quantitatively? Qualitatively? Brown does not tell us. He simply declares it “self-evident” (Brown 2017b, 32). I take it that Brown would not want to argue that “knowledge concerning the necessary conditions of human flourishing” is better than scientific knowledge in the quantitative (i.e., in terms of research output and research impact) and qualitative (i.e., in terms of explanatory, instrumental, and predictive success) respects in which scientific knowledge is better than non-scientific knowledge, according to Weak Scientism.

If so, then in what sense exactly is “knowledge concerning the necessary conditions of human flourishing” (Brown 2017b, 32) supposed to be better than scientific knowledge? Brown (2017b, 32) simply assumes, without argument, that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge,” without telling us in what sense exactly it is supposed to be better.

Of course, philosophy does not have a monopoly on friendship and human flourishing as research topics. Psychologists and sociologists, among other scientists, work on friendship as well (see, e.g., Hojjat and Moyer 2017). To get an idea of how much research on friendship is done in scientific fields, such as psychology and sociology, and how much is done in philosophy, we can use a database like Web of Science.

Currently (03/29/2018), there are 12,334 records in Web of Science on the topic “friendship.” Only 76 of these records (0.61%) are from the Philosophy research area. Most of the records are from the Psychology (5,331 records) and Sociology (1,111) research areas (43.22% and 9%, respectively). As we can see from Figure 2, most of the research on friendship is done in scientific fields of study, such as psychology, sociology, and other social sciences.

Figure 2. Number of records on the topic “friendship” in Web of Science by research area (Source: Web of Science)
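
The percentages just cited follow directly from the record counts. The following minimal Python sketch (my own illustration, not Web of Science output) simply redoes that arithmetic from the counts reported above.

# Shares of Web of Science records on the topic "friendship" by research area,
# using the counts reported above (retrieved 03/29/2018).
total_records = 12334
by_area = {"Philosophy": 76, "Psychology": 5331, "Sociology": 1111}

for area, count in by_area.items():
    share = 100 * count / total_records
    print(f"{area}: {count} records ({share:.2f}% of all 'friendship' records)")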


In terms of research impact, too, scientific knowledge about friendship is superior to philosophical knowledge about friendship. According to Web of Science, the average citations per year for Psychology research articles on the topic of friendship is 2826.11 (h-index is 148 and the average citations per item is 28.1), and the average citations per year for Sociology research articles on the topic of friendship is 644.10 (h-index is 86 and the average citations per item is 30.15), whereas the average citations per year for Philosophy research articles on friendship is 15.02 (h-index is 13 and the average citations per item is 8.11).
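
For readers unfamiliar with these bibliometric measures, the following short Python sketch shows how an h-index and an average-citations-per-item figure are computed from a list of per-article citation counts. The sample list is hypothetical and only illustrates the definitions; it is not the Web of Science data reported above.

def h_index(citations):
    # The h-index is the largest h such that at least h items
    # have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for nine articles (illustrative only).
sample = [45, 30, 22, 10, 9, 7, 3, 1, 0]
print("h-index:", h_index(sample))                          # 6 for this sample
print("average citations per item:", sum(sample) / len(sample))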

Quantitatively, then, psychological and sociological knowledge on friendship is better than philosophical knowledge in terms of research output and research impact. Both Psychology and Sociology produce significantly more research on friendship than Philosophy does, and the research they produce has significantly more impact (as measured by citation counts) than philosophical research on the same topic.

Qualitatively, too, psychological and sociological knowledge about friendship is better than philosophical knowledge about friendship. For, instead of rather vague statements about how “true friendship is a necessary condition for human flourishing” (Brown 2017b, 32) that are based on mostly armchair speculation, psychological and sociological research on friendship provides detailed explanations and accurate predictions about the effects of friendship (or lack thereof) on human well-being.

For instance, numerous studies provide evidence for the effects of friendships or lack of friendships on physical well-being (see, e.g., Yang et al. 2016) as well as mental well-being (see, e.g., Cacioppo and Patrick 2008). Further studies provide explanations for the biological and genetic bases of these effects (Cole et al. 2011). This knowledge, in turn, informs interventions designed to help people deal with loneliness and social isolation (see, e.g., Masi et al. 2011).[7]

To sum up, Brown (2017b, 32) has given no reasons to think that “knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge.” He does not even tell us what “better” is supposed to mean here. He also ignores the fact that scientific fields of study, such as psychology and sociology, produce plenty of knowledge about human flourishing, including both physical and mental well-being. In fact, as we have seen, science produces a lot more knowledge about topics related to human well-being, such as friendship, than philosophy does. For this reason, Brown (2017b, 32) has failed to show that “there is a non-scientific form of knowledge better than scientific knowledge.”

Conclusion

At this point, I think it is quite clear that Brown and I are talking past each other on a couple of levels. First, I follow scientists (e.g., Weinberg 1994, 166-190) and philosophers (e.g., Haack 2007, 17-18 and Peels 2016, 2462) on both sides of the scientism debate in treating philosophy as an academic discipline or field of study, whereas Brown (2017b, 18) insists on thinking about philosophy as a personal activity of “individual intellectual progress.” Second, I follow scientists (e.g., Hawking and Mlodinow 2010, 5) and philosophers (e.g., Kidd 2016, 12-13 and Rosenberg 2011, 307) on both sides of the scientism debate in thinking about knowledge as the scholarly work or research produced in scientific fields of study, such as the natural sciences, as opposed to non-scientific fields of study, such as the humanities, whereas Brown insists on thinking about philosophical knowledge as personal knowledge.

To anyone who wishes to defend philosophy’s place in research universities alongside academic disciplines, such as history, linguistics, and physics, armed with this conception of philosophy as a “self-improvement” activity, I would use Brown’s (2017b, 30) words to say, “good luck with that project!” A much more promising strategy, I propose, is for philosophy to embrace scientific ways of knowing and for philosophers to incorporate scientific methods into their research.[8]

Contact details: mmizrahi@fit.edu

References

Arrian. “The Final Phase.” In Alexander the Great: Selections from Arrian, Diodorus, Plutarch, and Quintus Curtius, edited by J. Romm, translated by P. Mensch and J. Romm, 149-172. Indianapolis, IN: Hackett Publishing Company, Inc., 2005.

Ashton, Z., and M. Mizrahi. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the ‘Received Wisdom’ about Armchair Philosophy.” Erkenntnis (2017): DOI 10.1007/s10670-017-9904-4.

Ashton, Z., and M. Mizrahi. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018): 58-70.

Cacioppo, J. T., and W. Patrick. Loneliness: Human Nature and the Need for Social Connection. New York: W. W. Norton & Co., 2008.

Cole, S. W., L. C. Hawkley, J. M. G. Arevalo, and J. T. Cacioppo. “Transcript Origin Analysis Identifies Antigen-Presenting Cells as Primary Targets of Socially Regulated Gene Expression in Leukocytes.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 3080-3085.

Copi, I. M., C. Cohen, and K. McMahon. Introduction to Logic. Fourteenth Edition. New York: Prentice Hall, 2011.

Brogaard, B., and C. A. Pynes (eds.). “Overall Rankings.” The Philosophical Gourmet Report. Wiley Blackwell, 2018. Available at http://34.239.13.205/index.php/overall-rankings/.

Brown, C. M. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017a): 42-54.

Brown, C. M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2017b): 1-35.

Haack, S. Defending Science–within Reason: Between Scientism and Cynicism. New York: Prometheus Books, 2007.

Hawking, S., and L. Mlodinow. The Grand Design. New York: Bantam Books, 2010.

Hojjat, M., and A. Moyer (eds.). The Psychology of Friendship. New York: Oxford University Press, 2017.

Hurley, P. J. A Concise Introduction to Logic. Twelfth Edition. Stamford, CT: Cengage Learning, 2015.

Kelly, T. “Evidence.” In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/evidence/.

Kidd, I. J. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5 (2016): 11–17.

Kitcher, P. “A Plea for Science Studies.” In A House Built on Sand: Exposing Postmodernist Myths about Science, edited by N. Koertge, 32–55. New York: Oxford University Press, 1998.

Lewis, C. S. The Four Loves. New York: Harcourt Brace & Co., 1960.

Macagno, F., and D. Walton. Emotive Language in Argumentation. New York: Cambridge University Press, 2014.

Masi, C. M., H. Chen, and L. C. Hawkley. “A Meta-Analysis of Interventions to Reduce Loneliness.” Personality and Social Psychology Review 15, no. 3 (2011): 219-266.

Mizrahi, M. “Intuition Mongering.” The Reasoner 6, no. 11 (2012): 169-170.

Mizrahi, M. “More Intuition Mongering.” The Reasoner 7, no. 1 (2013a): 5-6.

Mizrahi, M. “What is Scientific Progress? Lessons from Scientific Practice.” Journal for General Philosophy of Science 44, no. 2 (2013b): 375-390.

Mizrahi, M. “New Puzzles about Divine Attributes.” European Journal for Philosophy of Religion 5, no. 2 (2013c): 147-157.

Mizrahi, M. “The Pessimistic Induction: A Bad Argument Gone Too Far.” Synthese 190, no. 15 (2013d): 3209-3226.

Mizrahi, M. “Does the Method of Cases Rest on a Mistake?” Review of Philosophy and Psychology 5, no. 2 (2014): 183-197.

Mizrahi, M. “On Appeals to Intuition: A Reply to Muñoz-Suárez.” The Reasoner 9, no. 2 (2015a): 12-13.

Mizrahi, M. “Don’t Believe the Hype: Why Should Philosophical Theories Yield to Intuitions?” Teorema: International Journal of Philosophy 34, no. 3 (2015b): 141-158.

Mizrahi, M. “Historical Inductions: New Cherries, Same Old Cherry-Picking.” International Studies in the Philosophy of Science 29, no. 2 (2015c): 129-148.

Mizrahi, M. “Three Arguments against the Expertise Defense.” Metaphilosophy 46, no. 1 (2015d): 52-64.

Mizrahi, M. “The History of Science as a Graveyard of Theories: A Philosophers’ Myth?” International Studies in the Philosophy of Science 30, no. 3 (2016): 263-278.

Mizrahi, M. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, M. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, M. “Introduction.” In The Kuhnian Image of Science: Time for a Decisive Transformation? Edited by M. Mizrahi, 1-22. London: Rowman & Littlefield, 2017c.

National Center for Education Statistics. “Bachelor’s degrees conferred by postsecondary institutions, by field of study: Selected years, 1970-71 through 2015-16.” Digest of Education Statistics (2017). https://nces.ed.gov/programs/digest/d17/tables/dt17_322.10.asp?current=yes.

Peels, R. “The Empirical Case Against Introspection.” Philosophical Studies 173, no. 9 (2016): 2461-2485.

Peels, R. “Ten Reasons to Embrace Scientism.” Studies in History and Philosophy of Science Part A 63 (2017): 11-21.

Rosenberg, A. The Atheist’s Guide to Reality: Enjoying Life Without Illusions. New York: W. W. Norton, 2011.

Rousseau, R., L. Egghe, and R. Guns. Becoming Metric-Wise: A Bibliometric Guide for Researchers. Cambridge, MA: Elsevier, 2018.

Salmon, M. H. Introduction to Logic and Critical Thinking. Sixth Edition. Boston, MA: Wadsworth, 2013.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on April 3, 2018. http://www.scimagojr.com/mapgen.php?maptype=bc&country=US&y=citd.

Sinnott-Armstrong, W., and R. J. Fogelin. Understanding Arguments: An Introduction to Informal Logic. Eighth Edition. Belmont, CA: Wadsworth Cengage Learning, 2010.

Social Epistemology. “Aims and Scope.” Social Epistemology: A Journal of Knowledge, Culture and Policy (2018). https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=tsep20.

Weinberg, S. Dreams of a Final Theory: The Scientist’s Search for the Ultimate Laws of Nature. New York: Random House, 1994.

Williams, R. N. “Introduction.” In Scientism: The New Orthodoxy, edited by R. N. Williams and D. N. Robinson, 1-22. New York: Bloomsbury Academic, 2015.

Yang, C. Y., C. Boen, K. Gerken, T. Li, K. Schorpp, and K. M. Harris. “Social Relationships and Physiological Determinants of Longevity Across the Human Life Span.” Proceedings of the National Academy of Sciences 113, no. 3 (2016): 578-583.

[1] I thank Adam Riggio for inviting me to respond to Brown’s second attack on Weak Scientism.

[2] On why appeals to intuition are bad arguments, see Mizrahi (2012), (2013a), (2014), (2015a), (2015b), and (2015d).

[3] I use friendship as an example here because Brown (2017b, 31) uses it as an example of philosophical knowledge. I will say more about that in Section 6.

[4] For more on paradoxes involving the divine attributes, see Mizrahi (2013c).

[5] “Friendship is unnecessary, like philosophy, like art, like the universe itself (for God did not need to create)” (Lewis 1960, 71).

[6] On fallacious inductive reasoning in philosophy, see Mizrahi (2013d), (2015c), (2016), and (2017c).

[7] See also “The Friendship Bench” project: https://www.friendshipbenchzimbabwe.org/.

[8] For recent examples, see Ashton and Mizrahi (2017) and (2018).

Author Information: Christopher M. Brown, University of Tennessee, Martin, chrisb@utm.edu

Brown, Christopher M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 1-35.

The pdf of the article gives specific page references, and contains the article’s complete text. Due to its length, we have split the online publication of Brown’s reply into three segments. The first was published 30 January, and the second 1 February. Shortlink for part three: https://wp.me/p1Bfg0-3TQ

Please refer to:

Image by Chase Elliott Clark via Flickr / Creative Commons


Revisiting an Objection to Mizrahi’s Attempt to Defeat Objection O2

Recall that Mizrahi thinks Mizrahi’s Argument is a scientific argument. Furthermore, in 2017a he thinks he needs to defend Weak Scientism against objection O2. He does so by arguing that (a) if O2 is true, then all knowledge by inference would be viciously circular; that the consequent of (a) is false; and that, therefore, the antecedent of (a) is false.

In my 2017 response to Mizrahi 2017a, I argued that Mizrahi’s attempt to defeat objection O2 fails since he assumes, citing Ladyman, that “‘deductive inference is only defensible by appeal to deductive inference’ (Ladyman 2002, 49)” (Mizrahi 2017a, 362), whereas it is reasonable to think that the rules of deductive inference are defensible by noting that we believe them by the same sort of power by which we believe propositions such as ‘1+1=2’ and ‘a whole is greater than one of its parts’, namely, some non-inferential mode of knowing (see, e.g., Feldman 2003, 3-4). So there is no inconsistency in affirming both that a scientific argument for Weak Scientism is a circular argument and that knowledge of the rules of deductive inference is defensible.

Now, in responding to my comment in 2017, Mizrahi misconstrues my comment by rendering it as the following question: “why think that deductive rules of inference cannot be proved valid in a non-circular way?” (2017b, 9; emphasis mine). But as should be clear from the above, this is not my objection, since I never talk about “proving in a valid way” deductive rules of inference. Mizrahi seems to think that the only way to show that the rules of deductive inference are defensible is by way of a circular proof of them. But why think a thing like that? Rather, as Aristotle famously points out, good deductive arguments have to start from premises that we know with certainty by way of some non-deductive means (Posterior Analytics, Book II, ch. 19, see esp. 100a14-100b18). Again, Mizrahi has not shown there is an inconsistency in affirming both that a scientific argument for Weak Scientism is a circular argument and that knowledge of the rules of deductive inference is defensible.

Against Mizrahi’s Claim that Philosophers Should Not Use Persuasive Definitions of Scientism

In 2017a, Mizrahi claims that persuasive definitions of scientism, e.g., “scientism is a matter of putting too high a value on science in comparison with other branches of learning or culture” (Sorrell 1994, x) or “scientism is an exaggerated deference towards science, an excessive readiness to accept as authoritative any claim made by the sciences, and to dismiss every kind of criticism of science or its practitioners as anti-scientific prejudice” (Haack 2007, 17-18), are problematic because they beg the question against the scientistic stance (Mizrahi 2017a, 351; 352), or otherwise err by not “show[ing] precisely what is wrong with scientism” (2017a, 352).

In my 2017 response to Mizrahi’s claim that philosophers should not use persuasive definitions of scientism, I do two things. First, I offer a counter-example to Mizrahi’s view by showing that one can give a logically valid argument for the “persuasive” description, ‘abortion is murder’, an argument that does not beg questions against those who deny the conclusion and also explains why some folks accept the conclusion. Second, I attempt to offer a non-question-begging argument for a persuasive description of scientism, one which offers an explanation—by way of its premises—of why someone may accept that definition as true.

Mizrahi offers some objections to my 2017 response on this score. First, Mizrahi objects that my sample argument for the conclusion, abortion is murder, is invalid. He next posits that one of the premises of my sample argument for the conclusion, abortion is murder, is such that “the emotionally charged term ‘innocent’ is smuggled into [it]” (2017b, 18). Finally, he gives a reason why one may think the premise, the human fetus is an innocent person, is false.

Mizrahi thinks my argument for a persuasive definition of scientism “suffers from the same problems as [my] abortion argument” (2017b, 18). More specifically, he thinks the argument is “misleading” since it treats Strong Scientism and Weak Scientism in one argument and Mizrahi does not advocate for Strong Scientism, but for Weak Scientism. In addition, he notes I assume “without argument that there is some item of knowledge . . . that is both non-scientific and better than scientific knowledge. Given that the scientism debate is precisely about whether scientific knowledge is superior to non-scientific knowledge, one cannot simply assume that non-scientific knowledge is better than scientific knowledge without begging the question” (2017b, 19).

In responding to these objections, I begin with Mizrahi’s analysis of my sample argument for the conclusion, abortion is murder. The first thing to say is that Mizrahi criticizes an argument different from the one I give in my 2017 response. The sample argument I offer in 2017 is as follows:

14. Abortion is the direct killing of a human fetus.
15. The human fetus is an innocent person.
16. Therefore, abortion is the direct killing of an innocent person [from 14 and 15].
17. The direct killing of an innocent person is murder.
18. Therefore, abortion is murder [from 16 and 17].

For some reason, Mizrahi renders premise 14 as

14a. Abortion is the direct killing of a human being (2017b, 17).

Mizrahi then accuses me of offering an invalid argument. Now, I agree that an argument the conclusion of which is proposition 16 and the premises of which are 14a and 15 is a logically invalid argument. But my argument has 16 as its conclusion and 14 and 15 as its premises, and that argument is logically valid.

As for Mizrahi’s next objection to my sample argument for the conclusion, abortion is murder, just because a person S finds a premise “emotionally charged” does not mean a person S1 can’t properly use that premise in an argument; that is to say, just because some person S doesn’t like to consider whether a premise is true, or doesn’t like to think about the implications of a premise’s being true, it does not follow that the use of such a premise is somehow dialectically improper.

If it were the case that emotionally laden or emotionally charged premises are off-limits, then just about all arguments in applied ethics (about topics such as the morality of the death penalty, eating meat, factory farming, gun-control, etc.) would be problematic since such arguments regularly employ premises that advocates and opponents alike will find emotionally laden or emotionally charged. The claim that a premise is dialectically improper because it is emotionally laden or emotionally charged is a non-starter.

Perhaps Mizrahi would counter by saying premise 15 is itself a persuasive definition or description, and so to use it as a premise in an argument that is supposed to be a counter-example to the view that the use of persuasive definitions is question-begging is itself question-begging. In that case, one may add the following premises to my sample argument, yielding a non-question-begging argument that explains why someone may think abortion is murder:

15a. If a human person has not committed any crimes and is not intentionally attacking a human person, then that human person is an innocent person [assumption].

15b. A human being is a human person [assumption].

15c. A human fetus is a human being [assumption].

15d. Therefore, a human fetus is a human person [from 15b and 15c].

15e. Therefore, if a human fetus has not committed any crimes and is not intentionally attacking a human person, then a human fetus is an innocent person [from 15a and 15d].

15f. A human fetus has not committed any crimes and is not intentionally attacking a human person [assumption].

15g. Therefore, a human fetus is an innocent person [from 15e and 15f, MP].

Now, it may be that Mizrahi will offer reasons for rejecting some of the premises in the argument above, just as he offers a reason in 2017a for thinking 15 is false in the argument consisting of propositions 14-18. But all that would be beside the point. For the goal was not to produce a sample argument whose conclusion was a persuasive definition or description that any philosopher would think is sound—good luck with that project!—but rather to produce a logically valid argument for a persuasive definition of a term that both (a) does not beg any questions against those who reject the conclusion and (b) provides reasons for thinking the conclusion is true. But both the argument consisting of propositions 14-18 and the argument consisting of propositions 15a-15g do just that. Therefore, these arguments constitute good counter-examples to Mizrahi’s claim that persuasive definitions are always dialectically pernicious.

Turning to my argument in defense of a persuasive definition of scientism, I grant that my attempt in 2017 to offer one argument in defense of a persuasive definition of scientism that makes reference both to Strong Scientism and Weak Scientism is misleading. I therefore offer here an argument for a persuasive definition of Weak Scientism. Also, rather than using variables in my sample argument, which I thought sufficient in my 2017 response (for the simple reason that I thought a sample schema for a non-question-begging argument in defense of a persuasive definition of scientism is what was called for), I here offer a possible example of a piece of philosophical knowledge that is better than scientific knowledge. In my view, the following logically valid argument both offers an explanation for accepting its conclusion and does not beg any questions against those who reject its conclusion:

  28. Weak Scientism is the view that, of the various kinds of knowledge, scientific knowledge is the best [assumption].
  29. If scientific knowledge is the best kind of knowledge, then scientific knowledge is better than all forms of non-scientific knowledge [self-evident].
  30. Weak Scientism implies scientific knowledge is better than all forms of non-scientific knowledge [from 28 and 29].
  31. If position P1 implies that x is better than all forms of non-x, then P1 implies x is more valuable than all forms of non-x [assumption].[1]
  32. Therefore, Weak Scientism implies scientific knowledge is more valuable than all forms of non-scientific knowledge [from 30 and 31].
  33. If position P1 implies that x is more valuable than all forms of non-x, but x is not more valuable than all forms of non-x, then P1 is a view that has its advocates putting too high a value on x [assumption].
  34. Therefore, if Weak Scientism implies that scientific knowledge is more valuable than all forms of non-scientific knowledge and scientific knowledge is not more valuable than all forms of non-scientific knowledge, then Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge [from 33].
  35. If some philosophers qua philosophers know that (a) true friendship is a necessary condition for human flourishing and (b) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for true friendship and (c) (therefore) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for human flourishing (see, e.g., the argument in Plato’s Gorgias[2]) and knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge (see, e.g., St. Augustine’s Confessions, book five, chapters iii and iv), then there is a non-scientific form of knowledge better than scientific knowledge [self-evident].
  36. Some philosophers qua philosophers know that (a) true friendship is a necessary condition for human flourishing and (b) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for true friendship and (c) (therefore) the possession of the moral virtues or a life project aimed at developing the moral virtues is a necessary condition for human flourishing (see, e.g., the argument in Plato’s Gorgias) and knowledge concerning the necessary conditions of human flourishing is better than any sort of scientific knowledge (see, e.g., St. Augustine’s Confessions, book five, chapters iii and iv) [assumption].
  37. Therefore, there is a form of non-scientific knowledge better than scientific knowledge [from 35 and 36, MP].
  38. If knowing some form of non-x is better than knowing x, then knowing some form of non-x is more valuable than knowing x [assumption].
  39. Therefore, there is a form of non-scientific knowledge that is more valuable than scientific knowledge [from 37 and 38].
  40. Therefore, scientific knowledge is not more valuable than all forms of non-scientific knowledge [from 39].
  41. Therefore, Weak Scientism is a view that has its advocates putting too high a value on scientific knowledge [from 34, 32, and 40, MP].

In my view, the argument above both offers an explanation for accepting its conclusion and does not beg any questions against those who reject the conclusion. Someone may think one of the premises is false, e.g., 36. But that is beside the point at issue here. For Mizrahi claims the use of persuasive definitions always involves begging the question or a failure to support the persuasive definition with reasons.

But the argument above does not beg the question; someone may think Weak Scientism is true, become acquainted with the claim in premise 36, and then, realizing the error of his ways by way of the argument above, reject Weak Scientism. The argument above also provides a set of reasons for the conclusion, which is a persuasive description of Weak Scientism. It therefore constitutes a good counter-example to Mizrahi’s claim that the use of a persuasive definition of scientism is always problematic.

Contact details: chrisb@utm.edu

References

Aquinas, Saint Thomas. Summa Theologiae. Translated by the Fathers of the English Dominican Province. Allen, TX: Christian Classics, 1981.

Aquinas, Saint Thomas. Summa Contra Gentiles. Book One. Trans. Anton C. Pegis. South Bend, IN: University of Notre Dame Press, 1991.

Aristotle. Posterior Analytics. Trans. G.R.G. Mure. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. On the Parts of Animals. Trans. William Ogle. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. Nicomachean Ethics. Trans. W.D. Ross. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Augustine, Saint. Confessions. Trans. Frank Sheed. 1942; reprint, Indianapolis: Hackett Publishing, 2006.

Brown, Christopher. “Some Logical Problems for Scientism.” Proceedings of the American Catholic Philosophical Association 85 (2011): 189-200.

Brown, Christopher. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 42-54.

Bourget, David, and David J. Chalmers. “What do philosophers believe?” Philosophical Studies 170, no. 3 (2014): 465-500.

Chesterton, G.K. Orthodoxy. 1908; reprint, San Francisco: Ignatius Press, 1995.

Feldman, Richard. Epistemology. Upper Saddle River, NJ: Prentice-Hall, 2003.

Feser, Edward. The Last Superstition: A Refutation of the New Atheism. South Bend: St. Augustine’s Press, 2008.

Feser, Edward. “Blinded by Scientism.” Public Discourse. March 9, 2010a. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1174/.

Feser, Edward. “Recovering Sight after Scientism.” Public Discourse. March 12, 2010b. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1184/.

Feser, Edward. Scholastic Metaphysics: A Contemporary Introduction. editiones scholasticae, 2014.

Haack, Susan. Defending Science—Within Reason: Between Scientism and Cynicism. Amherst, NY: Prometheus Books, 2007.

Haack, Susan. “The Real Question: Can Philosophy Be Saved?” Free Inquiry (October/November 2017): 40-43.

MacIntyre, Alasdair. God, Philosophy, and Universities. Lanham: Rowman & Littlefield, 2009.

Mizrahi, Moti. “What’s So Bad About Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, Moti. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Oxford English Dictionary Online, s.v. “scientism,” accessed January 10, 2018, http://www.oed.com/view/Entry/172696?redirectedFrom=scientism.

Papineau, David. “Is Philosophy Simply Harder than Science?” The Times Literary Supplement On-line. June 1, 2017. Accessed July 11, 2017. https://goo.gl/JiSci7.

Pieper, Josef. In Defense of Philosophy. Trans. Lothar Krauth. 1966; reprint, San Francisco: Ignatius Press, 1992.

Plato. Phaedo. In Five Dialogues. Trans. Grube and Cooper. Indianapolis: Hackett Publishing, 2002.

Plato. Gorgias. Trans. Donald J. Zeyl. Indianapolis: Hackett Publishing, 1987.

Plato. Republic. Trans. C.D.C. Reeve. Indianapolis: Hackett Publishing, 2004.

Postman, Neil. Technopoly: the Surrender of Culture to Technology. New York: Vintage Books, 1993.

Robinson, Daniel N. “Science, Scientism, and Explanation.” In Scientism: the New Orthodoxy. Williams and Robinson, eds. London: Bloomsbury Academic, 2015, 23-40.

Rosenberg, Alex. The Atheist’s Guide to Reality. New York: W. W. Norton and Co., 2011.

Sorrell, Tom. Scientism: Philosophy and the Infatuation with Science. First edition. London: Routledge, 1994.

Sorell, Tom. Scientism: Philosophy and the Infatuation with Science. Kindle edition. London: Routledge, 2013.

Van Inwagen, Peter. Metaphysics. 4th edition. Boulder, CO: Westview Press, 2015.

Williams, Richard. N. and Daniel N. Robinson, eds. Scientism: the New Orthodoxy. London: Bloomsbury Academic, 2015.

[1] The proposition S’s preferring x to y is logically distinct from the proposition, x’s being more valuable than y. For S may prefer x to y even though y is, in fact, more valuable than x.

[2] See Gorgias 507a-508a.

Author Information: Christopher M. Brown, University of Tennessee, Martin, chrisb@utm.edu

Brown, Christopher M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 1-35.

The complete pdf of the article gives specific page references. Due to the length of Brown’s article, we will be posting it in three parts. The first installment can be found here. Shortlink for part two: https://wp.me/p1Bfg0-3TJ

Please refer to:

Image by Bruce Irschick via Flickr / Creative Commons


Problems for Mizrahi’s Argument, Given the Number and Kind of Philosophical Assumptions at Play in the Argument

In his 2017b response, Mizrahi both makes some general criticisms of my strategy in criticizing Mizrahi’s Argument and offers particular objections to particular arguments I make in my 2017 essay with respect to Mizrahi’s Argument. In response, then, I first say a few things about Mizrahi’s general criticisms. Second, I respond to Mizrahi’s particular objections.

Mizrahi’s first general criticism of my approach is that I simply criticize Mizrahi’s Argument by proposing certain “what ifs?” (2017b, 9). His objection seems to be the following:

  25. “The question of whether scientific knowledge is superior to non-scientific [academic] knowledge is a question that can be answered empirically” (2017b, 10).
  26. “Therefore, in order to pose a serious challenge to my defense of Weak Scientism, Brown must come up with more than mere ‘what ifs’” (2017b, 10).

The argument is clearly an enthymeme. Mizrahi presumably is presupposing:

  27. If the question of whether scientific knowledge is superior to non-scientific [academic] knowledge is a question that one can answer empirically, then, in order to pose a serious challenge to my defense of Weak Scientism, Brown must come up with more than mere “what ifs” [assumption].

But why accept 27? Presumably because we are supposed to privilege empirical (I read Mizrahi’s ‘empirical’ here as ‘experimental/scientific’) evidence over non-empirical evidence. But that’s just assuming the sort of thing that is at issue when debating the truth or falsity of scientism. So Mizrahi’s response here begs the question against those who raise critical questions about Mizrahi’s Argument and Weak Scientism.

In addition, premise 25 is one of the propositions up for debate here. Mizrahi thinks Mizrahi’s Argument is a scientific argument. I disagree, for reasons stated in my 2017 article (more on this below).

A second general criticism Mizrahi raises for my critique of Mizrahi’s Argument concerns my habit of speaking about “controversial philosophical assumptions” at play in Mizrahi’s Argument. First, Mizrahi does not like my use of the word ‘assumptions’ in reference to the (implied) premises of Mizrahi’s Argument (2017b, 12; 14) since, according to Mizrahi, “an assumption is a statement that is taken to be true without justification or support.”

I just have to confess that I don’t think ‘assumption’ necessarily has this connotation. I certainly did not intend to communicate in every case I use the word ‘assumption’ in my 2017 article that Mizrahi had not supplied any justification or support for such propositions (although I do think it is the case that Mizrahi does not offer justification for some of the [implied] premises in Mizrahi’s Argument). For better or for worse (probably worse) I was thinking of ‘assumption’ as a synonym of ‘stipulation’ or ‘presupposition’ or ‘premise.’ But I will try to be more precise in what follows.

Second, Mizrahi takes me to task for calling the (implied) premises of Mizrahi’s Argument controversial, since I don’t say why they are controversial and, as Mizrahi states with respect to his 2017a, “the way I have characterized knowledge is exactly the way others in the scientism debate understand knowledge (see, e.g., Peels 2016, 2462), which means that my characterization of knowledge is not controversial as far as the scientism debate in philosophy is concerned” (2017b, 13; see also 14-15).  In addition, by calling a premise ‘controversial’, Mizrahi takes me to mean that I am saying it is doubtful (2017b, 14-15), which, if true, would raise some puzzles for my own responses to Mizrahi 2017a.

In response, my comment in 2017 that the (implied) premises in Mizrahi’s Argument are controversial was neither intended as commentary on a narrow philosophical discussion—what Mizrahi calls “the scientism debate in philosophy” (2017b, 13; emphasis mine)—nor meant simply to point out that it is possible to doubt those premises (Mizrahi 2017b, 14-15). Rather, what I intended to say (and should have made clearer) is that the (implied) premises of Mizrahi’s Argument are controversial when we contrast them with the views of a number of different philosophical schools of thought.

That is to say, I meant to suggest that a healthy minority of contemporary philosophers will reject those premises, and have reasons for rejecting them, where that healthy minority consists (just to name a few schools of thought that have contemporary adherents) of some Platonists, Aristotelians, neo-Aristotelians, Augustinians, Thomists, Scotists, Suarezians, Ockhamists, Cartesians, Leibnizians, Kantians, neo-Kantians of various sorts, Phenomenologists, Existentialists, Whiteheadians, as well as quite a few non-naturalist analytic philosophers.

Indeed, if we practice “the democracy of the dead,” as G. K. Chesterton suggested is only fair,[1] the majority of philosophers in the past would reject the implied premises in Mizrahi’s Argument; or, if that’s a bit anachronistic, they would reject premises at least analogous to those in Mizrahi’s Argument insofar as they would not reduce philosophical knowledge to what professional philosophers make public; think of, to take just one example, Plato’s criticism of the professional philosophers of his day as false philosophers in the Phaedo[2] and the Republic.[3]

Of course, there are non-philosophers too, including practicing natural scientists (past and present) who (would) also reject Weak Scientism and many of the (implied) premises in Mizrahi’s Argument. One gets the impression from both 2017a and 2017b that Mizrahi does not think Mizrahi’s Argument is at all controversial. It was for these reasons and in the sense specified here that I emphasized in my 2017 response that a number of (implied) premises in Mizrahi’s Argument are, in fact, very controversial.

In addition, Mizrahi himself cites contemporary philosophers engaged in “the scientism debate in philosophy” who reject Mizrahi’s reduction of philosophy and philosophical knowledge to what philosophers publish (see, e.g., Sorrell 1994 and Haack 2017). There are other professional philosophers engaged in debates about the plausibility of scientism who reject quite a few of the premises in Mizrahi’s Argument (see, e.g., Brown 2011, the authors of some of the papers in Williams & Robinson 2015, and the work of the analytic philosopher Edward Feser, who has offered criticisms of scientism in 2008, 83-85; 2010a; 2010b; and 2014, 9-24).

Third, Mizrahi thinks I should not call his assumptions philosophical unless I have first defined ‘philosophy’ (2017b, 13; 14), particularly since I claim that his argument is a philosophical and not a scientific argument (2017b, 9; 15). He states: “what Brown labels as ‘philosophical’ is not really philosophical, or at least he is not in a position to claim that it is philosophical, since he does not tell us what makes something philosophical (other than being work produced by professional philosophers, which is a characterization of ‘philosophical’ that he rejects)” (2017b, 14).

I do not define the nature of philosophy in my 2017 response to Mizrahi’s 2017a. I supposed, perhaps wrongly, that such an endeavor was altogether outside the scope of the project of offering some critical comments on a philosophy paper. Of course, as Mizrahi no doubt knows, even the greatest of the Greek philosophers, e.g., Socrates, Plato, and Aristotle, think about philosophy in very different ways: for Socrates, philosophy is a way of life which consists of a search for wisdom; for Plato, philosophy is not only a search for wisdom but also involves the possession of wisdom, if only by way of the recollection of an other-worldly (or pre-worldly) set of experiences; and Aristotle thinks philosophy is said in many ways (hence metaphysics is ‘first philosophy’), although, pace Plato, successful philosophy needs to make sense of what we know by common sense.

St. Augustine has yet a different way of thinking about the nature of philosophy (philosophy is the search for wisdom, but such a search need not be limited to a mere human investigation, as with the Greeks; it may be that wisdom can be found in a rational reception of a divine revelation). By the time we get to the twentieth century there is also the great divide between analytic and continental approaches to philosophy. As Mizrahi points out, philosophers today disagree with one another about the nature of philosophy (2017a, 356).

So I could give an account of how I understand the philosophical enterprise, but that account itself would be controversial, and beside the point.[4] Perhaps, if only for dialectical purposes, we can give the following as a sufficient condition for pieces of writing and discourse that count as philosophy (N.B. philosophy, not good philosophy):

(P) Those articles published in philosophical journals and what academics with a Ph.D. in philosophy teach in courses at public universities with titles such as Introduction to Philosophy, Metaphysics, Epistemology, Normative Ethics, and Philosophy of Science.

Whereas Mizrahi takes the reduction of philosophy to what professional philosophers publish in academic journals as a premise in Mizrahi’s Argument, I don’t take P to be a necessary condition for something’s counting as philosophy. For philosophical discourses are also recorded, for example, in old books, some of which are not typically taught in philosophy courses today, and (some very good) philosophy, productive of philosophical knowledge, also occurs in conversations between persons who can directly see and hear one another. Indeed, some persons who do not have a Ph.D. in philosophy do (good) philosophy too.

A First Controversial Philosophical Premise in Mizrahi’s Argument

Having remarked on Mizrahi’s general criticisms of my objections to Mizrahi’s Argument, I now turn to addressing Mizrahi’s objections to the particular objections or points I make in my critique of Mizrahi’s Argument in 2017. I address these objections not in the order Mizrahi raises them in 2017b, but as these objections track with the objections I raise in my 2017 article, and in the order I raise them (Mizrahi does not comment upon what I call ‘the Second Assumption’ at play in Mizrahi’s Argument in his 2017b, and so I say nothing else about it here).

Recall that the general schema for Mizrahi’s Argument is the following:

  7. One kind of knowledge is better than another quantitatively or qualitatively [assumption].
    8. Scientific knowledge is quantitatively better than non-scientific knowledge (including philosophical knowledge) in terms of the number of journal articles published and the number of journal articles cited.
    9. Scientific knowledge is qualitatively better than non-scientific knowledge (including philosophical knowledge) insofar as scientific theories are more successful than non-scientific theories (including philosophical theories) where the success of a theory is understood in terms of its explanatory, instrumental, and predictive success.
    10. Therefore, scientific knowledge is better than non-scientific forms of knowledge (including philosophical knowledge) both quantitatively and qualitatively [from 8 and 9].
    11. Therefore, scientific knowledge is better than non-scientific forms of knowledge (including philosophical knowledge) [from 7 and 10].

A first controversial philosophical premise at play in Mizrahi’s Argument is a premise Mizrahi uses to defend premise 8 of Mizrahi’s Argument. The premise states that we should think about both knowledge and philosophy operationally. As I point out in 2017, Mizrahi needs to premise such accounts of knowledge and philosophy, since otherwise “it won’t be possible for him to measure the quantity of knowledge in scientific and non-scientific disciplines, something Mizrahi needs to do in order to make his argument for 8” (2017, 44).

Mizrahi has three criticisms of my comment here. First, Mizrahi claims to have provided sufficient justification for operationalizing the nature of philosophy and (philosophical) knowledge by noting the controversy surrounding the nature of philosophy. In light of such controversy, citing Lauer, Mizrahi says: “Arguably, as far as answering the question ‘What makes X philosophical?’ goes, [operationalizing philosophy as what professional philosophers do] may be the best we can do (Lauer 1989, 16)” (Mizrahi 2017a, 356; Mizrahi 2017b, 12). So, contrary to what I say (or imply), Mizrahi does not simply assume we should operationalize the nature of philosophy or knowledge. Second, Mizrahi thinks it problematic for me to challenge his premise reducing philosophy to what professional philosophers do without offering my own account of the nature of philosophy (2017b, 13). Third, Mizrahi thinks it strange that a philosopher (presumably, like me) who wants to defend the usefulness of philosophy should criticize his pragmatic account of the nature of philosophy.

As to Mizrahi’s first point, he offers justification for operationalizing the nature of philosophy and knowledge only in the sense of “here’s a reason why I am proceeding in the way that I am.” Indeed, as I point out in 2017, unless he operationalizes the nature of philosophy and knowledge, “it won’t be possible for him to measure the quantity of knowledge in scientific and non-scientific disciplines, something Mizrahi needs to do in order to make his argument for 8” (2017, 44).

Of course, Mizrahi is free to stipulate an understanding of philosophy or knowledge that can be measured empirically (it’s a free country). But insofar as one bemoans the current state of the research university as one obsessed with outcomes, and with measuring outcomes empirically, Mizrahi will forgive those who think that stipulating an operational understanding of the nature of philosophy and knowledge is not only shallow, since philosophy and knowledge can’t fit into the narrow parameters of another empirical study, but also question-begging against those who think that, as great as experimental science and its methods are, experimental science does not constitute the only disciplined approach to searching for knowledge and understanding.

Mizrahi even goes so far as to say that (his way of) operationalizing the nature of knowledge and philosophy is the least controversial way of doing so (2017b, 13). It’s hard to understand why he thinks that is the case. Just citing the fact that philosophers disagree with one another about the nature of philosophy, citing one author who thinks this is the best we can do, and then adding yet another account of what philosophy is to the already large list of different accounts of what philosophy is—for, after all, to say philosophy is what philosophers do is itself to do some philosophy, i.e., metaphilosophy—does not warrant thinking that (a way of) operationalizing philosophy and knowledge is the least controversial way of thinking about philosophy and knowledge.

In addition, many philosophers think it is false that philosophy and philosophical knowledge are reducible to what professional philosophers do (it may be good to recall that Socrates, Plato, Augustine, Descartes, Locke, and Hume were not professional philosophers). Also, some philosophers think that not all professional philosophers are true philosophers (again, for precedent, see the arguments in Plato’s Phaedo and Republic). Still other philosophers will insist on a definition of knowledge such as ‘knowledge is warranted true belief,’ and also think that much of what is argued in philosophy journals—and perhaps science journals too—does not meet the threshold of being warranted, and so does not count as knowledge.

Perhaps Mizrahi means that (his way of) operationalizing philosophy and knowledge are the least controversial ways of thinking about philosophy and knowledge among those engaged in “the scientism debate in philosophy” (2017b, 13). That may be so. In my original response—and in this response too—I’m trying to suggest that there are people interested in evaluating scientism who do not share the scientistic account of philosophy and knowledge of those engaged in “the scientism debate in philosophy.”

Having said something above about why I did not describe the nature of philosophy in my 2017 response, I turn to Mizrahi’s puzzlement at my raising the possibility that we should not operationalize the nature of philosophy and knowledge, given my interest in showing that philosophy is useful. After all, if it may be the case that a published journal article in philosophy does not constitute philosophy or an item of philosophical knowledge, what hope can there be for responding to those academics who think philosophy is dead or useless?

Mizrahi apparently puts me in the class of folk who want to defend philosophy as useful. Mizrahi also seems to assume the only way to show philosophy is useful is by defining philosophy operationally (2017b, 13). Therefore, Mizrahi seems to reason, it doesn’t make sense for me to be skeptical about operationalizing the nature of philosophy and knowledge.

Is philosophy useful? That depends upon what we mean by ‘useful.’ Philosophy won’t help us cure cancer or develop the next form of modern technology (not directly, at any rate).[5] So it is not useful in the way that physics, chemistry, biology, or mathematics are useful. It is presumably in that technological sense of ‘useful’ that Martin Heidegger says, “It is entirely proper and perfectly as it should be: philosophy is of no use” (Einführung in die Metaphysik; qtd. in Pieper 1992, 41).

But by ‘useful,’ we may mean, “able to help a person live a better life.” In my view, philosophy can be very useful in that sense. A philosopher can help persons—sometimes even herself—live a better life by writing journal articles (that is, there certainly are some excellent philosophy journal articles, and some—often far too few—read and profit from these). But more often than not, since most people who may profit from exposure to philosophy or a philosopher don’t read academic journals (and wouldn’t profit much from doing so, if they did), people’s lives are improved in the relevant sense by philosophy or philosophers insofar as they encounter a good philosopher in the classroom and in everyday conversations or by reading classical philosophical works from the ancient, medieval, modern, and contemporary periods.

By operationalizing the nature of philosophy and knowledge, Mizrahi’s Argument fails to account for those occasions, times, and places where most persons exposed to philosophy can—and sometimes do—profit from the experience by gaining knowledge they did not possess before about what makes for a flourishing human life.

A Third Controversial Philosophical Premise in Mizrahi’s Argument

In my 2017 article, I mention a third controversial philosophical premise at play in Mizrahi’s Argument: the view that the knowledge of each academic discipline—in terms of both its output and impact—can be quantitatively measured. Mizrahi objects that I do not “tell us what makes this alleged ‘assumption’ philosophical” (2017b, 13). He also states that I do not provide evidence that it is controversial. Finally, Mizrahi claims:

that we can measure the research output of academic fields is not “contentious” [Brown 2017, 45] at all. This so-called “assumption” is accepted by many researchers across disciplines, including philosophy [see, e.g., Kreuzman 2001 and Morrow & Sula 2011], and it has led to fruitful work in library and information science, bibliometrics, scientometrics, data science [Andres 2009], and philosophy [see, e.g., Wray & Bornmann 2015 and Ashton & Mizrahi 2017] (Mizrahi 2017b, 13).

As for my claim that the premise one can quantify over knowledge produced in academic disciplines is a philosophical premise, I assumed in my 2017 essay that Mizrahi and I were working from common ground here, since Mizrahi states, “it might be objected that the inductive generalizations outlined above [in defense of premise 8 of Mizrahi’s Argument] are not scientific arguments that produce scientific knowledge because they ultimately rest on philosophical assumptions. One philosophical assumption that they ultimately rest on, for example, is the assumption that academic knowledge produced by academic disciplines can be measured” (2017a, 356; emphasis mine).

I supposed Mizrahi to agree with the highlighted portion of the citation above, but it may be that Mizrahi was simply writing in the voice of an objector to his own view (of course, even then, we often agree with some of the premises in an objector’s argument). I also (wrongly) took it to be obvious that the premise in question is a philosophical premise. What else would it be? A piece of common sense? A statement confirmed by experimental science?[6] Something divinely revealed from heaven?

Mizrahi also claims I don’t provide evidence that the claim that we can quantify over how much knowledge is produced in the academy is controversial. What sort of evidence is Mizrahi looking for? That some philosophical paper says so? Surely Mizrahi does not think we can settle a scholarly—let alone a philosophical—dispute by simply making an appeal to an authority. Does Mizrahi think we need sociological evidence to settle our dispute? Is that the best way to provide evidence for a claim? If the answer to either of these last two questions is ‘yes’, then Mizrahi’s Argument for Weak Scientism is begging the question at issue.

But, in any case, I do offer philosophical evidence that the philosophical claim that academic knowledge can be quantitatively measured is controversial in my 2017 article:

in order to measure the amount of scientific and non-scientific, academic knowledge—as Mizrahi needs to do in order to make his argument for premise 8 [of Mizrahi’s Argument]—he needs to define knowledge teleologically—as the goal or aim of an academic discipline—or operationally—as what academics produce. But thinking about the nature of (academic) knowledge in that pragmatic way is philosophically controversial. Therefore, thinking we can measure quantitatively the amount of knowledge across academic disciplines is itself philosophically controversial, since the latter assumption only makes sense on a pragmatic account of knowledge, which is itself a controversial philosophical assumption (2017, 45).

As I noted above, by ‘controversial’ here, I mean there is (at least) a large minority of philosophers, whether we simply count professional philosophers alive today, or also include dead philosophers, who (would) reject the claims that we can collectively quantify over what counts as knowledge, that knowledge is teleological, that only academics produce philosophical knowledge, and that philosophical knowledge is what philosophers publish in academic journals.

Finally, note that Mizrahi’s evidence that (a), the reduction of what academics know to what can be quantitatively measured, is not controversial is that (b) there are academics from across the disciplines, including philosophy, who accept the premise that we can quantify over knowledge produced by academics, and that (c) this premise has led to fruitful work in a number of disciplines, including information science.

But (a)’s itself being controversial (i.e., that a large minority reject it), even false, is consistent with the truth of both (b) and (c). By analogy, it is no doubt also true that (d) academics from across the disciplines, even some philosophers, think quantitative assessment of college teaching is a good idea and (e) much data has been collected from quantitative assessments of college teaching which will be very useful for those seeking doctorates in education. But surely Mizrahi knows that (d) is controversial among academics, even if (e) is true. Mizrahi’s argument that (a) is true on the basis of (b) and (c) is a non-sequitur.

A Fourth Controversial Philosophical Premise

In my 2017 article, I claim that a fourth controversial philosophical premise is doing important work for Mizrahi’s Argument. This premise states: the quantity of knowledge of each academic discipline—in terms of both output and impact—can be accurately measured by looking at the publications of participants within that discipline. I argue that reducing the production of academic knowledge to what academics publish shows a decided bias in favor of the philosophy of education dominating the contemporary research university, in contrast to the traditional liberal arts model that places a high value on reading and teaching classic texts in philosophy, mathematics, history (including the history of science), and literature. Showing such favor is significant for two reasons.

First, it is question-begging insofar as the philosophy of education in modern research universities, prizing as it does the sort of knowledge that the methods of the experimental sciences are specially designed to produce, i.e., new knowledge and discoveries, is itself rooted in a kind of cultural scientism, one that is supported by big business, university administrators, many journalists, most politicians, and, of course, the research scientists and academics complicit in this scientistic way of thinking about the university.[7] Second, since academics produce knowledge in ways other than publishing, e.g., by way of reading, teaching, mentoring, giving lectures, and engaging others in conversation, the premise that the quantity of knowledge of each academic discipline can be accurately measured by the output and impact of publications does not “present us with a representative sample of knowledge produced within all academic disciplines” (Brown 2017, 46). That means that Mizrahi’s inference from the premises that scientists produce more publications than non-scientist academics and that scientists’ publications are cited more often than those of non-scientist academics to the conclusion that scientists produce more knowledge than non-scientist academics is logically invalid.

Mizrahi responds to my comments in this context by stating that I am confusing “passing on knowledge” or “sharing knowledge” with “producing knowledge” (2017b, 14). This distinction is significant, thinks Mizrahi, since “as far as the scientism debate is concerned, and the charge that philosophy is useless, the question is whether the methodologies of the sciences are superior to those of other fields in terms of producing knowledge, not in terms of sharing knowledge” (2017b, 14). Finally, Mizrahi also notes that those in the humanities do not corner the market on activities such as teaching, for scientists pass on knowledge by way of teaching too.

Mizrahi seems to assume that sharing knowledge is not a form of producing knowledge. But I would have thought that, if a person S does not know p at time t and S comes to know p at t+1, then that counts as an instance of the production of knowledge, even if some person other than S knows p at or before t, or S comes to know p by being taught by someone who already knows p. But say, if only for the sake of argument, that Mizrahi is correct to think that sharing knowledge does not entail producing knowledge. The fact that Mizrahi either does not count passing on or sharing knowledge as a kind of producing of knowledge or does not measure the sharing or passing on of knowledge in Mizrahi’s Argument would seem to mean that Mizrahi’s Argument simply measures the production of new knowledge or discoveries, where new knowledge or a discovery can be defined as follows:

(N) New knowledge or discovery =df some human person or persons come to know p at time t, where no human person knew p before t.

Mizrahi’s focus on knowledge as new knowledge or discovery in Mizrahi’s Argument reinforces a real limitation of (that argument for) Weak Scientism insofar as it equates knowledge with new knowledge. But it also confirms what I said in my 2017 article: Mizrahi’s Argument is question-begging since it has as a premise that knowledge is to be understood as equivalent to the sort of knowledge which the methods of the experimental sciences are specially designed to produce, i.e., new knowledge and discoveries.

Surely philosophers sometimes make new discoveries, or collectively (believe they) make progress, but philosophy (in the view of some philosophers) is more about individual intellectual progress than collective intellectual progress (of course, we may think it also has the power to bring about social progress, but some of us have our real doubts about that). As Josef Pieper says:

‘Progress’ in the philosophical realm is assuredly a problematic category—insofar as it means an ever growing collective accumulation of knowledge, growing in the same measure as time passes. There exists, under this aspect, an analogy to poetry. Has Goethe ‘progressed’ farther than Homer?—one cannot ask such a question. Philosophical progress undeniably occurs, yet not so much in the succession of generations as rather in the personal and dynamic existence of the philosopher himself (1992, 92).

To the charge that scientists teach students too, I, of course, concur. But if passing on knowledge by way of teaching, mentoring, giving lectures, and personal conversations count as ways of producing knowledge, then Mizrahi’s defense of premise 8 of Mizrahi’s Argument does not, as I say in my 2017 article, “present us with a representative sample of knowledge produced within all academic disciplines” (2017, 46). And if passing on knowledge by way of teaching or reading does not count as a way of producing knowledge, then, given what many of us take to be the real intellectual significance of passing on knowledge through teaching and reading, the position Mizrahi is actually defending in 2017a and 2017b is even weaker:

(Very, Very, Very, Very Weak Scientism): When it comes to the knowledge that is produced by academic journals, i.e., the new knowledge or discoveries (in the sense of N) published in academic journals, knowledge that comes from scientific academic journals is the best.

A Fifth Controversial Philosophical Premise

Although Mizrahi says nothing about it in his 2017b response to my 2017 essay, I think it is important to emphasize again that, in arguing that scientific knowledge is better than non-scientific knowledge in terms of quantity of knowledge, Mizrahi makes use of a fifth controversial philosophical premise in Mizrahi’s Argument: the quantity of knowledge—in terms of output and impact—of each academic discipline can be successfully measured by looking simply at the journal articles published (output) and cited (impact) within that discipline. For to count only journal articles when quantifying over the impact of the knowledge of a discipline is, again, to adopt a scientific, discovery-oriented approach to thinking about the nature of knowledge.

For how often do the works of Plato, Aristotle, Virgil, St. Augustine, St. Thomas Aquinas, Dante, Shakespeare, Descartes, Hume, Kant, Hegel, Marx, and Dostoevsky, just for starters, continue to have research impact on the work of historians, social scientists, theologians, and literature professors, not to mention philosophers? So Mizrahi’s Argument either begs the question against non-scientist academics for another reason—it neglects to count citations of great thinkers from the past—or, by focusing only on the citation of journal articles, gives us yet another reason to think the sample Mizrahi uses to make his inductive generalization in defense of premise 8 of Mizrahi’s Argument is simply not a representative one.

The Sixth and Ninth Controversial Philosophical Premises

Here I address the following controversial philosophical premises, both of which function as key background assumptions in Mizrahi’s Argument:

(K1) For any two pieces of knowledge, p and q, where p and q are produced by an academic discipline or disciplines, p is to be treated as qualitatively equal to q where measuring the quantity of knowledge produced within academic disciplines is concerned.

(K2) For any two pieces of knowledge, p and q, where p and q are produced by an academic discipline or disciplines, p is to be treated as qualitatively equal to q in the sense of the nobility or importance or perfection of p and q where measuring the quality of p and q is concerned, where quality in this latter sense measures the extent to which the theories employed in the academic discipline or disciplines productive of p and q enjoy some degree of explanatory, instrumental, and predictive success.

Premise 8 of Mizrahi’s Argument says that scientific knowledge is quantitatively better than non-scientific academic knowledge because scientists publish more journal articles than non-scientists and the journal articles published by scientists are cited more often—and so have a greater “research impact”—than do the journal articles published by non-scientists (2017a, 355-58). In my 2017 essay I note that, in concluding to premise 8 on the basis of his inductive generalizations, Mizrahi is assuming (something such as) K1.

Furthermore, we may reasonably think K1 is false, since the production of some sorts of non-scientific knowledge may be harder than the production of scientific knowledge (and if a piece of work W is harder to produce than a piece of work W1, then, all other things being equal, W is qualitatively better than W1). For example, I mentioned the recent essay by philosopher David Papineau, “Is Philosophy Simply Harder than Science?” (2017). I also offered, as a reason for questioning whether K1 is correct, Aristotle’s famous epistemological-axiological thesis that a little knowledge about the noblest things is more desirable than a lot of knowledge about less noble things.[8]

Premise 9 of Mizrahi’s Argument says that scientific knowledge is qualitatively better than non-scientific academic knowledge insofar as scientific theories are more successful than non-scientific theories (including philosophical theories) where the success of a theory is understood in terms of its explanatory, instrumental, and predictive success. In my 2017 response, I suggest that Mizrahi has (something such as) K2 implicitly premised as a background philosophical assumption in his argument for premise 9, and a premise such as K2 is a philosophically controversial one. At the very least, Mizrahi’s implicitly premising K2 in Mizrahi’s Argument therefore limits the audience for which Mizrahi’s Argument will be at all rhetorically convincing. For as I stated in my 2017 response:

Assume . . .  the following Aristotelian epistemological axiom: less certain knowledge (or less explanatorily successful knowledge or less instrumentally successful knowledge or less testable knowledge) about a nobler subject, e.g., God or human persons, is, all other things being equal, more valuable than more certain knowledge (or more explanatorily successful knowledge or more instrumentally successful knowledge or more testable knowledge) about a less noble subject, e.g., stars or starfish. . . . [And] consider, then, a piece of philosophical knowledge P and a piece of scientific knowledge S, where P constitutes knowledge of a nobler subject than S. If S enjoys greater explanatory power and more instrumental success and greater testability when compared to P, it won’t follow that S is qualitatively better than P (2017, 50).

Mizrahi raises a number of objections to the sections of my 2017 essay where I mention implicit premises at work in Mizrahi’s Argument such as K1 and K2. First, he objects that I don’t explain why, following Papineau, philosophy may be harder than science (2017b, 9). Second, he offers some reasons to think Papineau is wrong: “producing scientific knowledge typically takes more time, effort, money, people, and resources . . . [therefore], scientific knowledge is harder to produce than non-scientific knowledge” (2017b, 9). Third, he notes that I don’t argue for Aristotle’s epistemological-axiological thesis, let alone explain what it means for one item of knowledge to be nobler than another.

Fourth, in response to my notion that philosophy and science use different methodologies insofar as the methods of the former do not invite consensus whereas the methods of the latter do, Mizrahi notes that “many philosophers would probably disagree with that, for they see the lack of consensus, and thus progress in philosophy as a serious problem” (2017b, 10). Fifth, Mizrahi thinks there is precedent for his employing a premise such as K1 in his defense of premise 8 in Mizrahi’s Argument insofar as analytic epistemologists often use variables in talking about the nature of knowledge, e.g., propositions such as ‘if person S knows p, then p is true,’ and therefore treat all instances of knowledge as qualitatively equal (2017b, 13, n. 2).

In mentioning Papineau’s article in my 2017 essay, I offer an alternative interpretation of the data that Mizrahi employs in order to defend premise 8 of Mizrahi’s Argument, an interpretation that he should—and does not—rule out in his 2017a paper, namely, that scientists produce more knowledge than non-scientists not because scientific knowledge is better than non-scientific knowledge but rather because non-scientific knowledge (such as philosophical knowledge) is harder to produce than scientific knowledge. Indeed, Mizrahi himself feels the need to rule out this possibility in his 2017b reply to my 2017 article’s raising this very point (see 2017b, 9).

Mizrahi’s inference about the greater difficulty of scientific work compared to non-scientific academic work such as philosophy goes through only if we think about the production of philosophical knowledge in the operational manner in which Mizrahi does. According to that model of philosophical knowledge, philosophical knowledge is produced whenever someone publishes a journal article. But, traditionally, philosophical knowledge is not that easy to come by. Granted, scientific knowledge too is hard to produce. As Mizrahi well notes, it takes lots of “time, effort, money, people, and resources” to produce scientific knowledge.

But we live in a time that holds science in high regard (some think too high a regard), not just because of the success of science in producing new knowledge, but because science constantly provides us with obvious material benefits and new forms of technology and entertainment.[9] So it stands to reason that more time, effort, money, and resources are poured into scientific endeavors, and that more young people are attracted to careers in science than in other academic disciplines. When one adds to all of this that scientists within their fields enjoy a great consensus regarding their methods and aims, which invites greater cooperation among researchers within those fields, it is not surprising that scientists produce more knowledge than those in non-scientific academic disciplines.

But all of that is compatible with philosophy being harder than science. For, as we’ve seen, there is very little collective consensus among philosophers about the nature of philosophy and its appropriate methods. Indeed, many academics, even some philosophers, think knowledge about philosophical topics is not possible at all. It’s not beyond the pale to suggest that skepticism about the possibility of philosophical knowledge is partly a result of the modern trend towards a scientistic account of knowledge. In addition, some philosophers think philosophical knowledge is harder to acquire than scientific knowledge, if only because of the nature of those topics and questions that are properly philosophical (see Papineau 2017 and Van Inwagen 2015, 14-15).

As for the meaning of Aristotle’s epistemological-axiological claim, I take it that Aristotle thinks p is a nobler piece of knowledge than q if, all other things being equal, the object of p is nobler than the object of q. For example, say we think (with Aristotle) that it is better to be a rational being than a non-rational being. It would follow that rational animals (such as human persons) are nobler than non-rational animals. Therefore, applying Aristotle’s epistemological-axiological claim, all other things being equal, knowing something about human persons—particularly qua embodied rational beings—is a nobler piece of knowledge than knowing something about any non-rational object.

Now, as Mizrahi points out, not all philosophers agree with Aristotle. But my original point in mentioning Aristotle’s epistemological-axiological thesis was to highlight an implicit controversial philosophical premise in the background of Mizrahi’s Argument. The Aristotelian epistemological-axiological thesis is perhaps rejected by many, but not all, contemporary philosophers. The implicit assumption that Aristotle is wrong that (knowledge of) some object is nobler than (knowledge of) another object is a philosophical assumption (just as any arguments that Aristotle is wrong will be philosophical arguments). Indeed, it may be that any reason a philosopher will give for rejecting Aristotle’s epistemological-axiological thesis will also show that they are already committed to some form of scientistic position.

I say the practice of philosophy doesn’t invite consensus, whereas one of the advantages of an experimental method is that it does. That is a clear difference between philosophy and the experimental sciences, and, since at least the time of Kant, given the advantage of a community of scholars being able to agree on most (though of course not all) first principles where some intellectual endeavor is concerned, some philosophers have suggested that philosophy does not compare favorably with the experimental sciences.

So it is no surprise that, as Mizrahi notes, “many [contemporary] philosophers . . . see the lack of consensus, and thus progress in philosophy as a serious problem” (2017b, 10). But it doesn’t follow from that sociological fact, as Mizrahi seems to suggest it does (2017b, 10), that those same philosophers disagree that philosophical methods don’t invite consensus. A philosopher could lament the fact that the methods of philosophy don’t invite consensus (in contrast to the methods of the experimental sciences) but agree that that is the sober truth about the nature of philosophy (some professional philosophers don’t like philosophy or have science envy; I’ve met a few). In addition, the fact that some philosophers disagree with the view that philosophical methods do not invite consensus shouldn’t be surprising. Philosophical questions are by nature controversial.[10]

Finally, Mizrahi defends his premising (something such as) K1 by citing the precedent of epistemologists who often treat all items of knowledge as qualitatively the same, for example, when they make claims such as ‘if S knows p, then p is true.’ But the two cases are not, in fact, parallel. For, unlike the epistemologist thinking about the nature of knowledge, Mizrahi is arguing about and comparing the value of various items of knowledge. For Mizrahi to assume K1 in an argument that tries to show scientific knowledge is better than non-scientific knowledge is to beg the question against those who reasonably think philosophy is harder than science or that the things philosophers qua philosophers know are nobler than the things scientists qua scientists know, whereas epistemologists arguably are not begging any question when they abstract from various circumstances (by employing variables) in order to determine what all instances of knowledge have in common.

The Seventh and Eighth Controversial Philosophical Premises

In his attempt to defend the thesis that scientific knowledge is qualitatively better than non-scientific knowledge, Mizrahi assumes that a theory A is qualitatively better than a theory B if A is more successful than B (2017a, 358). He thus thinks about a theory’s qualitative value in pragmatic terms. But not all philosophers think about the qualitative goodness of a theory in pragmatic terms, particularly in a philosophical context, if only because, of two theories A and B, A could be true and B false, where B is more successful than A, and a philosopher may prize truth over successful outcomes.[11] This constitutes a seventh controversial philosophical premise in Mizrahi’s Argument.

In addition, as I point out in my 2017 response, there is an eighth controversial philosophical premise in the background of Mizrahi’s Argument, namely, that a theory A is more successful than a theory B if A is more explanatorily successful than B, and more instrumentally successful than B, and more predictively successful than B. Even if we grant, for the sake of argument, that this is a helpful account of a good scientific explanation, interestingly, Mizrahi thinks these criteria for a successful scientific theory can be rightfully applied as the measure of success for a theory, simpliciter.

I argued in my 2017 response that to premise, in an argument for the conclusion that scientific knowledge is better than non-scientific knowledge, that philosophical theories have to be, for example, instrumentally successful (in the way experimental scientific theories are, namely, (a) such that they can be put to work to solve immediate material problems, e.g., finding the best way to treat a disease, or (b) such that they directly lead to technological innovations) and predictively successful is “to beg the question against non-scientific ways of knowing, ways of knowing that do not, by their very nature, employ controlled experiments and empirical tests as an aspect of their methodologies” (2017, 48).

In responding to my comments, Mizrahi makes three points. First, he notes that I criticize his account of explanation without offering my own account of explanation (2017b, 19). Second, passing over my comments that good philosophical theories need not be instrumentally successful (in the relevant sense) or predictively successful, Mizrahi argues that I can’t say, as I do, that good philosophical theories explain things but do not enjoy the good-making qualities of all good explanations. As Mizrahi states, “the good-making properties of [good] explanations include unification, coherence, simplicity, and testability. Contrary to what Brown (2017, 48) seems to think, these good-making properties apply to explanations in general, not just to scientific explanations in particular” (2017b, 19) and

Contrary to what Brown asserts without argument, then, ‘To think that a theory T is successful only if—or to the extent that—it enjoys predicative success or testability’ is not to beg the question against non-scientific ways of knowing. For, insofar as non-scientific ways of knowing employ IBE [i.e., inference to the best explanation], which Brown admits is the case as far as philosophy is concerned, then their explanations must be testable (as well as unified, coherent, and simple) if they are to be good explanations (Mizrahi 2017b, 2; emphases in the original).

Mizrahi offers as evidence for the claim that all good explanations are testable and enjoy predictive power the ubiquity of such a claim in introductory textbooks on logic and critical thinking, and he offers as a representative example a chapter from a textbook by two philosophers, Sinnott-Armstrong & Fogelin 2010.

I plead guilty to not offering an account of good explanation in my 2017 article (for the same sort of reason I gave above for not defining philosophy). What I contend is that just as philosophical methods are different in kind from those of the experimental scientists, so too is a good philosophical explanation different in kind from what counts as a good explanation in an empirical science. That is not to say that philosophical and scientific explanations have nothing significant in common, just as it is not to say that the practice of philosophy has nothing in common with experimental scientific practice, despite their radical differences.

For both philosophy—at least on many accounts of its nature—and experimental science are human disciplines: their premises, conclusions, theories, and proposed explanations must submit to the bar of what human reason alone can establish.[12] Indeed, many philosophers who do not share Mizrahi’s scientistic cast of mind could happily agree that good philosophical explanations are coherent and, all other things being equal, that one philosophical explanation E is better than another E1 if E is more unified or simpler or has more explanatory power or depth or modesty than E1.

Others would add (controversially, of course) that, all other things being equal, philosophical theory E is better than E1 if E makes better sense of, or is more consistent with, common-sense assumptions about reality and human life,[13] e.g., if theory E implies human persons are never morally responsible for their actions whereas E1 does not, then, all other things being equal, E1 is a better philosophical theory than theory E. In addition, we may think, taking another cue from Aristotle, that a philosophical theory E is better than a theory E1, all other things being equal, if E raises fewer philosophical puzzles than E1.

Mizrahi also premises that good philosophical explanations have to be testable (2017a, 360; 2017b, 19-20). But what does he mean? Consider the following possibilities:

(T1) A theory or explanation T is testable if and only if T can be evaluated by controlled experiments and other methods characteristic of the experimental sciences, e.g., inductive generalization.

(T2) A theory or explanation T is testable if and only if T can be evaluated by (a) controlled experiments and other methods characteristic of the experimental sciences, e.g., inductive generalization, or (b) deductive arguments, or (c) the method of disambiguating premises, or (d) the method of refutation by counter-example, or (e) inference to the best explanation, or (f) thought experiments (or (g) any number of other philosophical methods or (h) methods we use in everyday life).

In my 2017 response, I took Mizrahi to mean (something such as) T1 by ‘a good explanation is testable’. For example, Mizrahi states: “as a general rule of thumb, choose the explanation that yields independently testable predictions” (2017a, 360; emphases mine). If Mizrahi accepts T1 and thinks all good explanations must be testable, then, as I stated in my 2017 response, “philosophical theories will . . . not compare favorably with scientific ones” (2017, 49).

But as the philosopher Ed Feser well points out, to compare the epistemic values of science and philosophy and fault philosophy for not being good at making testable predictions is like comparing metal detectors and gardening tools and concluding that gardening tools are not as good as metal detectors because gardening tools do not allow us to successfully detect metal (2014, 23).

In other words, if T1 is what Mizrahi means by ‘testable’ and Mizrahi thinks all good explanations are testable, then Mizrahi’s Argument does, as I contend in my 2017 response, “beg the question against non-scientific ways of knowing, ways of knowing that do not, by their very nature, employ controlled experiments and empirical tests as an aspect of their methodologies” (2017, 48; see also Robinson 2015).

Perhaps, however, Mizrahi means by ‘an explanation’s being testable’ something such as T2. In that case, good philosophical work, whether classical or contemporary, will compare favorably with the good work done by experimental scientists (of course, whether one thinks this last statement is true will depend upon one’s philosophical perspective).

Some Concluding General Remarks About Mizrahi’s Argument

Given the number of (implied) controversial philosophical premises that function as background assumptions in Mizrahi’s Argument, that argument should not convince those who do not already hold to (a view close to) Weak Scientism. As we’ve seen, Mizrahi premises, for example, that philosophy should be operationally defined as what philosophers do, that knowledge within all academic disciplines should be operationally defined as what academics publish in academic journals, that K1 and K2 are true, and that all good explanations are explanatorily, instrumentally, and predictively successful.

Of course, the number of controversial philosophical premises at play in Mizrahi’s Argument isn’t in and of itself a philosophical problem for Mizrahi’s Argument, since the same could be said for just about any philosophical argument. But one gets the distinct impression that Mizrahi thinks Mizrahi’s Argument should have very wide appeal among philosophers. If Mizrahi wants to convince those of us who don’t already share his views, he needs to do some more work defending the implied premises of Mizrahi’s Argument, or else come up with a different argument for Weak Scientism.

Indeed, many of the implied controversial philosophical premises I’ve identified in Mizrahi’s Argument are, as we’ve seen, not only doing some heavy philosophical lifting in that argument; they also imply that (something such as) Weak Scientism is true, e.g., the premises that the quantity of knowledge of each academic discipline—in terms of both output and impact—can be accurately measured by looking at the publications of participants within that discipline; K1; K2; that a theory A is more successful than a theory B if A is more explanatorily successful than B, more instrumentally successful than B, and more predictively successful than B; and that an explanation is a good explanation only if it is testable in the sense of T1. Mizrahi’s Argument thus begs too many questions to count as a good argument for Weak Scientism.

Mizrahi is also at pains to maintain that his argument for Weak Scientism is a scientific and not a philosophical argument, and this because a significant part of his argument for Weak Scientism not only draws on scientific evidence, but employs “the structure of inductive generalization from samples, which are inferences commonly made by practicing scientists” (2017a, 356). I admit that a scientific argument from information science is “a central feature of Mizrahi’s Argument” (Brown 2017, 50) insofar as he uses scientific evidence from information science to support premise 8 of Mizrahi’s Argument. But, as I also note in my 2017 essay, “Mizrahi can’t reasonably maintain his argument is thereby a scientific one, given the number of controversial philosophical assumptions employed as background assumptions in his argument” (2017, 51).

My objection that Mizrahi’s Argument is a piece of philosophy and not a scientific argument is one that Mizrahi highlights in his response (2017b, 9). He raises two objections to my claim that Mizrahi’s Argument is a philosophical and not a scientific argument. First, he thinks I have no grounds for claiming Mizrahi’s Argument is a philosophical argument since I don’t give an account of philosophy, and I reject his operationalized account of philosophy (2017b, 15). Second, Mizrahi states that

Brown seems to think that an argument is scientific only if an audience of peers finds the premises of that argument uncontroversial. . . . Accordingly, Brown’s (2017) criterion of controversy [according to Mizrahi, I think this is dubitability] and his necessary condition for an argument being scientific have the absurd consequence that arguments presented by scientists at scientific conferences (or published in scientific journals and books) are not scientific arguments unless they are met with unquestioned acceptance by peer audiences (2017b, 16).

I have already addressed Mizrahi’s comment about the nature of philosophy above. In response to his second objection, I note that Mizrahi wrongly equates my expression, “controversial background philosophical assumptions,” with his expression, “controversial premises in a scientific argument.” Recognizing this false equivalence is important for evaluating my original objection, and this for a number of reasons.

First, what Mizrahi calls the “premises of a scientist’s argument” (2017b, 15) are, typically, I take it, not philosophical premises or assumptions. For is Mizrahi claiming that scientists, at the presentation of a scientific paper, are asking questions about propositions such as K1 or K2?

Second, Mizrahi and I both admit that philosophical background assumptions are sometimes in play in a scientific argument. Some of these claims, e.g., that there exists an external world, some philosophers will reject. What I claimed in my 2017 essay is that a scientific argument—in contrast to a philosophical argument—employs background philosophical assumptions that “are largely non-controversial for the community to which those arguments are addressed, namely, the community of practicing scientists” (2017, 15). For example, an argument that presupposed the truth of theism (or atheism) would not be, properly speaking, a scientific argument, but, at best, a philosophical argument that draws on some scientific evidence to defend certain of its premises.

So, contrary to what Mizrahi says, my argument that Mizrahi’s Argument is not a scientific argument neither implies that Darwin’s Origin of Species is not science, nor does it imply that a scientist’s paper is not science if audience members challenge that paper’s premises, methods, findings, or conclusion. Rather, my comment stands unscathed: because of the number of philosophical background premises that are controversial among the members of the audience to which Mizrahi’s Argument is directed—presumably all academics—Mizrahi’s Argument is not a scientific argument but rather a philosophical argument that draws on some data from information science to defend one of its crucial premises, namely premise 8.

But, as I pointed out above, even Mizrahi’s argument for premise 8 in Mizrahi’s Argument is a philosophical argument, drawing as it does on controversial philosophical background premises such as the claims that philosophy should be operationally defined as what philosophers do, that knowledge within all academic disciplines should be operationally defined as what academics publish in academic journals, and K1.

Finally, there is another reason why Mizrahi himself, given his own philosophical principles, should think Mizrahi’s Argument is a piece of philosophy. As we’ve seen, Mizrahi thinks that philosophy and philosophical knowledge should be defined operationally, i.e., philosophy is what philosophers do, e.g., publish articles in philosophy journals, and philosophical knowledge is what philosophers produce, i.e., publications in philosophy journals (see Mizrahi 2017a, 353). But Mizrahi’s 2017a paper is published in a philosophy journal. Therefore, by Mizrahi’s own way of understanding philosophy and science, Mizrahi’s Argument is not a scientific argument, but a philosophical argument (contrary to what Mizrahi says in both 2017a and 2017b).

Contact details: chrisb@utm.edu

References

Aquinas, Saint Thomas. Summa Theologiae. Translated by the Fathers of the English Dominican Province. Allen, TX: Christian Classics, 1981.

Aquinas, Saint Thomas. Summa Contra Gentiles. Book One. Trans. Anton C. Pegis. South Bend, IN: University of Notre Dame Press, 1991.

Aristotle. Posterior Analytics. Trans. G.R.G. Mure. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. On the Parts of Animals. Trans. William Ogle. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. Nicomachean Ethics. Trans. W.D. Ross. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Augustine, Saint. Confessions. Trans. Frank Sheed. 1942; reprint, Indianapolis: Hackett Publishing, 2006.

Brown, Christopher. “Some Logical Problems for Scientism.” Proceedings of the American Catholic Philosophical Association 85 (2011): 189-200.

Brown, Christopher. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 42-54.

Bourget, David and David J. Chalmers. “What Do Philosophers Believe?” Philosophical Studies 170, no. 3 (2014): 465-500.

Chesterton, G.K. Orthodoxy. 1908; reprint, San Francisco: Ignatius Press, 1995.

Feldman, Richard. Epistemology. Upper Saddle River, NJ: Prentice-Hall, 2003.

Feser, Edward. The Last Superstition: A Refutation of the New Atheism. South Bend: St. Augustine’s Press, 2008.

Feser, Edward. “Blinded by Scientism.” Public Discourse. March 9, 2010a. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1174/.

Feser, Edward. “Recovering Sight after Scientism.” Public Discourse. March 12, 2010b. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1184/.

Feser, Edward. Scholastic Metaphysics: A Contemporary Introduction. editiones scholasticae, 2014.

Haack, Susan. Defending Science—Within Reason: Between Scientism and Cynicism. Amherst, NY: Prometheus Books, 2007.

Haack, Susan. “The Real Question: Can Philosophy Be Saved?” Free Inquiry (October/November 2017): 40-43.

MacIntyre, Alasdair. God, Philosophy, and Universities. Lanham: Rowman & Littlefield, 2009.

Mizrahi, Moti. “What’s So Bad About Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, Moti. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Oxford English Dictionary Online, s.v. “scientism,” accessed January 10, 2018, http://www.oed.com/view/Entry/172696?redirectedFrom=scientism.

Papineau, David. “Is Philosophy Simply Harder than Science?” The Times Literary Supplement On-line. June 1, 2017. Accessed July 11, 2017. https://goo.gl/JiSci7.

Pieper, Josef. In Defense of Philosophy. Trans. Lothar Krauth. 1966; reprint, San Francisco: Ignatius Press, 1992.

Plato. Phaedo. In Five Dialogues. Trans. Grube and Cooper. Indianapolis: Hackett Publishing, 2002.

Plato. Gorgias. Trans. Donald J. Zeyl. Indianapolis: Hackett Publishing, 1987.

Plato. Republic. Trans. C.D.C. Reeve. Indianapolis: Hackett Publishing, 2004.

Postman, Neil. Technopoly: the Surrender of Culture to Technology. New York: Vintage Books, 1993.

Robinson, Daniel N. “Science, Scientism, and Explanation.” In Scientism: the New Orthodoxy. Williams and Robinson, eds. London: Bloomsbury Academic, 2015, 23-40.

Rosenberg, Alex. The Atheist’s Guide to Reality. New York: W. W. Norton and Co., 2011.

Sorell, Tom. Scientism: Philosophy and the Infatuation with Science. First edition. London: Routledge, 1994.

Sorell, Tom. Scientism: Philosophy and the Infatuation with Science. Kindle edition. London: Routledge, 2013.

Van Inwagen, Peter. Metaphysics. 4th edition. Boulder, CO: Westview Press, 2015.

Williams, Richard. N. and Daniel N. Robinson, eds. Scientism: the New Orthodoxy. London: Bloomsbury Academic, 2015.

[1] “Tradition means giving votes to the most obscure of all classes, our ancestors. It is the democracy of the dead. Tradition refuses to submit to the small and arrogant oligarchy of those who merely happen to be walking about” (Orthodoxy [chapter four] 1995, 53).

[2] See Phaedo, 61c-d and 64b-69e.

[3] See Republic 473c-480a.

[4] Here follows a description of something like one traditional way of thinking about the intellectual discipline of philosophy, one that I often give in my introduction to philosophy classes. It describes philosophy by comparing and contrasting it with the experimental sciences, on the one hand, and revealed theology, on the other hand: philosophy is that intellectual discipline which investigates the nature of ultimate reality, knowledge, and value (i.e., subjects the investigation of which raise questions that can’t be settled simply by running controlled experiments and taking quantitative measurements) by way of methods such as deductive argumentation, conceptual analysis, and reflection upon one’s own experiences and the experiences of others (where the experiences of others include, but are not limited to, the experiences of experimental scientists doing experimental science and the experiences of those who practice other intellectual disciplines), by way of the natural light of human reason alone (where this last clause is concerned, philosophy is usefully compared and contrasted with revealed theology: revealed theology and philosophy investigate many of the same questions, e.g., are there any sorts of actions that ought to never be performed, no matter what?, but whereas philosophy draws upon the natural light of human reason alone to answer its characteristic questions [in this way philosophy is like the experimental sciences], and not on any supposed divine revelations, revealed theology makes use of the natural light of reason and [what revealed theologians believe by faith is] some divine revelation).

[5] Although, as I pointed out in my 2017 essay, it seems one can plausibly argue that modern science has the history of Western philosophy as a necessary or de facto cause of its existence, and so the instrumental successes of modern science also belong to Western philosophy indirectly.

[6] For academics don’t agree on which claims count as knowledge claims; e.g., some will say we can know propositions such as murder is always wrong, while others don’t think we can know ethical claims are true. Are we, then, to measure simply those claims published at the university that all—or the great majority of—academics believe count as knowledge claims? But in that case, we are no longer measuring what counts as knowledge, but rather what a certain group of people, at a certain time, believes counts as knowledge. I don’t think I’m going out on a limb when I say cataloguing what a certain group of people believe is sociology and not philosophy.

[7] See, e.g., Alasdair MacIntyre 2009, esp. 15-18 and 173-180.

[8] See, e.g., On the Parts of Animals, Book I, chapter 5 [644b32-645a1]. See also St. Thomas Aquinas, Summa contra gentiles, book one, ch. 5, 5 and Summa theologiae Ia. q. 1, a. 5, ad1.

[9] Of course, modern science and technology produce negative effects too, e.g., pollution, and, according to some, increased dissolution of traditional social bonds. But, because the positive effects of modern science and technology are often immediate and the negative effects often arise only after some time has passed, it is hard for us to take into account, let alone see, the negative consequences of modern science and technology. For some helpful discussion of the history of the culturally transformative effects of modern science and technology, both positive and negative, see Postman 1993.

[10] See, e.g., Bourget & Chalmers 2014. As that study shows, when a good number of contemporary philosophers were polled about a number of major philosophical questions, every question asked turned out to be collectively controversial. Although the paper certainly identifies certain tendencies among contemporary philosophers, e.g., 72.8% identified as atheists and only 14.6% identified as theists, that latter number still represents a healthy minority view, so it seems right to say that whether atheism or theism is true is collectively controversial for contemporary philosophers. For some good discussion of philosophical questions as, by nature, controversial, see also Van Inwagen 2015, 11-19.

[11] A small point: in responding to my comment here, Mizrahi misrepresents what I say. He renders what I call in my 2017 article “the seventh assumption” as “One theory can be said to be qualitatively better than another” (2017b, 12). That’s not what I say; rather I suggest Mizrahi’s Argument premises a theory A is qualitatively better than theory B if A is more successful than theory B.

[12] In this way, philosophy and the experimental sciences differ from one historically important way of understanding the discipline of revealed theology. For example, as St. Thomas Aquinas understands the discipline of revealed theology (see, e.g., Summa theologiae Ia. q. 1.), revealed theology is that scientia that treats especially of propositions it is reasonable to believe are divinely revealed, propositions that can’t be known by the natural light of human reason alone. In addition, the wise teacher of revealed theology can (a) show it is reasonable to believe by faith that these propositions are divinely revealed and (b) use the human disciplines, especially philosophy, to show how propositions that are reasonably believed by faith are not meaningless and do not contradict propositions we know to be true by way of the human disciplines, e.g., philosophy and the sciences.

[13] See, e.g., Aristotle’s Nicomachean Ethics, book vii, ch. 1 (1145b2-7).

Author Information: Christopher M. Brown, University of Tennessee, Martin, chrisb@utm.edu

Brown, Christopher M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 1-35.

The pdf of the article gives specific page references. Due to the length of Brown’s article, we will be posting it in three parts. Shortlink: https://wp.me/p1Bfg0-3TE

Please refer to:

Image by Bryan Jones via Flickr / Creative Commons

 

In 2017a,[1] Moti Mizrahi distinguishes a position he calls Weak Scientism—of all the knowledge we have, scientific knowledge is the best—from what he calls Strong Scientism—the only real kind of knowledge is scientific knowledge. Whereas Strong Scientism may have serious problems, Mizrahi argues Weak Scientism is a defensible position. In my 2017 response, I raise some objections to the arguments Mizrahi employs to defend Weak Scientism. Mizrahi replies to my objections in 2017b. This essay has two parts. In the first part, I briefly summarize both Mizrahi’s arguments in defense of Weak Scientism in 2017a and the problems for Mizrahi’s arguments I identify in my 2017 essay. In the second part, I offer replies to Mizrahi’s objections in 2017b.

Mizrahi’s Arguments for Weak Scientism and Some Objections to those Arguments

In 2017a, Mizrahi does at least three things. First, he distinguishes persuasive and non-persuasive definitions of scientism and argues for adopting the latter rather than the former. Second, Mizrahi distinguishes Strong Scientism from the position he defends, Weak Scientism. Third, Mizrahi defends Weak Scientism in two ways. The first way Mizrahi defends Weak Scientism is by attempting to defeat the following two objections to that position:

(O1) It is epistemically impossible to offer scientific evidence for Weak Scientism.

(O2) It is viciously circular to support Weak Scientism with scientific evidence.

Where Mizrahi’s attempt to defeat O1 is concerned, he offers what he takes to be a scientific argument for Weak Scientism. Here follows a schema of the argument:

7. One kind of knowledge is better than another quantitatively or qualitatively.[2]
8. Scientific knowledge is quantitatively better than non-scientific knowledge (including philosophical knowledge) in terms of the number of journal articles published and the number of journal articles cited.
9. Scientific knowledge is qualitatively better than non-scientific knowledge (including philosophical knowledge) insofar as scientific theories are more successful than non-scientific theories (including philosophical theories) where the success of a theory is understood in terms of its explanatory, instrumental, and predictive success.
10. Therefore, scientific knowledge is better than non-scientific forms of knowledge (including philosophical knowledge) both quantitatively and qualitatively [from 8 and 9].
11. Therefore, scientific knowledge is better than non-scientific forms of knowledge (including philosophical knowledge) [from 7 and 10].

For ease of reference, let us call the argument above Mizrahi’s Argument. A second way Mizrahi defends Weak Scientism in his 2017a paper is directly by way of Mizrahi’s Argument. For if Mizrahi’s Argument is sound, it not only shows that O1 is false, but it shows that Weak Scientism is true.
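By way of illustration only, here is a minimal sketch (in Python) of the sort of publication-and-citation tally on which premise 8 rests. The disciplines and figures below are invented and come from neither Mizrahi nor Brown; the point is simply that, on the operational reading at work in Mizrahi’s Argument, ‘quantitatively better’ comes down to two such comparisons of output and impact.

# Hypothetical bibliometric records, invented for illustration only:
# (discipline, journal articles published, total citations of those articles)
records = [
    ("physics",    120_000, 1_500_000),
    ("biology",    150_000, 1_800_000),
    ("philosophy",  20_000,   120_000),
    ("history",     25_000,    90_000),
]

SCIENTIFIC = {"physics", "biology"}

def tally(data, fields):
    # Sum article counts (output) and citation counts (impact) over a set of disciplines.
    articles = sum(a for d, a, _ in data if d in fields)
    citations = sum(c for d, _, c in data if d in fields)
    return articles, citations

sci = tally(records, SCIENTIFIC)
non_sci = tally(records, {d for d, _, _ in records} - SCIENTIFIC)

print("scientific (articles, citations):", sci)
print("non-scientific (articles, citations):", non_sci)
# Premise 8, operationalized: the scientific disciplines come out ahead on both counts.
print("premise 8 holds on this toy data:", sci[0] > non_sci[0] and sci[1] > non_sci[1])

Brown’s objections to premise 8 can then be read as challenging the step from comparisons like these to any conclusion about which disciplines produce more, or better, knowledge.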

In my 2017 essay, I raise a number of objections to what Mizrahi argues in 2017a. First, I argue Weak Scientism is not really a form of scientism. Second, I argue Mizrahi does not give an advocate of Strong Scientism good reasons to adopt Weak Scientism. Third, I contend that, contrary to what Mizrahi supposes (2017a, 354), Weak Scientism is not relevant by itself for mediating the debate between defenders of philosophy and those who think philosophy is useless. Fourth, I argue that Mizrahi’s Argument presupposes philosophical positions that many academics reject, so that Mizrahi’s Argument is not as powerful as he seems to think. Fifth, I argue that some of the background philosophical premises in Mizrahi’s Argument are question-begging.

Sixth, I contend that Mizrahi’s primary argument for Weak Scientism—Mizrahi’s Argument—is a philosophical argument and not a scientific argument, and so he does not defeat objection O1. Seventh, I argue that Mizrahi does not defeat objection O2, since there is a way to think about the defensibility of deductive inference that does not involve making inferences. Finally, I offer two counterexamples to Mizrahi’s contention that the use of a persuasive definition of a term necessarily involves both begging the question against those who reject such a definition and a failure to provide reasons for thinking that definition is true.

Responding to Mizrahi’s Objections

I now respond to the objections Mizrahi raises in 2017b to my 2017 essay. In each section of this part, I highlight an objection I raised against Mizrahi 2017a in my 2017 response, I explain Mizrahi’s response to that objection in 2017b, and I offer a response to Mizrahi’s response. In many cases Mizrahi has misconstrued one of my objections, and so I here clarify those objections. In other cases, Mizrahi misses the point of one of my objections, and so I try to make those objections clearer. In still other cases, Mizrahi makes some good points about objections I raise in 2017, although not points fatal to those objections, and so I revise my objections accordingly. Finally, in some cases Mizrahi asks for more information, and so I give it, at least where such information is relevant for evaluating Mizrahi’s defenses of Weak Scientism.

Is Weak Scientism Really Scientism?  

In 2017, I argue that Weak Scientism is not really strong enough to count as scientism. For, given Weak Scientism, philosophical knowledge may be nearly as valuable as scientific knowledge. In fact, given that Weak Scientism claims only that scientific knowledge is better than non-scientific academic knowledge (see, e.g., Mizrahi 2017a, 354; 356), Weak Scientism is compatible with the claim that non-academic personal knowledge, moral knowledge, and religious knowledge are all better than scientific knowledge. Certainly, Mizrahi’s defenses of Weak Scientism in 2017a and 2017b don’t show that scientific knowledge is better than non-academic forms of knowledge acquisition. Traditional advocates of scientism, therefore, will not endorse Weak Scientism, given their philosophical presuppositions.

Mizrahi raises two objections to my arguments here. First, even if I’m right that one could think about philosophical knowledge as nearly as valuable as scientific knowledge, this does nothing to show Weak Scientism is not strong enough to count as scientism, since “one of the problems with the scientism debate is precisely the meaning of the term ‘scientism’” (Mizrahi 2017a, 351-353; qtd. in Mizrahi 2017b, 10). Second, Mizrahi notes that scientism is an epistemological thesis and not a psychological one and that he sets out to show what traditional advocates of scientism should accept, and not what they would accept (2017b, 11).

Say Strong Scientism is false, if only because it is self-refuting and subject to good counter-examples. The questions remain, why think Weak Scientism, particularly the weak version of that view Mizrahi ends up defending in 2017a, is really a form of scientism? And why think advocates of Strong Scientism should accept Weak Scientism?

Take the first question. As Mizrahi’s list of citations at the beginning of 2017a makes clear, there already exist very entrenched linguistic conventions with respect to the meaning of ‘scientism.’ As Mizrahi notes, one such meaning is the pejorative or “persuasive” sense of ‘scientism’ that Mizrahi does not like, which (again as Mizrahi himself points out) is quite pervasive, e.g., scientism is an “exaggerated confidence in science (Williams 2015, 6)” (Mizrahi 2017a, 351), and “an exaggerated kind of deference towards science (Haack, 2007, 17; 18)” (Mizrahi 2017a, 351). Mizrahi also mentions persuasive descriptions of scientism in the work of Pigliucci and Sorell. Why does this diverse group of philosophers use the word ‘scientism’ in this way? Perhaps because it is simply one of the meanings the word ‘scientism’ has come to have in the English language.

Consider, for example, the entry for ‘scientism’ in the Oxford English Dictionary. It has two main headings. Under the first heading of ‘scientism’ is a descriptive use of the term: “A mode of thought which considers things from a scientific viewpoint.” This meaning of ‘scientism’ is not relevant for our purposes since Weak Scientism is a normative and not a descriptive claim. Under the second heading of ‘scientism’ we have:

Chiefly depreciative [emphasis in the original]. The belief that only knowledge obtained from scientific research is valid, and that notions or beliefs deriving from other sources, such as religion, should be discounted; extreme or excessive faith in science or scientists [emphasis mine]. Also: the view that the methodology used in the natural and physical sciences can be applied to other disciplines, such as philosophy and the social sciences (2017).

For better or worse, something such as the following so-called persuasive definition of scientism is thus one of the meanings the word ‘scientism’ has come to have in the English language:

(Scientism1): having an exaggerated confidence in science or the methods of science.

Presumably, some philosophers use ‘scientism’ in the sense of Scientism1 because they think some contemporary thinkers have an exaggerated confidence in science and it is convenient to have a word for that point of view; since there is already a term in the English language which picks out that sort of view, namely ‘scientism’, philosophers such as Williams, Haack, Sorell, and Pigliucci reasonably use it in that sense.

But what does this have to do with the question whether Weak Scientism is really a species of scientism? As we’ve seen, one of the meanings commonly attached to ‘scientism’ is the idea of having an exaggerated or improper view of the power or scope of science. But as Mizrahi also notes in 2017a, there is a second sort of meaning often attached to ‘scientism’:

(Scientism2): the view that the methods of the natural sciences are the only (reliable) methods for producing knowledge, or that the methods of the natural sciences should be employed in all of the sciences and in all areas of human life.

Mizrahi cites Richard Williams (Mizrahi 2017a, 351) and Alex Rosenberg (2017a, 352) as examples of philosophers who use ‘scientism’ with the meaning identified in Scientism2. In addition, as we saw above, this is (part of) the second entry for ‘scientism’ in the Oxford English Dictionary. This is good evidence that Scientism2 picks out one meaning that ‘scientism’ currently has in the English language.

The prevalence of Scientism2 as a meaning of ‘scientism’ goes some distance towards explaining the commonality of the use of Scientism1 as a meaning of ‘scientism,’ since many philosophers, historians, psychologists, sociologists, and natural scientists think it is false that science is the only method for (reliably) producing knowledge, or that the methods of the natural sciences should be employed in all of the sciences or all areas of human life.

Of course, here, as in other areas of life, what some people think is a vice others think a virtue. So philosophers such as Alex Rosenberg think ‘scientism’ in the sense of Scientism2 is true, but deny that accepting Scientism2 represents “an exaggerated confidence in science,” since, as they see it, the claim that science is the only reliable path to knowledge is simply the sober truth.

What I am calling Scientism2 Mizrahi calls Strong Scientism, a view he thinks has problems (see Mizrahi 2017a, 353-354). Furthermore, Mizrahi argues that Weak Scientism is the view that advocates of Strong Scientism should adopt and the view philosophers who want to defend philosophy against charges of uselessness should attack (2017a, 354). But, as I point out in 2017, there is a huge logical gap between Strong Scientism (Scientism2) and Weak Scientism. To see this, recall that Mizrahi defines Weak Scientism as follows:

(Weak Scientism): Of all the knowledge we have, scientific knowledge is the best knowledge (2017a, 354).

In my 2017 response, I suggest that, once we take into account the philosophical premises at play in Mizrahi’s Argument, Weak Scientism turns out to be an even weaker thesis. For example, consider a strong interpretation of Weak Scientism:

(Fairly Strong Weak Scientism): Of all the knowledge we have, including non-academic forms of knowledge such as common sense knowledge, personal knowledge, moral knowledge, and religious knowledge, scientific knowledge is the best knowledge.

There is a big logical gap between Strong Scientism (Scientism2) and Fairly Strong Weak Scientism. For Strong Scientism (Scientism2) states that scientific knowledge is the only kind of real knowledge (or the only kind of reliable knowledge). But, for all Fairly Strong Weak Scientism says, scientific knowledge is just barely better, e.g., just barely more reliable, than religious knowledge or philosophical knowledge.

As Mizrahi notes (2017a, 354; 356), and as his practice in 2017a confirms, he is not interested in defending Fairly Strong Weak Scientism. This means that Mizrahi really has something such as the following in mind by Weak Scientism:

(Very Weak Scientism): When it comes to the kinds of knowledge produced within the academy, scientific knowledge is the best.

But there is a big logical gap between Strong Scientism (Scientism2) and Very Weak Scientism. In fact, as I point out in my 2017 article, given other philosophical presuppositions Mizrahi makes or positions he defends in 2017a, the view he actually defends gets even (and ever) weaker:

(Very, Very Weak Scientism): When it comes to the knowledge that is produced by academic publications, scientific publications are the best.

(Very, Very, Very Weak Scientism): When it comes to the knowledge that is produced by academic journals, knowledge that comes from scientific academic journals is the best.

Now, acceptance of Very, Very, Very Weak Scientism leaves open the possibility that there is philosophical knowledge produced by way of monographs, lectures, and conversations that is better than any sort of scientific knowledge. And, as I point out in my 2017 article, ultimately, something such as Very, Very, Very Weak Scientism is the view Mizrahi defends in 2017a. Is Very, Very, Very Weak Scientism really scientism? Given the conventional uses of ‘scientism’ and the huge logical gap between Weak Scientism—even on the strongest reading of the position—and Scientism2, it doesn’t make sense to think of Mizrahi’s Weak Scientism as a species of scientism.

Consider some other reasons for thinking it strange that Weak Scientism counts as a species of scientism. Imagine a person named Alice, about whom, let us say for the sake of argument, the following statements are true: (a) Alice thinks there is a God; (b) she knows the reasons for not thinking there is a God; (c) she has published influential attempted defeaters of the arguments that there is no God; (d) even though she reasonably thinks there are some good, if not compelling, arguments for the existence of God, she thinks it reasonable to believe in God without argumentative evidence; (e) she has published an influential account, by a prestigious academic press, of how a person S can be rational in believing in God, although S does not have good argumentative evidence that God exists; (f) she has published a much-discussed argument that belief in God makes better sense of an evolutionary account of the human mind (understood as a reliable constellation of cognitive powers) than does an atheistic evolutionary one; and (g) she thinks that modern science is the greatest new intellectual achievement since the fifteenth century. If believing modern science is the greatest new intellectual achievement since the fifteenth century is (roughly) equivalent to Weak Scientism, then Alice is (roughly) an advocate of Weak Scientism. But it seems odd, to say the least, that Alice—or someone with Alice’s beliefs—should count as an advocate (even roughly) of scientism.

One may also reasonably ask Mizrahi why he thinks the position picked out by Weak Scientism is a species of scientism in the first place. One may be inclined to think Weak Scientism is a species of scientism because, like Strong Scientism, Weak Scientism (as formulated by Mizrahi) puts too high a value on scientific knowledge. But Mizrahi won’t define or describe scientism in that way for the reasons he lays out in 2017a.

Given the conventional uses of ‘scientism,’ the huge logical gap between Weak Scientism and Scientism2, and Mizrahi’s refusal to employ a persuasive definition of scientism, it is not clear why Mizrahi’s Weak Scientism should count as a species of scientism. A friendly suggestion: perhaps Mizrahi should simply coin a new word for the position with respect to scientific knowledge and non-scientific forms of academic knowledge he wants to talk about, rather than coining a new (and problematic) meaning for ‘scientism.’

Mizrahi’s Argument Does Not Show Why Advocates of Strong Scientism Should Endorse Weak Scientism  

Given Mizrahi’s interest in offering “a defensible definition of scientism” (2017a, 353), which, among other things, means an alternative to Strong Scientism (2017a, 353-354), we can also consider the question, why think advocates of Strong Scientism should adopt Weak Scientism? Mizrahi does not argue in 2017a, for example, that there are (reliable) forms of knowledge other than science. His argument simply presupposes it. But if Mizrahi wants to convince an advocate of Strong Scientism that she should prefer Weak Scientism, Mizrahi can’t presuppose a view the advocate of Strong Scientism believes to be false (particularly if it’s not even clear that Weak Scientism is a form of scientism).

In addition, as I try to show in my 2017 response, Mizrahi’s Argument relies on other philosophical positions that advocates of Strong Scientism do not accept, and Mizrahi does not offer good philosophical arguments for these views. Indeed, more often than not, Mizrahi simply stipulates a point of view that he needs in order to get Mizrahi’s Argument off the ground, e.g., that we should operationalize what philosophy is or that we should operationalize what counts as knowledge in a discipline (for more on these points, see below). If philosophical premises that the advocate of Strong Scientism does not accept are doing the heavy lifting in Mizrahi’s Argument, as I claim, premises which are undefended from the perspective of the advocate of Strong Scientism, then it’s not clear why Mizrahi thinks advocates of Strong Scientism should accept Weak Scientism on the basis of Mizrahi’s Argument.

For even Fairly Strong Weak Scientism is a lot different from the view that advocates of Strong Scientism such as Alex Rosenberg hold. Here’s Rosenberg: “If we’re going to be scientistic, then we have to attain our view of reality from what physics tells us about it. Actually, we’ll have to do more than that: we’ll have to embrace physics as the whole truth about reality” (2011, 20). Indeed, it seems the only reason an advocate of Strong Scientism such as Rosenberg would be even tempted to consider adopting Weak Scientism is because it contains the word ‘scientism.’

But once the advocate of Strong Scientism sees that an advocate of Weak Scientism admits the possibility that there is real knowledge other than what is produced by the natural sciences—indeed, in Mizrahi 2017a and 2017b, Weak Scientism is compatible with the view that common sense knowledge, knowledge of persons, and religious knowledge are each better than scientific knowledge—the advocate of Strong Scientism, at least given their philosophical presuppositions, will reject Weak Scientism out of hand. Given also that Mizrahi has not offered arguments that there is real knowledge other than scientific knowledge, and given that Mizrahi has not offered arguments for a number of views required for Mizrahi’s defense of Weak Scientism (see below), views that advocates of Strong Scientism reject, Mizrahi also does not show why advocates of Strong Scientism should adopt Weak Scientism.

How Is Weak Scientism by Itself Relevant Where the Philosophy-Is-Useless-Objection Is Concerned?

Mizrahi seems to think Weak Scientism is relevant for assessing the philosophy-is-useless claim. He states: “I propose . . . Weak Scientism is the definition of scientism those philosophers who seek to defend philosophy against accusations of uselessness . . . should attack if they want to do philosophy a real service” (2017a, 354). But why think a thing like that?

In his response to my 2017 essay, Mizrahi gets his reader off on the wrong foot by reinterpreting my question as “Does Weak Scientism entail that philosophy is useless?” (2017b, 9; 11). Mizrahi says that I “object to [Mizrahi’s] argument in defense of Weak Scientism by complaining that Weak Scientism does not entail philosophy is useless” (2017b, 11) and he goes on to point out that he did not intend to defend the view that philosophy is useless.

But this is to miss the point of the problem (or question) I raise for Mizrahi’s paper in this section, which is, “how is Weak Scientism by itself relevant where the philosophy-is-useless-objection is concerned?” (Brown 2017, 42). For Weak Scientism itself implies nothing about the degree to which philosophical knowledge is valuable or useful other than stating that scientific knowledge is better than philosophical knowledge.

Given Mizrahi’s definition of Weak Scientism, (a) one could accept Weak Scientism and think philosophy is extremely useful (there is no contradiction in thinking philosophy is extremely useful but scientific knowledge is better than, e.g., more useful than, philosophical knowledge); (b) one could accept Weak Scientism and think philosophy is not at all useful (one may think philosophical knowledge is real but pretty useless and that scientific knowledge is better than philosophical knowledge); (c) one could obviously reject Weak Scientism and think philosophy very useful (depending upon what one means by ‘philosophy is useful’; more on this point below); and (d) one could reject Weak Scientism and think philosophy useless (as some advocates of Strong Scientism surely do).

Accepting (or rejecting) Weak Scientism is compatible both with thinking philosophy is very useful and with thinking philosophy is useless. So it’s hard to see why Mizrahi thinks “Weak Scientism is the definition of scientism those philosophers who seek to defend philosophy against accusations of uselessness . . . should attack if they want to do philosophy a real service” (2017a, 354).

Contact details: chrisb@utm.edu

References

Aquinas, Saint Thomas. Summa Theologiae. Translated by the Fathers of the English Dominican Province. Allen, TX: Christian Classics, 1981.

Aquinas, Saint Thomas. Summa Contra Gentiles. Book One. Trans. Anton C. Pegis. South Bend, IN: University of Notre Dame Press, 1991.

Aristotle. Posterior Analytics. Trans. G.R.G. Mure. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. On the Parts of Animals. Trans. William Ogle. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Aristotle. Nicomachean Ethics. Trans. W.D. Ross. In The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 1941.

Augustine, Saint. Confessions. Trans. Frank Sheed. 1942; reprint, Indianapolis: Hackett Publishing, 2006.

Brown, Christopher. “Some Logical Problems for Scientism.” Proceedings of the American Catholic Philosophical Association 85 (2011): 189-200.

Brown, Christopher. “Some Objections to Moti Mizrahi’s ‘What’s So Bad about Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 42-54.

Bourget, David and David J. Chalmers. “What Do Philosophers Believe?” Philosophical Studies 170, no. 3 (2014): 465-500.

Chesterton, G.K. Orthodoxy. 1908; reprint, San Francisco: Ignatius Press, 1995.

Feldman, Richard. Epistemology. Upper Saddle River, NJ: Prentice-Hall, 2003.

Feser, Edward. The Last Superstition: A Refutation of the New Atheism. South Bend: St. Augustine’s Press, 2008.

Feser, Edward. “Blinded by Scientism.” Public Discourse. March 9, 2010a. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1174/.

Feser, Edward. “Recovering Sight after Scientism.” Public Discourse. March 12, 2010b. Accessed January 15, 2018. http://www.thepublicdiscourse.com/2010/03/1184/.

Feser, Edward. Scholastic Metaphysics: A Contemporary Introduction. Editiones Scholasticae, 2014.

Haack, Susan. Defending Science—Within Reason: Between Scientism and Cynicism. Amherst, NY: Prometheus Books, 2007.

Haack, Susan. “The Real Question: Can Philosophy Be Saved?” Free Inquiry (October/November 2017): 40-43.

MacIntyre, Alasdair. God, Philosophy, and Universities. Lanham: Rowman & Littlefield, 2009.

Mizrahi, Moti. “What’s So Bad About Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, Moti. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Oxford English Dictionary Online, s.v. “scientism,” accessed January 10, 2018, http://www.oed.com/view/Entry/172696?redirectedFrom=scientism.

Papineau, David. “Is Philosophy Simply Harder than Science?” The Times Literary Supplement On-line. June 1, 2017. Accessed July 11, 2017. https://goo.gl/JiSci7.

Pieper, Josef. In Defense of Philosophy. Trans. Lothar Krauth. 1966; reprint, San Francisco: Ignatius Press, 1992.

Plato. Phaedo. In Five Dialogues. Trans. Grube and Cooper. Indianapolis: Hackett Publishing, 2002.

Plato. Gorgias. Trans. Donald J. Zeyl. Indianapolis: Hackett Publishing, 1987.

Plato. Republic. Trans. C.D.C. Reeve. Indianapolis: Hackett Publishing, 2004.

Postman, Neil. Technopoly: the Surrender of Culture to Technology. New York: Vintage Books, 1993.

Robinson, Daniel N. “Science, Scientism, and Explanation.” In Scientism: the New Orthodoxy. Williams and Robinson, eds. London: Bloomsbury Academic, 2015, 23-40.

Rosenberg, Alex. The Atheist’s Guide to Reality. New York: W. W. Norton and Co., 2011.

Sorell, Tom. Scientism: Philosophy and the Infatuation with Science. First edition. London: Routledge, 1994.

Sorell, Tom. Scientism: Philosophy and the Infatuation with Science. Kindle edition. London: Routledge, 2013.

Van Inwagen, Peter. Metaphysics. 4th edition. Boulder, CO: Westview Press, 2015.

Williams, Richard N. and Daniel N. Robinson, eds. Scientism: The New Orthodoxy. London: Bloomsbury Academic, 2015.

[1] I’m grateful to James Collier for inviting me to reply to Moti Mizrahi’s “In Defense of Weak Scientism: A Reply to Brown” (2017b) and Merry Brown for providing helpful comments on an earlier draft of this essay.

[2] For the sake of consistency and clarity, I number my propositions in this essay based on the numbering of propositions in my 2017 response.

Author information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2

Please refer to:

Image from Birdman Photos, via Flickr / Creative Commons

 

How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a bad record of accomplishment, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this is to attribute too limited a role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.

Image by Birdman Photos via Flickr / Creative Commons

 

No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander’s discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that lead to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge “can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions” (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point even more sharply, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic lawmaking essential to a just legal system, because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? Understanding democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use “expertise” and “expert” as unspecified, generic terms, and they refer to both categories side-by-side (235, 237). However, by framing their argument as an argument concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, experts in possession of insights known to a few, yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical constraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Trans. William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public deliberation and the fact of expertise: making experts accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).

Author Information: Saana Jukola and Henrik Roeland Visser, Bielefeld University, sjukola@uni-bielefeld.de and rvisser@uni-bielefeld.de.

Jukola, Saana and Henrik Roeland Visser. “On ‘Prediction Markets for Science,’ A Reply to Thicke.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 1-5.

The pdf of the article includes specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Q9

Please refer to:

Image by The Bees, via Flickr

 

In his paper, Michael Thicke critically evaluates the potential of using prediction markets to answer scientific questions. In prediction markets, people trade contracts that pay out if a certain prediction comes true. If such a market functions efficiently and thus incorporates the information of all market participants, the resulting market price provides a valuable indication of the likelihood that the prediction comes true.
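To take a simple illustrative example of our own, not one drawn from Thicke’s paper: suppose a contract pays out €1 if a given replication attempt succeeds and nothing otherwise. If that contract trades at €0.70, the market is, in effect, estimating the probability of a successful replication at roughly 70 percent, since traders who believed the true probability to be higher or lower could expect to profit by buying or selling until the price matched their collective estimate.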

Prediction markets have a variety of potential applications in science; they could provide a reliable measure of how large the consensus on a controversial finding truly is, or tell us how likely a research project is to deliver the promised results if it is granted the required funding. Prediction markets could thus serve the same function as peer review or consensus measures.

Thicke identifies two potential obstacles for the use of prediction markets in science: the risk of inaccurate results and the risk of potentially harmful unintended consequences for the organization and incentive structure of science. We largely share the worry about inaccuracy. In this comment we will therefore only discuss the second objection; it is unclear to us what really follows from the risk of harmful unintended consequences. Furthermore, we consider another worry one might have about the use of prediction markets in science, which Thicke does not discuss: peer review is not only a quality control measure to uphold scientific standards, but also serves a deliberative function, both within science and in legitimizing the use of scientific knowledge in politics.

Reasoning About Imperfect Methods

Prediction markets work best for questions for which a clearly identifiable answer is produced in the not too distant future. Scientific research, on the other hand, often produces very unexpected results on an uncertain time scale. As a result, there is no objective way of choosing when and how to evaluate predictions on scientific research. Thicke identifies two ways in which this can create harmful unintended effects on the organization of science.

Firstly, projects that have clear short-term answers may erroneously be regarded as epistemically superior to basic research which might have better long-term potential. Secondly, science prediction markets create a financial incentive to steer resources towards research with easily identifiable short-term consequences, even if more basic research would have a better epistemic pay-off in the long-run.

Based on their low expected accuracy and the potential for harmful effects on the organization of science, Thicke concludes that science prediction markets might be a worse ‘cure’ than the ‘disease’ of bias in peer review and consensus measures. We are skeptical of this conclusion for the same reasons as those offered by Robin Hanson. While the worry about the promise of science prediction markets is justified, it is unclear how this makes them worse than the traditional alternatives.

Nevertheless, Thicke’s conclusion points in the right direction: instead of looking for a more perfect method, which may not become available in the foreseeable future, we need to judge which of the imperfect methods is more palatable to us. Doing that would, however, require a more sophisticated evaluation of the strengths and weaknesses of the different available methods, and of how to trade those off, which goes beyond the scope of Thicke’s paper.

Deliberation in Science

An alternative worry, which Thicke does not elaborate on, is the fact that peer review is not only expected to accurately determine the quality of submissions and conclude what scientific work deserves to be funded or published, but it is also valued for its deliberative nature, which allows it to provide reasons to those affected by the decisions made in research funding or the use of scientific knowledge in politics. Given that prediction markets function through market forces rather than deliberative procedure, and produce probabilistic predictions rather than qualitative explanations, this might be (another) aspect on which the traditional alternative of peer review outperforms science prediction markets.

Within science, peer review serves two different purposes. First, it functions as a gatekeeping mechanism for deciding which projects deserve to be carried out or disseminated – an aim of peer review is to make sure that good work is being funded or published and undeserving projects are rejected. Second, peer review is often taken to embody the critical mechanism that is central to the scientific method. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. At least in an ideal case, authors know why their manuscripts were rejected or accepted after receiving peer review reports and can take the feedback into consideration in their future work.

In this sense, peer review represents an intersubjective mechanism that guards against the biases and blind spots that individual researchers may have. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results.[1] Such critical interaction thus ensures that a wide variety of perspectives is represented in science, which is both epistemically and socially valuable. If prediction markets were to replace peer review, could they serve this second, critical, function? It seems that the answer is No. Prediction markets do not provide reasons in the way that peer review does, and if the only information available is probabilistic predictions, something essential to science is lost.

To illustrate this point in a more intuitive way: imagine that instead of writing this comment in which we review Thicke’s paper, there is a prediction market on which we, Thicke, and other authors would invest in bets regarding the likelihood of science prediction markets being an adequate replacement for the traditional method of peer review. From the resulting price signal we would infer whether prediction markets are indeed an adequate replacement or not. Would that allow for the same kind of interaction in which we now engage with Thicke and others by writing this comment? At least intuitively, it seems to us that the answer is No.

Deliberation About Science in Politics

Such a lack of reasons that justify why certain views have been accepted or rejected is not only a problem for researchers who strive towards getting their work published, but could also be detrimental to public trust in science. When scientists give answers to questions that are politically or socially sensitive, or when controversial science-based recommendations are given, it is important to explain the underlying reasons to ensure that those affected can – at least try to – understand them.

Only if people are offered reasons for decisions that affect them can they effectively contest such decisions. This is why many political theorists regard the ability of citizens to demand an explanation, and the corresponding duty of decision-makers to be responsive to such demands, as a necessary element of legitimate collective decisions.[2] Philosophers of science like Philip Kitcher[3] rely on very similar arguments to explain the importance of deliberative norms in justifying scientific conclusions and the use of scientific knowledge in politics.

Science prediction markets do not provide substantive reasons for their outcome. They only provide a procedural argument, which guarantees the quality of their outcome when certain conditions are fulfilled, such as the presence of a well-functioning market. Of course, one of those conditions is also that at least some of the market participants possess and rely on correct information to make their investment decisions, but that information is hidden in the price signal. This is especially problematic with respect to the kind of high-impact research that Thicke focuses on, i.e., climate change. There, the ability to justify why a certain theory or prediction is accepted as reliable is at least as important for the public discourse as it is to have precise and accurate quantitative estimates.

Besides the legitimacy argument, there is another reason why quantitative predictions alone do not suffice. Policy-oriented sciences like climate science or economics are also expected to judge the effect and effectiveness of policy interventions. But in complex systems like the climate or the economy, there are many different plausible mechanisms simultaneously at play, which could justify competing policy interventions. Given the long-lasting controversies surrounding such policy-oriented sciences, different political camps have established preferences for particular theoretical interpretations that justify their desired policy interventions.

If scientists are to have any chance of resolving such controversies, they must therefore not only produce accurate predictions, but also communicate which of the possible underlying mechanisms they think best explains the predicted phenomena. It seems prediction markets alone could not do this. It might be useful to think of this particular problem as the ‘underdetermination of policy intervention by quantitative prediction’.

Science Prediction Markets as Replacement or Addition?

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as an addition or even a complement to traditional methods. Thicke provides examples of both: in the case of peer review for publication or funding decisions, prediction markets might replace traditional methods. But in the case of resolving controversies, for instance concerning climate change, a prediction market would aggregate and evaluate already existing pieces of knowledge and peer review. In such a case the information that underlies the trading behavior on the prediction market would still be available and could be revisited if people distrust the reliability of the prediction market’s result.

We could also imagine cases in which science prediction markets are used to select the right answer, or at least narrow down the range of alternatives, after which a qualitative report is produced that provides a justification of the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack the power to discriminate among alternative predictions.

Conclusion

All in all, we are sympathetic to Michael Thicke’s critical analysis of the potential of prediction markets in science and share his skepticism. However, we point out another issue that speaks against prediction markets and in favor of peer review: giving and receiving reasons for why a certain view should be accepted or rejected. Given that the strengths and weaknesses of these methods fall on different dimensions (prediction markets may fare better in accuracy, while in an ideal case peer review can help the involved parties understand the grounds on which a position should be accepted), it is important to reflect on what the appropriate aims are in a particular scientific and policy context before deciding which method should be used to evaluate research.

References

Hanson, Robin. “Compare Institutions To Institutions, Not To Perfection,” Overcoming Bias (blog). August 5, 2017. Retrieved from: http://www.overcomingbias.com/2017/08/compare-institutions-to-institutions-not-to-perfection.html

Hanson, Robin. “Markets That Explain, Via Markets To Pick A Best,” Overcoming Bias (blog), October 14, 2017 http://www.overcomingbias.com/2017/10/markets-that-explain-via-markets-to-pick-a-best.html

[1] See, e.g., Karl Popper, The Open Society and Its Enemies. Vol. 2 (Routledge, 1966) or Helen Longino, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry (Princeton University Press, 1990).

[2] See Jürgen Habermas, The Theory of Communicative Action, Vols. 1 and 2 (Polity Press, 1984 & 1989) & Philip Pettit, “Deliberative democracy and the discursive dilemma.” Philosophical Issues, vol. 11, pp. 268-299, 2001.

[3] Philip Kitcher, Science, Truth, and Democracy (Oxford University Press, 2001) & Philip Kitcher, Science in a democratic society (Prometheus Books, 2011).

Author Information: Reiner Grundmann, University of Nottingham, Reiner.Grundmann@nottingham.ac.uk

Grundmann, Reiner. “Regarding Experts and Expertise: A Reply to Szymanski.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 19-22.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2az

Please refer to:


Image credit: Routledge Press

The opening sentence of Erika Szymanski’s review encapsulates her tone and approach: ‘If you are looking for a provocative argument about what being an expert means in contemporary information-driven cultures, I would offer that your time is better spent somewhere other than Stehr and Grundmann’s Experts: The Knowledge and Power of Expertise (Routledge 2011).’

Unfortunately, she does not tell us what is provocative about the book, nor what better provocative books should be read instead. Towards the end of the review she comes to the view that the ‘central motion’ of the book is uncontroversial. Maybe it would have been a good idea to state upfront that she is in two minds about the book, and explain in what sense it is (un)controversial.

Author Information: Erika Szymanski, University of Otago, szymanskiea@hotmail.com

Szymanski, Erika. “Review—Experts: The Knowledge and Power of Expertise.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 33-36.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-25x


Image credit: Routledge Press

Experts: The Knowledge and Power of Expertise
Nico Stehr and Reiner Grundmann
Routledge
146 pp.

Erika Szymanski, University of Otago

If you are looking for a provocative argument about what being an expert means in contemporary information-driven cultures, I would offer that your time is better spent somewhere other than Stehr and Grundmann’s Experts: The Knowledge and Power of Expertise (Routledge 2011).

The book reads more as a conservative intellectual history situating the “expert” in knowledge societies than as a new position statement. That history is useful: they define and contextualize the expert as contemporary case studies often fail to do; they raise many questions about the role of experts as a general group that usually remain invisible in those studies. Unanswered as often as not, these questions might serve as a productive repository for future debate. Be forewarned, however, that you may find little that feels genuinely new as a reward for wading through Stehr and Grundmann’s sometimes-dense prose.