
Author Information: Moti Mizrahi, Florida Institute of Technology, mmizrahi@fit.edu

Mizrahi, Moti. “Weak Scientism Defended Once More.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 41-50.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yx


One of Galileo’s original compasses, on display at the Museo Galileo, a feature of the Istituto e Museo di Storia della Scienza in Florence, Italy.
Image by Anders Sandberg via Flickr / Creative Commons

 

Bernard Wills (2018) joins Christopher Brown (2017, 2018) in criticizing my defense of Weak Scientism (Mizrahi 2017a, 2017b, 2018a). Unfortunately, it seems that Wills did not read my latest defense of Weak Scientism carefully, nor does he cite any of the other papers in my exchange with Brown. For he attributes to me the view that “other disciplines in the humanities [in addition to philosophy] do not produce knowledge” (Wills 2018, 18).

Of course, this is not my view and I affirm no such thing, contrary to what Wills seems to think. I find it hard to explain how Wills could have made this mistake, given that he goes on to quote me as follows: “Scientific knowledge can be said to be qualitatively better than non-scientific knowledge insofar as such knowledge is explanatorily, instrumentally, and predictively more successful than non-scientific knowledge” (Mizrahi 2018a, 7; quoted in Wills 2018, 18).

Clearly, the claim ‘Scientific knowledge is better than non-scientific knowledge’ entails that there is non-scientific knowledge. If the view I defend entails that there is non-scientific knowledge, then it cannot also be my view that “science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18).

Even if he somehow missed this simple logical point, reading the other papers in my exchange with Brown should have made it clear to Wills that I do not deny the production of knowledge by non-scientific disciplines. In fact, I explicitly state that “science produces scientific knowledge, mathematics produces mathematical knowledge, philosophy produces philosophical knowledge, and so on” (Mizrahi 2017a, 353). Even in my latest reply to Brown, which is the only paper from my entire exchange with Brown that Wills cites, I explicitly state that, if Weak Scientism is true, then “philosophical knowledge would be inferior to scientific knowledge both quantitatively (in terms of research output and research impact) and qualitatively (in terms of explanatory, instrumental, and predictive success)” (Mizrahi 2018a, 8).

If philosophical knowledge is quantitatively and qualitatively inferior to scientific knowledge, then it follows that there is philosophical knowledge. For this reason, only a rather careless reader could attribute to me the view that “other disciplines in the humanities [in addition to philosophy] do not produce knowledge” (Wills 2018, 18).

There Must Be Some Misunderstanding

Right from the start, then, Wills gets Weak Scientism wrong, even though he later writes that, according to Weak Scientism, “there may be knowledge of some sort outside of the sciences” (Wills 2018, 18). He says that he will ignore the quantitative claim of Weak Scientism and focus “on the qualitative question and particularly on the claim that science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18). Wills can focus on whatever he wants, of course, but that is not Weak Scientism.

Weak Scientism is not the view that only science produces real knowledge; that is Strong Scientism (Mizrahi 2017a, 353). Rather, Weak Scientism is the view that, “Of all the knowledge we have [i.e., there is knowledge other than scientific knowledge], scientific knowledge is the best knowledge” (Mizrahi 2017a, 354). In other words, scientific knowledge “is simply the best; better than all the rest” (Mizrahi 2017b, 20). Wills’ criticism, then, misses the mark completely. That is, it cannot be a criticism against Weak Scientism, since Weak Scientism is not the view that “science produces knowledge and all the other things we tend to call knowledge are in fact not knowledge at all but something else” (Wills 2018, 18).

Although he deems the quantitative superiority of scientific knowledge over non-scientific knowledge “a tangential point,” and says that he will not spend time on it, Wills (2018, 18) remarks that “A German professor once told [him] that in the first half of the 20th Century there were 40,000 monographs on Franz Kafka alone!” Presumably, Wills’ point is that research output in literature exceeds that of scientific disciplines. Instead of relying on gut feelings and hearsay, Wills should have done the required research in order to determine whether scholarly output in literature really does exceed the research output of scientific disciplines.

If we look at the Scopus database, using the data and visualization tools provided by Scimago Journal & Country Rank, we can see that research output in a natural science like physics and a social science like psychology far exceeds research output in humanistic disciplines like literature and philosophy. On average, psychology has produced 15,000 more publications per year than either literature or philosophy between the years 1999 and 2017. Likewise, on average, physics has produced 54,000 more publications per year than either literature or philosophy between the years 1999 and 2017 (Figure 1). 

Figure 1. Research output in Literature, Philosophy, Physics, and Psychology from 1999 to 2017 (Source: Scimago Journal & Country Rank)

Contrary to what Wills seems to think or what his unnamed German professor may have told him, then, it is not the case that literary scholars produce more work on Shakespeare or Kafka alone than physicists or psychologists produce. The data from the Scopus database show that, on average, it takes literature and philosophy almost two decades to produce what psychology produces in two years or what physics produces in a single year (Mizrahi 2017a, 357-359).

In fact, using JSTOR Data for Research, we can check Wills’ number, as reported to him by an unnamed German professor, to find out that there are 13,666 publications (i.e., journal articles, books, reports, and pamphlets) on Franz Kafka from 1859 to 2018 in the JSTOR database. Clearly, that is not even close to “40,000 monographs on Franz Kafka alone” in the first half of the 20th Century (Wills 2018, 18). By comparison, as of May 22, 2018, the JSTOR database contains more publications on the Standard Model in physics and the theory of conditioning in behavioral psychology than on Franz Kafka or William Shakespeare (Table 1).

Table 1. Search results for ‘Standard Model’, ‘Conditioning’, ‘William Shakespeare’, and ‘Franz Kafka’ in the JSTOR database as a percentage of the total number of publications, n = 12,633,298 (Source: JSTOR Data for Research)

                        Number of Publications    Percentage of JSTOR corpus
Standard Model                         971,968                         7.69%
Conditioning                           121,219                         0.95%
William Shakespeare                     93,700                         0.74%
Franz Kafka                             13,667                         0.10%
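As an illustrative check (mine, not part of the original analysis), each percentage in Table 1 is simply the search-result count divided by the total size of the JSTOR corpus. A minimal Python sketch, using the counts reported in the table:

```python
# Share of the JSTOR corpus (n = 12,633,298) returned by each search term,
# using the publication counts reported in Table 1.
corpus_size = 12_633_298

search_counts = {
    "Standard Model": 971_968,
    "Conditioning": 121_219,
    "William Shakespeare": 93_700,
    "Franz Kafka": 13_667,
}

for term, count in search_counts.items():
    share = 100 * count / corpus_size
    print(f"{term}: {share:.2f}% of the corpus")
```

Small differences from the table’s figures (e.g., for ‘Conditioning’ and ‘Franz Kafka’) come down to rounding conventions.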

Similar results can be obtained from the Google Books Ngram Viewer. Wills thinks that published work on Shakespeare exceeds all published work in other disciplines, for he says that “Shakespeare scholars have all of us beat” (Wills 2018, 18). Yet we can compare published work on Shakespeare (1564-1616) with published work on a contemporary of his from another field of study, namely, Galileo (1564-1642). As we can see from Figure 2, from 1700 to 2000, ‘Galileo’ consistently appears in more books than ‘William Shakespeare’ does.

Figure 2. Google Books results for ‘William Shakespeare’ and ‘Galileo’ from 1700 to 2000 (Source: Google Books Ngram Viewer)

Racking Up the Fallacies

Wills goes on to commit further fallacies when he resorts to what appears to be a fallacious ad hominem attack against me. He asks (rhetorically?), “Is Mr. Mizrahi producing an argument or a mere rationalization of his privilege?” (Wills 2018, 19) It is not clear to me what sort of “privilege” Wills wants to claim that I have, or why he accuses me of colonialism and sexism, since he provides no arguments for these outrageous charges. Moreover, I do not see how any of this is relevant to Weak Scientism. Even if I am somehow “privileged” (whatever Wills means by that), Weak Scientism is either true or false regardless.

After all, I take it that Wills would not doubt his physician’s diagnoses just because he or she is “privileged” for working at a hospital. Whether his physician is “privileged” for working at a hospital has nothing to do with the accuracy of his or her diagnoses. For these reasons, Wills’ ad hominem is fallacious (as opposed to a legitimate ad hominem as a rebuttal to an argument from authority, see Mizrahi 2010). I think that SERRC readers will be better served if we focus on the ideas under discussion, specifically, Weak Scientism, not the people who discuss them.

Speaking of privilege and sexism, however, it might be worth noting that, throughout his paper, Wills refers to me as ‘Mr. Mizrahi’ (rather than ‘Dr. Mizrahi’ or simply ‘Mizrahi’, as is the norm in academic publications), and that he has misspelled my name on more than one occasion (Wills 2018, 18, 22, 24). Studies suggest that addressing female doctors with ‘Ms.’ or ‘Mrs.’ rather than ‘Dr.’ might reveal gender bias (see, e.g., Files et al. 2017). Perhaps forms of address reveal not only gender bias but also ethnic or racial bias when people with non-white or “foreign” names are addressed as Mr. (or Ms.) rather than Dr. (Erlenbusch 2018).

Aside from unsubstantiated claims about the amount of research produced by literary scholars, fallacious appeals to the alleged authority of unnamed German professors, and fallacious ad hominem attacks, does Wills offer any good arguments against Weak Scientism? He spends most of his paper (pages 19-22) trying to show that there is knowledge other than scientific knowledge, such as knowledge produced in the fields of “Law and Music Theory” (Wills 2018, 20). This, however, does nothing at all to undermine Weak Scientism. For, as mentioned above, Weak Scientism is the view that scientific knowledge is superior to non-scientific knowledge, which means that there is non-scientific knowledge; it’s just not as good as scientific knowledge (Mizrahi 2017a, 356).

The Core of His Concept

Wills finally gets to Weak Scientism on the penultimate page of his paper. His main objection against Weak Scientism seems to be that it is not clear to him how scientific knowledge is supposed to be better than non-scientific knowledge. For instance, he asks, “Better in what context? By what standard of value?” (Wills 2018, 23) Earlier he also says that he is not sure what the “certain relevant respect[s]” are in which scientific knowledge is supposed to be superior to non-scientific knowledge (Wills 2018, 18).

Unfortunately, this shows that Wills either has not read the other papers in my exchange with Brown or at least has not read them carefully. For, starting with my first defense of Weak Scientism (2017a), I explain in great detail the ways in which scientific knowledge is better than non-scientific knowledge. Briefly, scientific knowledge is quantitatively better than non-scientific knowledge in terms of research output (i.e., more publications) and research impact (i.e., more citations). Scientific knowledge is qualitatively better than non-scientific knowledge in terms of explanatory, instrumental, and predictive success (Mizrahi 2017a, 364; Mizrahi 2017b, 11).

Wills tries to challenge the claim that scientific knowledge is quantitatively better than non-scientific knowledge by exclaiming, “Does science produce more knowledge that [sic] anything else? Hardly” (Wills 2018, 23). He appeals to Augustine’s idea that one “can produce a potential infinity of knowledge simply by reflecting recursively on the fact of [one’s] own existence” (Wills 2018, 23). In response, I would like to borrow a phrase from Brown (2018, 30): “good luck getting that published!”

Seriously, though, the point is that Weak Scientism is a thesis about academic knowledge or research. In terms of research output, scientific disciplines outperform non-scientific disciplines (see Figure 1 and Table 1 above; Mizrahi 2017a, 357-359; Mizrahi 2018a, 20-21). Besides, just as “recursive processes can extend our knowledge indefinitely in the field of mathematics,” they can extend our knowledge in other fields as well, including scientific fields. That is, one “can produce a potential infinity of knowledge simply by reflecting recursively on the” (Wills 2018, 23) Standard Model in physics or any other scientific theory and/or finding. For this reason, Wills’ objection does nothing at all to undermine Weak Scientism.

Wills (2018, 23) tries to problematize the notions of explanatory, instrumental, and predictive success in an attempt to undermine the claim that scientific knowledge is qualitatively better than non-scientific knowledge in terms of explanatory, instrumental, and predictive success. But it seems that he misunderstands these notions as they apply to the scientism debate.

As far as instrumental success is concerned, Wills (2018, 23) asks, “Does science have (taken in bulk) more instrumental success than other knowledge forms? How would you even count given that craft knowledge has roughly 3 million-year head start?” Even if it is true that “craft knowledge has roughly 3 million-year head start,” it is irrelevant to whether Weak Scientism is true or false. This is because Weak Scientism is a thesis about academic knowledge or research produced by academic fields of study (Mizrahi 2017a, 356; Mizrahi 2017b, 11; Mizrahi 2018a, 12).

Solving the Problem and Explaining the Issue

As far as explanatory success is concerned, Wills (2018, 23) writes, “Is science more successful at explanation? Hardly, if science could solve problems in literature or history then these fields would not even exist.” There are a couple of problems with this objection. First, explaining and problem solving are not the same thing (Mizrahi and Buckwalter 2014). Second, what makes scientific explanations good explanations are the good-making properties that are supposed to make all explanations (both scientific and non-scientific) good explanations, namely, unification, coherence, simplicity, and testability (Mizrahi 2017a, 360-362; Mizrahi 2017b, 19-20; Mizrahi 2018a, 17).

I have already made this point several times in my replies to Brown, which Wills does not cite, namely, that Inference to the Best Explanation (IBE) is used in both scientific and non-scientific contexts (Mizrahi 2017a, 362). That is, “IBE is everywhere” (Mizrahi 2017b, 20). It’s just that scientific IBEs are better than non-scientific IBEs because they exhibit more of (and to a greater extent) the aforementioned properties that make any explanation a good explanation (Mizrahi 2018b).

As far as predictive success is concerned, Wills (2018, 23) asks, “Does science make more true predictions? Again how would you even count given that for millions of years, human beings survived by making hundreds of true predictions daily?” There are a few problems with this objection as well. First, even if it is true that “for millions of years, human beings survived by making hundreds of true predictions daily,” it is irrelevant to whether Weak Scientism is true or false, since Weak Scientism is a thesis about academic knowledge or research produced by academic fields of study (Mizrahi 2017a, 356; Mizrahi 2017b, 11; Mizrahi 2018a, 12).

Second, contrary to what Wills (2018, 24) seems to think, testing predictions in science is not simply a matter of making assertions and then checking to see if they are true. For one thing, a prediction is not simply an assertion, but rather a consequence that follows from a hypothesis plus auxiliary hypotheses (Mizrahi 2015). For another, a prediction needs to be novel such that we would not expect it to be the case except from the vantage point of the theory that we are testing (Mizrahi 2012).

As I have advised Brown (Mizrahi 2018a, 17), I would also advise Wills to consult logic and reasoning textbooks, not because they provide support for the claim that “science is instrumentally successful, explanatory and makes true predictions,” as Wills (2018, 23) erroneously thinks, but because they discuss hypothesis testing in science. For Wills’ (2018, 24) remark about Joyce scholars suggests a failure to understand how hypotheses are tested in science.

Third, like Brown (2017, 49), Wills (2018, 23) admits that, just like science, philosophy is in the explanation business. For Wills (2018, 23) says that, “certainty, instrumental success, utilitarian value, predictive power and explanation all exist elsewhere in ways that are often not directly commensurable with the way they exist in science” (emphasis added). But if distinct fields of study have the same aim (i.e., to explain), then their products (i.e., explanations) can be evaluated with respect to similar criteria, such as unification, coherence, simplicity, and testability (Mizrahi 2017a, 360-362; Mizrahi 2017b, 19-20; Mizrahi 2018a, 17).

In other words, there is no incommensurability here, as Wills seems to think, insofar as both science and philosophy produce explanations and those explanations must exhibit the same good-making properties that make all explanations good explanations (Mizrahi 2018a, 17; 2018b).

“You Passed the Test!”

If Wills (2018, 24) wants to suggest that philosophers should be “testing their assertions in the ways peculiar to their disciplines,” then I would agree. However, “testing” does not simply mean making assertions and then checking to see if they are true, as Wills seems to think. After all, how would one check to see if assertions about theoretical entities are true? To test a hypothesis properly, one must derive a consequence from it (plus auxiliary assumptions) that would be observed only if the hypothesis (plus the auxiliary assumptions) is true.

Observations and/or experimentation would then indicate to one whether the consequence obtains or not (Mizrahi 2012). Of course, some philosophers have been doing just that for some time now (Knobe 2007). For instance, some experimental philosophers test hypotheses about the alleged intuitiveness of philosophical ideas and responses to thought experiments (see, e.g., Kissinger-Knox et al. 2018). I welcome such empirical work in philosophy.

Contrary to what Wills (2018, 19) seems to think, then, my aim is not to antagonize philosophers. Rather, my aim is to reform philosophy. In particular, as I have suggested in my recent reply to Brown (Mizrahi 2018a, 22), I think that philosophy would benefit from adopting not only the experimental methods of the cognitive and social sciences, as experimental philosophers have done, but also the methods of data science, such as data mining and corpus analysis (see, e.g., Ashton and Mizrahi 2018a and 2018b).

Indeed, the XPhi Replicability Project recently published a report on replications of 40 experimental philosophy studies, according to which the original studies “successfully replicated about 70% of the time” (Cova et al. 2018). With such a success rate, one could argue that the empirical revolution in philosophy is well under way (see also Knobe 2015). Resistance is futile!

Contact details: mmizrahi@fit.edu

References

Ashton, Z., and Mizrahi, M. “Intuition Talk is Not Methodologically Cheap: Empirically Testing the ‘Received Wisdom’ About Armchair Philosophy.” Erkenntnis 83, no. 3 (2018a): 595-612.

Ashton, Z., and Mizrahi, M. “Show Me the Argument: Empirically Testing the Armchair Philosophy Picture.” Metaphilosophy 49, no. 1-2 (2018b): 58-70.

Brown, C. M. “Some Objections to Moti Mizrahi’s ‘What’s So Bad About Scientism?’.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 42-54.

Brown, C. M. “Defending Some Objections to Moti Mizrahi’s Arguments for Weak Scientism.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 1-35.

Cova, Florian, Brent Strickland, Angela G Abatista, Aurélien Allard, James Andow, Mario Attie, James Beebe, et al. “Estimating the Reproducibility of Experimental Philosophy.” PsyArXiv, April 21, 2018. doi:10.17605/OSF.IO/SXDAH.

Erlenbusch, V. “Being a Foreigner in Philosophy: A Taxonomy.” Hypatia 33, no. 2 (2018): 307-324.

Files, J. A., Mayer, A. P., Ko, M. G., Friedrich, P., Jenkins, M., Bryan, M. J., Vegunta, S., Wittich, C. M., Lyle, M. A., Melikian, R., Duston, T., Chang, Y. H., Hayes, S. M. “Speaker Introductions at Internal Medicine Grand Rounds: Forms of Address Reveal Gender Bias.” Journal of Women’s Health 26, no. 5 (2017): 413-419.

Google. “Ngram Viewer.” Google Books Ngram Viewer. Accessed on May 21, 2018. https://books.google.com/ngrams.

JSTOR. “Create a Dataset.” JSTOR Data for Research. Accessed on May 22, 2018. https://www.jstor.org/dfr/.

Kissinger-Knox, A., Aragon, P., and Mizrahi, M. “Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness.” Acta Analytica 33, no. 2 (2018): 161-179.

Knobe, J. “Experimental Philosophy.” Philosophy Compass 2, no. 1 (2007): 81-92.

Knobe, J. “Philosophers are Doing Something Different Now: Quantitative Data.” Cognition 135 (2015): 36-38.

Mizrahi, M. “Take My Advice–I Am Not Following It: Ad Hominem Arguments as Legitimate Rebuttals to Appeals to Authority.” Informal Logic 30, no. 4 (2010): 435-456.

Mizrahi, M. “Why the Ultimate Argument for Scientific Realism Ultimately Fails.” Studies in History and Philosophy of Science Part A 43, no. 1 (2012): 132-138.

Mizrahi, M. “Don’t Believe the Hype: Why Should Philosophical Theories Yield to Intuitions?” Teorema: International Journal of Philosophy 34, no. 3 (2015): 141-158.

Mizrahi, M. “What’s So Bad about Scientism?” Social Epistemology 31, no. 4 (2017a): 351-367.

Mizrahi, M. “In Defense of Weak Scientism: A Reply to Brown.” Social Epistemology Review and Reply Collective 6, no. 11 (2017b): 9-22.

Mizrahi, M. “More in Defense of Weak Scientism: Another Reply to Brown.” Social Epistemology Review and Reply Collective 7, no. 4 (2018a): 7-25.

Mizrahi, M. “The ‘Positive Argument’ for Constructive Empiricism and Inference to the Best Explanation.” Journal for General Philosophy of Science (2018b): https://doi.org/10.1007/s10838-018-9414-3.

Mizrahi, M. and Buckwalter, W. “The Role of Justification in the Ordinary Concept of Scientific Progress.” Journal for General Philosophy of Science 45, no. 1 (2014): 151-166.

Scimago Journal & Country Rank. “Subject Bubble Chart.” SJR: Scimago Journal & Country Rank. Accessed on May 20, 2018. http://www.scimagojr.com/mapgen.php?maptype=bc&country=US&y=citd.

Wills, B. “Why Mizrahi Needs to Replace Weak Scientism With an Even Weaker Scientism.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 18-24.

Author Information: Kenneth R. Westphal, Boğaziçi Üniversitesi, İstanbul, westphal.k.r@gmail.com

Westphal, Kenneth R. “Higher Education & Academic Administration: Current Crises Long Since Foretold.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 41-47.

The official SERRC publication pdf of the article gives specific page references for formal bibliographical citation. However, the author has also provided a pdf of his own, laid out specifically for the presentation of this manifesto for the future of research publication and academic exchange of ideas. We encourage you to download Dr. Westphal’s own file. Shortlink: https://wp.me/p1Bfg0-3Tb

* * *

The current crises in education are indeed acute, though they have been long in the making, with clear analysis and evidence of their development and of the pending problems reaching back 150 years, as the following concise chronological bibliography shows:

Mill, John Stuart, 1867. ‘Inaugural Address Delivered to the University of St. Andrews’, 1 Feb. 1867; rpt. in: J.M. Robson, gen. ed., The Collected Works of John Stuart Mill, 33 vols. (Toronto: University of Toronto Press, 1963–91), 21:217–257.

Ahrens, Heinrich, 1870. Naturrecht oder Philosophie des Rechts und des Staates, 2 vols. (Wien, C. Gerold’s Sohn), „Vorrede zur sechsten Auflage“, S. v–x.

Cauer, Paul, 1890. Staat und Erziehung. Schulpolitische Bedenken. Kiel & Leipzig, Lipsius & Fischer.

Cauer, Paul, 1906. Sieben Jahre im Kampf um die Schulreform. Gesammelte Aufsätze. Berlin, Weidmann.

Hinneberg, Paul, ed., 1906. Allgemeine Grundlage der Kultur der Gegenwart. Leipzig, Teubner.

Cattell, J. McKeen, 1913. University Control. New York, The Science Press.

Veblen, Thorstein, 1918. The Higher Learning in America: A Memorandum on the Conduct of Universities by Business Men. New York, B.W. Huebsch.

Ortega y Gasset, José, 1930. Misión de la Universidad. Madrid, Revista de Occidente; rpt. in: idem, OC 4:313–353; tr. H.L. Nostrand, Mission of the University (Oxford: Routledge, 1946).

Eisenhower, Milton S., et al., 1959. The Efficiency of Freedom: Report of the Committee on Government and Higher Education. Baltimore, Johns Hopkins University Press.

Snow, C.P., 1964. The Two Cultures, 2nd rev. ed. Cambridge, Cambridge University Press.

Rourke, Francis E., and Glenn E. Brooks, 1966. The Managerial Revolution in Higher Education. Baltimore, Johns Hopkins University Press.

Byrnes, James C., and A. Dale Tussing, 1971. ‘The Financial Crisis in Higher Education: Past, Present, and Future’. Educational Policy Research Center, Syracuse University Research Corp.; Washington, D.C., Office of Education (DHEW); (ED 061 896; HE 002 970).

Green, Thomas, 1980. Predicting the Behavior of the Educational System. Syracuse, NY, Syracuse University Press.

Schwanitz, Dietrich, 1999. Bildung. Alles, was man wissen muss. Frankfurt am Main, Eichborn.

Kempter, Klaus, and Peter Meusburger, eds., 2006. Bildung und Wissensgesellschaft (Heidelberger Jahrbücher 49). Berlin, Springer.

The British Academy, 2008. Punching our Weight: The Humanities and Social Sciences in Public Policy Making. London, The British Academy; http://www.britac.ac.uk.

Head, Simon, ‘The Grim Threat to British Universities’. The New York Review of Books, 13. Jan. 2011; https://www.readability.com/articles/n9pjbxmz.

Thomas, Keith, ‘Universities under Attack’. The London Review of Books, Online only • 28 Nov. 2011; (The author is a Fellow of All Souls College, Oxford, and former President of the British Academy); http://www.lrb.co.uk/2011/11/28/keith-thomas/universities-under-attack.

Hansen, Hal, 2011. ‘Rethinking Certification Theory and the Educational Development of the United States and Germany’. Research in Social Stratification and Mobility 29:31–55.

Ginsberg, Benjamin, 2011. The Fall of the Faculty. Oxford University Press.

Watson, Don, ‘A New Dusk’. The Monthly (Australia), August 2012, pp. 10–14; http://www.themonthly.com.au/comment-new-dusk-don-watson-5859.

Commission on the Humanities & Social Sciences, 2013. The Heart of the Matter: The Humanities and Social Sciences for a vibrant, competitive, and secure nation. Cambridge, Mass., American Academy of Arts and Sciences; http://www.amacad.org.

Schekman, Randy, ‘How journals like Nature, Cell and Science are damaging science’. The Guardian, Mon 9 Dec 2013; http://www.theguardian.com/commentisfree/2013/dec/09/how-journalsnature-science-cell-damage-science.

Motroshilova, Nelly, 2013. [Real Factors of Scientific Activity and Citation Count; Russian.] ‘РЕАЛЬНЫЕ ФАКТОРЫ НАУЧНО-ИССЛЕДОВАТЕЛЬСКОГО ТРУДА И ИЗМЕРЕНИЯ ЦИТИРОВАНИЯ’. Проблемы оценки эффективности в конкретных областях науки, 453–475. УДК 001.38 + 519.24; ББК 78.34.

Ferrini, Cinzia, 2015. ‘Research “Values” in the Humanities: Funding Policies, Evaluation, and Cultural Resources. Some Introductory Remarks’. Humanities 4:42–67; DOI: 10.3390/h4010042.

O’Neill, Onora, 2015. ‘Integrity and Quality in Universities: Accountability, Excellence and Success’. Humanities 4:109–117; DOI: 10.3390/h4010109.

Scott, Peter, 2015. ‘Clashing Concepts and Methods: Assessing Excellence in the Humanities and Social Sciences’. Humanities 4:118–130; DOI: 10.3390/h4010118.

Halffman, Willem, and Hans Radder, 2015. ‘The Academic Manifesto: From an Occupied to a Public University’. Minerva 53.2:165–187 (PMC4468800); DOI: 10.1007/s11024-015-9270-9.

Altbach, Philip G., Georgiana Mihut and Jamil Salmi, 2016. ‘Sage Advice: International Advisory Councils at Tertiary Education Institutions’. CIHE Perspectives 1; Boston, Mass., Boston College Center for International Higher Education; World Bank Group; http://www.bc.edu/cihe.

Curren, Randall, 2016. ‘Green’s Predicting Thirty-Five Years On’. In: N. Levinson, ed., Philosophy of Education 2016 (Urbana, Ill.: PES, 2017), 000–000.

The CENTRAL AIMS OF EDUCATION, especially higher education, I explicate and defend in:

Westphal, Kenneth R., 2012. ‘Norm Acquisition, Rational Judgment & Moral Particularism’. Theory & Research in Education 10.1:3–25; DOI: 10.1177/1477878512437477.

———, 2016. ‘Back to the 3 R’s: Rights, Responsibilities & Reasoning’. SATS – Northern European Journal of Philosophy 17.1:21–60; DOI: 10.1515/sats-2016-0008.

On CITIZENSHIP EDUCATION for survival, see:

Randall Curren and Ellen Metzger, 2017. Living Well Now and in the Future: Why Sustainability Matters. Cambridge, Mass., MIT Press.

Randall Curren and Charles Dorn, forthcoming. Patriotic Education in a Global Age. Chicago, University of Chicago Press.

Though the latter title begins nationally, addressing proper patriotism, their thinking, analysis and recommendations are international and cosmopolitan; they write for a very global age in which we are all involved, however (un)wittingly, however (un)willingly, however (un)wisely.

On the necessity of liberal arts education also for technical disciplines, see:

Carnegie Mellon University, College of Engineering, General Education Requirements for [Graduating] Classes 2016 and Later: https://engineering.cmu.edu/education/undergraduate-programs/curriculum/general-education/index.html

On ‘BIBLIOMETRICS’ and journal ‘impact factor’, see:

Brembs, Björn, Katherine Button and Marcus Munafò, 2013. ‘Deep impact: unintended consequences of journal rank’. Frontiers in Human Neuroscience 7.291:1–12; DOI: 10.3389/fnhum.2013.00291.

Moustafa, Khaled, 2015. ‘The Disaster of the Impact Factor’. Science and Engineering Ethics 21: 139–142; DOI: 10.1007/s11948-014-9517-0.

PLoS Medicine Editorial, 2006. ‘The impact factor game. It is time to find a better way to assess the scientific literature’. PLoS Medicine 3.6, e291.

Sadeghi, Ramin, and Alireza Sarraf Shirazi, 2012. ‘Comparison between Impact factor, SCImago journal rank indicator and Eigenfactor score of nuclear medicine journals’. Nuclear Medicine Review 15.2:132–136; DOI: 10.5603/NMR.2011.00022.

There simply is no substitute for informed, considered judgment. All the attempts to circumvent, replace or subvert proper judgments and proper judgment raise the question: who benefits from all the speed-up, distraction and over-load, and how do they benefit? And conversely: who loses out from all the speed-up, distraction and over-load, and how so?

P.S.: AHRENS’ (1870, v–x) admonition that we concern ourselves comprehensively with society as a whole, and with international and inter-cultural relations, rather than only with the particular tasks of our own social faction or group, is not escaped by simply rejecting his perhaps religious conception of our ‘entire divine-human order of life and culture’ (op. cit., p. ix). His admonition holds without any mitigation already in view of our tendency to give our own interests, or those of our faction, priority over the common good, without reflecting that the common good also includes our own share in it. The usual emphasis on narrowly conceived instrumental rationality condemns us to mutual, even if unintended, harm, at the very least through the tragedy of the commons.

* * *

Herrad von LANDSBERG, ‘Septem artes liberales’, Hortus deliciarum (1180). http://www.plosin.com/work/Hortus.html

 

Philosophy, the Queen, sits in the center of the circle. The three heads extending from her crown represent Ethics, Logic and Physics, the three parts of the teaching of philosophy. The streamer held by Philosophy reads: All wisdom comes from God; only the wise can achieve what they desire. Below Philosophy, seated at desks, are Socrates and Plato. The texts which surround them state that they taught first ethics, then physics, then rhetoric; that they were wise teachers; and that they inquired into the nature of all things.

From Philosophy emerge seven streams, three on the right and four on the left. According to the text these are the seven liberal arts, inspired by the Holy Spirit: grammar, rhetoric, dialectic, music, arithmetic, geometry, and astronomy. The ring containing the inner circle reads: I, Godlike Philosophy, control all things with wisdom; I lay out seven arts which are subordinate to me. Arrayed around the circle are the liberal arts. Three correspond to the rivers which emerge from Philosophy on the right and are concerned with language and letters: grammar, rhetoric, and dialectic. Together they comprise the trivium. The four others form the quadrivium, arts which are concerned with the various kinds of harmony: music, arithmetic, geometry, and astronomy.

Each of the seven arts holds something symbolic, and each is accompanied by a text displayed on the arch above it. Grammar (12:00) holds a book and a whip. The text reads: Through me all can learn what are the words, the syllables, and the letters.

Rhetoric (2:00) holds a tablet and stylus. The text reads: Thanks to me, proud speaker, your speeches will be able to take strength.

Dialectic (4:00) points with one hand and holds a barking dog’s head in the other. The text reads: My arguments are followed with speed, just like the dog’s barking.

Music (5:00) holds a harp, and other instruments are nearby. The text reads: I teach my art using a variety of instruments.

Arithmetic (7:00) holds a cord with threaded beads, like a rudimentary abacus. The text reads: I base myself on the numbers and show the proportions between them.

Geometry (9:00) holds a staff and compass. The text reads: It is with exactness that I survey the ground.

Astronomy (11:00) points heavenward and holds in hand a magnifying lens or mirror. The text reads: I hold the names of the celestial bodies and predict the future. The large ring around the whole scene contains four aphorisms:

What it discovers is remembered;

Philosophy investigates the secrets of the elements and all things;

Philosophy teaches arts by seven branches;

It puts it in writing, in order to convey it to the students.

Below the circle are four men seated at desks, poets or magicians, outside the pale and beyond the influence of Philosophy. According to the text they are guided and taught by impure spirits, and what they produce is only tales or fables, frivolous poetry, or magic spells. Notice the black birds speaking to them (the antithesis of the white dove, symbol of the Holy Spirit).

Some Observations on the Current State of Research Evaluation in Philosophy

K.R. WESTPHAL (2015)

Although many institutions, whether universities or government ministries, have now in effect mandated publication in ‘listed’ academic journals, such listings by (e.g.) Thomson Reuters are only a subscription service, nothing more, altogether regardless of academic standards or scholarly calibre. Significant publications are those which pass stringent peer review by relevant experts. Unfortunately, the trappings of such procedures – including ‘international’ editorial offices – are all too easy to imitate or dissemble. Furthermore, due to declining standards in graduate training in philosophy (across the Occident), peer reviewing even at reputable journals and presses is deteriorating significantly.

I know that there are ‘listed’ journals publishing ‘research’ papers I would not accept from an undergraduate student. I know that there are ‘international’ journals which publish materials not deserving the slightest notice. I know there are excellent journals and presses – in particular: by the very best German publishers – which are not ‘listed’ because those publishers simply do not need those listings, nor their expense. I know that there are highly regarded presses which publish very many good, even excellent items, but also publish spates of mediocre books to make money, and have been doing so for decades. These assertions I can document in detail, if ever details be of interest.

The increasingly common procedure to ‘rank’ individual research publications by the purported ‘rank’ of their venue – their press or journal – is in principle and in practice fallacious. There simply is no valid inference from any empirically established ‘curve’ to the putative value of any single (equally putative) ‘data point’. Additionally, no press or journal consistently publishes research falling only within one well-defined calibre; there are excellent pieces of research published in unassuming venues, and there is too much mediocre publication by purportedly leading venues.

I also know that constrictions in funding have led to ‘streamlining’ graduate training within the field of philosophy (and surmise that this is not at all unique to philosophy), so that less time is spent in graduate studies. Additionally, over-specialisation within the field of philosophy has accelerated the production of mutually irrelevant bits of ‘research’, each restricted to its own narrow orthodoxy, coupled with a severe decline in methodological sophistication and indeed basic research skills and procedures. The declining calibre of graduate training has, inevitably, had an enormous adverse effect on the calibre of ‘professional’ refereeing for publication, both by journals and by presses.

Now that we have the technical resources for purely electronic publication, at an enormous savings and economy of distribution in comparison to print media, many publishers are doing their utmost to keep their print media profitable, or to make exorbitant profits from much less expensive electronic publication. Both tendencies are countered, to an extent, by newly established, typically open-access electronic journals. These developments are very welcome and important, and many of these new e-journals are by international standards high-calibre operations. Nevertheless, it will take time for ‘reputation’ to accrue to genuinely deserving e-journals, and (one hopes) to shake out the mediocre or dishonest pretenders.

One final point which merits emphasis is that the notion of ‘monoglot’ scholarship only arose ca. 1950, primarily amongst Anglophones, and was sanctioned by law in only one region (the former Soviet Union). Thirty years ago, scholars working on Ancient Greek philosophy were fluent in the main modern European languages and kept abreast of research published in Greek, German, French and English. Now my German colleagues note that often a German monograph appears on a neglected topic in Ancient Greek philosophy, only to be neglected in turn by an English book on the same topic published a decade later. The pitfalls of ‘Eurenglish’ (e.g. in Brussels) I shall not detail; we simply must return to teaching, facilitating and expecting mastery of multiple languages.

For these and many other reasons, these are very difficult times for scholarship and for the academy. Accordingly, I am all the more committed to maintaining academic excellence. In this connection and in these regards, I wish to underscore that there simply is NO substitute for the expert assessment of individual pieces of research, whether articles, monographs or collections.

Contact details: westphal.k.r@gmail.com

[1] Randy Schekman is Professor of Biochemistry at the University of California, Berkeley; he, James Rothman and Thomas Südhof were jointly awarded the 2013 Nobel Prize in Physiology or Medicine.

[2] Editor’s Note – Ironically and appropriately, given the topic of this article, our Digital Editor is unable to render Cyrillic text on any of the computers in the SERRC office in Toronto. These technical difficulties constitute another reason to read Dr. Westphal’s original pdf copy.

[3] Ferrini (2015), O’Neill (2015) and Scott (2015) appear in a special issue, titled per Ferrini’s editorial introduction; Humanities is sponsored by the Academia Europaea, now published with open access by MDPI (Multidisciplinary Digital Publishing Institute, Basel); previously published by Cambridge University Press.

[4] Published by the US National Library of Medicine, National Institutes of Health: National Center for Biotechnology Information (NCBI).

Author Information: Saana Jukola and Henrik Roeland Visser, Bielefeld University, sjukola@uni-bielefeld.de and rvisser@uni-bielefeld.de.

Jukola, Saana; and Henrik Roeland Visser. “On ‘Prediction Markets for Science,’ A Reply to Thicke.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 1-5.

The pdf of the article includes specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Q9

Please refer to:

Image by The Bees, via Flickr

 

In his paper, Michael Thicke critically evaluates the potential of using prediction markets to answer scientific questions. In prediction markets, people trade contracts that pay out if a certain prediction comes true or not. If such a market functions efficiently and thus incorporates the information of all market participants, the resulting market price provides a valuable indication of the likelihood that the prediction comes true.

Prediction markets have a variety of potential applications in science; they could provide a reliable measure of how large the consensus on a controversial finding truly is, or tell us how likely a research project is to deliver the promised results if it is granted the required funding. Prediction markets could thus serve the same function as peer review or consensus measures.

Thicke identifies two potential obstacles to the use of prediction markets in science: the risk of inaccurate results, and the risk of potentially harmful unintended consequences for the organization and incentive structure of science. We largely agree with the worry about inaccuracy. In this comment we will therefore only discuss the second objection: it is unclear to us what really follows from the risk of harmful unintended consequences. Furthermore, we consider another worry one might have about the use of prediction markets in science, which Thicke does not discuss: peer review is not only a quality-control measure to uphold scientific standards, but also serves a deliberative function, both within science and in legitimizing the use of scientific knowledge in politics.

Reasoning about imperfect methods

Prediction markets work best for questions for which a clearly identifiable answer is produced in the not too distant future. Scientific research, on the other hand, often produces very unexpected results on an uncertain time scale. As a result, there is no objective way of choosing when and how to evaluate predictions about scientific research. Thicke identifies two ways in which this can create harmful unintended effects on the organization of science.

Firstly, projects that have clear short-term answers may erroneously be regarded as epistemically superior to basic research which might have better long-term potential. Secondly, science prediction markets create a financial incentive to steer resources towards research with easily identifiable short-term consequences, even if more basic research would have a better epistemic pay-off in the long-run.

Based on their low expected accuracy and their potential for harmful effects on the organization of science, Thicke concludes that science prediction markets might be a worse ‘cure’ than the ‘disease’ of bias in peer review and consensus measures. We are skeptical of this conclusion for the same reasons as those offered by Robin Hanson. While the worry about the promise of science prediction markets is justified, it is unclear how this makes them worse than the traditional alternatives.

Nevertheless, Thicke’s conclusion points in the right direction: instead of looking for a more perfect method, which may not become available in the foreseeable future, we need to judge which of the imperfect methods is more palatable to us. Doing that would, however, require a more sophisticated evaluation of the strengths and weaknesses of the different available methods and of how to trade those off, which goes beyond the scope of Thicke’s paper.

Deliberation in Science

An alternative worry, which Thicke does not elaborate on, is that peer review is not only expected to determine accurately the quality of submissions and to conclude which scientific work deserves to be funded or published; it is also valued for its deliberative nature, which allows it to provide reasons to those affected by decisions about research funding or about the use of scientific knowledge in politics. Given that prediction markets function through market forces rather than deliberative procedure, and produce probabilistic predictions rather than qualitative explanations, this might be (another) aspect on which the traditional alternative of peer review outperforms science prediction markets.

Within science, peer review serves two different purposes. First, it functions as a gatekeeping mechanism for deciding which projects deserve to be carried out or disseminated – an aim of peer review is to make sure that good work is being funded or published and undeserving projects are rejected. Second, peer review is often taken to embody the critical mechanism that is central to the scientific method. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. At least in an ideal case, authors know why their manuscripts were rejected or accepted after receiving peer review reports and can take the feedback into consideration in their future work.

In this sense, peer review represents an intersubjective mechanism that guards against the biases and blind spots that individual researchers may have. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results.[1] Such critical interaction thus ensures that a wide variety of perspectives is represented in science, which is both epistemically and socially valuable. If prediction markets were to replace peer review, could they serve this second, critical, function? It seems that the answer is No. Prediction markets do not provide reasons in the way that peer review does, and if the only information that is available are probabilistic predictions, something essential to science is lost.

To illustrate this point in a more intuitive way: imagine that instead of writing this comment in which we review Thicke’s paper, there is a prediction market on which we, Thicke and other authors would invest in bets regarding the likelihood of science prediction markets being an adequate replacement of the traditional method of peer review. From the resulting price signal we would infer whether predictions markets are indeed an adequate replacement or not. Would that allow for the same kind of interaction in which we now engage with Thicke and others by writing this comment? At least intuitively, it seems to us that the answer is No.

Deliberation About Science in Politics

Such a lack of reasons that justify why certain views have been accepted or rejected is not only a problem for researchers who strive towards getting their work published, but could also be detrimental to public trust in science. When scientists give answers to questions that are politically or socially sensitive, or when controversial science-based recommendations are given, it is important to explain the underlying reasons to ensure that those affected can – at least try to – understand them.

Only if people are offered reasons for decisions that affect them can they effectively contest such decisions. This is why many political theorists regard the ability of citizens to demand an explanation, and the corresponding duty of decision-makers to be responsive to such demands, as a necessary element of legitimate collective decisions.[2] Philosophers of science like Philip Kitcher[3] rely on very similar arguments to explain the importance of deliberative norms in justifying scientific conclusions and the use of scientific knowledge in politics.

Science prediction markets do not provide substantive reasons for their outcome. They only provide a procedural argument, which guarantees the quality of their outcome when certain conditions are fulfilled, such as the presence of a well-functioning market. Of course, one of those conditions is also that at least some of the market participants possess and rely on correct information to make their investment decisions, but that information is hidden in the price signal. This is especially problematic with respect to the kind of high-impact research that Thicke focuses on, i.e. climate change. There, the ability to justify why a certain theory or prediction is accepted as reliable is at least as important for the public discourse as it is to have precise and accurate quantitative estimates.

Besides the legitimacy argument, there is another reason why quantitative predictions alone do not suffice. Policy-oriented sciences like climate science or economics are also expected to judge the effect and effectiveness of policy interventions. But in complex systems like the climate or the economy, there are many different plausible mechanisms simultaneously at play, which could justify competing policy interventions. Given the long-lasting controversies surrounding such policy-oriented sciences, different political camps have established preferences for particular theoretical interpretations that justify their desired policy interventions.

If scientists are to have any chance of resolving such controversies, they must therefore not only produce accurate predictions, but also communicate which of the possible underlying mechanisms they think best explains the predicted phenomena. It seems prediction markets alone could not do this. It might be useful to think of this particular problem as the ‘underdetermination of policy intervention by quantitative prediction’.

Science prediction markets as replacement or addition?

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as an addition or even a complement to them. Thicke provides examples of both: in the case of peer review for publication or funding decisions, prediction markets might replace traditional methods. But in the case of resolving controversies, for instance concerning climate change, a prediction market aggregates and evaluates already existing pieces of knowledge and peer review. In such a case the information that underlies the trading behavior on the prediction market would still be available and could be revisited if people distrust the reliability of the prediction market’s result.

We could also imagine cases in which science prediction markets are used to select the right answer, or at least to narrow down the range of alternatives, after which a qualitative report is produced that justifies the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack the power to discriminate among alternative predictions.

Conclusion

All in all, we are sympathetic to Michael Thicke’s critical analysis of the potential of prediction markets in science and share his skepticism. However, we point out another issue that speaks against prediction markets and in favor of peer review: the giving and receiving of reasons for why a certain view should be accepted or rejected. Given that the strengths and weaknesses of these methods fall on different dimensions (prediction markets may fare better on accuracy, while in the ideal case peer review helps the involved parties understand the grounds on which a position should be approved), it is important to reflect on the appropriate aims in a particular scientific and policy context before deciding which method should be used to evaluate research.

References

Hanson, Robin. “Compare Institutions To Institutions, Not To Perfection,” Overcoming Bias (blog). August 5, 2017. Retrieved from: http://www.overcomingbias.com/2017/08/compare-institutions-to-institutions-not-to-perfection.html

Hanson, Robin. “Markets That Explain, Via Markets To Pick A Best,” Overcoming Bias (blog), October 14, 2017 http://www.overcomingbias.com/2017/10/markets-that-explain-via-markets-to-pick-a-best.html

[1] See, e.g., Karl Popper, The Open Society and Its Enemies. Vol 2. (Routledge, 1966) or Helen Longino, Science as Social Knowledge. Values and Objectivity in Scientific Inquiry (Princeton University Press, 1990).

[2] See Jürgen Habermas, A Theory of Communicative Action, Vols1 and 2. (Polity Press, 1984 & 1989) & Philip Pettit, “Deliberative democracy and the discursive dilemma.” Philosophical Issues, vol. 11, pp. 268-299, 2001.

[3] Philip Kitcher, Science, Truth, and Democracy (Oxford University Press, 2001) & Philip Kitcher, Science in a democratic society (Prometheus Books, 2011).

Author Information: Adam Riggio, McMaster University, adamriggio@gmail.com; Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk

Shortlink: http://wp.me/p1Bfg0-22s

Editor’s Note:

Adam Riggio

Before I start my critical points regarding Chapter Five of Knowledge: The Philosophical Quest in History, I want to say how much I appreciate the opportunity for this dialogue. The institutional structure of research universities tends to prevent prestigious research chairs from engaging in one-on-one debate with unaffiliated scholar-writers like me, especially since I can become highly and fundamentally critical of some of your perspectives and priorities.