
Author Information: Saana Jukola and Henrik Roeland Visser, Bielefeld University, sjukola@uni-bielefeld.de and rvisser@uni-bielefeld.de.

Jukola, Saana, and Henrik Roeland Visser. “On ‘Prediction Markets for Science,’ A Reply to Thicke.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 1-5.

The pdf of the article includes specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Q9


In his paper, Michael Thicke critically evaluates the potential of using prediction markets to answer scientific questions. In prediction markets, people trade contracts that pay out if a certain prediction comes true. If such a market functions efficiently, and thus incorporates the information of all market participants, the resulting market price provides a valuable indication of how likely the prediction is to come true.

Prediction markets have a variety of potential applications in science; they could provide a reliable measure of how large the consensus on a controversial finding truly is, or tell us how likely a research project is to deliver the promised results if it is granted the required funding. Prediction markets could thus serve the same function as peer review or consensus measures.

Thicke identifies two potential obstacles to the use of prediction markets in science: the risk of inaccurate results, and the risk of potentially harmful unintended consequences for the organization and incentive structure of science. We largely share the worry about inaccuracy. In this comment we will therefore only discuss the second objection: it is unclear to us what really follows from the risk of harmful unintended consequences. Furthermore, we consider another worry one might have about the use of prediction markets in science, which Thicke does not discuss: peer review is not only a quality control measure for upholding scientific standards, but also serves a deliberative function, both within science and in legitimizing the use of scientific knowledge in politics.

Reasoning about imperfect methods

Prediction markets work best for questions to which a clearly identifiable answer will be produced in the not-too-distant future. Scientific research, on the other hand, often produces very unexpected results on an uncertain time scale. As a result, there is no objective way of choosing when and how to evaluate predictions on scientific research. Thicke identifies two ways in which this can create harmful unintended effects on the organization of science.

Firstly, projects that have clear short-term answers may erroneously be regarded as epistemically superior to basic research that has better long-term potential. Secondly, science prediction markets create a financial incentive to steer resources towards research with easily identifiable short-term consequences, even if more basic research would have a better epistemic pay-off in the long run.

Based on their low expected accuracy and the potential of harmful effects on the organization of science, Thicke concludes that science prediction markets might be a worse ‘cure’ than the ‘disease’ of bias in peer review and consensus measures. We are skeptical of this conclusion for the same reasons as offered by Robin Hanson. While the worry about the promise of science prediction markets is justified, it is unclear how this makes them worse than the traditional alternatives.

Nevertheless, Thicke’s conclusion points in the right direction: instead of looking for a more perfect method, which may not become available in the foreseeable future, we need to judge which of the imperfect methods is more palatable to us. Doing that would, however, require a more sophisticated evaluation of the strengths and weaknesses of the different available methods, and of how to trade those off, which goes beyond the scope of Thicke’s paper.

Deliberation in Science

An alternative worry, which Thicke does not elaborate on, concerns the deliberative nature of peer review. Peer review is not only expected to accurately determine the quality of submissions and to decide which scientific work deserves to be funded or published; it is also valued because its deliberative procedure provides reasons to those affected by decisions about research funding or the use of scientific knowledge in politics. Given that prediction markets function through market forces rather than deliberative procedure, and produce probabilistic predictions rather than qualitative explanations, this may be (another) respect in which the traditional alternative of peer review outperforms science prediction markets.

Within science, peer review serves two different purposes. First, it functions as a gatekeeping mechanism for deciding which projects deserve to be carried out or disseminated – an aim of peer review is to make sure that good work is being funded or published and undeserving projects are rejected. Second, peer review is often taken to embody the critical mechanism that is central to the scientific method. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. At least in an ideal case, authors know why their manuscripts were rejected or accepted after receiving peer review reports and can take the feedback into consideration in their future work.

In this sense, peer review represents an intersubjective mechanism that guards against the biases and blind spots that individual researchers may have. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results.[1] Such critical interaction thus ensures that a wide variety of perspectives is represented in science, which is both epistemically and socially valuable. If prediction markets were to replace peer review, could they serve this second, critical, function? It seems that the answer is No. Prediction markets do not provide reasons in the way that peer review does, and if the only information available is probabilistic predictions, something essential to science is lost.

To illustrate this point in a more intuitive way: imagine that instead of writing this comment in which we review Thicke’s paper, there were a prediction market on which we, Thicke and other authors would invest in bets regarding the likelihood of science prediction markets being an adequate replacement for the traditional method of peer review. From the resulting price signal we would infer whether prediction markets are indeed an adequate replacement or not. Would that allow for the same kind of interaction in which we now engage with Thicke and others by writing this comment? At least intuitively, it seems to us that the answer is No.

Deliberation About Science in Politics

Such a lack of reasons that justify why certain views have been accepted or rejected is not only a problem for researchers who strive towards getting their work published, but could also be detrimental to public trust in science. When scientists give answers to questions that are politically or socially sensitive, or when controversial science-based recommendations are given, it is important to explain the underlying reasons to ensure that those affected can – at least try to – understand them.

Only if people are offered reasons for decisions that affect them can they effectively contest such decisions. This is why many political theorists regard the ability of citizens to demand an explanation, and the corresponding duty of decision-makers to be responsive to such demands, as a necessary element of legitimate collective decisions.[2] Philosophers of science like Philip Kitcher[3] rely on very similar arguments to explain the importance of deliberative norms in justifying scientific conclusions and the use of scientific knowledge in politics.

Science prediction markets do not provide substantive reasons for their outcomes. They provide only a procedural argument, which guarantees the quality of the outcome when certain conditions are fulfilled, such as the presence of a well-functioning market. Of course, one of those conditions is that at least some of the market participants possess and rely on correct information in making their investment decisions, but that information remains hidden in the price signal. This is especially problematic with respect to the kind of high-impact research that Thicke focuses on, such as climate change research. There, the ability to justify why a certain theory or prediction is accepted as reliable is at least as important for public discourse as having precise and accurate quantitative estimates.

Besides the legitimacy argument, there is another reason why quantitative predictions alone do not suffice. Policy-oriented sciences like climate science or economics are also expected to judge the effect and effectiveness of policy interventions. But in complex systems like the climate or the economy, there are many different plausible mechanisms simultaneously at play, which could justify competing policy interventions. Given the long-lasting controversies surrounding such policy-oriented sciences, different political camps have established preferences for particular theoretical interpretations that justify their desired policy interventions.

If scientists are to have any chance of resolving such controversies, they must therefore not only produce accurate predictions, but also communicate which of the possible underlying mechanisms they think best explains the predicted phenomena. It seems prediction markets alone could not do this. It might be useful to think of this particular problem as the ‘underdetermination of policy intervention by quantitative prediction’.

Science prediction markets as replacement or addition?

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as an addition or even a complement to them. Thicke provides examples of both: in the case of peer review for publication or funding decisions, prediction markets might replace traditional methods. But in the case of resolving controversies, for instance concerning climate change, a prediction market would aggregate and evaluate already existing pieces of knowledge and peer review. In such a case the information that underlies the trading behavior on the prediction market would still be available, and could be revisited if people distrust the reliability of the prediction market’s result.

We could also imagine cases in which science prediction markets are used to select the right answer, or at least to narrow down the range of alternatives, after which a qualitative report is produced that justifies the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack the power to discriminate among alternative predictions.

Conclusion

All in all, we are sympathetic to Michael Thicke’s critical analysis of the potential of prediction markets in science and share his skepticism. However, we point out another issue that speaks against prediction markets and in favor of peer review: giving and receiving reasons for why a certain view should be accepted or rejected. Given that the strengths and weaknesses of these methods fall on different dimensions (prediction markets may fare better in accuracy, while in an ideal case peer review can help the involved parties understand the grounds on which a position should be approved), it is important to reflect on the appropriate aims in particular scientific and policy contexts before deciding which method should be used to evaluate research.

References

Hanson, Robin. “Compare Institutions To Institutions, Not To Perfection,” Overcoming Bias (blog). August 5, 2017. Retrieved from: http://www.overcomingbias.com/2017/08/compare-institutions-to-institutions-not-to-perfection.html

Hanson, Robin. “Markets That Explain, Via Markets To Pick A Best,” Overcoming Bias (blog), October 14, 2017 http://www.overcomingbias.com/2017/10/markets-that-explain-via-markets-to-pick-a-best.html

[1] See, e.g., Karl Popper, The Open Society and Its Enemies. Vol 2. (Routledge, 1966) or Helen Longino, Science as Social Knowledge. Values and Objectivity in Scientific Inquiry (Princeton University Press, 1990).

[2] See Jürgen Habermas, The Theory of Communicative Action, Vols. 1 and 2 (Polity Press, 1984 & 1989) and Philip Pettit, “Deliberative Democracy and the Discursive Dilemma.” Philosophical Issues 11 (2001): 268-299.

[3] Philip Kitcher, Science, Truth, and Democracy (Oxford University Press, 2001) and Philip Kitcher, Science in a Democratic Society (Prometheus Books, 2011).

Author Information: Inkeri Koskinen, University of Helsinki, inkeri.koskinen@helsinki.fi

Koskinen, Inkeri. “Not-So-Well-Designed Scientific Communities.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 54-58.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3PB


The idea of hybrid concepts, simultaneously both epistemic and moral, has recently attracted the interest of philosophers, especially since the notion of epistemic injustice (Fricker 2007) became the central topic of a lively and growing discussion. In her article, Kristina Rolin adopts the idea of such hybridity, and investigates the possibility of understanding epistemic responsibility as having both epistemic and moral qualities.

Rolin argues that scientists belonging to epistemically well-designed communities are united by mutual epistemic responsibilities, and that these responsibilities ought to be understood in a specific way. Epistemically responsible behaviour towards fellow researchers—such as adopting a defense commitment with respect to one’s knowledge claims, or offering constructive criticism to colleagues—would not just be an epistemic duty, but also a moral one; one that shows moral respect for other human beings in their capacity as knowers.

However, as Rolin focuses on “well-designed scientific communities”, I fear that she fails to notice an implication of her own argument. Current trends in science policy encourage researchers in many fields to take up high-impact, solution-oriented, multi-, inter-, and transdisciplinary projects. If one can talk about “designing scientific communities” in this context, the design is clearly meant to challenge the existing division of epistemic labour in academia, and to destabilise speciality communities. If we follow Rolin’s own argumentation, understanding epistemic responsibility as a moral duty can thus become a surprisingly heavy burden for an individual researcher in such a situation.

Epistemic Cosmopolitanism

According to Rolin, accounts of epistemic responsibility that appeal to self-interested or epistemic motives need to be complemented with a moral account. Without one it is not always possible to explain why it is rational for an individual researcher to behave in an epistemically responsible way.

Both the self-interest account and the epistemic account state that scientists behave in an epistemically responsible way because they believe that it serves their own ends—be it career advancement, fame, or financial gain, or purely individual epistemic ends. However, as Rolin aptly points out, both accounts are insufficient in a situation where the ends of the individual researcher and the impersonal epistemic ends of science are not aligned. Only if researchers see epistemically responsible behaviour as a moral duty will they act in an epistemically responsible way even when this does not serve their own ends.

It is to some degree ambiguous how Rolin’s account should be read—how normative it is, and in what sense. Some parts of her article could be interpreted as a somewhat Mertonian description of actual moral views held by individual scientists, and cultivated in scientific communities (Merton [1942] 1973). However, she also clearly gives normative advice: well-designed scientific communities should foster a moral account of epistemic responsibility.

But when offering a moral justification for her view, she at times seems to defend a stronger normative stance, one that would posit epistemic responsibility as a universal moral duty. However, her main argument does not require the strongest reading. I thus interpret her account as partly descriptive and partly normative: many researchers treat epistemic responsibility as a moral duty, and it is epistemically beneficial for scientific communities to foster such a view. Moreover, a moral justification can be offered for the view.

When defining her account more closely, Rolin cites ideas developed in political philosophy. She adopts Robert Goodin’s (1988) distinction between general and special moral duties, and names her account epistemic cosmopolitanism:

Epistemic cosmopolitanism states that (a) insofar as we are engaged in knowledge-seeking practices, we have general epistemic responsibilities, and (b) the special epistemic responsibilities scientists have as members of scientific communities are essentially distributed general epistemic responsibilities (Rolin 2017, 478).

One of the advantages of this account is of particular interest to me. Rolin notes that if epistemically responsible behaviour were seen as just a general moral duty, it could be too demanding for individual researchers. Any scientist is bound to fail in an attempt to behave in an entirely epistemically responsible manner towards all existing scientific speciality communities, taking all their diverse standards of evidence into account. This result can be avoided through a division of epistemic labour. The general responsibilities can be distributed in a way that limits the audience towards which individual scientists must behave in an epistemically responsible way. Thus, “in epistemically well-designed scientific communities, no scientist is put into a position where she is not capable of carrying out her special epistemic responsibilities” (Rolin 2017, 478).

Trends in Science Policy

Rolin’s main interest is in epistemically well-designed scientific communities. However, she also takes up an example I mention in a recent paper (Koskinen 2016). In it I examine a few research articles in order to illustrate situations where a relevant scientific community has not been recognised, or where there is no clear community to be found. In these articles, researchers from diverse fields attempt to integrate archaeological, geological or seismological evidence with orally transmitted stories about great floods. In other words, they take the oral stories seriously, and attempt to use them as historical evidence. However, they fail to take into account folkloristic expertise on myths. This I find highly problematic, as the stories the researchers try to use as historical evidence include typical elements of the flood myth.

The aims of such attempts to integrate academic and extra-academic knowledge are both emancipatory—taking the oral histories of indigenous communities seriously—and practical, as knowledge about past natural catastrophes may help prevent new ones. This chimes well with certain current trends in science policy. Collaborations across disciplinary boundaries, and even across the boundaries of science, are promoted as a way to increase the societal impact of science and provide solutions to practical problems. Researchers are expected to contribute to solving the problems by integrating knowledge from different sources.

Such aims have been articulated in terms of systems theory, the Mode-2 concept of knowledge production and, recently, open science (Gibbons et al. 1994; Nowotny et al. 2001; Hirsch Hadorn et al. 2008), leading to the development of solution-oriented multi-, inter-, and transdisciplinary research approaches. At the same time, critical feminist and postcolonial theories have influenced collaborative and participatory methodologies (Reason and Bradbury 2008; Harding 2011), and recently ideas borrowed from business have led to an increasing amount of ‘co-creation’ and ‘co-research’ in academia (see e.g. Horizon 2020).

All this, combined with keen competition for research funding, leads in some areas of academic research to an increasing number of solution-oriented research projects that systematically break disciplinary boundaries. Simultaneously, these projects often challenge the existing division of epistemic labour.

Challenging the Existing Division of Epistemic Labour

According to Rolin, well-designed scientific communities need to foster the moral account of epistemic responsibilities. The necessity becomes clear in such situations as are described above: it would be in the epistemic interests of scientific communities, and science in general, if folklorists were to offer constructive criticism to the archaeologists, geologists and seismologists. However, if the folklorists are motivated only by self-interest, or by personal epistemic goals, they have no reason to do so. Only if they see epistemic responsibility as a moral duty, one that is fundamentally based on general moral duties, will their actions be in accord with the epistemic interests of science. Rolin argues that this happens because the existing division of epistemic labour can be challenged.

Normally, according to epistemic cosmopolitanism, the epistemic responsibilities of folklorists would lie mainly in their own speciality community. However, if the existing division of epistemic labour does not serve the epistemic goals of science, this does not suffice. And if special moral duties are taken to be distributed general moral duties, the way of distributing them can always be changed. In fact, it must be changed, if that is the only way to follow the underlying general moral duties:

If the cooperation between archaeologists and folklorists is in the epistemic interests of science, a division of epistemic labour should be changed so that, at least in some cases, archaeologists and folklorists should have mutual special epistemic responsibilities. This is the basis for claiming that a folklorist has a moral obligation to intervene in the problematic use of orally transmitted stories in archaeology (Rolin 2017, 478–479).

The solution seems compelling, but I see a problem that Rolin does not sufficiently address. She seems to believe that situations where the existing division of epistemic labour is challenged are fairly rare, and that they lead to a new, stable division of epistemic labour. I do not think that this is the case.

Rolin cites Brad Wray (2011) and Uskali Mäki (2016) when emphasising that scientific speciality communities are not eternal. They may dissolve and new ones may emerge, and interdisciplinary collaboration can lead to the formation of new speciality communities. However, as Mäki and I have noted (Koskinen & Mäki 2016), solution-oriented inter- or transdisciplinary research does not necessarily, or even typically, lead to the formation of new scientific communities. Only global problems, such as biodiversity loss or climate change, are likely to function as catalysts in the disciplinary matrix, leading to the formation of numerous interdisciplinary research teams addressing the same problem field. Smaller, local problems generate only changeable constellations of inter- and transdisciplinary collaborations that dissolve once a project is over. If such collaborations become common, the state Rolin describes as a rare period of transition becomes the status quo.

It Can be Too Demanding

Rather than a critique of Rolin’s argument, the conclusion of this commentary is an observation that follows from that argument. It helps us to clarify one possible reason for the difficulties that researchers encounter with inter- and transdisciplinary research.

Rolin argues that epistemically well-designed scientific communities should foster the idea of epistemic responsibilities being not only epistemic, but also moral duties. The usefulness of such an outlook becomes particularly clear in situations where the prevailing division of epistemic labour is challenged—for instance, when an interdisciplinary project fails to take some relevant viewpoint into account, and the researchers who would be able to offer valuable criticism do not benefit from offering it. In such a situation researchers motivated by self-interest or by individual epistemic goals would have no reason to offer the required criticism. This would be unfortunate, given the impersonal epistemic goals of science. So, we must hope that scientists see epistemically responsible behaviour as their moral duty.

However, for a researcher working in an environment where changeable, solution-oriented, multi-, inter-, and transdisciplinary projects are common, understanding epistemic responsibility as a moral duty may easily become a burden. The prevailing division of epistemic labour is challenged constantly, and without a new, stable division necessarily replacing it.

As Rolin notes, it is due to a tolerably clear division of labour that epistemic responsibilities understood as moral duties do not become too demanding for individual researchers. But as trends in science policy erode disciplinary boundaries, the division of labour becomes unstable. If it continues to be challenged, it is not just once or twice that responsible scientists may have to intervene and comment on research that is not in their area of specialisation. This can become a constant and exhausting duty. So if instead of well-designed scientific communities, we get their erosion by design, we may have to reconsider the moral account of epistemic responsibility.

References

Fricker, M. Epistemic injustice: power and the ethics of knowing. Oxford: Oxford University Press, 2007.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage, 1994.

Goodin, R. “What is So Special about Our Fellow Countrymen?” Ethics 98 no. 4 (1988): 663–686.

Hirsch Hadorn, G., Hoffmann-Riem, H., Biber-Klemm, S., Grossenbacher-Mansuy, W., Joye, D., Pohl, C., Wiesmann, U., Zemp, E. (Eds.). Handbook of Transdisciplinary Research. Berlin: Springer, 2008.

Harding, S. (Ed.). The postcolonial science and technology studies reader. Durham and London: Duke University Press, 2011.

Horizon 2020. Work Programme 2016–2017. European Commission Decision C (2017)2468 of 24 April 2017.

Koskinen, I. “Where is the Epistemic Community? On Democratisation of Science and Social Accounts of Objectivity.” Synthese. 4 August 2016. doi:10.1007/s11229-016-1173-2.

Koskinen, I., & Mäki, U. “Extra-academic transdisciplinarity and scientific pluralism: What might they learn from one another?” The European Journal of Philosophy of Science 6, no. 3 (2016): 419–444.

Mäki, U. “Philosophy of Interdisciplinarity. What? Why? How?” European Journal for Philosophy of Science 6, no. 3 (2016): 327–342.

Merton, R. K. “Science and Technology in a Democratic Order.” Journal of Legal and Political Sociology 1 (1942): 115–126. Reprinted as “The Normative Structure of Science.” In R. K Merton, The Sociology of Science. Theoretical and Empirical Investigations. Chicago: University of Chicago Press, 1973: 267–278.

Nowotny, H., Scott, P., & Gibbons, M. Re-thinking science: knowledge and the public in an age of uncertainty. Cambridge: Polity, 2001.

Reason, P. and Bradbury, H. (Eds.). The Sage Handbook of Action Research: Participative Inquiry and Practice. Sage, 2008.

Rolin, K. “Scientific Community: A Moral Dimension.” Social Epistemology 31, no. 5 (2017), 468–483.

Wray, K. B. Kuhn’s Evolutionary Social Epistemology. Cambridge: Cambridge University Press, 2011.

Author Information: Maya J. Goldenberg, University of Guelph, mgolden@uoguelph.ca

Goldenberg, Maya J. “Diversity in Epistemic Communities: A Response to Clough.” Social Epistemology Review and Reply Collective 3, no. 5 (2014): 25-30.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1oY


Abstract

In Clough’s reply paper to me (2013a), she laments how feminist calls for diversity within scientific communities are inadvertently sidelined by our shared feminist empiricist prescriptions. She offers a novel justification for diversity within epistemic communities and challenges me to accept this addendum to my prior prescriptions for biomedical research communities (Goldenberg 2013) on the grounds that they are consistent with the epistemic commitments that I already endorse. In this response, I evaluate and accept her challenge.

Introduction

In “Feminist Theories of Evidence and Biomedical Research Communities: A Reply to Goldenberg” (2013a), Sharyn Clough addresses the feminist concern of lack of diversity within the composition of scientific communities. She correctly notes that this problem gets sidelined by the form of feminist empiricism that both she and I endorse—what I called “values as evidence” feminist empiricism, and differentiated from the predominant “community-based social knowledge” feminist empiricism of Helen Longino (1990) and Lynn Hankinson Nelson (1990; 1993) (Goldenberg 2013).