
Author information: Kjartan Koch Mikalsen, Norwegian University of Science and Technology, kjartan.mikalsen@ntnu.no.

Mikalsen, Kjartan Koch. “An Ideal Case for Accountability Mechanisms, the Unity of Epistemic and Democratic Concerns, and Skepticism About Moral Expertise.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 1-5.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3S2


Image from Birdman Photos, via Flickr / Creative Commons

 

How do we square democracy with pervasive dependency on experts and expert arrangements? This is the basic question of Cathrine Holst and Anders Molander’s article “Public deliberation and the fact of expertise: making experts accountable.” Holst and Molander approach the question as a challenge internal to a democratic political order. Their concern is not whether expert rule might be an alternative to democratic government.

Rather than ask if the existence of expertise raises an “epistocratic challenge” to democracy, they “ask how science could be integrated into politics in a way that is consistent with democratic requirements as well as epistemic standards” (236).[1] Given commitment to a normative conception of deliberative democracy, what qualifies as a legitimate expert arrangement?

Against the backdrop of epistemic asymmetry between experts and laypersons, Holst and Molander present this question as a problem of accountability. When experts play a political role, we need to ensure that they really are experts and that they practice their expert role properly. I believe this is a compelling challenge, not least in view of expert disagreement and contestation. In a context where we lack sufficient knowledge and training to assess directly the reasoning behind contested advice, we face a non-trivial problem of deciding which expert to trust. I also agree that the problem calls for institutional measures.

However, I do not think such measures simply answer to a non-ideal problem related to untrustworthy experts. The need for institutionalized accountability mechanisms runs deeper. Nor am I convinced by the idea that introducing such measures involves balancing “the potential rewards from expertise against potential deliberative costs” (236). Finally, I find it problematic to place moral expertise side-by-side with scientific expertise in the way Holst and Molander do.

Accountability Mechanisms: More than Non-ideal Remedies

To meet the challenge of epistemic asymmetry combined with expert disagreement, Holst and Molander propose three sets of institutional mechanisms for scrutinizing the work of expert bodies (242-43). First, in order to secure compliance with basic epistemic norms, they propose laws and guidelines that specify investigation procedures in some detail, procedures for reviewing expert performance and for excluding experts with a bad track record, as well as sanctions against sloppy work.

Second, in order to review expert judgements, they propose checks in the form of fora comprising peers, experts in other fields, bureaucrats and stakeholders, legislators, or the public sphere. Third, in order to assure that expert groups work under good conditions for inquiry and judgment, they propose organizing the work of such groups in a way that fosters cognitive diversity.

According to Holst and Molander, these measures have a remedial function. Their purpose is to counter the misbehavior of non-ideal experts, that is, experts whose behavior and judgements are biased or influenced by private interests. The measures concern unreasonable disagreement rooted in experts’ over-confidence or partiality, as opposed to reasonable disagreement rooted in “burdens of judgement” (Rawls 1993, 54). By targeting objectionable conduct and reasoning, they reduce the risk of fallacies and the “intrusion of non-epistemic interests and preferences” (242). In this way, they increase the trustworthiness of experts.

As I see it, this attributes too limited a role to the proposed accountability mechanisms. While they might certainly work in the way Holst and Molander suggest, it is doubtful whether they would be superfluous even if all experts were ideal experts without biases or conflicting interests.

Even ideal experts are fallible and have partial perspectives on reality. The ideal expert is not omniscient, but a finite being who perceives the world from a certain perspective, depending on a range of contingent factors, such as training in a particular scientific field, basic theoretical assumptions, methodological ideals, subjective expectations, and so on. The ideal expert is aware that she is fallible and that her own point of view is just one among many others. We might therefore expect that she does not easily become a victim of overconfidence or confirmation bias. Yet, given the unavoidable limits of an individual’s knowledge and intellectual capacity, no expert can know what the world looks like from all other perspectives and no expert can be safe from misjudgments.

Accordingly, subjecting expert judgements to review and organizing diverse expert groups is important no matter how ideal the expert. There seems to be no other way to test the soundness of expert opinions than to check them against the judgements of other experts, other forms of expertise, or the public at large. Similarly, organizing diverse expert groups seems like a sensible way of bringing out all relevant facts about an issue even in the case of ideal experts. We do not have to suspect anyone of bias or pursuance of self-serving interests in order to justify these kinds of institutional measures.


No Trade-off Between Democratic and Epistemic Concerns

An important aspect of Holst and Molander’s discussion of how to make experts accountable is the idea that we need to balance the epistemic value of expert arrangements against democratic concerns about inclusive deliberation. While they point out that the mechanisms for holding experts to account can democratize expertise in ways that lead to epistemic enrichment, they also warn that inclusion of lay testimony or knowledge “can result in undue and disproportional consideration of arguments that are irrelevant, obviously invalid or fleshed out more precisely in expert contributions” (244).

There is of course always the danger that things go wrong, and that the wrong voices win through. Yet, the question is whether this risk forces us to make trade-offs between epistemic soundness and democratic participation. Holst and Molander quote Stephen Turner (2003, 5) on the supposed dilemma that “something has to give: either the idea of government by generally intelligible discussion, or the idea that there is genuine knowledge that is known to few, but not generally intelligible” (236). To my mind, this formulation rests on an ideal picture of public deliberation that is not only excessively demanding, but also normatively problematic.

It is a mistake to assume that political deliberation cannot include “esoteric” expert knowledge if it is to be inclusive and open to everyone. If democracy is rule by public discussion, then every citizen should have an equal chance to contribute to political deliberation and will-formation, but this is not to say that all aspects of every contribution should be comprehensible to everyone. Integration of expert opinions based on knowledge fully accessible only to a few does not clash with democratic ideals of equal respect and inclusion of all voices.

Because of specialization and differentiation, all experts are laypersons with respect to many areas where others are experts. Disregarding individual variation of minor importance, we are all equals in ignorance, lacking sufficient knowledge and training to assess the relevant evidence in most fields.[2] Besides, and more fundamentally, deferring to expert advice in a political context does not imply some form of political status hierarchy between persons.

To acknowledge expert judgments as authoritative in an epistemic sense is simply to acknowledge that there is evidence supporting certain views, and that this evidence is accessible to everyone who has time and skill to investigate the matter. For this reason, it is unclear how the observation that political expert arrangements do not always harmonize with democratic ideals warrants talk of a need for trade-offs or a balancing of diverging concerns. In principle, there seems to be no reason why there has to be divergence between epistemic and democratic concerns.

To put the point even more sharply, I would like to suggest that allowing alleged democratic concerns to trump sound expert advice is democratic in name only. With Jacob Weinrib (2016, 57-65), I consider democratic lawmaking essential to a just legal system, because all non-democratic forms of legislation are defective arrangements that arbitrarily exclude someone from contributing to the enactment of the laws that regulate their interaction with others. Yet an inclusive legislative procedure that disregards the best available reasons is hardly a case of democratic self-legislation.

It is more like raving blind drunk. Legislators that ignore state-of-the-art knowledge are not only deeply irrational, but also disrespectful of those bound by the laws that they enact. Need I mention the climate crisis? Understanding democracy as a process of discursive rationalization (Habermas 1996), the question is not what trade-offs we have to make, but how inclusive legislative procedures can be made sufficiently truth sensitive (Christiano 2012). We can only approximate a defensible democratic order by making democratic and epistemic concerns pull in the same direction.

Moral vs Scientific and Technical Expertise

Before introducing the accountability problem, Holst and Molander consider two ideal objections against giving experts an important political role: ‘(1) that one cannot know decisively who the knowers or experts are’ and ‘(2) that all political decisions have moral dimensions and that there is no moral expertise’ (237). They reject both objections. With respect to (1), they convincingly argue that there are indirect ways of identifying experts without oneself being an expert. With respect to (2), they pursue two strategies.

First, they argue that even if facts and values are intertwined in policy-making, descriptive and normative aspects of an issue are still distinguishable. Second, they argue that unless strong moral non-cognitivism is correct, it is possible to speak of moral expertise in the form of ‘competence to state and clarify moral questions and to provide justified answers’ (241). To my mind, the first of these two strategies is promising, whereas the second seems to play down important differences between distinct forms of expertise.

There are of course various types of democratic expert arrangements. Sometimes experts are embedded in public bodies making collectively binding decisions. On other occasions, experts serve an advisory function. Holst and Molander tend to use “expertise” and “expert” as unspecified, generic terms, and they refer to both categories side-by-side (235, 237). However, by framing their argument as one concerning epistemic asymmetry and the novice/expert-problem, they indicate that they have in mind moral experts in advisory capacities, in possession of insights known to a few yet of importance for political decision-making.

I agree that some people are better informed about moral theory and more skilled in moral argumentation than others are, but such expertise still seems different in kind from technical expertise or expertise within empirical sciences. Although moral experts, like other experts, provide action-guiding advice, their public role is not analogous to the public role of technical or scientific experts.

For the public, the value of scientific and technical expertise lies in information about empirical constraints and the (lack of) effectiveness of alternative solutions to problems. If someone is an expert in good standing within a certain field, then it is reasonable to regard her claims related to this field as authoritative, and to consider them when making political decisions. As argued in the previous section, it would be disrespectful and contrary to basic democratic norms to ignore or bracket such claims, even if one does not fully grasp the evidence and reasoning supporting them.

Things look quite different when it comes to moral expertise. While there can be good reasons for paying attention to what specialists in moral theory and practical reasoning have to say, we rarely, if ever, accept their claims about justified norms, values and ends as authoritative or valid without considering the reasoning supporting the claims, and rightly so. Unlike Holst and Molander, I do not think we should accept the arguments of moral experts as defined here simply based on indirect evidence that they are trustworthy (cf. 241).

For one thing, the value of moral expertise seems to lie in the practical reasoning itself just as much as in the moral ideals underpinned by reasons. An important part of what the moral expert has to offer is thoroughly worked out arguments worth considering before making a decision on an issue. However, an argument is not something we can take at face value, because an argument is of value to us only insofar as we think it through ourselves. Moreover, the appeal to moral cognitivism is of limited value for elevating someone to the status of moral expert. Even if we might reach agreement on basic principles to govern society, there will still be reasonable disagreement as to how we should translate the principles into general rules and how we should apply the rules to particular cases.

Accordingly, we should not expect acceptance of the conclusions of moral experts in the same way we should expect acceptance of the conclusions of scientific and technical expertise. To the contrary, we should scrutinize such conclusions critically and try to make up our own mind. This is, after all, more in line with the enlightenment motto at the core of modern democracy, understood as government by discussion: “Have courage to make use of your own understanding!” (Kant 1996 [1784], 17).

Contact details: kjartan.mikalsen@ntnu.no

References

Christiano, Thomas. “Rational Deliberation among Experts and Citizens.” In Deliberative Systems: Deliberative Democracy at the Large Scale, ed. John Parkinson and Jane Mansbridge. Cambridge: Cambridge University Press, 2012.

Habermas, Jürgen. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. William Rehg. Cambridge, MA: MIT Press, 1996.

Holst, Cathrine, and Anders Molander. “Public deliberation and the fact of expertise: making experts accountable.” Social Epistemology 31, no. 3 (2017): 235-250.

Kant, Immanuel. Practical Philosophy, ed. Mary Gregor. Cambridge: Cambridge University Press, 1996.

Kant, Immanuel. Anthropology, History, and Education, ed. Günther Zöller and Robert B. Louden. Cambridge: Cambridge University Press, 2007.

Rawls, John. Political Liberalism. New York: Columbia University Press, 1993.

Turner, Stephen. Liberal Democracy 3.0: Civil Society in an Age of Experts. London: Sage Publications Ltd, 2003.

Weinrib, Jacob. Dimensions of Dignity. Cambridge: Cambridge University Press, 2016.

[1] All bracketed numbers without reference to author in the main text refer to Holst and Molander (2017).

[2] This also seems to be Kant’s point when he writes that human predispositions for the use of reason “develop completely only in the species, but not in the individual” (2007 [1784], 109).

Author Information: Saana Jukola and Henrik Roeland Visser, Bielefeld University, sjukola@uni-bielefeld.de and rvisser@uni-bielefeld.de.

Jukola, Saana, and Henrik Roeland Visser. “On ‘Prediction Markets for Science,’ A Reply to Thicke.” Social Epistemology Review and Reply Collective 6, no. 11 (2017): 1-5.

The pdf of the article includes specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Q9


Image by The Bees, via Flickr

 

In his paper, Michael Thicke critically evaluates the potential of using prediction markets to answer scientific questions. In prediction markets, people trade contracts that pay out if a certain prediction comes true or not. If such a market functions efficiently and thus incorporates the information of all market participants, the resulting market price provides a valuable indication of the likelihood that the prediction comes true.

Prediction markets have a variety of potential applications in science; they could provide a reliable measure of how large the consensus on a controversial finding truly is, or tell us how likely a research project is to deliver the promised results if it is granted the required funding. Prediction markets could thus serve the same function as peer review or consensus measures.

Thicke identifies two potential obstacles to the use of prediction markets in science: the risk of inaccurate results, and the risk of potentially harmful unintended consequences for the organization and incentive structure of science. We largely agree with the worry about inaccuracy. In this comment we will therefore only discuss the second objection; it is unclear to us what really follows from the risk of harmful unintended consequences. Furthermore, we consider another worry one might have about the use of prediction markets in science, which Thicke does not discuss: peer review is not only a quality control measure to uphold scientific standards, but also serves a deliberative function, both within science and in legitimizing the use of scientific knowledge in politics.

Reasoning about imperfect methods

Prediction markets work best for questions for which a clearly identifiable answer is produced in the not too distant future. Scientific research, on the other hand, often produces very unexpected results on an uncertain time scale. As a result, there is no objective way of choosing when and how to evaluate predictions on scientific research. Thicke identifies two ways in which this can create harmful unintended effects on the organization of science.

Firstly, projects that have clear short-term answers may erroneously be regarded as epistemically superior to basic research which might have better long-term potential. Secondly, science prediction markets create a financial incentive to steer resources towards research with easily identifiable short-term consequences, even if more basic research would have a better epistemic pay-off in the long-run.

Based on their low expected accuracy and their potential for harmful effects on the organization of science, Thicke concludes that science prediction markets might be a worse ‘cure’ than the ‘disease’ of bias in peer review and consensus measures. We are skeptical of this conclusion, for reasons similar to those offered by Robin Hanson. While the worry about the promise of science prediction markets is justified, it is unclear how this makes them worse than the traditional alternatives.

Nevertheless, Thicke’s conclusion points in the right direction: instead of looking for a more perfect method, which may not become available in the foreseeable future, we need to judge which of the imperfect methods is more palatable to us. Doing that would, however, require a more sophisticated evaluation of the strengths and weaknesses of the available methods and of how to trade them off, which goes beyond the scope of Thicke’s paper.

Deliberation in Science

An alternative worry, which Thicke does not elaborate on, is that peer review is not only expected to determine accurately the quality of submissions and to conclude what scientific work deserves to be funded or published; it is also valued for its deliberative nature, which allows it to provide reasons to those affected by decisions about research funding or by the use of scientific knowledge in politics. Given that prediction markets function through market forces rather than deliberative procedure, and produce probabilistic predictions rather than qualitative explanations, this might be (another) aspect on which the traditional alternative of peer review outperforms science prediction markets.

Within science, peer review serves two different purposes. First, it functions as a gatekeeping mechanism for deciding which projects deserve to be carried out or disseminated – an aim of peer review is to make sure that good work is being funded or published and undeserving projects are rejected. Second, peer review is often taken to embody the critical mechanism that is central to the scientific method. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. At least in an ideal case, authors know why their manuscripts were rejected or accepted after receiving peer review reports and can take the feedback into consideration in their future work.

In this sense, peer review represents an intersubjective mechanism that guards against the biases and blind spots that individual researchers may have. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results.[1] Such critical interaction thus ensures that a wide variety of perspectives is represented in science, which is both epistemically and socially valuable. If prediction markets were to replace peer review, could they serve this second, critical, function? It seems that the answer is No. Prediction markets do not provide reasons in the way that peer review does, and if the only available information is probabilistic predictions, something essential to science is lost.

To illustrate this point in a more intuitive way: imagine that instead of writing this comment in which we review Thicke’s paper, there is a prediction market on which we, Thicke and other authors would invest in bets regarding the likelihood of science prediction markets being an adequate replacement of the traditional method of peer review. From the resulting price signal we would infer whether predictions markets are indeed an adequate replacement or not. Would that allow for the same kind of interaction in which we now engage with Thicke and others by writing this comment? At least intuitively, it seems to us that the answer is No.

Deliberation About Science in Politics

Such a lack of reasons that justify why certain views have been accepted or rejected is not only a problem for researchers who strive towards getting their work published, but could also be detrimental to public trust in science. When scientists give answers to questions that are politically or socially sensitive, or when controversial science-based recommendations are given, it is important to explain the underlying reasons to ensure that those affected can – at least try to – understand them.

Only if people are offered reasons for decisions that affect them can they effectively contest such decisions. This is why many political theorists regard the ability of citizens to demand an explanation, and the corresponding duty of decision-makers to be responsive to such demands, as a necessary element of legitimate collective decisions.[2] Philosophers of science like Philip Kitcher[3] rely on very similar arguments to explain the importance of deliberative norms in justifying scientific conclusions and the use of scientific knowledge in politics.

Science prediction markets do not provide substantive reasons for their outcome. They only provide a procedural argument, which guarantees the quality of their outcome when certain conditions are fulfilled, such as the presence of a well-functioning market. Of course, one of those conditions is also that at least some of the market participants possess and rely on correct information to make their investment decisions, but that information is hidden in the price signal. This is especially problematic with respect to the kind of high-impact research that Thicke focuses on, i.e. climate change. There, the ability to justify why a certain theory or prediction is accepted as reliable, is at least as important for the public discourse as it is to have precise and accurate quantitative estimates.

Besides the legitimacy argument, there is another reason why quantitative predictions alone do not suffice. Policy-oriented sciences like climate science or economics are also expected to judge the effect and effectiveness of policy interventions. But in complex systems like the climate or the economy, there are many different plausible mechanisms simultaneously at play, which could justify competing policy interventions. Given the long-lasting controversies surrounding such policy-oriented sciences, different political camps have established preferences for particular theoretical interpretations that justify their desired policy interventions.

If scientists are to have any chance of resolving such controversies, they must therefore not only produce accurate predictions, but also communicate which of the possible underlying mechanisms they think best explains the predicted phenomena. It seems prediction markets alone could not do this. It might be useful to think of this particular problem as the ‘underdetermination of policy intervention by quantitative prediction’.

Science prediction markets as replacement or addition?

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as an addition or even a complement to them. Thicke provides examples of both: in the case of peer review for publication or funding decisions, prediction markets might replace traditional methods. But in the case of resolving controversies, for instance concerning climate change, a prediction market would aggregate and evaluate already existing pieces of knowledge and peer review. In such a case the information that underlies the trading behavior on the prediction market would still be available and could be revisited if people distrust the reliability of the prediction market’s result.

We could also imagine that there are cases in which science prediction markets are used to select the right answer or at least narrow down the range of alternatives, after which a qualitative report is produced which provides a justification of the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack the power to discriminate among alternative predictions.

Conclusion

All in all, we are sympathetic to Michael Thicke’s critical analysis of the potential of prediction markets in science and share his skepticism. However, we point out another issue that speaks against prediction markets and in favor of peer review: the giving and receiving of reasons for why a certain view should be accepted or rejected. Given that the strengths and weaknesses of these methods fall on different dimensions (prediction markets may fare better in accuracy, while in an ideal case peer review can help the involved parties understand the grounds on which a position should be approved), it is important to reflect on what the appropriate aims in a particular scientific and policy context are before deciding what method should be used to evaluate research.

References

Hanson, Robin. “Compare Institutions To Institutions, Not To Perfection,” Overcoming Bias (blog). August 5, 2017. Retrieved from: http://www.overcomingbias.com/2017/08/compare-institutions-to-institutions-not-to-perfection.html

Hanson, Robin. “Markets That Explain, Via Markets To Pick A Best,” Overcoming Bias (blog), October 14, 2017 http://www.overcomingbias.com/2017/10/markets-that-explain-via-markets-to-pick-a-best.html

[1] See, e.g., Karl Popper, The Open Society and Its Enemies. Vol 2. (Routledge, 1966) or Helen Longino, Science as Social Knowledge. Values and Objectivity in Scientific Inquiry (Princeton University Press, 1990).

[2] See Jürgen Habermas, The Theory of Communicative Action, Vols. 1 and 2 (Polity Press, 1984 & 1989) & Philip Pettit, “Deliberative Democracy and the Discursive Dilemma.” Philosophical Issues, vol. 11, pp. 268-299, 2001.

[3] Philip Kitcher, Science, Truth, and Democracy (Oxford University Press, 2001) & Philip Kitcher, Science in a democratic society (Prometheus Books, 2011).

Author Information: Reiner Grundmann, University of Nottingham, Reiner.Grundmann@nottingham.ac.uk

Grundmann, Reiner. “Regarding Experts and Expertise: A Reply to Szymanski.” Social Epistemology Review and Reply Collective 4, no. 7 (2015): 19-22.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2az



Image credit: Routledge Press

The opening sentence of Erika Szymanski’s review encapsulates her tone and approach: ‘If you are looking for a provocative argument about what being an expert means in contemporary information-driven cultures, I would offer that your time is better spent somewhere other than Stehr and Grundmann’s Experts: The Knowledge and Power of Expertise (Routledge 2011).’

Unfortunately, she does not tell us what is provocative about the book, nor which provocative books we should read instead. Towards the end of the review she comes to the view that the ‘central notion’ of the book is uncontroversial. Maybe it would have been a good idea to state upfront that she is in two minds about the book, and to explain in what sense it is (un)controversial.

Author Information: Erika Szymanski, University of Otago, szymanskiea@hotmail.com

Szymanski, Erika. “Review—Experts: The Knowledge and Power of Expertise.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 33-36.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-25x


Image credit: Routledge Press

Experts: The Knowledge and Power of Expertise
Nico Stehr and Reiner Grundmann
Routledge
146 pp.

Erika Szymanski, University of Otago

If you are looking for a provocative argument about what being an expert means in contemporary information-driven cultures, I would offer that your time is better spent somewhere other than Stehr and Grundmann’s Experts: The Knowledge and Power of Expertise (Routledge 2011).

The book reads more as a conservative intellectual history situating the “expert” in knowledge societies than as a new position statement. That history is useful: they define and contextualize the expert as contemporary case studies often fail to do; they raise many questions about the role of experts as a general group that usually remain invisible in those studies. Unanswered as often as not, these questions might serve as a productive repository for future debate. Be forewarned, however, that you may find little that feels genuinely new as a reward for wading through Stehr and Grundmann’s sometimes-dense prose.