Posing Questions, Eschewing Hierarchies: A Response to Katikireddi, Justin Parkhurst

Author Information: Justin Parkhurst, London School of Hygiene and Tropical Medicine, Justin.Parkhurst@lshtm.ac.uk

Parkhurst, Justin. “Posing Questions, Eschewing Hierarchies: A Response to Katikireddi.” [1]. Social Epistemology Review and Reply Collective 4, no. 12 (2015): 62-67.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2xd

Please refer to:

Katikireddi, Srinivasa Vittal. “Reply to ‘What Constitutes “Good” Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness’.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 51-55.

Image credit: Fort Belvoir Community Hospital, via flickr

Vittal Katikireddi (2015) raises a number of points in response to our original article (Parkhurst and Abeysinghe 2014), to which I respond here. In one respect, there is general agreement with many of Katikireddi’s points. Where we may differ is in the perspective we take on the phenomenon being observed: not that there have been no advances in thinking on the use of evidence within the policy sciences community (there no doubt have been), but that these insights often sit unrecognised in popular discourses within social policy and public policy circles, which continue to look to evidence in functional ways to improve the effectiveness or efficiency of decisions.

On Evidence Based Policies

It is not unusual to hear in meetings, or to read in government ministry reports, repeated calls for ‘evidence based policy’ (the UK’s ‘What Works’ centres perhaps particularly illustrate how this discourse can be embodied in formal institutions). This is despite widespread rejection in the academic literature of that term as a clear indicator of a real goal. We have similarly found it common in our experience to hear individuals in policy making discussion fora raise points such as ‘evidence can mean many things, like knowledge and not just research’: concepts which are now well known and have been thoroughly established and described for some time, from Carol Weiss’ widely cited (in the academic literature at least) description of the multiple meanings of evidence use in the 1970s (Weiss 1977, 1979) to more modern comprehensive treatments of the subject such as that of Huw Davies, Sandra Nutley and colleagues in the last decade.

As such, our goal in advancing a new model of ‘appropriateness’ for thinking about ‘good evidence’ for policy was less an attempt to pioneer a new mode of thought on evidence use in social policy than an incremental attempt to construct a framework and language that might allow some of these key insights to be better understood, or more easily applied, by planners and programme actors who may otherwise risk repeating the same mistakes or ‘reinventing the wheel’ when it comes to thinking about evidence.

In terms of the specific points made about our original paper, there are four to which we will reply.

1. The review critiques the normative nature of our approach, suggesting that this is ‘contestable and requires justification’.

In some ways we cannot deny this. Discussing ‘good’ evidence for policy naturally implies a normative position. Yet our language of appropriateness is intended to provide a pathway through this by defining the ‘goodness’ of evidence in a policy making environment based on how well the evidence serves the needs and goals of those who use it. So while it might be easy to dismiss the ‘evidence-based policy’ idea as flawed, the paper attempts to grapple with the issue in relation to the existing frames of debate, which privilege the approach of Evidence Based Practice (EBP) and which are themselves normative, though usually implicitly so. Instead we aim to re-define ‘good’ evidence from a pragmatic perspective, in terms of how well evidence works to achieve policy goals, all the while recognising that political goals themselves are value based and contested. In essence, this approach requires making explicit the currently implicit normative basis of policy decisions, in order to overcome the past critique that the language of ‘evidence-based policy’ depoliticises or obfuscates the political nature of decision-making.

We do not deny or ignore the fact that there will be debate over the policy goals of decision makers, and we would agree that policy making is inherently political because it involves ‘who gets what, when, and how’, as Lasswell ([1936] 1990) famously described. But evidence is typically promoted and embraced for its ability to achieve outcomes more effectively or efficiently.

Our focus on appropriateness is targeted at the policy making and planning audiences who conceptualise and aim to utilise evidence in this way. Particular outcomes (or policy goals) are indeed political and subject to debate, but our appropriateness framework requires those goals to be made explicit in order to judge when evidence proves useful to achieving the (normative) goals selected (Lasswell 1970). In this light, our critique of the application of hierarchies is fundamentally to suggest that existing hierarchies (which are often embraced) do not always serve the goals of policy makers, because they obscure the value goals that are fundamental to policy decisions; as such, the application of hierarchies risks skewing decisions towards issues that are already measured, or conducive to measurement, through particular methods.

Finally, Katikireddi provides examples of cases where selected pieces of evidence work to frame policy debates themselves, rather than simply serving the needs of pre-debated and pre-decided policy agendas. While we are presenting an ideal-type way of distinguishing the functional role of evidence in serving political ends (ends which are debatable), we would agree with this limitation and recognise the insights of authors from the field of critical policy studies who reflect on how policy problems are constructed through debate, rhetorical persuasion, and discursive exercises of power (Bacchi 2009; Fischer 2003; Stone 2002). From this perspective, evidence use and appeals to evidence do indeed have rhetorical power (see also Hammersley 2013). Yet we would hope that the appropriateness framework, if applied, would help to elucidate this as well, by requiring values to be made explicit by those who think evidence can or should be used in functional ways to improve decision making.

2. The review reflects on whether the philosophy of science perspective directly challenges Evidence Based Medicine (EBM) and Evidence Based Practice (EBP).

Three points are made about the challenges to EBP raised by the philosophy of science. First, the review argues that growing discussion of ‘stratified medicine’ implies that concerns over external validity are not somehow more relevant to public health concerns dealing with social behaviour and social change. Second, it argues that statistical heterogeneity can (at times) be solved with more or better evidence exploring exactly what worked for whom. And finally, it argues that epidemiology itself has had to reflect critically on how to identify causality, citing the Bradford Hill criteria, as a means of resolving the causality/generalisability dilemmas.

On the one hand, we would argue that our paper seeks to clarify and improve the terms of the existing emphasis on EBP, rather than to shift paradigms, as noted above. However, we would also argue that the different ways in which the social world and the natural world are known cannot easily be addressed by these suggestions either. The point is made that external validity can also be a challenge to clinical medicine because of variability in human response to treatments such as drugs. While this is no doubt true and interesting, we would argue it is qualitatively different from the external validity challenges raised by social interventions. The social context in which interventions related to behaviour, cognition, interests, etc. are implemented will fundamentally shape the mechanism of effect through which those interventions work. These social contexts can change dramatically over space and time, so the same intervention delivered in two different settings may have a different mechanism of effect, and hence a radically different outcome. The same can occur when an intervention is delivered in the same place (or even to the same people) at different points in time.

Within the sociology and philosophy of science, it is generally accepted that the body, and professional and societal understandings of the body and medicine, vary widely across different social contexts. Examples can be seen in the variation in clinical practice across cultures, or in changing criteria for defining health and illness (the social construction of the Diagnostic and Statistical Manual of Mental Disorders (DSM) being a widely cited contemporary example (cf. Cooksey and Brown 1998; Gaines 1992)). Nevertheless, within the ‘evidence-based policy’ paradigm under which many policymakers act, the health and illness of populations are understood to be directly and unproblematically measurable: an idea that underpins the popularity of evidence-based medicine. Thus, though perspectives from the philosophy of science do work to problematise the assumptions made by evidence-based policy, these more fundamental deconstructions have not been widely incorporated into contemporary social policy adoptions of the concept. Given the practical focus of the current paper, we were more concerned with bringing preliminary insights from these fields to bear in ways that might shift current thinking on the use of such policy concepts.

While interactions on a biochemical level can indeed vary between individuals, we would still posit that this is fundamentally different from the social realities of human interaction and behaviour, which are conscious and continually reconstruct meaning. This difference also means that greater statistical power and sub-group analysis still may not solve all the challenges of meta-analysis: while these might better illustrate for which groups an intervention was effective, if the intervention mechanism derives from the social context, then the results of the meta-analysis could differ if the same included studies were repeated at other times or in other locations. For some interventions there may be no predictability at all. This does not mean all social interventions are unpredictable, of course. Yet for many interventions in social policy and public health, we do not have a great deal of certainty of predictable effect. Bradford Hill’s causality criteria can still be applied to explain whether a social intervention had an effect (e.g. whether it has a temporal relationship, a dose-response relationship, etc.), but they do not say anything about mechanisms of causality, and therefore cannot answer whether we can expect the same causation elsewhere or at a later point in time; other forms of knowledge are needed for this. In the clinical sciences, it is the amassed knowledge of human anatomy and biochemistry that allows generalisations of causality. In economics, it is evidence of market behaviour observed over centuries. For social and behavioural interventions, the equivalent body of knowledge is often still lacking, and may never be perfectly achieved.

3. The reviewer encourages us to consider the interplay between ethical and epistemic forms.

We do not disagree with this. Indeed, in addition to the need to consider the importance of knowledge creation, as the reviewer notes, it is also increasingly important to articulate the lived reality of evidence from the policy-makers’ viewpoint (e.g. the work done by Katherine Smith in the health field (cf. K. Smith 2013; K. E. Smith and Joyce 2012; K. E. Smith and Stewart 2015)). These insights perhaps point more broadly to Science and Technology Studies’ exploration of the co-production of science/knowledge and policy, looking at how knowledge creation (and, to a lesser extent, utilisation) manifests in political realities around issues seen to be informed by science (Hoppe 2010; S. Jasanoff 2011, 2004; S. S. Jasanoff 1987).

Our paper targets those who aim to increase evidence use and who see evidence as playing a functional role in improving decisions. We would hope that by making values explicit and requiring reflection on social goals, as we feel is needed to use evidence ‘appropriately’, the framework would also help make these more performative uses of evidence more evident, even if the model we propose does not solve those separate conceptual issues concerning what different utilisations of evidence actually achieve in terms of their productive political effects.

4. Finally, the review suggests that there is a need for further development of what it means for evidence use to be ‘appropriate’.

We would agree, and hope that there can be further discussion around these ideas. We acknowledge previous authors who have noted that policy concerns can go beyond those things amenable to forms of evidence like experimental trials (e.g. Petticrew and Roberts 2003, cited in our original paper and the review). Yet our appropriateness framework provides three strategic questions, rather than just this one, which can be used to guide decision makers:

I. What are the policy concerns at hand (and is the evidence selected the most useful to address those multiple policy concerns)? (informed by political science and politically informed reflections on Evidence Based Practice);

II. Are the data constructed in ways that best serve policy goals? (informed by sociology of knowledge);

III. Do we have reason to believe that the evidence is applicable to our local policy context? (informed by the philosophy of science).

We recognise that these questions do not address all the challenges of evidence use in policy making, including the performative aspects of how evidence utilisation itself can dynamically delineate policy priorities or define what is seen to be policy relevant. Yet we feel a more systematic application of our framework can be helpful to advocates of Evidence Based Practice (or champions of ‘what works’) in their desire to use the ‘best’ evidence to inform policy, while at the same time making the political decisions inherent in policy making more explicit. By asking these three questions explicitly, rather than picking hierarchies that judge intervention effect, we hope to provide a practical means of overcoming some of the key problems that can arise from the blind application of hierarchies.

References

Bacchi, Carol Lee. Analysing Policy: What’s the Problem Represented to Be? Frenchs Forest NSW: Pearson Australia, 2009.

Cooksey, Elizabeth C. and Phil Brown. “Spinning on Its Axes: DSM and the Social Construction of Psychiatric Diagnosis.” International Journal of Health Services 28, no. 3 (1998): 525-554.

Fischer, Frank. Reframing Public Policy. Oxford: Oxford University Press, 2003.

Gaines, Atwood D. “From DSM-I to III-R; Voices of Self, Mastery and the Other: A Cultural Constructivist Reading of US Psychiatric Classification.” Social Science & Medicine 35, no. 1 (1992): 3-24.

Hammersley, Martyn. The Myth of Research-Based Policy and Practice. London: Sage, 2013.

Hoppe, Robert. “From ‘Knowledge Use’ Towards ‘Boundary Work’: Sketch of an Emerging New Agenda for Inquiry into Science-Policy Interaction.” In Knowledge Democracy: Consequences for Science, Politics and Media, edited by Roel in ‘t Veld, 169-186. Heidelberg: Springer, 2010.

Jasanoff, Sheila. “Constitutional Moments in Governing Science and Technology.” Science & Engineering Ethics 17, no. 4 (2011): 621-638.

Jasanoff, Sheila, ed. States of Knowledge: The Co-Production of Science and the Social Order. London: Routledge, 2004.

Jasanoff, Sheila S. “Contested Boundaries in Policy-Relevant Science.” Social Studies of Science 17, no. 2 (1987): 195-230.

Katikireddi, Srinivasa Vittal. “Reply to ‘What Constitutes “Good” Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness’.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 51-55.

Lasswell, Harold D. “The Emerging Conception of the Policy Sciences.” Policy Sciences 1, no. 3 (1970): 3-14.

Lasswell, Harold D. Politics: Who Gets What, When, How. Gloucester, MA: Peter Smith Publisher, [1936] 1990.

Parkhurst, Justin O. and Sudeepa Abeysinghe. “What Constitutes ‘Good’ Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness.” Social Epistemology Review and Reply Collective 3, no. 10 (2014): 40-52.

Petticrew, Mark and Helen Roberts. “Evidence, Hierarchies, and Typologies: Horses for Courses.” Journal of Epidemiology and Community Health 57 (2003): 527-529.

Smith, Katherine. Beyond Evidence Based Policy in Public Health: The Interplay of Ideas. Basingstoke: Palgrave Macmillan, 2013.

Smith, Katherine E. and Kerry E. Joyce. “Capturing Complex Realities: Understanding Efforts to Achieve Evidence-Based Policy and Practice in Public Health.” Evidence & Policy: A Journal of Research, Debate and Practice 8, no. 2 (2012): 57-78.

Smith, Katherine E. and Ellen Stewart. “‘Black Magic’ and ‘Gold Dust’: The Epistemic and Political Uses of Evidence Tools in Public Health Policy Making.” Evidence & Policy: A Journal of Research, Debate and Practice 11, no. 3 (2015): 415-437.

Stone, Deborah. Policy Paradox: The Art of Political Decision-Making. London: W.W. Norton & Company, 2002.

Weiss, Carol H. “Research for Policy’s Sake: The Enlightenment Function of Social Research.” Policy Analysis 3, no. 4 (1977): 531-545.

Weiss, Carol H. “The Many Meanings of Research Utilization.” Public Administration Review 39, no. 5 (1979): 426-431.

[1] My thanks to Sudeepa Abeysinghe for her input in developing this response. Please refer to our co-authored article (2014) on the Social Epistemology Review and Reply Collective.
