Author Information: Srinivasa Vittal Katikireddi, University of Glasgow, firstname.lastname@example.org
Katikireddi, Srinivasa Vittal. “Reply to ‘What Constitutes “Good” Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness’.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 51-55.
The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2eE
Please refer to:
- Parkhurst, Justin O and Sudeepa Abeysinghe. “What Constitutes ‘Good’ Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness.” Social Epistemology Review and Reply Collective 3, no. 10 (2014): 40-52.
Image credit: kafka4prez, via flickr
The academic community has long considered how knowledge can and should influence decision-making. The evidence-based medicine movement rose to prominence in the 1990s, with its influence extending from clinical decisions to areas of social policy. Parkhurst and Abeysinghe provide a useful addition to the literature which ambitiously draws on three different disciplinary perspectives—political science, philosophy of science and the sociology of knowledge—to reflect on the limitations of evidence hierarchies for informing policy decisions (2014). Public health is perhaps a natural focus of enquiry, drawing as it does on clinical disciplines as well as the social and political sciences.
The authors start by noting the increasing debate about the use of evidence hierarchies within the academic literature, which has been coupled with a rise in the discourse of being ‘evidence based’. They argue that a political science perspective highlights the existence of multiple competing goals for policymakers to consider: technical evaluations of evidence may therefore serve merely to obscure political considerations, and challenges in the production of evidence may favour individualistic interventions that are more amenable to randomised controlled trials.
Philosophy of science highlights the tension between causality and generalisability, which has arguably been ignored within evidence-based medicine (EBM), drawing as it does on an epidemiological focus on internal validity at the expense of applicability of evidence. Sociologists question the process by which knowledge is constructed, so that what counts as evidence is influenced by existing power structures. Parkhurst and Abeysinghe conclude that these three complementary critiques of EBM point to a new direction for evidence informing policy—a shift to an appropriate use of evidence whereby good evidentiary practice: “Reflects a process of making values explicit, considering causal mechanisms, and questioning evidentiary forms with respect to policy maker goals and needs” (Parkhurst and Abeysinghe 2014, 47).
Their paper adds an alternative view of thinking about the influence of evidence on policy. However, it perhaps should be considered a work in progress—one which may benefit from even deeper engagement with the three disciplines they draw upon.
Normative and Descriptive Views of the Policy Process
Political science frequently makes a distinction between descriptive and normative views of the policy process, i.e. how policy is made and how policy should be made (Hogwood and Gunn 1984). In relation to the former, there is certainly support for the authors’ claim that the discourse of EBM has become prominent in policymaking. For example, the last English public health white paper, ‘Healthy Lives, Healthy People’ (Department of Health 2010), repeatedly adopted a position of being ‘evidence based’, but empirical assessments suggested otherwise (Katikireddi et al. 2011). Assessments of the evidence base underpinning policy in this way could potentially serve to illuminate, rather than obscure, political considerations.
The reasons for ‘Healthy Lives, Healthy People’ not meeting the standards it set itself are no doubt complicated, but the issue of multiple competing policy goals to which Parkhurst and Abeysinghe point is certainly important. Their contribution implicitly suggests that decision-making should be informed by evidence, but that the manner in which evidence is used needs refinement. This normative perspective is not self-evident, although the incorporation of ‘evidence based’ rhetoric within policy statements itself presents an important argument for decisions being informed by evidence. However, the normative nature of both EBM and the authors’ suggested approach is contestable and requires justification.
A move towards ‘appropriate’ evidence which is informed by an explicit elucidation of the different goals underpinning decision-making should be understood as a normative endeavour. Such a clear articulation of goals does not reflect the reality of policymaking as it currently operates (Smith and Katikireddi 2013), therefore echoing the transformative endeavour of EBM. Neither evidence hierarchies nor appropriate evidence considers the potential for evidence to shape the goals of decision-makers, rather than just the means of achieving them (Weiss 1977). For example, evidence has arguably been more important in setting the terms of the debate around minimum unit pricing of alcohol than in defining the best means of reducing alcohol-related harms (Katikireddi, Bond, and Hilton 2014; Katikireddi et al. 2014). Similarly, the role of ill-defined ideas in shaping decision-makers’ understanding of health inequalities has been shown to be crucial (Smith 2007; Smith 2013).
Philosophy of Science
The need for locally applicable evidence for decision-making is often asserted within public health, on the basis that effects are more likely to differ between settings than might be the case for biomedical interventions. But is this really so? External validity may not, in fact, be a more important challenge for public health and social interventions than for clinical interventions. The growing interest in stratified medicine suggests that human individuals actually differ considerably, resulting in many individuals taking medications without benefiting (Trusheim, Berndt, and Douglas 2007). This poses a challenge for EBM within its own sphere of enquiry.
Even if external validity does pose a greater challenge for public health than for biomedical interventions, it may not be as damning an issue as suggested. Indeed, consider the example provided by Parkhurst and Abeysinghe of a meta-analysis which suggests an intervention is ineffective when it would more accurately be identified as effective in some contexts and harmful or ineffective in others: such a pattern should theoretically be disentangled by an adequate exploration of statistical heterogeneity, provided enough evidence has been collected. Furthermore, stating concerns about the applicability of evidence often conceals other, more political, reasons for not pursuing an intervention.
The tension between causality and generalisability is perhaps better thought of as a challenge to EBM itself, rather than to its extension to public health and social policy. It might be fruitful to think back to the epidemiological thinking that underpinned the development of EBM in the first place to consider how different forms of evidence could contribute to decision-making. Austin Bradford Hill suggested several factors which might help assess whether a causal relationship exists (Hill 1965). Incorporating broader forms of local evidence in this way might allow assessments of applicability to be informed by the best available evidence from elsewhere.
Public health and social policy often occur in cross-sectoral spaces, necessitating decision-makers to draw upon diverse bodies of evidence. Lorenc and colleagues have found that the views of what constitutes evidence held by policymakers across sectors differ in subtle ways from the views of those operating solely in the health sector, with experiential and economic evidence often especially valued (Lorenc et al. 2014). Differing cultures of evidence therefore operate, but how they should be deemed appropriate, and who should determine this, remain questions for investigation.
The process by which the evidence base is constructed may or may not serve the interests of public health, as illustrated by Parkhurst and Abeysinghe. Drawing attention to the importance of the construction of variables is welcome but why moving to ‘appropriate’ evidence represents an improvement is unclear. An alternative approach is to explicitly consider the interplay between ethical and epistemic aspects of evidence creation (Katikireddi and Valles 2014). This presents a framework which allows public health researchers and practitioners to critically reflect on how their use and development of the evidence base contributes to public health goals.
The Future of Evidence and Policy
Parkhurst and Abeysinghe call for a move from hierarchies of evidence to appropriate evidence. Such a call is not new (Petticrew and Roberts 2003). Their articulation of what ‘appropriate’ might mean is helpful but will benefit from greater detail that will hopefully emerge through iterative discussions and debates. While they focus on hierarchies of evidence as their departure point, this is increasingly becoming a ‘straw man’—for example, even the Cochrane Library, which is often viewed as amongst EBM’s strongest advocates, incorporates diverse forms of evidence (including qualitative studies). The UK’s revised Research Excellence Framework (REF) means greater engagement by researchers with policymakers is likely. However, whether striving to meet the stated goals of policymakers will be the most effective means of improving public health is unclear, with a risk that critical intellectual spaces will be squeezed (Smith 2010). Their article has outlined a vision for how evidence could inform policy, but whether such an approach is realisable remains uncertain.
Department of Health (UK). “Healthy Lives, Healthy People: Our Strategy for Public Health in England.” London: Department of Health, 2010.
Hill, Austin Bradford. “The Environment and Disease: Association or Causation?” Proceedings of the Royal Society of Medicine 58, no. 5 (1965): 295-300.
Hogwood, Brian W, and Lewis A Gunn. Policy Analysis for the Real World. Oxford: Oxford University Press, 1984.
Katikireddi, Srinivasa Vittal, and Sean A. Valles. “Coupled Ethical–Epistemic Analysis of Public Health Research and Practice: Categorizing Variables to Improve Population Health and Equity.” American Journal of Public Health, e1-e7 (2014). doi: 10.2105/ajph.2014.302279.
Katikireddi, Srinivasa Vittal, Lyndal Bond, and Shona Hilton. “Changing Policy Framing as a Deliberate Strategy for Public Health Advocacy: A Qualitative Policy Case Study of Minimum Unit Pricing of Alcohol.” Milbank Quarterly 92, no. 2 (2014): 250-283.
Katikireddi, Srinivasa Vittal, Martin Higgins, Lyndal Bond, Chris Bonell, and Sally Macintyre. “How Evidence Based is English Public Health Policy?” BMJ 343, d7310 (2011). doi: 10.1136/bmj.d7310.
Katikireddi, Srinivasa Vittal, Shona Hilton, Chris Bonell, and Lyndal Bond. “Understanding the Development of Minimum Unit Pricing of Alcohol in Scotland: A Qualitative Study of the Policy Process.” PLoS ONE 9, no. 3, e91185 (2014). doi: 10.1371/journal.pone.0091185.
Lorenc, Theo, Elizabeth F. Tyner, Mark Petticrew, Steven Duffy, Fred P. Martineau, Gemma Phillips, and Karen Lock. “Cultures of Evidence across Policy Sectors: Systematic Review of Qualitative Evidence.” The European Journal of Public Health (2014). doi: 10.1093/eurpub/cku038.
Parkhurst, Justin, and Sudeepa Abeysinghe. “What Constitutes ‘Good’ Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness.” Social Epistemology Review and Reply Collective 3, no. 10 (2014): 34-46.
Petticrew, Mark and Helen Roberts. “Evidence, Hierarchies, and Typologies: Horses for Courses.” Journal of Epidemiology and Community Health 57, no. 7 (2003): 527-9.
Smith, Katherine. Beyond Evidence Based Policy in Public Health: The Interplay of Ideas. Palgrave Macmillan, 2013.
Smith, Katherine E. “Research, Policy and Funding: Academic Treadmills and the Squeeze on Intellectual Spaces.” The British Journal of Sociology 61 (2010): 176-195.
Smith, Katherine Elizabeth. “Health Inequalities in Scotland and England: The Contrasting Journeys of Ideas from Research into Policy.” Social Science & Medicine 64, no. 7 (2007): 1438-1449.
Smith, Katherine Elizabeth, and Srinivasa Vittal Katikireddi. “A Glossary of Theories for Understanding Policymaking.” Journal of Epidemiology and Community Health 67 (2013): 198-202. doi: 10.1136/jech-2012-200990.
Trusheim, Mark R., Ernst R. Berndt, and Frank L. Douglas. “Stratified Medicine: Strategic and Economic Implications of Combining Drugs and Clinical Biomarkers.” Nature Reviews Drug Discovery 6, no. 4 (2007): 287-293.
Weiss, Carol H. “Research for Policy’s Sake: The Enlightenment Function of Social Science Research.” Policy Analysis 3, no. 4 (1977): 531-545.
Katikireddi raises a number of points in response to our original article, to which we respond here. In many respects there is general agreement with the reviewer. What may differ is the perspective we take on the phenomenon we are observing – not that there have been no advances in thinking on the use of evidence within the policy sciences community, for there no doubt have been, but that these insights often sit unrecognised in popular discourses within social policy and public policy circles, which continue to look to use evidence in functional ways to improve the effectiveness or efficiency of decisions.
It is not unusual to hear in meetings, or to read in government ministry reports, repeated calls for 'evidence based policy' (the UK's 'what works centres' perhaps particularly illustrating how this discourse can be embodied in formal institutions). This is despite widespread rejection in the academic literature of that term as a clear indicator of a real goal. We have similarly found it common in our experience to hear individuals in policy making discussion fora raise points such as 'evidence can mean many things, like knowledge and not just research' – concepts which are now well known and have been thoroughly established and described for some time, from Carol Weiss' widely cited (in the academic literature at least) description of the multiple meanings of evidence use in the 1970s (Weiss, 1977, 1979), to more modern comprehensive treatments of the subject such as that of Huw Davies, Sandra Nutley and colleagues in the last decade.
As such, our goal in advancing a new model of 'appropriateness' for thinking about 'good evidence' for policy was not so much an attempt to pioneer a new mode of thought on evidence use in social policy as an incremental attempt to construct a framework and language that might allow some of these key insights to be better understood and more easily applied by planners and programme actors, who may otherwise risk repeating the same mistakes, or 'reinventing the wheel', when thinking about evidence.
In terms of the specific points made about our original paper, there are four to which we reply:
1) The review critiques the normative nature of our approach, suggesting that this is ‘contestable and requires justification’:
In some ways we cannot deny this. Discussing 'good' evidence for policy naturally implies a normative position. Yet our language of appropriateness is intended to provide a pathway through this by defining the 'goodness' of evidence in a policy making environment by how well the evidence serves the needs and goals of those who use it. So while it might be easy to dismiss the 'evidence-based policy' idea as flawed, the paper attempts to grapple with the issue in relation to the existing frames of debate, which privilege the approach of EBP and which are themselves normative, though usually implicitly so. Instead we aim to re-define 'good' evidence from a pragmatic perspective, in terms of how well evidence works to achieve policy goals – all the while recognising that political goals are themselves value based and contested. In essence, this approach requires making explicit the currently implicit normative basis of policy decisions, in order to overcome the past critique that the language of 'evidence-based policy' depoliticises or obfuscates the political nature of decision making.
We do not deny or ignore the fact that there will be debate over the policy goals of decision makers, and we would agree that policy making is inherently political because it involves 'who gets what, when, and how', as Lasswell (1990 [1936]) famously described. But evidence is typically promoted and embraced for its ability to achieve outcomes more effectively or efficiently. Our focus on appropriateness is targeted at the policy making and planning audiences who conceptualise and aim to utilise evidence in this way. Particular outcomes (or policy goals) are indeed political and subject to debate, but our appropriateness framework requires those goals to be made explicit in order to judge when evidence proves useful to achieve the (normative) goals selected (Lasswell, 1970). In this light, our critique of the application of hierarchies is fundamentally to suggest that existing hierarchies (which are often embraced) do not always serve the goals of policy makers: they obscure the value goals that are fundamental to policy decisions, and as such their application risks skewing decisions towards issues that are already measured, or conducive to measurement, through particular methods.
Finally, Katikireddi provides examples of cases where selected pieces of evidence work to frame policy debates themselves, rather than simply serving the needs of pre-debated and pre-decided policy agendas. While we present an ideal-type way of distinguishing the functional role of evidence in serving political ends (ends which are debatable), we would agree with this limitation and recognise the insights of authors from the field of critical policy studies who reflect on how policy problems are constructed through debate, rhetorical persuasion, and discursive exercises of power (Bacchi, 2009; Fischer, 2003; Stone, 2002). From this perspective, evidence use and appeals to evidence do indeed have rhetorical power (see also Hammersley, 2013), yet we would hope that the appropriateness framework, if applied, would help to elucidate this as well, by requiring values to be made explicit by those who think evidence can or should be used in functional ways to improve decision making.
2) The review reflects on whether the philosophy of science perspective directly challenges EBM/EBP:
Three points are made about the challenges to EBP raised by the philosophy of science. First, the review argues that the growing discussion of 'stratified medicine' implies that concerns over external validity are not somehow more relevant to public health concerns dealing with social behaviour and social change. Second, it argues that statistical heterogeneity can (at times) be resolved with more or better evidence exploring exactly what worked for whom. And finally, it argues that epidemiology itself has had to reflect critically on how to identify causality – citing the Bradford Hill criteria – as a means to resolve the causality/generalisability dilemma.
On the one hand we would argue that our paper is seeking to clarify and improve the terms of the existing emphasis on EBP, rather than shift paradigms, as noted above. However, we would also argue that the different ways in which the social world and the natural world are known cannot easily be addressed by these suggestions either. The point is made that external validity can also be a challenge for clinical medicine, because of variability in human responses to treatments such as drugs. While this is no doubt true and interesting, we would argue it is qualitatively different from the external validity challenges raised by social interventions. The social context in which interventions related to behaviour, cognition, interest, etc. are implemented will fundamentally shape the mechanism of effect through which those interventions work. These social contexts can change dramatically over space and time, so the same intervention delivered in the same place (or even to the same people) at different points in time, perhaps a decade apart, may have a different mechanism of effect, and hence a radically different outcome.
Within the sociology and philosophy of science, it is generally accepted that the body – and professional and societal understandings of the body and medicine – vary widely across different social contexts. Examples can be seen in the variation in clinical practice across cultures, or in changing criteria for defining health and illness (the social construction of the Diagnostic and Statistical Manual of Mental Disorders (DSM) being a widely-cited contemporary example (c.f. Cooksey & Brown, 1998; Gaines, 1992)). Nevertheless, within the 'evidence-based policy' paradigm under which many policymakers act, the health and illness of populations are understood to be directly and unproblematically measurable – an idea that underpins the popularity of evidence-based medicine. Thus, though perspectives from the philosophy of science do work to problematise the assumptions made by evidence-based policy, these more fundamental deconstructions have not been widely incorporated into contemporary social policy adoptions of the concept. Given the practical focus of our paper, we were more concerned with bringing preliminary insights from these fields to bear in ways that might shift current thinking on the use of policy concepts.
While interactions at a biochemical level can indeed vary between individuals, we would still posit that this is fundamentally different from the social realities of human interaction and behaviour, in which conscious actors continually reconstruct meaning. This difference also means that greater statistical power and sub-group analysis still may not solve all the challenges of meta-analysis: while these might better illustrate which groups found an intervention effective, if the intervention mechanism derives from the social context, then the results of the meta-analysis could differ if the same included studies were repeated at other times or in other locations. For some interventions there may be no predictability at all. This does not mean all social interventions are unpredictable, of course. Yet for many interventions in social policy and public health, we do not have a great deal of certainty of predictable effect. Bradford Hill's causality criteria can still be applied to assess whether a social intervention had an effect (e.g. whether it shows a temporal relationship, a dose-response relationship, etc.), but they do not say anything about mechanisms of causality, and therefore cannot answer whether we can expect the same causation elsewhere or at a later point in time; other forms of knowledge are needed for this. In the clinical sciences it is the amassed knowledge of human anatomy and biochemistry that allows generalisations of causality. In economics it is evidence of market behaviour seen over centuries. In social and behavioural interventions it may be something else that is often still lacking, or may never be perfectly achieved.
3) The reviewer encourages us to consider the interplay between ethical and epistemic aspects of evidence creation:
We do not disagree with this. Indeed, in addition to the need to consider the importance of knowledge creation, as the reviewer notes, it is also increasingly important to articulate the lived reality of evidence from the policy-makers' viewpoint (e.g. work done by Katherine Smith in the health field (c.f. K. Smith, 2013; K. E. Smith & Joyce, 2012; K. E. Smith & Stewart, 2015)). These insights perhaps point more broadly to Science and Technology Studies' exploration of the co-production of science/knowledge and policy, looking at how knowledge creation (and to a lesser extent utilisation) manifests in political realities around issues seen to be informed by science (Hoppe, 2010; Jasanoff, 1987, 2004, 2011). Our paper targets those who aim to increase evidence use and who see evidence as playing a functional role in improving decision making, but we would hope that by making values explicit and requiring reflection on social goals – as we feel is needed to use evidence 'appropriately' – this would help make these more performative uses of evidence more evident as well, even if the model we propose does not solve those separate conceptual questions about what different utilisations of evidence actually achieve in terms of their productive political effects.
4) Finally, the review suggests that there is a need for further development of what it means for evidence use to be 'appropriate':
We would agree, and hope that there can be further discussion around these ideas. We acknowledge previous authors who have stated that policy concerns can go beyond those things amenable to forms of evidence such as experimental trials (e.g. Petticrew & Roberts, 2003, cited in our original paper and the review) – yet our appropriateness framework provides three strategic questions, rather than just this one, which can be used to guide decision makers:
I. What are the policy concerns at hand (and is the evidence selected the most useful to address the multiple policy concerns at hand)? (informed by political science and politically informed reflections on EBP);
II. Are the data constructed in ways that best serve policy goals? (informed by sociology of knowledge);
III. Do we have reason to believe that the evidence is applicable to our local policy context? (informed by the philosophy of science).
We recognise that these questions do not address all the challenges of evidence use in policy making, including the performative aspects of how evidence utilisation itself can dynamically delineate policy priorities or define what is seen to be policy relevant. Yet we feel a more systematic application of our framework can be helpful to advocates of EBP (or champions of 'what works') in their desire to use the 'best' evidence to inform policy, while at the same time making the political decisions inherent in policy making more explicit. We hope that asking these three questions explicitly, rather than relying on hierarchies that judge intervention effect, can provide a practical means to overcome some of the key problems that arise from blind applications of hierarchies.
Bacchi, C.L. (2009). Analysing policy: What’s the problem represented to be? Frenchs Forest NSW: Pearson Australia.
Cooksey, E.C., & Brown, P. (1998). Spinning on its axes: DSM and the social construction of psychiatric diagnosis. International Journal of Health Services, 28, 525-554.
Fischer, F. (2003). Reframing public policy. Oxford: Oxford University Press.
Gaines, A.D. (1992). From DSM-I to III-R; voices of self, mastery and the other: A cultural constructivist reading of US psychiatric classification. Social Science & Medicine, 35, 3-24.
Hammersley, M. (2013). The myth of research-based policy and practice. London: Sage.
Hoppe, R. (2010). From "knowledge use" towards "boundary work": Sketch of an emerging new agenda for inquiry into science-policy interaction. In R. in 't Veld (Ed.), Knowledge democracy: Consequences for science, politics and media (pp. 169-186). Heidelberg: Springer.
Jasanoff, S. (2011). Constitutional Moments in Governing Science and Technology. Science & Engineering Ethics, 17, 621-638.
Jasanoff, S. (Ed.) (2004). States of knowledge: the co-production of science and the social order. London: Routledge.
Jasanoff, S. (1987). Contested boundaries in policy-relevant science. Social Studies of Science, 17, 195-230.
Lasswell, H.D. (1970). The Emerging Conception of the Policy Sciences. Policy Sciences, 1, 3.
Lasswell, H.D. (1990 [1936]). Politics: Who gets what, when, how. Gloucester, MA: Peter Smith Publisher.
Petticrew, M., & Roberts, H. (2003). Evidence, hierarchies, and typologies: horses for courses. Journal of Epidemiology and Community Health, 57, 527-529.
Smith, K. (2013). Beyond evidence based policy in public health: The interplay of ideas. Palgrave Macmillan.
Smith, K.E., & Joyce, K.E. (2012). Capturing complex realities: understanding efforts to achieve evidence-based policy and practice in public health. Evidence & Policy: A Journal of Research, Debate and Practice, 8, 57-78.
Smith, K.E., & Stewart, E. (2015). ‘Black magic’ and ‘gold dust’: the epistemic and political uses of evidence tools in public health policy making. Evidence & Policy: A Journal of Research, Debate and Practice, 11, 415-437.
Stone, D. (2002). Policy paradox: the art of political decision-making. London: W.W. Norton & Company.
Weiss, C.H. (1977). Research for policy’s sake: The enlightenment function of social research. Policy analysis, 531-545.
Weiss, C.H. (1979). The many meanings of research utilization. Public Administration Review, 39, 426-431.