What Constitutes ‘Good’ Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness, Justin O. Parkhurst and Sudeepa Abeysinghe

Author Information: Justin O. Parkhurst, London School of Hygiene and Tropical Medicine, justin.parkhurst@lshtm.ac.uk ; Sudeepa Abeysinghe, University of Edinburgh, Sudeepa.Abeysinge@ed.ac.uk

Parkhurst, Justin O. and Sudeepa Abeysinghe. “What Constitutes ‘Good’ Evidence for Public Health and Social Policy Making? From Hierarchies to Appropriateness.” Social Epistemology Review and Reply Collective 3, no. 10 (2014): 40-52.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1E3

Abstract

Within public health, and increasingly other areas of social policy, there are widespread calls to increase or improve the use of evidence for policy making. Often these calls rest on an assumption that increased evidence utilisation will be a more efficient or effective means of achieving social goals. Yet a clear elucidation of what can be considered ‘good evidence’ for policy is rarely articulated. Many of the current discussions of best practice in the health policy sector derive from the evidence-based medicine (EBM) movement, embracing the ‘hierarchy of evidence’ that places experimental trials as preeminent in terms of methodological quality. However, a number of problems arise if these hierarchies are used to rank or prioritise policy relevance.

Challenges in applying evidence hierarchies to policy questions arise from the fact that the EBM hierarchies rank evidence of intervention effect on a specified and limited number of outcomes. Previous authors have noted that evidence forms at the top of such hierarchies typically serve the needs and realities of clinical medicine, but not necessarily public policy. We build on past insights by applying three disciplinary perspectives from political science, the philosophy of science and the sociology of knowledge to illustrate the limitations of using a single evidence hierarchy to guide health policy choices, while simultaneously providing new conceptualisations suited to achieving health sector goals. In doing so, we provide an alternative approach that re-frames ‘good’ evidence for health policy as a question of appropriateness.

Rather than adhering to a single hierarchy of evidence to judge what constitutes ‘good’ evidence for policy, it is more useful to examine evidence through the lens of appropriateness. The form of evidence, the determination of relevant categories and variables, and the weight given to any piece of evidence, must suit the policy needs at hand. A more robust and critical examination of relevant and appropriate evidence can ensure that the best possible evidence of various forms is used to achieve health policy goals.

Evidence Based Policy and the Hierarchy of Evidence 

The introduction of the concept of evidence-based policy (EBP) has marked an important shift in modern political processes. While the health sector has particularly championed this idea [1, 2], similar calls to use evidence to guide policy are increasingly seen in other social policy realms as well [c.f. 3, 4, 5]. National governments have further looked to institutionalise such approaches – with the UK, for instance, launching in 2013 a set of ‘What Works’ centres explicitly designed around the health model [6].

As has been well documented, the field of public health’s embrace of the concept of EBP has historically evolved from the tradition of evidence-based medicine (EBM) [7-9]. EBM has generally been considered a ‘success’ to the extent that the movement has transformed the way in which patients are assessed and treated. This success has been partly due to the standardisation of clinical decision-making, which provides physicians with a transparently objective and scientific method of choosing treatment options [10, 11].

The answer to the question of what constitutes ‘good evidence’ to guide clinical practice lies in the ‘hierarchies of evidence’. These hierarchies set out the process through which research can be evaluated, with the largest-scale and most ‘objective’ or ‘scientific’ forms of evidence understood as inhabiting the ‘top’ of the hierarchy. Top-level evidence is typically seen to result from research methods exhibiting key characteristics including: large and representative sample size; control for experimenter and participant bias; control for external variables; the study of a singular experimental variable; and value-neutrality [12]. It is understood that for clinical interventions, these factors are best constituted in the form of Randomised Controlled Trials (RCTs) [13]. Non-experimental methods – such as case studies, observational data, or case-control studies – are seen as less useful forms of intervention research, due to their inability to control for confounding variables, and the greater potential for bias to be introduced at some stage in the research protocol [14].

A number of sources have developed specific evidence hierarchies. The UK’s National Institute for Health and Care Excellence (NICE), for example, produces recommendations which are awarded ‘grades’ from ‘A’ (recommendations based directly on RCTs or meta-analyses of RCTs) to ‘D’ (recommendations based upon expert opinion or inferences from upper-level studies) [15]. Similarly, the GRADE (Grading of Recommendations, Assessment, Development and Evaluation) criteria evaluate biomedical evidence, again judging evidence from RCTs as ‘high quality’, with observational data as ‘low quality’, and other methods as ‘very low quality’ [16]. Other examples exist as well, including the Strength of Recommendation Taxonomy (SORT) [17] and the rankings of the Centre for Evidence Based Medicine (CEBM) at the University of Oxford [18]. While there are some variations, common to all of these approaches is the methodological superiority attributed to experimental evidence [see also Annex 1 of 19 for further examples].

Challenges: ‘Good Evidence’ for Policy?

EBM was seen to increase objectivity, transparency, and certainty with respect to professional practice. Since these are also ideals often espoused for policy-making, it is unsurprising that the logic of EBM has underpinned current discussions of the use of evidence in policy. A natural implication of the embrace of evidence hierarchies is that ‘good’ evidence for policy would also come from the top of these hierarchies. A number of challenges arise, however, deriving from the question of whether the ‘good’ evidence to guide clinical interventions is the same as the ‘good’ evidence to guide policy decisions. For instance, even within health policy, economic or social factors – which may not be conducive to study via RCTs – will necessarily be implicit within policy concerns. Such issues surrounding evidence-informed policy are particularly salient when the strength of evidence is discussed, since strength is often presented in terms of methodological quality from a scientific (research community) perspective, rather than applicability from a policy (decision-makers’) perspective [20, 21].

Several authors have warned against using evidence hierarchies exclusively to guide policy making [c.f. 22, 23]. Writing in the British Medical Journal, for instance, Black has argued that EBP is ‘qualitatively different’ to EBM, urging caution in the application of principles from clinical medicine to the realm of policy [24]. It has been further noted that for most policy making situations, the relevant considerations go beyond clinical and immediate health-related issues, to involve areas of social, political or economic concern; or as Glasziou and colleagues succinctly put it, “different types of question require different types of evidence” [25: p 39]. Indeed, Petticrew and Roberts have argued that a typology based on the type of question being addressed (e.g. acceptability, effectiveness, satisfaction, etc.) is more appropriate for policy guidance than a single hierarchy [23]. Drawing on political science and philosophical insights, Russell et al. further argue that: “Policy-making is the formal struggle over ideas and values” [26: p 40], and criticise as ‘naive rationalism’ the assumption that evidence itself is value free and can be placed in hierarchies when it comes to decision making. As a result, calls for methodological aptness and a context-based selection of evidence have emerged [23, 27, 28].

Despite these calls, there remains a common use of language calling for policy to be ‘evidence based’, with a recurring embrace of experimental trials as ‘good evidence’ to inform policy. Within these debates, there can also appear to be a false dichotomy between those who call for more evidence and those who critique evidence itself as constructed and political [29] – with constructivist ideas often frustrating public health programme officers who typically require actionable information. As such, this paper aims to contribute to the debate in a critical, but pragmatic, manner. We identify three disciplinary fields that underpin (often implicitly) many of the critical challenges levelled against evidence hierarchies. Drawing on Political Science (and policy studies, specifically), the Philosophy of Science, and Sociology (including medical sociology and the sociology of knowledge), we explore how each of these fields problematises the use of evidence hierarchies for policy making. However, we do this in order to develop ways forward to improve our understanding of what constitutes ‘good’ evidence for public health goals. We share Petticrew and Roberts’ concern with identifying a more appropriate use of evidence for policy, and we show how each of the three disciplines, in its own right, provides clues about how to ensure greater appropriateness of evidence for public health (and social) policy.

Political Science: Decisions Involve More Than Clinical Outcomes

The fact that there is a range of non-clinical outcomes that are often important to consider in policy debates [c.f. 22, 23, 25] has been one of the main criticisms of the idea of ‘evidence-based’ policy, and has led some to shift to the term ‘evidence-informed’ policy instead. The idea that health outcomes are but one of multiple important considerations, however, would be a conceptual starting point for political science, which takes it as a given that policy decisions involve choices between sets of possible outcomes [30, 31] and where the allocation of social values and relative weights to multiple social, political, or economic concerns is understood to be an inherent feature of the political process [32].

From a political science perspective, then, two problems arise with the direct application of evidence hierarchies to guide policy decisions. First, as has been noted elsewhere, public health policy decisions typically involve choices between competing sets of concerns, and not just technical evaluations of effectiveness. While one social value guiding decisions will inevitably be the clinical effectiveness (or cost-effectiveness) of an intervention, other values such as social desirability and acceptability, or impact on individual liberties, human rights, and equity may all be valid considerations for public health actors [23, 33, 34]. Yet RCTs are not the appropriate form of evidence for measuring the importance or scale of any of these. Prioritising evidence from experimental methods serves to obscure, rather than remove, political considerations – imposing a de facto political position that holds clinical outcomes of morbidity and mortality reduction (i.e. those things conducive to RCT evidence) above other social values.

Even when looking within health-specific concerns, a second political challenge is that those interventions conducive to experimentation may not be a public health priority. Complex social or health systems interventions are often less amenable to experimentation, and as such, a focus on evidence from the top of a hierarchy may shift attention away from such issues. This may serve to medicalise public health if it prioritises treatment and individual-level policies over efforts to address what are increasingly argued to be neglected public health concerns, such as the social determinants of health [35, 36], or the structural drivers of illness [see 37 in respect to HIV/AIDS].

However, the political realities of public health decision making do not eliminate the importance of evidence. Petticrew and Roberts have noted that “different types of research question are best answered by different types of study” [23: p 528], but what becomes apparent from a political perspective is a need to elucidate which questions are of relevance to a particular policy decision, in order to make those judgements of appropriateness. The implication for public health policy making would be to emphasise the need to make the underlying values and competing decision criteria explicit – akin to what Schön and Rein, writing from a critical policy studies perspective, describe as a process of ‘frame reflection’ [38]. Public health goals are not simply to increase clinical efficacy (which hierarchies of evidence are designed to assist), but must instead address multiple considerations including health equity, social acceptability, human rights, and social justice. These relevant policy concerns should be identified ex ante, in order to provide transparency about the policy concerns at stake, and to better identify the relevant evidence bases that speak to those concerns.

Philosophy of Science: Generalisability and Evidence in Context

A second theme appearing in the literature critical of EBP is conceptually rooted in thinking about causality and generalisability within the Philosophy of Science. While some authors in this discipline share the political science concern that the technical language of hierarchies serves to obscure the political nature of policy making [c.f. 39], others have particularly noted that many public health and social policy concerns present external validity problems that experimental methods and meta-analyses are unable to address [40-42].

At the core of these arguments is the recognition that the mechanisms through which an intervention works in one context may be very different, or produce different results, elsewhere; particularly when dealing with social or behavioural interventions [41, 43]. While experimental trials are designed to improve internal validity (to show the intervention actually had an effect), this says nothing about the external validity of the result. For biomedical interventions, external validity is not ensured by the trial design, but rather derives from expected similarities in human biochemistry or anatomy [44]. In social, behavioural, and health services interventions (which are increasingly the mainstay of public health planning), there are fewer such guarantees, and alternative evidence is needed to justify the expectation of similar effects elsewhere.

This challenge particularly affects meta-analyses, which often sit at the top of evidence hierarchies above individual RCTs, as the method can combine findings from multiple trials to evaluate intervention effect. Yet meta-analysis relies on an assumption that the same mechanism of effect exists across trial sites (and exists in the general population). Were a meta-analysis, however, to synthesise trials showing positive effects of an intervention in one setting and negative effects in another, the conclusion might be ‘the intervention shows flat results’, when a more accurate (and more useful) conclusion for policy could be that ‘the intervention works for some groups in some contexts, and does not work for other groups in other contexts’ [c.f. 43]. An example might be a cash transfer intervention to prevent HIV – this could reduce HIV risk-taking in a context where poverty leads people to rely on transactional sex, while increasing risk in a setting where increased wealth is associated with increased social (and sexual) networking [45].
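The arithmetic of this masking can be made concrete with a minimal hypothetical (the numbers below are illustrative only, and not drawn from any actual trial). A standard fixed-effect meta-analysis pools trial estimates by inverse-variance weighting:

$$\hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{SE_i^{2}}.$$

Suppose trial 1 finds a risk reduction of $\hat{\theta}_1 = +0.30$ in one context and trial 2 finds a risk increase of $\hat{\theta}_2 = -0.30$ in another, each with standard error $SE = 0.10$ (so $w_1 = w_2 = 100$). Then

$$\hat{\theta}_{\text{pooled}} = \frac{100(0.30) + 100(-0.30)}{100 + 100} = 0,$$

and the pooled point estimate reads as ‘no effect’ even though both underlying effects are real and substantial. A heterogeneity statistic would flag the inconsistency, but the headline summary alone conceals exactly the context-dependence that matters for policy.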

Alternatives such as realist approaches have developed in response to the recognition that social context can determine the mechanism of effect for many interventions [43, 46]. In such situations, the appropriate evidence will not just be that which is measured in a trial, but also evidence of applicability or locally expected effect. Examples of such evidence (on mechanisms in context) might include ethnographic studies, for instance, or local surveys – evidence types typically ranked particularly low in hierarchies. As Cartwright has explained “[f]or policy and practice we do not need to know ‘it works somewhere’. We need evidence for ‘it-will-work-for-us’” [41: p 1401].

Sociology: Construction of Problems and Populations

The final discipline supporting critical reflections on evidence hierarchies is that of Sociology – particularly the traditions of medical sociology and the sociology of scientific knowledge. Sociological enquiry begins from the understanding that ill health (or good health) is not a purely biological occurrence. Patterns of health and illness are shaped through social categories of gender [47, 48], ethnicity [49], geography [50], class and socio-economic disparities [51, 52], and other determining structures. An understanding of which kinds of evidence speak to these issues can therefore help to improve public health outcomes [19]. This is increasingly recognised within the field of public health itself [53], but blind imposition of evidentiary hierarchies can serve to hinder, rather than enable, such a shift, by focusing the research and policy gaze on those strategies conducive to experimentation – rather than on broader social-structural factors that are fundamental to the patterning of population health outcomes.

Sociologists also recognise that what counts as evidence – including how variables are constructed and chosen – is often an artefact of the context or culture within which it is produced [54]. Science itself is not produced in a social vacuum, but is rather a product of social realities and actions [53, 55]. When applied to the field of health, medical sociologists have, for instance, explored how concepts like ethnicity, race or social class do or do not get adequately captured in much health research [56-58]. Critical examination of disease categories and concepts can therefore allow new ways to consider public health intervention approaches [59, 60]. The current need to develop new approaches to address the social determinants of health represents a contemporary example of this [36, 61].

From a sociologically informed perspective, public health actors can critically reflect on the population groups, data variables, and the nature of the health and illness categories utilised within bodies of evidence, to question how these constructions best serve their goals of improved population health, disease reduction, or health equity. Those things that are technically easy to measure, quantify, or alter in experiments may not be the most appropriate constructions of health and illness to serve public health needs.

From Hierarchy to Appropriateness

The three disciplines presented above each highlight problems in applying a hierarchy of evidence to prioritise health policy decisions. They also, however, each point to ways to re-conceptualise a good use of evidence to ensure it is best aligned with the normative goals of public health. Policy makers need to identify the multiple criteria on which their decisions are based, to address the contextual specificity of the interventions they aim to implement, and to consider whether existing disease and population definitions suit the ultimate goals of public health improvement. An appropriate use of evidence, therefore, would be one which is transparent about the policy concerns at hand, which questions whether intervention effects can be expected in the target area, and which is critically aware of different ways to classify populations and health problems.

This does not mean hierarchies of evidence have no relevance. Rigour and quality will always remain important, but the measure of quality for different types of evidence will derive from the appropriate sciences that generate such evidence. Current hierarchies of evidence emphasise qualities that are appropriate for identifying intervention effect, and typically say nothing about generalisability. Policy concerns will usually require additional types of evidence (not just evidence of intervention effect), will need to consider complex situations where simple causal relationships are not the norm, and will further require evidence of whether possible interventions will work in the desired setting. Multiple research methods – be they experiments, interviews, observations, etc. – will be needed, and each will be underpinned by its own standards of quality and validity. So, for example, if public acceptability is an important policy consideration, evidence from survey research may be appropriate. Evaluation of survey quality would then include an assessment of statistical power, reliability, and internal and external validity (through consideration of sampling, sample size, triangulation, standardisation of delivery, etc.) [62]. Observational or ethnographic research, on the other hand, may be useful to policy makers in understanding the cultural context that surrounds a certain policy option. These methods emphasise the importance of understanding processes through the perspective of participants [63]. Evidentiary rigour for these methods is therefore related to aspects such as the researcher’s immersion in the research context, validation by feeding findings back to participants, and the continued reflexivity of the researcher [64].
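To illustrate how such method-specific quality criteria translate into concrete design judgements, consider the familiar formula for the sample size needed to estimate a population proportion (a standard result from survey statistics, offered here purely as an illustration). For a proportion $p$ estimated with confidence level $1-\alpha$ and margin of error $d$:

$$n \approx \frac{z_{1-\alpha/2}^{2}\, p(1-p)}{d^{2}}, \qquad z_{0.975} \approx 1.96.$$

A survey intended to estimate public acceptability of around 50% ($p = 0.5$, the most conservative case) to within ±3 percentage points at 95% confidence would thus require roughly $n \approx (1.96^{2} \times 0.25)/0.03^{2} \approx 1{,}067$ respondents – a standard of rigour internal to survey research, and one that a hierarchy built around trial methodology simply does not speak to.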

Other examples abound, but ultimately, when selecting evidence, what is essential is for decision makers first to identify the types of information they need on which to base their decision (from their decision criteria), after which the appropriate evidence can be judged and evaluated. Each research tradition comes with its own criteria for establishing rigour. The judgement of ‘good’ evidence for policy thus shifts from following a single hierarchy to asking whether that evidence is appropriate to the policy considerations and needs at hand, with quality assessment derived from the relevant research tradition.

Summary

Calls for evidence to inform policy have been embraced in public health and, more broadly, across other social policy fields. Yet in the rush to be more evidence-based, there have been associated calls for the increased use of evidence hierarchies to guide the selection of ‘good evidence’ for policy making. Such an approach has been widely critiqued from inside and outside the public health sphere, yet it remains a persistent challenge to public health planning. The use of hierarchies in this way has been described by Boaz and Ashby as focussing upon the ‘noise’ (i.e. methodological strengths from a natural scientific perspective) produced by evidence, rather than the ‘signal’ (the message conveyed by, and aims of, a particular piece of research or research field) [27]. As such, this can be counterproductive to achieving public health policy goals – particularly when those goals revolve around more than improving treatment efficacy.

To move the discussion forward, we have further developed the concept of ‘appropriateness’ of evidence for public health policy based on insights from the disciplines of political science, the philosophy of science, and sociology. Political science illustrates that appropriate evidence will be that which correctly speaks to the multiple decision criteria under consideration – and as such there is a need to render explicit the social concerns and values being considered. Hierarchies provide important ways to rank evidence in terms of intervention effect, yet intervention effect is typically only one of the issues relevant to health policy decisions. The philosophy of science shows that appropriate evidence must consider the generalisability of individual pieces of evidence. Hierarchies are typically concerned with questions of internal validity, yet policy makers must consider the similarity of causal mechanisms to be confident of local effect. Finally, sociology illustrates how appropriate evidence will be that which best achieves normative goals, which may require questioning existing classifications of populations and disease.

From a perspective of appropriateness, good evidentiary practice reflects a process of making values explicit, considering causal mechanisms, and questioning evidentiary forms with respect to policy makers’ goals and needs. Evidence remains crucial, and the quality of evidence remains important, but the ‘good’ evidence for policy becomes that which best serves public health needs, not that which best fits any single set of methodological criteria. This paper has drawn on three core disciplines in its critique of evidence hierarchies, but has used each one in turn to extend a pragmatic goal of improving the use of evidence in achieving health goals. It is argued that a better understanding of what constitutes ‘good’ evidence for policy can allow past critical authors’ concerns to be incorporated, while providing a useful way forward for public health actors tasked with increasing evidence use in policy and planning.

Acknowledgements

The authors would like to acknowledge conceptual inputs from Arturo Alvarez-Rosete, Steffanie Ettelt and Benjamin Hawkins. This paper is an output from the Getting Research Into Policy in Health (GRIP-Health) research programme. Funding for the GRIP-Health research programme is provided by the European Research Council (Project ID# 282118). The views expressed here are solely those of the authors and do not necessarily reflect the funding body or the host institutions.

References

1. Berridge, Virginia and Jenny Stanton. “Science and Policy: Historical Insights.” Social Science & Medicine 49, no. 2 (1999): 1133-1138.

2. Cookson, Richard. “Evidence-based Policy Making in Health Care: What it is and What it Isn’t.” Journal of Health Services Research & Policy 10 (2005): 118-121.

3. Slavin, Robert E. “Perspectives on Evidence-Based Research in Education—What Works? Issues in Synthesizing Educational Program Evaluations.” Educational Researcher 37, no. 1 (2008): 5-14.

4. MacKenzie, Doris Layton. “Evidence-Based Corrections: Identifying What Works.” Crime & Delinquency 46, no. 4 (2000): 457-471.

5. Davies, Huw T.O., Sandra M. Nutley, and Peter C. Smith. What Works? Evidence-Based Policy and Practice in Public Services. Bristol: Policy Press, 2000.

6. UK Government. What Works: Evidence Centres for Social Policy. London: Cabinet Office, 2013.

7. Petticrew, Mark. “Public Health Evaluation: Epistemological Challenges to Evidence Production and Use.” Evidence & Policy: A Journal of Research, Debate and Practice 9 (2013): 87-95.

8. Smith, Katherine. “Understanding the Influence of Evidence In Public Health Policy: What Can We Learn From the ‘Tobacco Wars’?” Social Policy & Administration 47 (2013): 382-398.

9. Berridge, Virginia and B. Thom. “Research And Policy: What Determines The Relationship?” Policy Studies 17 (1996): 23-34.

10. Canadian Task Force on the Periodic Health Examination. The Canadian Guide to Clinical Preventative Medicine. Canada Communication Group, 1994.

11. Guyatt, Gordon, et al. (Evidence-Based Medicine Working Group). “Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine.” JAMA 268, no. 17 (1992): 2420-2425.

12. Merton, Robert K. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press, 1973.

13. Chalmers, Thomas C., Harry Smith Jr., Bradley Blackburn, Bernard Silverman, Biruta Schroeder, Dinah Reitman, and A. Ambroz. “A Method for Assessing the Quality of a Randomized Control Trial.” Controlled Clinical Trials 2 (1981): 31-49.

14. Borgerson, Kirstin. “Valuing Evidence: Bias and the Evidence Hierarchy of Evidence-based Medicine.” Perspectives in Biology and Medicine 52 (2009): 218-233.

15. NICE. “Guideline Development Methods 11 – Creating Guideline Recommendations.” [http://www.nice.org.uk/niceMedia/pdf/GDM_Chapter11_0305.pdf]

16. Oxman, Andrew D. and the GRADE Working Group. “Grading Quality of Evidence and Strength of Recommendations.” British Medical Journal 328 (2004): 1490-1494.

17. Ebell, Mark H., Jay Siwek, Barry D. Weiss, Steven H. Woolf, Jeffrey Susman, Bernard Ewigman, and Marjorie Bowman. “Strength of Recommendation Taxonomy (SORT): A Patient-Centered Approach to Grading Evidence in the Medical Literature.” American Family Physician 69 (2004): 548-556.

18. CEBM. “Welcome.” http://www.cebm.ox.ac.uk/

19. Nutley, Sandra, Alison Powell, and Huw Davies. “What Counts as Good Evidence?” 2012. http://www.alliance4usefulevidence.org/assets/What-Counts-as-Good-Evidence-WEB.pdf

20. Lavis, John N, Dave Robertson, Jennifer M Woodside, Christopher B McLeod, and Julia Abelson. “How Can Research Organizations More Effectively Transfer Research Knowledge to Decision Makers?” Milbank Quarterly 81 (2003): 221-248.

21. Mitton, Craig, Carol Adair, Emily McKenzie, Scott B. Patten, and Brenda Waye Perry. “Knowledge Transfer and Exchange: Review and Synthesis of the Literature.” Milbank Quarterly 85, no. 4 (2007): 729-768.

22. Booth, Andrew. “On Hierarchies, Malarkeys and Anarchies of Evidence.” Health Information & Libraries Journal 27 (2010): 84-88.

23. Petticrew, Mark and Helen Roberts. “Evidence, Hierarchies, and Typologies: Horses for Courses.” Journal of Epidemiology and Community Health 57 (2003): 527-529.

24. Black, Nick. “Evidence Based Policy: Proceed With Care.” British Medical Journal 323 (2001): 275-279.

25. Glasziou, Paul, Jan Vandenbroucke, and Iain Chalmers. “Assessing the Quality of Research.” British Medical Journal 328 (2004): 39.

26. Russell, Jill, Trisha Greenhalgh, Emma Byrne, and Janet McDonnell. “Recognizing Rhetoric in Health Care Policy Analysis.” Journal of Health Services Research & Policy 13 (2008): 40-46.

27. Boaz, Annette and Deborah Ashby. Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice. London: ESRC UK Centre for Evidence Based Policy and Practice, 2003.

28. Dobrow, Mark J., Vivek Goel, and R.E.G. Upshur. “Evidence-Based Health Policy: Context and Utilisation.” Social Science & Medicine 58, no. 1 (2004): 207-218.

29. Krieger, Nancy. “The Making of Public Health Data: Paradigms, Politics, and Policy.” Journal of Public Health Policy 13 (1992): 412-427.

30. Lasswell, Harold D. Politics: Who Gets What, When, How. Gloucester, MA: Peter Smith Publisher, 1990.

31. Stone, Deborah. Policy Paradox: The Art of Political Decision-Making. London: W.W. Norton & Company, 2002.

32. Easton, David. The Political System: An Inquiry into the State of Political Science. New York: Alfred A. Knopf, 1971.

33. Clark, Sarah and Albert Weale. “Social Values in Health Priority Setting: A Conceptual Framework.” Journal of Health Organization and Management 26 (2012): 293-316.

34. Barnes, Amy and Justin Parkhurst. “Can Global Health Policy Be Depoliticised? A Critique of Global Calls for Evidence-Based Policy.” In The Handbook of Global Health Policy. Edited by Garrett W. Brown, Gavin Yamey, and Sarah Wamala. Hoboken, NJ: Wiley-Blackwell, 2014.

35. Smith, Katherine E. Beyond Evidence-Based Policy in Public Health: The Interplay of Ideas. Hampshire: Palgrave Macmillan, 2013.

36. Marmot, Michael and Sharon Friel. “Global Health Equity: Evidence for Action on the Social Determinants of Health.” Journal of Epidemiology and Community Health 62 (2008): 1095-1097.

37. Auerbach, Judith D., Justin O. Parkhurst, and Carlos F. Cáceres. “Addressing Social Drivers of HIV/AIDS for the Long-Term Response: Conceptual and Methodological Considerations.” Global Public Health 6 (2011): S293-S309.

38. Schön, Donald A. and Martin Rein. Frame Reflection: Toward the Resolution of Intractable Policy Controversies. New York: Basic Books, 1994.

39. Goldenberg, Maya J. “On Evidence and Evidence-Based Medicine: Lessons from the Philosophy of Science.” Social Science & Medicine 62 (2006): 2621-2632.

40. Worrall, John. “Evidence: Philosophy of Science Meets Medicine.” Journal of Evaluation in Clinical Practice 16 (2010): 356-362.

41. Cartwright, Nancy. “A Philosopher’s View of the Long Road from RCTs to Effectiveness.” The Lancet 377 (2011): 1400-1401.

42. Cartwright, Nancy and Jeremy Hardie. Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford: Oxford University Press, 2012.

43. Pawson, Ray and Nick Tilley. Realistic Evaluation. London: Sage Publications, 1997.

44. Victora, Cesar G., Jean-Pierre Habicht, and Jennifer Bryce. “Evidence-Based Public Health: Moving Beyond Randomized Trials.” American Journal of Public Health 94 (2004): 400-405.

45. Parkhurst, Justin O. “Understanding the Correlations Between Wealth, Poverty and Human Immunodeficiency Virus Infection in African Countries.” Bulletin of the World Health Organization 88 (2010): 519-526.

46. Pawson, Ray, Trisha Greenhalgh, Gill Harvey, and Kieran Walshe. “Realist Review – A New Method of Systematic Review Designed for Complex Policy Interventions.” Journal of Health Services Research & Policy 10 (2005): 21-34.

47. Courtenay, Will H. “Constructions of Masculinity and their Influence on Men’s Well-Being: A Theory of Gender and Health.” Social Science and Medicine 50 (2000): 1385-1402.

48. Doyal, Lesley. “Gender Equity In Health: Debates and Dilemmas.” Social Science and Medicine 51 (2000): 931-940.

49. Krieger, Nancy, Jarvis T. Chen, Pamela D. Waterman, David H. Rehkopf, and S.V. Subramanian. “Race/Ethnicity, Gender, and Monitoring Socioeconomic Gradients in Health: A Comparison of Area-Based Socioeconomic Measures—The Public Health Disparities Geocoding Project.” American Journal of Public Health 93, no. 10 (2003): 1655-1671.

50. Gatrell, Anthony C. and Susan Elliott. Geographies of Health: An Introduction. John Wiley & Sons, 2009.

51. Marmot, Michael and Richard G. Wilkinson, eds. Social Determinants of Health. Oxford: Oxford University Press, 2009.

52. Wilkinson, Richard G. Unhealthy Societies: The Afflictions of Inequality. Routledge, 2002.

53. Krieger, Nancy. “The Making of Public Health Data: Paradigms, Politics, and Policy.” Journal of Public Health Policy 13 (1992): 412-427.

54. Bloor, David. Knowledge and Social Imagery. London: Routledge & Kegan Paul, 1976.

55. Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1970.

56. Krieger, Nancy, David R. Williams, and Nancy E. Moss. “Measuring Social Class in US Public Health Research: Concepts, Methodologies, and Guidelines.” Annual Review of Public Health 18 (1997): 341-378.

57. Morrissey, Michael. “Ethnicity and Race in Drug and Alcohol Research.” Health Sociology Review 14 (2005): 111-120.

58. Collins, Chiquita A. and David R. Williams. “Segregation and Mortality: The Deadly Effects of Racism?” Sociological Forum 14, no. 3 (1999): 495-523.

59. Blaxter, Mildred. “Diagnosis as Category and Process: The Case of Alcoholism.” Social Science & Medicine Part A: Medical Psychology & Medical Sociology 12 (1978): 9-17.

60. Imrie, Rob. “Demystifying Disability: A Review of the International Classification of Functioning, Disability and Health.” Sociology of Health & Illness 26, no. 3 (2004): 287-305.

61. Williams, Gareth H. “The Determinants of Health: Structure, Context and Agency.” Sociology of Health & Illness 25 (2003): 131-154.

62. Moser, Claus and Graham Kalton. Survey Methods in Social Investigation. New York: Basic Books, 1971.

63. Hammersley, Martyn and Paul Atkinson. Ethnography: Principles in Practice. Routledge, 2007.

64. Davies, Charlotte A. Reflexive Ethnography: A Guide to Researching Selves and Others. Routledge, 2008.


