Evidence-based policy

Evidence-based policy (also known as evidence-based governance) is a concept in public policy that advocates for policy decisions to be grounded in, or influenced by, rigorously established objective evidence. The concept stands in contrast to policymaking predicated on ideology, 'common sense', anecdotes, or personal intuitions. The methodology employed in evidence-based policy often includes rigorous research methods such as randomized controlled trials (RCTs).[1] Good data, analytical skills, and political support for the use of scientific information are typically seen as the crucial elements of an evidence-based approach.[2]

An individual or organisation is justified in claiming that a specific policy is evidence-based if, and only if, three conditions are met. First, the individual or organisation possesses comparative evidence about the effects of the specific policy in comparison to the effects of at least one alternative policy. Second, the specific policy is supported by this evidence according to at least one of the individual's or organisation's preferences in the given policy area. Third, the individual or organisation can provide a sound account for this support by explaining the evidence and preferences that lay the foundation for the claim.[3]

While proponents of evidence-based policy have identified certain types of evidence, such as scientifically rigorous evaluation studies like randomized controlled trials, as optimal for policymakers to consider, others argue that not all policy-relevant areas are best served by quantitative research. This discrepancy has sparked debates about the types of evidence that should be utilized. For example, policies concerning human rights, public acceptability, or social justice may necessitate different forms of evidence than what randomized trials provide. Furthermore, evaluating policy often demands moral philosophical reasoning in addition to the assessment of intervention effects, which randomized trials primarily aim to provide.[4]

In response to such complexities, some policy scholars have moved away from using the term evidence-based policy, adopting alternatives like evidence-informed. This semantic shift allows for continued reflection on the need to elevate the rigor and quality of evidence used, while sidestepping some of the limitations or reductionist notions occasionally associated with the term evidence-based. Despite these nuances, the phrase "evidence-based policy" is still widely employed, generally signifying a desire for evidence to be used in a rigorous, high-quality, and unbiased manner, while avoiding its misuse for political ends.[5]

History

The shift towards contemporary evidence-based policy is deeply rooted in the broader movement towards evidence-based practice. This shift was largely influenced by the emergence of evidence-based medicine during the 1980s.[1] However, the term 'evidence-based policy' was not adopted in the medical field until the 1990s.[6] In social policy, the term was not employed until the early 2000s.[7]

An early instance of evidence-based policy appeared in Australian tariff-making, where legislation required that tariffs be informed by a public report issued by the Tariff Board covering their tariff, industrial, and economic implications.[8]

History of evidence-based medicine

Evidence-based medicine (EBM) is a term first introduced by Gordon Guyatt.[9] Nevertheless, examples of EBM can be traced back to at least the early 1900s, and some contend that the earliest instance dates to the 11th century, when the Ben Cao Tu Jing, a Song dynasty text, described a method to evaluate the efficacy of ginseng.[10]

Many scholars regard evidence-based policy as an evolution from "evidence-based medicine", where research findings are utilized to support clinical decisions. In this model, evidence is collected through randomized controlled trials (RCTs) which compare a treatment group with a placebo group to measure outcomes.[11]

While the earliest published RCTs in medicine date back to the 1940s and 1950s,[1] the term 'evidence-based medicine' did not appear in published medical research until 1993.[6] In the same year, the Cochrane Collaboration was established in the UK. The organization maintains up-to-date systematic reviews of RCTs and publishes "Cochrane reviews", which synthesize primary research in human health and health policy.[12]

Use of the term EBM has increased markedly since the 2000s, and the influence of EBM within medicine has expanded substantially.[13]

History of evidence-based policy making

The application of randomized controlled trials in social policy was notably later than in the medical field. Although elements of an evidence-based approach can be traced back as far as the fourteenth century, it was popularized more recently during the tenure of the Blair Government in the United Kingdom.[8] This government expressed a desire to shift away from ideological decision-making in policy formulation.[8] For instance, a 1999 UK Government white paper, Modernising Government, emphasized the need for policies that "really deal with problems, are forward-looking and shaped by evidence rather than a response to short-term pressures; [and] tackle causes not symptoms."[14]

This shift in policy formulation led to an upswing in research and activism advocating for more evidence-based policy-making. As a result, the Campbell Collaboration was established in 1999 as a sibling organization to the Cochrane Collaboration.[11][15] The Campbell Collaboration undertakes reviews of the most robust evidence, analyzing the impacts of social and educational policies and practices.

The Economic and Social Research Council (ESRC) furthered the drive for more evidence-based policymaking by granting £1.3 million to the Evidence Network in 1999. Similar to both the Campbell and Cochrane Collaborations, the Evidence Network functions as a hub for evidence-based policy and practice.[11] More recently, the Alliance for Useful Evidence was established, funded by the ESRC, Big Lottery, and Nesta, to advocate for the use of evidence in social policy and practice. The Alliance, operating throughout the UK, promotes the use of high-quality evidence to inform decisions on strategy, policy, and practice through advocacy, research publication, idea sharing, advice, event hosting, and training.

The application of evidence-based policy varies among practitioners. For instance, Michael Kremer and Rachel Glennerster, curious about strategies to enhance students' test scores, conducted randomized controlled trials in Kenya. They experimented with new textbooks, flip charts, and smaller class sizes, but discovered that the only intervention to boost school attendance was treating children for intestinal worms.[16] Their findings led to the establishment of the Deworm the World Initiative, a charity rated highly by GiveWell for its cost-effectiveness.[16]

Recent discussions have emerged about the potential conflicts of interest in evidence-based decision-making applied to public policy development. In their analysis of vocational education in prisons run by the California Department of Corrections, researchers Andrew J. Dick, William Rich, and Tony Waters found that political factors inevitably influenced "evidence-based decisions," which were ostensibly neutral and technocratic. They argue that when policymakers, who have a vested interest in validating previous political judgments, fund evidence, there is a risk of corruption, leading to policy-based evidence making.[17]

Methodology

Evidence-based policy employs various methodologies, but they all commonly share the following characteristics:

  • They test a theory as to why the policy will be effective and what the impacts of the policy will be if it is successful.
  • They include a counterfactual: an analysis of what would have occurred if the policy had not been implemented.
  • They incorporate some measurement of the impact.
  • They examine both direct and indirect effects that occur because of the policy.
  • They identify uncertainties and control for external influences outside of the policy that may affect the outcome.
  • They can be tested and replicated by a third party.[citation needed]

The methodology used in evidence-based policy aligns with the cost-benefit framework. It is designed to estimate a net payoff if the policy is implemented. Due to the difficulty in quantifying some effects and outcomes of the policy, the focus is primarily on whether benefits will outweigh costs, rather than assigning specific values.[8]
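
The counterfactual comparison and net-payoff logic described above can be sketched in a few lines of code. This is an illustrative sketch only: the outcome figures, the per-unit value, and the cost are invented, and a real evaluation would use observed data and proper statistical inference rather than a simple difference in means.

```python
# Illustrative sketch of the core evaluation logic: compare measured
# outcomes under the policy against a counterfactual (what would have
# occurred without it). All figures below are invented for illustration.

def mean(values):
    return sum(values) / len(values)

def estimated_impact(with_policy, counterfactual):
    """Difference in mean outcomes attributable to the policy."""
    return mean(with_policy) - mean(counterfactual)

def net_payoff(impact, value_per_unit, cost):
    """Rough net benefit: monetized impact minus implementation cost."""
    return impact * value_per_unit - cost

# Invented outcome scores for a treated group and a comparison group
treated = [72.0, 75.0, 71.0, 78.0, 74.0]
comparison = [68.0, 70.0, 69.0, 71.0, 67.0]

impact = estimated_impact(treated, comparison)
payoff = net_payoff(impact, value_per_unit=10.0, cost=30.0)
print(impact, payoff)  # benefits outweigh costs if payoff > 0
```

The focus on the sign of the payoff, rather than its exact value, mirrors the point above: the question is whether benefits outweigh costs, not what precise value to assign them.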

Types of evidence in evidence-based policy making

Various types of data can be considered evidence in evidence-based policy making.[18] The scientific method organizes this data into tests to validate or challenge specific beliefs or hypotheses. The outcomes of various tests may hold varying degrees of credibility within the scientific community, influenced by factors such as the type of blind experiment (blind vs. double-blind), sample size, and replication. Advocates for evidence-based policy strive to align societal needs (as framed within Maslow's hierarchy of needs) with outcomes that the scientific method indicates as most probable.[19]

Quantitative evidence

Quantitative evidence for policymaking includes numerical data from peer-reviewed journals, public surveillance systems, or individual programs. Quantitative data can also be collected by the government or policymakers themselves through surveys.[18] Both evidence-based medicine (EBM) and evidence-based public health policy constructions extensively utilize quantitative evidence.

Qualitative evidence

Qualitative evidence comprises non-numerical data gathered through methods such as observations, interviews, or focus groups. It is often used to craft compelling narratives to influence decision-makers.[18] The distinction between qualitative and quantitative data does not imply a hierarchy; both types of evidence can be effective in different contexts. Policymaking often involves a combination of qualitative and quantitative evidence.[19]

Scholarly communication in policy-making

(Image caption: Example of the policy cycle concept)

Beyond producing research on the issues that policies address, academics provide input to policy through various channels:

  • some studies investigate existing policies (policy studies)[20]
  • some studies include policy options with varying levels of specificity or detail[21] or compare possible rough pathway-options[22]
  • some science-related organizations devise concrete policy proposals[23]
  • some academics engage in science communication or activism in various ways, such as holding press conferences, engaging with news media, taking direct action themselves to attract media attention,[24] writing collectively signed public documents,[25] engaging in social media activity,[26] or creating open letters.

Evidence-based policy initiatives by non-governmental organizations

Overseas Development Institute

The Overseas Development Institute (ODI) asserts that research-based evidence can significantly influence policies that have profound impacts on lives. Illustrative examples mentioned in the UK's Department for International Development's (DFID) new research strategy include a 22% reduction in neonatal mortality in Ghana, achieved by encouraging women to initiate breastfeeding within one hour of childbirth, and a 43% decrease in mortality among HIV-positive children due to the use of a widely accessible antibiotic.

Following numerous policy initiatives, the ODI evaluated its evidence-based policy efforts. The analysis identified several factors that lead to policy decisions being only weakly informed by research-based evidence: information gaps, secrecy, the need for rapid responses against slow data availability, political expediency (what is popular), and a lack of interest among policy-makers in making policies more scientifically grounded. Policy development processes are also complex, and seldom linear or logical, so the direct application of presented information by policy-makers is unlikely. When a discrepancy is identified between the scientific process and the political process, those seeking to reduce the gap face a choice: encourage politicians to adopt more scientific methods, or prompt scientists to employ more political strategies.

The ODI suggested that, in the face of limited progress in evidence-based policy, individuals and organizations possessing relevant data should leverage the emotional appeal and narrative power typically associated with politics and advertising to influence decision-makers. Instead of relying solely on tools like cost–benefit analysis and logical frameworks,[27] the ODI recommended identifying key players, crafting compelling narratives, and simplifying complex research data into clear, persuasive stories. Rather than advocating for systemic changes to promote evidence-based policy, the ODI encouraged data holders to actively engage in the political process.

Furthermore, the ODI posited that transforming a person who merely 'finds' data into someone who actively 'uses' data within our current system necessitates a fundamental shift towards policy engagement over academic achievement. This shift implies greater involvement with the policy community, the development of a research agenda centered on policy issues instead of purely academic interests, the acquisition of new skills or the formation of multidisciplinary teams, the establishment of new internal systems and incentives, increased investment in communications, the production of a different range of outputs, and enhanced collaboration within partnerships and networks.

The Future Health Systems consortium, based on research undertaken in six countries across Asia and Africa, has identified several key strategies to enhance the incorporation of evidence into policy-making.[28] These strategies include enhancing the technical capacity of policy-makers; refining the presentation of research findings; leveraging social networks; and establishing forums to facilitate the connection between evidence and policy outcomes.[29][30]

The Pew Charitable Trusts

The Pew Charitable Trusts is a non-governmental organization dedicated to using data, science, and facts to serve the public good.[31] One of its initiatives, Results First, collaborates with US states to promote the use of evidence-based policymaking in the development of their laws.[32] The initiative has created a framework that serves as an example of how to implement evidence-based policy.

Pew's five key components of evidence-based policy are:[31]

  1. Program Assessment: This involves systematic reviews of the available evidence on the effectiveness of public programs, the development of a comprehensive inventory of funded programs, categorization of these programs by their proven effectiveness, and identification of their potential return on investment.
  2. Budget Development: This process incorporates the evidence of program effectiveness into budget and policy decisions, prioritizing funding for programs that deliver a high return on investment. It involves integrating program performance information into the budget development process, presenting information to policymakers in user-friendly formats, including relevant studies in budget hearings and committee meetings, establishing incentives for implementing evidence-based programs and practices, and building performance requirements into grants and contracts.
  3. Implementation Oversight: This ensures that programs are effectively delivered and remain faithful to their intended design. Key aspects include establishing quality standards for program implementation, building and maintaining capacity for ongoing quality improvement and monitoring of fidelity to program design, balancing program fidelity requirements with local needs, and conducting data-driven reviews to improve program performance.
  4. Outcome Monitoring: This involves routinely measuring and reporting outcome data to determine whether programs are achieving their desired results. It includes developing meaningful outcome measures for programs, agencies, and the community, conducting regular audits of systems for collecting and reporting performance data, and regularly reporting performance data to policymakers.
  5. Targeted Evaluation: This process involves conducting rigorous evaluations of new and untested programs to ensure they warrant continued funding. This includes leveraging available resources to conduct evaluations, targeting evaluations to high-priority programs, making better use of administrative data for program evaluations, requiring evaluations as a condition for continued funding for new initiatives, and developing a centralized repository for program evaluations.

Cost-benefit analysis in evidence-based policy

Cost-benefit analysis (CBA) is a method used in evidence-based policy. It is an economic tool used to assess the economic, social, and environmental impacts of policies. The aim is to guide policymakers toward decisions that increase societal welfare.[33]

The use of cost-benefit analysis in US policy-making was first mandated by President Ronald Reagan's Executive Order 12291 in 1981. The order required that administrative decisions be based on adequate information regarding the potential impacts of regulation; maximizing the net benefits to society was a primary focus among its five general requirements.[34]

Later presidents, including Bill Clinton and Barack Obama, modified but still emphasized the importance of cost-benefit analysis in their executive orders. For example, Clinton's Executive Order 12866 kept the need for cost-benefit analysis but also stressed the importance of flexibility, public involvement, and coordination among agencies.[35]

During Obama's administration, Executive Order 13563 further strengthened the role of cost-benefit analysis in regulatory review. It encouraged agencies to consider values that are hard or impossible to quantify, like equity, human dignity, and fairness.[36]

The use of cost-benefit analysis in these executive orders highlights its importance in evidence-based policy. By comparing the potential impacts of different policy options, cost-benefit analysis aids in making policy decisions that are based on empirical evidence and designed to maximize societal benefits.
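
As a hypothetical illustration of the kind of comparison these orders call for, the sketch below discounts each option's projected net flows to a net present value and prefers the option with the larger one. The flow figures and the 3% discount rate are invented assumptions for illustration, not values drawn from any actual regulatory analysis.

```python
# Hypothetical cost-benefit comparison of two policy options.
# All flows and the discount rate are invented for illustration.

def net_present_value(net_flows, rate):
    """Discount yearly (benefit - cost) flows back to present value."""
    return sum(flow / (1.0 + rate) ** year
               for year, flow in enumerate(net_flows))

# Year-by-year net flows (benefits minus costs), in arbitrary units
option_a = [-100.0, 40.0, 40.0, 40.0]  # larger up-front cost, larger benefits
option_b = [-60.0, 20.0, 25.0, 30.0]   # cheaper, smaller benefits

rate = 0.03  # assumed social discount rate
npv_a = net_present_value(option_a, rate)
npv_b = net_present_value(option_b, rate)
preferred = "A" if npv_a > npv_b else "B"
print(round(npv_a, 2), round(npv_b, 2), preferred)
```

Discounting matters because costs are often borne up front while benefits accrue over years; the choice of discount rate can itself change which option the analysis prefers.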

Critiques

Evidence-based policy has faced several critiques. Paul Cairney, a professor of politics and public policy at the University of Stirling in Scotland, contends[37] that proponents of the approach often underestimate the complexity of policy-making and misconstrue how policy decisions are typically made. Nancy Cartwright and Jeremy Hardie[38] question the emphasis on randomized controlled trials (RCTs), arguing that evidence from RCTs is not always sufficient for making decisions: applying experimental evidence to a policy context requires an understanding of the conditions present in the experimental setting and a case that those conditions also hold in the target environment of the proposed intervention. They further argue that the prioritization of RCTs invites the criticism that evidence-based policy is overly focused on narrowly defined 'interventions', implying surgical action on a single causal factor to influence its effect.

The concept of intervention within the evidence-based policy movement aligns with James Woodward's interventionist theory of causality.[39] However, policy-making also involves other types of decisions, such as institutional reforms and predictive actions. These other forms of evidence-based decision-making do not necessitate evidence of an invariant causal relationship under intervention. Hence, mechanistic evidence and observational studies are often adequate for implementing institutional reforms and actions that do not alter the causes of a causal claim.[40]

Furthermore, there have been reports[41] of frontline public servants, such as hospital managers, making decisions that detrimentally affect patient care to meet predetermined targets. This argument was presented by Professor Jerry Muller of the Catholic University of America in his book The Tyranny of Metrics.[42]

References

  1. ^ a b c Baron, Jon (1 July 2018). "A Brief History of Evidence-Based Policy". The Annals of the American Academy of Political and Social Science. 678 (1): 40–50. doi:10.1177/0002716218763128. ISSN 0002-7162. S2CID 149924800.
  2. ^ a b Head, Brian (2010). "2 Evidence-based policy: principles and requirements" (PDF). Strengthening Evidence Based Policy in the Australian Federation: Roundtable Proceedings. Vol. 1. Productivity Commission. pp. 13–26. ISBN 978-1-74037-311-1.
  3. ^ Gade, Christian (2023). "When is it justified to claim that a practice or policy is evidence-based? Reflections on evidence and preferences". Evidence & Policy. 20 (2): 244–253. doi:10.1332/174426421X16905606522863. S2CID 261138726.   This article incorporates text available under the CC BY 4.0 license.
  4. ^ Petticrew, M (2003). "Evidence, hierarchies, and typologies: Horses for courses". Journal of Epidemiology & Community Health. 57 (7): 527–9. doi:10.1136/jech.57.7.527. PMC 1732497. PMID 12821702.
  5. ^ Parkhurst, Justin (2017). The Politics of Evidence: from Evidence Based Policy to the Good Governance of Evidence (PDF). London: Routledge. doi:10.4324/9781315675008. ISBN 978-1138939400.[page needed]
  6. ^ a b Guyatt, G. H. (1 December 1993). "Users' guides to the medical literature. II. How to use an article about therapy or prevention. A. Are the results of the study valid? Evidence-Based Medicine Working Group". JAMA: The Journal of the American Medical Association. 270 (21): 2598–2601. doi:10.1001/jama.270.21.2598. PMID 8230645.
  7. ^ Hammersley, M. (2013). The Myth of Research-Based Policy and Practice. Sage. ISBN 9781446280805.
  8. ^ a b c d Banks, Gary (29 May 2009). "Evidence-Based Policy Making: What is It? How Do We Get It?". ANU Public Lecture Series. Productivity Commission.
  9. ^ Guyatt, Gordon H. (1 March 1991). "Evidence-based medicine". ACP Journal Club. 114 (2): A16. doi:10.7326/ACPJC-1991-114-2-A16. ISSN 1056-8751. S2CID 78930206. Archived from the original on 8 December 2021. Retrieved 8 December 2021.
  10. ^ Payne-Palacio, June R.; Canter, Deborah D. (2016). The Profession of Dietetics: A Team Approach. Jones & Bartlett Learning. ISBN 978-1284126358. Archived from the original on 30 April 2024. Retrieved 16 December 2021.
  11. ^ a b c Marston, G.; Watts, R. (2003). "Tampering with the evidence: a critical appraisal of evidence-based policy-making" (PDF). The Drawing Board: An Australian Review of Public Affairs. 3 (3): 143–163. ISSN 1832-1526.
  12. ^ The Cochrane Collaboration Archived 25 December 2019 at the Wayback Machine Retrieved 10 September 2014.
  13. ^ Claridge, Jeffrey A.; Fabian, Timothy C. (1 May 2005). "History and Development of Evidence-based Medicine". World Journal of Surgery. 29 (5): 547–553. doi:10.1007/s00268-005-7910-1. ISSN 1432-2323. PMID 15827845. S2CID 21457159. Archived from the original on 30 April 2024. Retrieved 8 December 2021.
  14. ^ "Evidence-based policy making". Department for Environment, Food and Rural Affairs. 21 September 2006. Archived from the original on 14 January 2011. Retrieved 6 March 2010.
  15. ^ The Campbell Collaboration Archived 28 January 2018 at the Wayback Machine Retrieved 10 September 2014.
  16. ^ a b Thompson, Derek (15 June 2015). "The Greatest Good". The Atlantic. Archived from the original on 20 August 2019. Retrieved 6 March 2017.
  17. ^ Dick, Andrew J.; Rich, William; Waters, Tony (2016). Prison Vocational Education and Policy in the United States. Palgrave Macmillan. pp. 11–40, 281–306. doi:10.1057/978-1-137-56469-6. ISBN 978-1-137-56469-6.
  18. ^ a b c Brownson, Ross C.; Chriqui, Jamie F.; Stamatakis, Katherine A. (2009). "Understanding Evidence-Based Public Health Policy". American Journal of Public Health. 99 (9): 1576–83. doi:10.2105/AJPH.2008.156224. ISSN 0090-0036. PMC 2724448. PMID 19608941.
  19. ^ a b Court, Julius; Sutcliffe, Sophie (November 2005). "Evidence-Based Policymaking: What is it? How does it work? What relevance for developing countries?" (PDF). Overseas Development Institute. Archived (PDF) from the original on 3 December 2021. Retrieved 8 December 2021.
  20. ^ Hoffman, Steven J.; Baral, Prativa; Rogers Van Katwyk, Susan; Sritharan, Lathika; Hughsam, Matthew; Randhawa, Harkanwal; Lin, Gigi; Campbell, Sophie; Campus, Brooke; Dantas, Maria; Foroughian, Neda; Groux, Gaëlle; Gunn, Elliot; Guyatt, Gordon; Habibi, Roojin; Karabit, Mina; Karir, Aneesh; Kruja, Krista; Lavis, John N.; Lee, Olivia; Li, Binxi; Nagi, Ranjana; Naicker, Kiyuri; Røttingen, John-Arne; Sahar, Nicola; Srivastava, Archita; Tejpar, Ali; Tran, Maxwell; Zhang, Yu-qing; Zhou, Qi; Poirier, Mathieu J. P. (9 August 2022). "International treaties have mostly failed to produce their intended effects". Proceedings of the National Academy of Sciences. 119 (32): e2122854119. Bibcode:2022PNAS..11922854H. doi:10.1073/pnas.2122854119. ISSN 0027-8424. PMC 9372541. PMID 35914153.
  21. ^ "AR6 Synthesis Report: Climate Change 2023 — IPCC". Archived from the original on 2 May 2023. Retrieved 18 April 2023.
  22. ^ Weidner, Till; Guillén-Gosálbez, Gonzalo (15 February 2023). "Planetary boundaries assessment of deep decarbonisation options for building heating in the European Union". Energy Conversion and Management. 278: 116602. Bibcode:2023ECM...27816602W. doi:10.1016/j.enconman.2022.116602. hdl:20.500.11850/599236. ISSN 0196-8904.
  23. ^ "GermanZero - Creating a better climate". germanzero.de. Archived from the original on 17 May 2023. Retrieved 17 May 2023.
  24. ^ Capstick, Stuart; Thierry, Aaron; Cox, Emily; Berglund, Oscar; Westlake, Steve; Steinberger, Julia K. (September 2022). "Civil disobedience by scientists helps press for urgent climate action". Nature Climate Change. 12 (9): 773–4. Bibcode:2022NatCC..12..773C. doi:10.1038/s41558-022-01461-y. ISSN 1758-6798. S2CID 251912378. Archived from the original on 4 September 2022. Retrieved 17 May 2023.
  25. ^ Ripple, William J; Wolf, Christopher; Newsome, Thomas M; Barnard, Phoebe; Moomaw, William R (5 November 2019). "World Scientists' Warning of a Climate Emergency". BioScience. doi:10.1093/biosci/biz088. hdl:2445/151800.
  26. ^ Bik, Holly M.; Goldstein, Miriam C. (23 April 2013). "An Introduction to Social Media for Scientists". PLOS Biology. 11 (4): e1001535. doi:10.1371/journal.pbio.1001535. ISSN 1545-7885. PMC 3635859. PMID 23630451.
  27. ^ "Policy Entrepreneurs: Their Activity Structure and Function in the Policy Process". Journal of Public Administration Research and Theory. 1991. doi:10.1093/oxfordjournals.jpart.a037081. hdl:10945/53405.
  28. ^ Syed, Shamsuzzoha B; Hyder, Adnan A; Bloom, Gerald; Sundaram, Sandhya; Bhuiya, Abbas; Zhenzhong, Zhang; Kanjilal, Barun; Oladepo, Oladimeji; Pariyo, George; Peters, David H (2008). "Exploring evidence-policy linkages in health research plans: A case study from six countries". Health Research Policy and Systems. 6: 4. doi:10.1186/1478-4505-6-4. PMC 2329631. PMID 18331651.
  29. ^ Hyder, A.A.; Corluka, A.; Winch, P.J.; El-Shinnawy, A.; Ghassany, H.; Malekafzali, H.; Lim, M.K.; Mfutso-Bengo, J.; Segura, E.; Ghaffar, A. (2011). "National policy-makers speak out: are researchers giving them what they need?". Health Policy and Planning. 26 (1): 73–82. doi:10.1093/heapol/czq020. PMC 4031573. PMID 20547652.
  30. ^ Hyder, A; Syed, S; Puvanachandra, P; Bloom, G; Sundaram, S; Mahmood, S; Iqbal, M; Hongwen, Z; Ravichandran, N; Oladepo, O; Pariyo, G; Peters, D (2010). "Stakeholder analysis for health research: Case studies from low- and middle-income countries". Public Health. 124 (3): 159–166. doi:10.1016/j.puhe.2009.12.006. PMID 20227095.
  31. ^ a b "Evidence-Based Policymaking: A guide for effective". Pew-MacArthur Results First Initiative. November 2014.
  32. ^ "About The Pew Charitable Trusts". pew.org. Archived from the original on 30 April 2024. Retrieved 8 December 2021.
  33. ^ Boardman, Anthony E. "Cost-Benefit Analysis: Concepts and Practice". Cambridge Core. Retrieved 14 May 2023.
  34. ^ "Executive Order 12291". Wikisource. Archived from the original on 14 May 2023. Retrieved 14 May 2023.
  35. ^ "Executive Orders Disposition Tables Clinton - 1993". National Archives. 15 August 2016. Archived from the original on 6 June 2023. Retrieved 14 May 2023.
  36. ^ "Executive Order 13563 - Improving Regulation and Regulatory Review". Obama White House Archives. 18 January 2011. Archived from the original on 4 June 2023. Retrieved 14 May 2023.
  37. ^ Cairney, Paul (2016). The politics of evidence-based policy making. Palgrave Macmillan. ISBN 978-1-137-51781-4. OCLC 946724638.
  38. ^ Cartwright, Nancy; Hardie, Jeremy (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press. ISBN 978-0199986705. Archived from the original on 30 April 2024. Retrieved 2 October 2020.
  39. ^ Woodward, James (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press. ISBN 978-0198035336.
  40. ^ Maziarz, Mariusz (2020). The Philosophy of Causality in Economics: Causal Inferences and Policy Proposals. Routledge. ISBN 978-0-429-34642-2. OCLC 1143838629.
  41. ^ "Government by numbers: how data is damaging our public services". Apolitical. Archived from the original on 11 April 2018. Retrieved 10 April 2018.
  42. ^ Muller, Jerry Z. (2017). The tyranny of metrics. Princeton University Press. ISBN 978-0691174952. OCLC 1005121833.
