Below are resources to learn more about causal program evaluation, randomized controlled trials (RCTs), and using research and evidence in program implementation.
EII’s Certificate Program offers courses, some available à la carte and some as part of the certificate, on designing and implementing low-cost, high-quality program evaluation.
These two articles provide more information about the pros and cons of using RCTs in education.
Whitehurst, G. (2012). The Value of Experiments in Education. Education Finance and Policy, 7(2), 107-123.
Schanzenbach, D. (2012). Limitations of Experiments in Education Research. Education Finance and Policy, 7(2), 219-232.
These books provide more in-depth information on the design and implementation of RCTs.
Murnane, R., & Willett, J. (2011). Methods Matter: Improving Causal Inference in Educational and Social Science Research. New York: Oxford University Press.
Dunning, T. (2012). Natural Experiments in the Social Sciences. New York: Cambridge University Press.
Gerber, A. S., & Green, D. P. (2012). Field Experiments: Design, Analysis, and Interpretation. New York: W. W. Norton.
This report describes how to incorporate multiple types of social science research methods in RCTs to assess intervention effects.
This guide and the accompanying chart focus on using different types of research methods along with an RCT. The objectives of the guide and chart are to provide: i) a rationale for incorporating multiple methods into RCTs, ii) a guide for designing RCTs with multiple methods, iii) examples from the literature that illustrate the various steps in the process, and iv) examples from the literature that illustrate how additional data collected with multiple methods may be used to determine whether an intervention caused the desired effect, why the intervention works better for some participants than for others, and how the intervention can be redesigned to be more effective or efficient.
This book chapter describes a study on how school decision-makers attend to evidence in their decision making.
Coburn, C. E. (2010). The Partnership for District Change: Challenges of Evidence Use in a Major Urban District. In C. E. Coburn & M. K. Stein (Eds.), Research and Practice in Education: Building Alliances, Bridging the Divide (pp. 167-182). Lanham, MD: Rowman & Littlefield Publishers, Inc.
Below are links to additional resources.
This article discusses problems that make it difficult to transfer programs that appear to work in one place to a new or different setting and offers some suggestions for how to realistically try out a new program.
Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers
Opportunistic experiments are a type of randomized controlled trial that studies the effects of a planned intervention or policy change with minimal added disruption and cost. This guide defines opportunistic experiments and provides examples, discusses issues to consider when identifying potential opportunistic experiments, and outlines the critical steps to complete them. It concludes with a discussion of the potentially low cost of conducting opportunistic experiments and the potentially high cost of not conducting them. Readers will also find a checklist of key questions to consider when conducting opportunistic experiments.
Identifying and Implementing Educational Practices Supported By Rigorous Evidence: A User Friendly Guide
This guide seeks to provide educational practitioners with user-friendly tools to distinguish practices supported by rigorous evidence from those that are not. Section I discusses what a randomized controlled trial is and outlines evidence indicating that such trials should play a role in education.
This document, produced by the Department of Education, provides guidance to State educational agencies (SEAs), local educational agencies (LEAs), schools, educators, and partner organizations to assist them in selecting and using “evidence-based” activities, strategies, and interventions, as defined in Title VIII of the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the Every Student Succeeds Act of 2015 (ESSA).
This blog post by Bill Penuel from the University of Colorado Boulder discusses how researchers can provide evidence-based support for professional development and decision making in research-practice partnerships with districts.