What is Causality?


Video Transcript

To help schools and school districts execute high-quality program evaluations, the Education Innovation Institute at the University of Northern Colorado has created a multi-faceted training program to help you in Evaluating What Works.

Causality vs. Correlation

So what sets this kind of evaluation apart? The key word is “rigorous.” Program evaluations that use rigorous statistical methods can determine whether or not a program or initiative is causing targeted outcomes, such as improved assessment scores or graduation rates.

Showing causality carries much more weight than just showing correlation. At best, correlation can only suggest a program’s promise. Being able to show whether a program actually causes results gives you the tools to direct your resources toward programs that truly make a difference for the students you serve.

Treatment Groups and Control Groups

Evaluations that demonstrate causality require you to follow certain conventions. The design of the evaluation needs to show that changes were caused by the program and not by outside factors.

An essential component is to create a treatment group and a control group. The outcomes of the treatment group, which participated in the program, must be compared to those of the control group, which did not.

However, the method by which the groups are chosen is as important as the groups themselves.

Randomization

Suppose a district wants to adopt a new reading program for elementary students. It asks for teachers to volunteer to try it out. If the reading scores of participating students come in higher than the scores of their peers who did not receive the program, it would not be clear whether the improved performance was due to the program or to some characteristics shared by the volunteer teachers, such as knowing more about reading development, having more teaching experience, or having a knack for developing relationships with students.

Randomization is the best method to assign people either to receive the treatment or to serve as a control, reducing the odds that the two groups differ in some important way. If teachers are randomly assigned to either use or not use the program, characteristics like the ones previously mentioned are more likely to be distributed across both groups rather than concentrated in just the treatment group.
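
In practice, random assignment can be as simple as shuffling a roster and splitting it in half. The Python sketch below illustrates the idea; the teacher IDs and roster size are hypothetical, and a real evaluation would draw the roster from district records.

```python
import random

# Hypothetical roster of volunteer teachers; in practice this would come
# from district records.
teachers = ["T01", "T02", "T03", "T04", "T05", "T06", "T07", "T08"]

random.seed(42)           # fix the seed so the assignment is reproducible
random.shuffle(teachers)  # shuffle, then split the roster in half

half = len(teachers) // 2
treatment = teachers[:half]  # these teachers use the new reading program
control = teachers[half:]    # these teachers continue with the current program

print("Treatment:", sorted(treatment))
print("Control:  ", sorted(control))
```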

This process also works well at the school level. Suppose a district decides to replace its current elementary reading program with the new pilot program in all of its schools at once. If reading scores then shoot up, it’s hard to know what caused the bump. Was it the new program, or some other factor, such as a later district-wide start time that let kids get more sleep, or a new grant to buy more books for all the school libraries?

If the district had instead piloted the new curriculum in only half of its schools, randomly chosen, and the students who received the new curriculum had higher test scores, you could eliminate the new start time or extra library books as the cause, because those district-wide changes affected both groups equally.
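
Continuing the hypothetical example, the comparison itself is a simple difference in average outcomes between the randomly chosen groups. The numbers below are invented for illustration, and a real evaluation would also check whether the difference is larger than what chance variation could produce.

```python
from statistics import mean

# Hypothetical school-average reading scores after the pilot year.
# Which schools received the new curriculum was decided by random assignment.
treatment_scores = [78.2, 81.5, 74.9, 80.1, 77.6]  # schools piloting the new curriculum
control_scores = [73.4, 75.0, 71.8, 76.2, 72.9]    # schools keeping the current curriculum

effect = mean(treatment_scores) - mean(control_scores)
print(f"Treatment mean: {mean(treatment_scores):.1f}")
print(f"Control mean:   {mean(control_scores):.1f}")
print(f"Estimated effect of the new curriculum: {effect:.1f} points")
```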

Removing Barriers

Evaluations that use this type of design to create their treatment and control groups are called “Randomized Controlled Trials,” or RCTs. Two barriers have traditionally kept school districts from conducting RCTs. One of these barriers, the ethical and political issues concerning RCTs and who receives the new program, will be treated in a later video.

The other barrier is cost. For many years, RCTs were thought to be too expensive for districts to implement. The key to keeping costs low is to use data you already collect for other purposes. Schools and districts routinely administer summative and formative tests, and those scores are frequently used as outcome measures to evaluate a program’s impact. For many academic programs, since the data already exist, an RCT adds no additional data-collection cost.

When to Hire a Professional

As effective as RCTs are at establishing causality, they are not the only way to go, and in fact they are not appropriate for some types of program evaluation. Sometimes randomization isn’t feasible, perhaps because the program to be evaluated is already in use and the treatment group has already been chosen. In cases like that, a control group can be simulated through statistical techniques, although designing the statistical methods necessary to account for non-randomization often requires hiring a professional evaluator.
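
One common family of such techniques is matching, where each program participant is paired with a similar non-participant. The sketch below pairs students on a single prior-year score as a simplified stand-in for the richer matching a professional evaluator would perform; all names and scores are hypothetical.

```python
# A simplified sketch of nearest-neighbor matching: each treated student is
# paired with the untreated student whose prior-year score is closest, and
# the matched pairs stand in for a control group. All data are hypothetical.

treated = {"Ava": 62, "Ben": 75, "Cara": 81}              # program participants
untreated = {"Dan": 60, "Eli": 74, "Fay": 83, "Gus": 70}  # non-participants

matches = {}
available = dict(untreated)
for name, score in treated.items():
    # pick the closest remaining non-participant by prior-year score
    match = min(available, key=lambda n: abs(available[n] - score))
    matches[name] = match
    del available[match]  # match without replacement

print(matches)  # {'Ava': 'Dan', 'Ben': 'Eli', 'Cara': 'Fay'}
```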

Getting Started

The Education Innovation Institute offers affordable services and products designed to help create evaluations that best suit the needs of your school or district. We’ll get you started before you implement new programs and help you better understand those already underway. We can help you in Evaluating What Works.