Evidence-based interventions (EBI) are treatments that have been shown, to some degree, to be effective through outcome evaluations. As such, EBI are treatments that are likely to change the target behavior if implemented with integrity. The EBI movement has an extensive history across Medicine and Clinical and Counseling Psychology, and in the 1990s it was extended to Education and School Psychology. While there has been a great deal of intervention research and general discussion, an agreed-upon "list" of EBI has not yet emerged.

Before such a list would be useful, several critical features of EBI must be understood if they are to be used effectively. We refer to these features as the "EBI Fine Print".

EBI Fine Print

  1. EBI are validated for a specific purpose with a specific population. Each EBI is therefore useful only for a limited range of problems and must be matched to the right situation. If you pair an EBI with a problem it is not designed to address, there is no reason to expect it to work. A hammer is an effective tool, but not with a screw.
  2. The EBI label assumes that the treatment is used in the manner in which it was researched. Changing parts of an intervention, while typical in practice, can invalidate the evidence base. There are many ways to alter an intervention (frequency, materials, target, and so on), and each can change its effectiveness.
  3. EBI are typically validated with large group research or a series of small group studies. While large group research is well suited to documenting interventions that typically have a strong effect on a specific problem, it is common that within that large group there are cases where the intervention was not effective. In other words, large group research documents interventions as likely effective, not certainly effective, for a specific case. It is critical to remember that even the most effective interventions are often ineffective with a specific case. For an individual case, the true documentation of "evidence based" is produced only after the intervention is implemented and outcome data document a change in the target behavior in the desired direction.

The general implication of the EBI Fine Print is that a list of EBIs is just a nice place to start. There are several additional steps that are critical for the correct use of EBI in applied settings.

Methods of Identifying Interventions That Are "Evidence Based"

There are a number of methods for establishing when an intervention can be deemed "evidence based". For example, several groups use a meta-analytic approach with the goal of assigning an effect size to specific interventions. This approach has substantial advantages, including a clear mathematical "rating" that allows a number of competing educational programs to be compared; the What Works Clearinghouse is arguably the best example of such a site. This method is most appropriate for comprehensive academic or social behavior programs, which can be applied across large populations so that their general effectiveness can be measured. As such, it should be the first level of validation considered by groups looking to adopt schoolwide or other large-scale intervention programs (e.g., an academic or schoolwide social behavior curriculum).

While recognizing the importance of this approach, there are critical weaknesses that require other ways of defining "evidence based" to be considered. Specifically, intervention packages require teachers, schools, and often districts to select and invest in programs that can be quite expensive (cost of the intervention package, cost of training, etc.). In addition, most intervention programs are not small endeavors, but large commitments for teachers and often school administrations; there are only so many comprehensive reading or social behavior programs a teacher can run. Finally, such programs are typically validated over large groups. Effect sizes report the "typical effect" of an intervention across the participants. Under this model, a "strong" effect demonstrated across 10,000 children was not universally "strong" for all 10,000 children. In all likelihood, the intervention was ineffective for some children, balanced by other cases with a very strong impact. In the end, validation at the group level only means that an intervention is more likely to be effective, not that it will be effective with every child.
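The gap between a group-level effect size and individual response can be made concrete with a small simulation. The sketch below uses invented numbers and standard-library Python only: it draws an individual treatment effect for each simulated child, computes a group-level Cohen's d, and counts how many children were nonetheless left unhelped. It is an illustration of the statistical point, not an analysis of any real intervention data.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical simulation: individual treatment effects vary widely
# even when the group-level effect size looks "strong".
n = 10_000
control = [random.gauss(0, 1) for _ in range(n)]

# Each treated child gets an individual effect drawn from N(0.8, 1.0):
# the average effect is large, but a sizable minority of effects are <= 0.
effects = [random.gauss(0.8, 1.0) for _ in range(n)]
treated = [random.gauss(0, 1) + e for e in effects]

# Cohen's d: difference in group means over the pooled standard deviation.
pooled_sd = math.sqrt(
    (statistics.pvariance(control) + statistics.pvariance(treated)) / 2
)
d = (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# Children whose individual effect was zero or negative.
non_responders = sum(1 for e in effects if e <= 0)

print(f"group effect size d = {d:.2f}")
print(f"children with no benefit: {non_responders} of {n}")
```

Even with a group effect size most rating systems would call strong, roughly a fifth of the simulated children receive no benefit, which is exactly why group-level validation cannot guarantee an effect for a specific case.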

Recognizing that the intervention program validation method has inherent limitations, another protocol is necessary to offer educational professionals a complete array of EBI. Specifically, at the level of an individual child or a small group of children, it is critical to shift the focus of validation from a content area (e.g., reading, mathematics, or social behavior) to the function of the problem behavior or academic difficulty. The EBI Network protocol was designed to examine the literature base for simple interventions that can be implemented in most classes with little resource commitment. These are interventions that a teacher or an intervention team can select and try out with a target student or a group of students demonstrating a common problem. It is critical to understand that intervention selection is only the first step in this model: all selected interventions must be implemented with fidelity, target outcomes must be measured, and the effectiveness of the interventions must be determined by the outcome data rather than an a priori decision. Using this model, the EBI Network protocol has the following steps.

  1. Examine scholarly publications (research journals) for interventions that have one or more experimental studies reporting some level of effectiveness. Priority is given to interventions with a series of experimental studies documenting some level of effectiveness (e.g. Cover, Copy, Compare) or those based on a strongly supported theoretical orientation (e.g. positive reinforcement).
  2. Sort selected interventions into categories based on what common academic or behavior problem they address. Please see the Common Reasons for School Problems page for an explanation of the framework used to sort interventions.
  3. Develop simple protocols that teachers or other educational professionals can use to try out the intervention. These protocols all include the following elements:
    • Intervention Name
    • Brief Description
    • Overview of the common problem the intervention is designed to address
    • Overview of the intervention procedures
    • Overview of the critical components of the intervention (intervention procedures that are considered essential for fidelity purposes)
    • Overview of the assumptions of the intervention (often includes limitations)
    • Materials needed
    • Citations
  4. Develop YouTube videos modeling the interventions.
  5. Develop Evidence Briefs for the interventions.

These briefs are then presented in two tables (academic and behavior interventions), organized by the common problem each intervention is designed to address. It is critical to note that this method of intervention validation assumes users understand that selecting an intervention is only the first step in a defensible problem-solving process. It is essential that all selected interventions be implemented with fidelity, target outcomes be measured, and the effectiveness of the interventions be determined by the outcome data. The true documentation that an intervention is "evidence based" for a specific case occurs only when outcome data indicate a change in the target behavior.
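One common way to summarize whether single-case outcome data indicate a change is the percentage of non-overlapping data (PND): the percent of intervention-phase data points that fall beyond the most extreme baseline point. The sketch below is illustrative only; the function and the probe scores are invented and are not part of the EBI Network materials.

```python
def pnd(baseline, intervention, increase_desired=True):
    """Percent of intervention points beyond the most extreme baseline point.

    With increase_desired=True, counts points above the baseline maximum;
    otherwise, points below the baseline minimum (for behaviors to reduce).
    """
    if increase_desired:
        cutoff = max(baseline)
        beyond = [x for x in intervention if x > cutoff]
    else:
        cutoff = min(baseline)
        beyond = [x for x in intervention if x < cutoff]
    return 100 * len(beyond) / len(intervention)

# Hypothetical weekly words-read-correctly probes for one student.
baseline = [22, 25, 24, 23]
intervention = [26, 30, 29, 33, 35, 34]

print(f"PND = {pnd(baseline, intervention):.0f}%")  # all points exceed baseline
```

A metric like this turns "the outcome data documents a change" into a concrete, repeatable check, though visual analysis of the graphed data remains the standard companion to any single-case summary statistic.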