The Research Paradox: Too Much and Too Little

In a brilliant post, Johns Hopkins University researcher Robert Slavin pushes back on a number of contemporary claims about educational research. The essence of his argument is that, thanks to the research frameworks of the Every Student Succeeds Act (ESSA), abundant research documents strong and moderate effects of several educational interventions on student achievement. The challenge for teachers, leaders, and policymakers is that, while the quantity of research can be overwhelming, the quantity of high-quality research with widely applicable findings can seem paltry.

Too Much Research

The sheer quantity of academic research in education is overwhelming. Google “research on effective reading instruction,” and half a second later you’ll get about 148 million hits – and that’s not an exaggeration. Even far more refined searches return thousands of studies, burying even the most diligent reader under a mountain of pages. With such an abundance of research, how is it possible that, as Emily Hanford noted in the New York Times, fewer than four in 10 syllabi for literacy instruction in university and college teacher preparation programs use the best scientific evidence to help future teachers become effective instructors? This leads the cynic to say that “you can always find an expert to say anything,” so even the most patently ineffective practices in education (and in medicine, psychology, and a host of other fields) can claim to be “research-based.” The problem is not that we lack research, but that many teachers and administrators fail to apply the filters for research quality that Slavin (and ESSA) recommend. High-quality studies are often drowned out by the flood of publications in a growing number of academic journals, including many new online journals and, worst of all, “sponsored” journals with prestigious-sounding names in which researchers pay to have their articles published.

The Misleading Nature of “Statistical Significance”

Another problem with the proliferation of unhelpful research is widespread misunderstanding of the term “statistically significant.” While some vendors imply to consumers of educational research that “significant” in this context means “important” or “effective,” all it really means is that when two groups of students were compared, the differences in achievement were unlikely to be due to random variation. It does not mean that the program caused the difference, nor that the conditions of the study were remotely similar to the conditions of your classroom. It just means that the differences are unlikely to be random.

Moreover, the statistical procedures that lead to a declaration of statistical significance are influenced by sample size: the larger the sample, the smaller the difference in performance that can be labeled significant. Many pharmaceuticals produce effects in patients that are statistically significant but are never used, because the effects are not clinically significant.
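To see how sample size drives “significance,” consider a minimal sketch in Python (standard library only). All the numbers are hypothetical, chosen only for illustration: two groups whose mean scores differ by the same tiny amount (0.1 standard deviations), evaluated with a simple two-sample z-test at three different sample sizes.

```python
import math

def two_sided_p_value(mean_diff, sd, n_per_group):
    """Two-sample z-test p-value for two groups of equal size.

    Assumes both groups share the same standard deviation `sd`.
    The point is not the test itself but how the verdict changes
    with `n_per_group` while the effect stays fixed.
    """
    standard_error = sd * math.sqrt(2.0 / n_per_group)
    z = mean_diff / standard_error
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2.0))

effect = 0.1   # difference between group means, in SD units
sd = 1.0

for n in (100, 1000, 10000):
    p = two_sided_p_value(effect, sd, n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n:>5} per group: p = {p:.4f} ({verdict})")
```

With these made-up numbers, the same 0.1-SD difference is nowhere near significant with 100 students per group, but comfortably crosses the conventional 0.05 threshold at 1,000 per group. Nothing about the intervention changed; only the sample size did.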

Most importantly, classroom teachers apply any educational intervention as part of a complex mix of other teaching practices, whereas drawing inferences from a single-intervention study assumes that teachers use only one strategy and apply it under the same conditions as the study. That is why schools invest millions of dollars every year in “research-based practices” that never have the effects in their schools that the salespeople for the programs claimed.

Too Little Research

The frailties of many studies should not lead us to throw up our hands and give up, relying instead on gut instinct, seat-of-the-pants judgment, or the most recent sales pitch to make curriculum, instruction, and leadership decisions. Rather, we should, as Slavin suggests, narrow our focus to research that meets the ESSA criteria for strong and moderate effects. The Slavin article quoted in the first sentence of this post links to the latest and best ESSA-aligned research.

But I wouldn’t stop there. In order to translate research into practice, with evidence of impact in your own school and district, I recommend a practice I call the “science fair for adults.” Teachers and administrators take three steps.

1. They identify a specific challenge in academics, behavior, attendance, or another area that matters to them.

2. They identify a specific intervention – just one change in teaching practice – and describe in detail how they will implement that change.

3. They assess results – ideally with the same students in the same classroom, on the same schedule, with the same budget and the same union contract – so that the only difference between results before and after the change is the teaching practice itself.
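Step 3 can be as simple as comparing average scores before and after the change and expressing the difference as an effect size. Here is a minimal sketch of that comparison in Python; the quiz scores are entirely made up for illustration.

```python
from statistics import mean, stdev

def before_after_summary(before, after):
    """Summarize a before/after comparison for one class.

    `before` and `after` are score lists for the same students,
    before and after a single change in teaching practice.
    Returns the mean change and a simple effect size: the mean
    change divided by the standard deviation of the baseline.
    """
    diff = mean(after) - mean(before)
    effect_size = diff / stdev(before)
    return diff, effect_size

# Hypothetical quiz scores for one class (same students, same room).
before = [62, 70, 55, 68, 74, 59, 66, 71, 60, 65]
after  = [70, 75, 63, 72, 80, 66, 71, 78, 68, 70]

diff, d = before_after_summary(before, after)
print(f"Mean change: {diff:+.1f} points; effect size about {d:.2f} SD")
```

A summary like this keeps the conversation focused on evidence from your own students rather than on claims from a vendor’s brochure, which is exactly the point of the science fair for adults.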

I have seen faculties that were resistant to even high-quality research change from skeptics to advocates when quality external research was paired with this experimental approach, which undermines the frequent objection, “That won’t work with our kids.”

Educational research need not be a guessing game, nor a matter of the volume (size) or the volume (noise) of the purveyors of research. If we want to make better use of research, then two criteria should guide our decisions: quality and relevance. Slavin’s wise counsel leads us to quality. The science fair for adults provides relevance.

The New Teacher Project

Help encourage the next generation of teachers! The New Teacher Project is providing fellowships for aspiring educators. Learn more here, and pass this along to people in teacher preparation programs. Thanks!