The Research Paradox: Too Much and Too Little

October 27, 2019

In a brilliant post, Johns Hopkins University researcher Robert Slavin pushes back on a number of contemporary claims about educational research. The essence of his argument is that, thanks to the research frameworks of the Every Student Succeeds Act (ESSA), there is now abundant research suggesting strong or moderate effects of several educational interventions on student achievement. The challenge for teachers, leaders, and policymakers is that, while the quantity of research can be overwhelming, the quantity of high-quality research with widely applicable findings can seem paltry.


Too Much Research


The sheer quantity of academic research in education is overwhelming. Google "research on effective reading instruction," and half a second later you'll get about 148 million hits – and that's not an exaggeration. Even far more refined searches return thousands of studies, burying even the most diligent reader under a mountain of pages. With such an abundance of research, how is it possible that, as Emily Hanford noted in the New York Times, fewer than four in 10 syllabi for literacy instruction in university and college teacher preparation programs use the best scientific evidence to help future teachers become effective instructors?

This leads the cynic to say that "you can always find an expert to say anything," so even the most patently ineffective practices in education (and medicine, psychology, and a host of other fields) can claim to be "research-based." The problem is not that we lack research, but that many teachers and administrators fail to apply the filters for research quality that Slavin (and ESSA) recommend. High-quality studies are often drowned out by the flood of publications in a growing number of academic journals, including many new online journals and, worst of all, "sponsored" journals with prestigious-sounding names in which researchers pay to have their articles published.


The Misleading Nature of "Statistical Significance"


Another problem is the widespread misunderstanding of the phrase "statistically significant." While some vendors imply to consumers of educational research that "significant" in this context means "important" or "effective," all it really means is that when two groups of students were compared, the differences in achievement were unlikely to be due to random variation. It does not mean that the program caused the difference, nor that the conditions of the study were remotely similar to those of the classroom. It means only that the differences are unlikely to be random.


Moreover, the statistical procedures that lead to a declaration of statistical significance are influenced by sample size: the larger the sample, the smaller the difference in performance that can be labeled significant. Many pharmaceuticals produce effects in patients that are statistically significant but are never used, because the effects are not clinically significant.
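A back-of-the-envelope calculation makes the sample-size point concrete. The numbers below are illustrative only, not drawn from any study: the same tiny difference between two groups (five hundredths of a standard deviation) falls far short of "significance" with 100 students per group, yet sails past the conventional threshold with 10,000 per group.

```python
import math

def t_stat(mean_diff, sd, n):
    # Two-sample t statistic for two equal-sized groups with equal SDs:
    # t = (difference in means) / (sd * sqrt(2/n))
    return mean_diff / (sd * math.sqrt(2 / n))

# Same tiny difference (0.05 standard deviations), two sample sizes.
# A |t| above roughly 1.96 is conventionally called "significant."
small = t_stat(0.05, 1.0, 100)     # ~0.35 — not "significant"
large = t_stat(0.05, 1.0, 10_000)  # ~3.54 — "significant"
print(round(small, 2), round(large, 2))
```

Nothing about the program changed between the two lines; only the number of students did. That is all "significance" guarantees.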


Most importantly, classroom teachers apply any educational intervention as part of a complex mix of other teaching practices, whereas drawing inferences from a single intervention assumes that teachers are using only one strategy and applying it under the same conditions as the study. That is why schools spend millions of dollars every year on "research-based practices" that never produce the effects in their schools that the programs' salespeople claimed.


Too Little Research


The frailties of many studies should not lead us to throw up our hands and fall back on gut instinct, seat-of-the-pants judgment, or the most recent sales pitch to make curriculum, instruction, and leadership decisions. Rather, we must, as Slavin suggests, narrow our focus to research that meets the ESSA criteria for strong and moderate effects. The Slavin article cited in the first sentence of this post provides a link to the latest and best ESSA research.


But I wouldn't stop there. To translate research into practice, with evidence of impact in your own school and district, I recommend a practice I call the "science fair for adults." Teachers and administrators take three steps.

1. They identify a specific challenge in academics, behavior, attendance, or another area that matters to them.

2. They identify a specific intervention – just one change in teaching practice – and describe in detail how they implement that change.

3. They assess results – ideally with the same students in the same classroom, on the same schedule, with the same budget and the same union contract – so that the only difference between results before and after the new teaching practice is the teaching practice itself.
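For teams that want to quantify step 3, a minimal sketch of the before-and-after comparison might look like the following. The scores here are entirely hypothetical, invented purely to illustrate the arithmetic of an average gain and a simple effect size.

```python
from statistics import mean, stdev

# Hypothetical reading scores for the same ten students,
# before and after a single change in teaching practice.
before = [62, 70, 58, 75, 66, 71, 59, 68, 64, 73]
after  = [68, 74, 63, 79, 70, 74, 66, 71, 70, 78]

# Per-student change, since these are the same students in the same room.
diffs = [a - b for a, b in zip(after, before)]
gain = mean(diffs)                  # average gain per student
effect_size = gain / stdev(before)  # gain in units of the baseline spread

print(f"average gain: {gain:.1f} points, effect size: {effect_size:.2f}")
```

Because the students, schedule, and budget are held constant, the before/after difference is easier to attribute to the one practice that changed – exactly the logic of step 3.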


I have seen faculties that were resistant even to high-quality research change from skeptics to advocates when quality external research was accompanied by this experimental approach, which undermines the frequent objection, "that won't work with our kids."


Educational research need not be a guessing game, nor a matter of the volume (size) or the volume (noise) of the purveyors of research. If we want to make better use of research, the two criteria that guide our decisions should be quality and relevance. Slavin's wise counsel leads us to quality. The science fair for adults provides relevance.


The New Teacher Project

Help encourage the next generation of teachers! The New Teacher Project is providing fellowships for aspiring educators. Learn more here, and pass this along to people in teacher preparation programs. Thanks!


Douglas Reeves