For the past two decades, there has been an inexorable rise of “data-driven decision-making” in schools. The appeal was obvious: Isn’t data better than intuition, hunches, or guesswork? Of course, we thought, data should be the basis of our decisions. But data about what? While the sophistication of data gathering and analysis has grown dramatically, along with the volume of charts, graphs, and tables that bury administrators and teachers, the usefulness of all of this data has not appreciably improved. Why? Because we continue to focus too narrowly on effects rather than causes. Here are four ways to supercharge your data analysis right now.
First, identify all of the potential causes of student achievement. Although these causes might include poverty and English language ability, every educator knows that there are many more, including effective instruction, feedback, relationships, writing across the curriculum, collaborative scoring, time available for instruction, differentiation, intervention programs, and a host of other key variables that adults control. However, if the “data analysis” only includes the information on family income and student English-language ability, then those two variables will dominate the conversation. If we do not systematically consider teaching and leadership variables as part of data analysis, we are saying that teaching and leadership are irrelevant.
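For readers who want to make this concrete, here is a minimal sketch of what auditing a data set for adult-controlled variables might look like. The column names are entirely hypothetical; the point is simply that a data set containing only demographic fields guarantees that demographics will dominate the conversation.

```python
# Hypothetical variable names, for illustration only.
DEMOGRAPHIC_FIELDS = {"family_income", "english_language_level"}
ADULT_CONTROLLED_FIELDS = {
    "feedback_frequency",
    "instructional_minutes",
    "writing_across_curriculum",
    "collaborative_scoring",
}

def audit_variables(columns):
    """Return the adult-controlled variables missing from a data set."""
    return sorted(ADULT_CONTROLLED_FIELDS - set(columns))

# A data set limited to demographics is missing every cause adults control:
missing = audit_variables(["student_id", "family_income", "english_language_level"])
```

If `missing` comes back non-empty, the "data analysis" has already decided, by omission, that teaching and leadership are irrelevant.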
Second, in order to analyze student growth, data analysis must consider more than end-of-year exams. In order to understand student growth, we need to examine growth by the same student, within the same year, with the same teacher. Year-to-year comparisons that purport to show growth almost always compare students in different years with different teachers and widely varying student environments. In schools with a great deal of student mobility, these year-to-year comparisons are not even comparing the same students, rendering the entire “growth” exercise a misleading waste of time.
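For those who handle the raw numbers, the within-year comparison is simple to express. This sketch uses invented students, teachers, and scores; the essential structure is that each growth figure pairs a fall and spring score for the same student, in the same year, with the same teacher.

```python
# Hypothetical records: (student, teacher, fall_score, spring_score),
# all from the same school year.
records = [
    ("ana",  "t1", 40, 55),
    ("ben",  "t1", 60, 72),
    ("cruz", "t2", 50, 52),
]

def within_year_growth(rows):
    """Spring-minus-fall gain for each student with the same teacher."""
    return {student: spring - fall for student, _, fall, spring in rows}

gains = within_year_growth(records)  # {"ana": 15, "ben": 12, "cruz": 2}
```

Contrast this with a year-to-year comparison, which would be differencing two different rosters taught by two different teachers and calling the result "growth."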
Third, in order to link instructional causes with educational effects, class-by-class data analysis is essential. In one recent example, I studied different ninth-grade algebra classes with dramatically different results in terms of achievement, grades, and growth. The only way to understand these differences is to examine both the scores and the underlying teaching practices on a class-by-class basis. It turned out that the classes with the highest success had significantly different professional practices in instruction, classroom engagement, practice methods, assessment, and grading policies when compared to their low-success peers. But in most schools, even looking at class-by-class comparisons is difficult or impossible because of institutional resistance caused by the perception that test scores will be used to evaluate teachers. "You can look at fourth grade," the reasoning goes, "but don't you dare post the results of individual fourth-grade teachers." This robs the school of any possible insights from the most effective practices of individual teachers. Moreover, the premise of the opposition to class-by-class comparison, that teachers will be evaluated based on test scores, is demonstrably false. Next time you hear the objection, ask for one, just one, example of a teacher who was disciplined or terminated for low test scores. Teacher evaluation guru Kim Marshall says that this has happened only once, and upon appeal, the district lost the case for termination. The argument that "you can't compare results from different classes" is a myth that does not help students and does not prevent the greatest threat to teachers, which is the persistent reluctance to apply the most effective instructional practices more broadly.
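A class-by-class analysis can be sketched in a few lines as well. The class names, scores, and practice flags below are all hypothetical; the point is that the scores and the documented practices must sit side by side, or the comparison teaches you nothing about what the most effective teachers are doing.

```python
from statistics import mean

# Hypothetical class-level data: scores and documented teaching practices.
scores = {
    "algebra_1A": [78, 85, 90, 88],
    "algebra_1B": [55, 61, 58, 64],
}
practices = {
    "algebra_1A": {"collaborative_scoring": True,  "retakes_allowed": True},
    "algebra_1B": {"collaborative_scoring": False, "retakes_allowed": False},
}

def class_profiles(score_map, practice_map):
    """Average score per class, paired with that class's practices."""
    return {
        cls: {"mean_score": mean(vals), **practice_map[cls]}
        for cls, vals in score_map.items()
    }

profiles = class_profiles(scores, practices)
# profiles["algebra_1A"] -> {"mean_score": 85.25, "collaborative_scoring": True, ...}
```

Posting only the school-wide average would hide exactly the contrast that makes this table useful.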
Fourth, don’t let leaders off the hook. There are essential elements of data analysis that are the responsibility of building and district leaders. These include time allocated to instruction (see my previous blog post for how high schools are giving teachers less time to teach while increasing the demands of the curriculum), the availability of proactive interventions (during school works far better than after school), and the effective use of meetings of collaborative teams within a professional learning community. It’s not enough for leaders to insist that teachers are “data-driven” if the leaders themselves do not model this by systematically observing, documenting, and replicating the most effective teaching and leadership practices.
Schools are drowning in data. What they still need is actionable information to make better decisions.