One statistic I hear thrown around from time to time is that teachers stop improving after three to five years on the job. I hear it mostly in the context of “ed reform”, and usually from people who have taught for fewer than five years. Thinking about that claim, and about the teachers I know and have had, makes me kinda mad.
I did a quick Google search and found that it’s a much more popular statistic to mention than to cite a source for. Dozens of sites mention it with no evidence, or with old, broken links. I finally dug up what looks like the original paper, by Steven Rivkin, Eric Hanushek, and John Kain (interestingly, all economists and not educators). They do say that, and they actually go further, claiming that teachers stop improving after just three years:
There appear to be important gains in teaching quality in the first year of experience and smaller gains over the next few career years. However, there is little evidence that improvements continue after the first three years.
The essential feature left out of most references to this research is what it actually measures: value added to student test scores. So let’s reframe what they have to say in a way that respects the profession of teaching and puts their work in perspective. Here’s my canned response to anyone who tries to spew this nonsense in my presence:
It’s definitely true that teachers’ ability to increase test scores improves dramatically after their first year in the classroom. And there is some evidence that this ability stops improving after three years. But that only tells us how well their students did on a standardized test. Think about a teacher who influenced you: someone whose class changed the way you think and stuck with you for years afterward. The kind of teacher you want your children to have. Was that teacher in their first three years of teaching? I know mine weren’t.
Oh, and let’s not even get into the issues with VAM (value-added measurement):
Using data from six school districts, the initial report examines correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. The study finds that the measures are related, but only modestly. The report interprets this as support for the use of value-added as the basis for teacher evaluations. This conclusion is unsupported, as the data in fact indicate that a teacher’s value-added for the state test is not strongly related to her effectiveness in a broader sense. Most notably, value-added for state assessments is correlated 0.5 or less with that for the alternative assessments, meaning that many teachers whose value-added for one test is low are in fact quite effective when judged by the other. As there is every reason to think that the problems with value-added measures apparent in the MET data would be worse in a high-stakes environment, the MET results are sobering about the value of student achievement data as a significant component of teacher evaluations.
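To make that correlation figure concrete, here’s a toy simulation of my own (not from the MET report; the 0.5 is just the upper bound the review cites): draw pairs of teacher value-added scores correlated at 0.5, then count how often a teacher in the bottom quartile on one test lands above the median on the other.

```python
# Toy illustration, not the MET data: if two value-added measures
# correlate at r = 0.5, how often does a teacher who looks weak on
# one test look perfectly fine on the other?
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated "teachers"
r = 0.5      # correlation between the two value-added measures

# Draw correlated standard-normal pairs:
# (value-added on the state test, value-added on the alternative test).
state = rng.standard_normal(n)
alt = r * state + np.sqrt(1 - r**2) * rng.standard_normal(n)

bottom_quartile = state < np.quantile(state, 0.25)
above_median = alt > np.median(alt)

frac = np.mean(above_median[bottom_quartile])
print(f"Bottom quartile on the state test, yet above the median "
      f"on the alternative test: {frac:.0%}")
```

In this toy setup, roughly a quarter of the “bottom quartile” teachers come out above average on the other test. That’s exactly the misclassification the review is worried about, before you even add the distortions of a high-stakes environment.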
Maybe we should add that after three years, teachers just care less about garbage tests and spend that time actually teaching meaningful content instead.