
Smarick's Questions Are Fair But We Still Know That Most Teachers Are Not Improving Substantially Over Time

Dan Weisberg of The New Teacher Project (TNTP) responds to Andy Smarick's critique of their recent report on teacher improvement, The Mirage. He takes issue with Andy's interpretation of what the report actually says and seems to wonder: if schools don't even know whether their professional development is helping teachers improve, how can Smarick possibly know? It's clear that Weisberg respects Smarick, which makes the back-and-forth between them both informative and fun to read.
Andy knows the ins and outs of teacher evaluation as well as anyone, so we respect his healthy skepticism on this front. Before I address his specific concerns, though, it’s worth pointing out that our findings about teacher development aren’t as dire as he and others have made them out to be. In our research, we found thousands of teachers who improved from year to year. Clearly, some kinds of professional development are helping individual teachers. The problem is that at the systemic level, these teachers are the exception instead of the rule.

That brings us back to Andy’s questions about our methodology. He’s right that nobody has found a perfect way to measure teacher performance, and that many evaluation ratings aren’t as accurate as we’d like them to be (often because they’re inflated). That’s true even in school systems that have worked hard to improve their evaluation systems in recent years, like the districts we studied.

But evaluation systems don’t have to be perfect to give us meaningful trends, especially when we’re studying thousands of teachers. In addition to analyzing overall ratings, we looked at individual measures like value-added data and observation scores—even scores for specific skills. That helped us discover, for example, that many veteran teachers hadn’t yet mastered crucial instructional skills like student engagement, even though they earned a high overall evaluation rating. Most tellingly, all the measures we looked at pointed in the same direction—toward most teachers not improving substantially over time. And it’s not just us seeing these patterns: our results square with the most recent large, randomized, controlled studies on the issue by the American Institutes for Research.
Daniel Weisberg has worked at TNTP as Executive Vice President of the Performance Management group, as General Counsel, and as Vice President of Policy. He co-authored TNTP’s acclaimed study on the failures of the nation’s teacher evaluation systems, The Widget Effect, which has helped to catalyze evaluation reforms in more than 30 states since 2009. More recently, he ...
