2 New Reports Show That We Really Don’t Have a Great Way to Evaluate Teachers

November 17, 2015

Talking Points

Both test scores and teacher observations are flawed — but the best bet might be to use both of them.

Research highlights faults with common measures of teacher performance — so where does that leave policymakers?


A pair of recently released reports, each finding serious flaws in one of the two most common methods used to evaluate teachers — observing them in the classroom and trying to pinpoint how much they affect individual student achievement — could leave policymakers wondering where to turn next.

A Nov. 10 statement released by the American Educational Research Association (AERA) and picked up by national news outlets calls into question the use of what are known as value-added models. The group cites studies that show flaws in value-added — statistical measures that attempt to isolate a teacher’s impact on student growth — including inconsistency from year to year and the shortcomings of standardized tests in gauging student learning.
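For readers unfamiliar with the technique, a bare-bones value-added model is, roughly, a regression that predicts each student’s current test score from the prior year’s score and other controls, then attributes the unexplained portion of the gain to a teacher effect. The sketch below is purely illustrative; the notation is generic and is not the specification used by any particular district or in the studies AERA cites:

\[
Y_{it} = \beta_0 + \beta_1\, Y_{i,t-1} + \gamma^{\top} X_{it} + \theta_{j(i,t)} + \varepsilon_{it}
\]

Here \(Y_{it}\) is student i’s score in year t, \(X_{it}\) collects student and classroom characteristics, \(\theta_{j(i,t)}\) is the estimated effect of the teacher assigned to that student, and \(\varepsilon_{it}\) is noise. The year-to-year inconsistency AERA flags shows up as instability in those estimated teacher effects.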

Using value-added models to evaluate teachers and principals, or the programs that train them, comes with “considerable risks of misclassification and misinterpretation,” the statement warns. Teacher observation, on the other hand, is held up as “a promising alternative.”[1]

Not so fast.

A paper presented a couple of days later at a policy conference in Miami finds that teacher observations suffer from many of the same flaws that plague value-added measures.

For instance, a teacher’s observation score may be significantly biased by the students she teaches: those with higher-scoring students tend to get higher ratings. The researchers, lead author Matthew Steinberg of the University of Pennsylvania and Rachel Garrett of the American Institutes for Research, found that math teachers with the highest-achieving students were nearly seven times more likely to earn the top observation rating than teachers with the lowest-achieving students. That generally lines up with a 2014 Brookings Institution report, which found a similar bias in observations.

Steinberg and Garrett’s paper suggests that how students are placed into classrooms can affect teachers’ observation scores. For example, a principal might assign students with behavioral problems to a teacher with strong classroom management skills. But that teacher, facing a classroom full of unruly students, could then be rated lower.[2]

Just as the AERA statement warned against relying too heavily on value-added measures, Steinberg and Garrett urge “greater caution… when making high-stakes personnel decisions based largely on teachers’ classroom observation scores.”

Criticism of the value-added approach has gotten far more attention — from both researchers and the media — than the weaknesses of teacher observation, which could mean that value-added is taking an unfair beating. Since both approaches have flaws — some of them similar — it’s difficult to say which measure is better.

“I think [value-added] opponents tend to set unreasonably unattainable targets for what [it] has to achieve in order to be used at all,” says Morgan Polikoff, a University of Southern California professor.

Researchers actually have a better grasp of the strengths and limitations of value-added than of observations. There is evidence, for instance, that strong value-added is positively related to long-run student outcomes like income and college attendance. No such evidence, one way or the other, exists for observations.

So where do these conflicting findings leave policymakers? If both measures are flawed, can any high-stakes decisions be made based on them?

Before trying to answer that question, consider one important caveat: High-stakes decisions in education are unavoidable. School districts either grant a teacher tenure or they don’t; they either give a teacher a raise or they don’t; they either dismiss a teacher who may be struggling or they don’t.

These important decisions can’t be ducked; the key question, rather, is how they are made.

Steinberg and Polikoff agree that the best bet is using both measures. If the two point in the same direction, particularly year after year, that likely says something meaningful about a teacher’s performance.

Combining imperfect but useful measures — rather than ignoring either one because of its faults — might be as much as we can hope for. That may be cold comfort to teachers who face evaluation based on flawed data, but the only alternative to an imperfect evaluation system is no system at all.


Footnotes:

[1] It’s not clear why AERA refers to observations as an “alternative,” since it appears that every district that uses a value-added model for teacher evaluation also uses some form of observation.

[2] Some have raised similar concerns about value-added, though more recent research has suggested that non-random sorting may not be a major issue for value-added measures.