2 New Reports Show That We Really Don’t Have a Great Way to Evaluate Teachers

November 17, 2015

Talking Points

Both test scores and teacher observations are flawed — but the best bet might be to use both of them.

Research highlights faults with common measures of teacher performance — so where does that leave policymakers?

A pair of recently released reports that each find serious flaws in the two most common methods used to evaluate teachers — observing them in the classroom and trying to pinpoint how much they affect individual student achievement — could leave policymakers wondering where to turn next.

A Nov. 10 statement released by the American Educational Research Association (AERA) and picked up by national news outlets calls into question the use of what are known as value-added models. The group cites studies that show flaws in value-added — statistical measures that attempt to isolate a teacher’s impact on student growth — including inconsistency from year to year and the shortcomings of standardized tests in gauging student learning.
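For readers unfamiliar with the mechanics, here is a minimal, hypothetical sketch of the value-added idea in Python: predict each student’s current test score from the prior year’s score, then credit (or debit) each teacher with the average amount by which that teacher’s students beat the prediction. The simulated data, the single prior-score predictor, and the two-step residual approach are simplifying assumptions for illustration; actual district models described in the research are far more elaborate.

```python
# Illustrative sketch of a value-added-style estimate (not any district's actual
# model). All data below are simulated; the point is the mechanics, not the numbers.
import numpy as np

rng = np.random.default_rng(0)

n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, size=n_students)     # which teacher each student has
prior = rng.normal(0, 1, size=n_students)                  # prior-year test score
true_effect = rng.normal(0, 0.3, size=n_teachers)          # unobserved teacher contribution
current = 0.7 * prior + true_effect[teacher] + rng.normal(0, 1, size=n_students)

# Step 1: predict current scores from prior scores alone.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's "value-added" is the average amount by which that
# teacher's students exceed (or fall short of) their predicted scores.
residual = current - predicted
value_added = np.array([residual[teacher == t].mean() for t in range(n_teachers)])

print(np.round(value_added, 2))
```

Even in this toy version, the estimates bounce around from sample to sample when class sizes are small, which is the year-to-year instability the AERA statement highlights.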

Using value-added models to evaluate teachers and principals, or the programs that train them, comes with “considerable risks of misclassification and misinterpretation,” the statement says. On the other hand, it points to teacher observation as “a promising alternative.”[1]

Not so fast.

A paper presented a couple of days later at a policy conference in Miami finds that teacher observations suffer from many of the same flaws that plague value-added measures.

For instance, a teacher’s observation score may be significantly biased by the students she teaches. Specifically, teachers of students with higher test scores tend to get higher ratings. The researchers, including lead author Matthew Steinberg of the University of Pennsylvania and Rachel Garrett of the American Institutes for Research, found that math teachers with the highest-achieving students were nearly seven times more likely to get the top observation rating than teachers with the lowest-achieving students. This generally lines up with a 2014 Brookings Institution report that found a similar bias in observations.

Steinberg and Garrett’s paper suggests that how students are placed into classrooms could affect teachers’ observation scores. For example, a principal might match students with behavioral problems to a specific teacher with strong classroom management skills. But that teacher, facing a classroom full of unruly students, could then be rated lower.[2]

Just as the AERA statement warned against relying too heavily on value-added measures, Steinberg and Garrett urge “greater caution… when making high-stakes personnel decisions based largely on teachers’ classroom observation scores.”

Criticism of the value-added approach has gotten far more attention, from both researchers and the media, than the weaknesses of teacher observation have, which could mean that value-added is taking an unfair beating. And since both approaches have flaws, some of them similar, it is hard to say which measure is better.

“I think [value-added] opponents tend to set unreasonably unattainable targets for what [it] has to achieve in order to be used at all,” says Morgan Polikoff, a University of Southern California professor.

Researchers actually have a better grasp of the strengths and limitations of value-added than of observations. There is evidence, for instance, that strong value-added is positively related to long-run student outcomes like income and college attendance. No such evidence, one way or the other, exists for observations.

So where do these conflicting findings leave policymakers? If both measures are flawed, can any high-stakes decisions be made based on them?

Before trying to answer that question, consider one important caveat: High-stakes decisions in education are unavoidable. School districts either grant a teacher tenure or they don’t; they either give a teacher a raise or they don’t; they either dismiss a teacher who may be struggling or they don’t.

These important decisions can’t be ducked; the key question, rather, is how they are made.

Steinberg and Polikoff agree that the best bet is using both measures. If the two point in the same direction, particularly year after year, that likely says something meaningful about a teacher’s performance.
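One hypothetical way to operationalize “use both measures” is sketched below: put value-added and observation ratings on a common scale, then act only when the two agree year after year. The quartile cutoff, the all-years agreement rule, and the simulated scores are invented for illustration; neither study prescribes specific thresholds.

```python
# Sketch of combining two imperfect measures: standardize each, then flag a
# teacher only when value-added AND observation ratings agree (here, both in
# the bottom quartile) in every year. Data and thresholds are made up.
import numpy as np

rng = np.random.default_rng(1)
n_teachers, n_years = 50, 3

value_added = rng.normal(0, 1, size=(n_teachers, n_years))
observation = rng.normal(0, 1, size=(n_teachers, n_years))

def standardize(x):
    """Convert raw scores to z-scores within each year (column)."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

va_z, obs_z = standardize(value_added), standardize(observation)

# Bottom-quartile cutoff computed over both measures pooled together.
cutoff = np.percentile(np.concatenate([va_z, obs_z]), 25)
flagged_each_year = (va_z < cutoff) & (obs_z < cutoff)
persistently_flagged = flagged_each_year.all(axis=1)

print(f"{persistently_flagged.sum()} of {n_teachers} teachers flagged by both measures every year")
```

Requiring agreement across measures and across years reduces the chance that a single noisy score or a biased observation drives a high-stakes decision.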

Combining imperfect but useful measures — rather than ignoring either one because of its faults — might be as much as we can hope for. That may be cold comfort to teachers who face evaluation based on flawed data, but the only alternative to an imperfect evaluation system is no system at all.


Footnotes:

1. It’s not clear why AERA refers to observations as an “alternative,” since it appears that every district that uses a value-added model for teacher evaluation also uses some form of observation.

2. Some have raised similar concerns with value-added, though more recent research has suggested non-random sorting may not be a major issue for value-added.