2 New Reports Show That We Really Don’t Have a Great Way to Evaluate Teachers

November 17, 2015


A pair of recently released reports, each finding serious flaws in one of the two most common methods used to evaluate teachers (observing them in the classroom and trying to pinpoint how much they affect individual student achievement), could leave policymakers wondering where to turn next.

A Nov. 10 statement released by the American Educational Research Association (AERA) and picked up by national news outlets calls into question the use of what are known as value-added models. The group cites studies that show flaws in value-added — statistical measures that attempt to isolate a teacher’s impact on student growth — including inconsistency from year to year and the shortcomings of standardized tests in gauging student learning.
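Value-added models differ in their details, but the core idea can be sketched with fabricated numbers: predict each student’s score from prior achievement, then treat the average gap between a teacher’s students’ actual and predicted scores as that teacher’s “value added.” The toy below uses a single-predictor regression on invented data; real models control for many more factors, pool multiple years, and apply statistical shrinkage.

```python
# Toy sketch of a value-added calculation. All data is fabricated for
# illustration; real models are far more elaborate.
from statistics import mean

# (teacher, prior_score, current_score) for a handful of students
records = [
    ("A", 60, 70), ("A", 80, 88),
    ("B", 60, 62), ("B", 80, 80),
]

# Step 1: fit a simple expectation of current score given prior score
# (ordinary least squares with one predictor).
xs = [r[1] for r in records]
ys = [r[2] for r in records]
x_bar, y_bar = mean(xs), mean(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

# Step 2: a teacher's "value added" is the average residual — how much
# her students beat (or fell short of) the model's prediction.
value_added = {}
for teacher in {r[0] for r in records}:
    residuals = [y - (intercept + slope * x)
                 for t, x, y in records if t == teacher]
    value_added[teacher] = mean(residuals)
```

In this contrived example, Teacher A’s students outperform the prediction and Teacher B’s underperform it by the same margin. The year-to-year inconsistency AERA cites arises because these residuals also absorb noise — a different roster or a different test form can swing them substantially.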

Using value-added models to evaluate teachers and principals, or the programs that train them, comes with “considerable risks of misclassification and misinterpretation,” the statement says. It instead points to teacher observation as “a promising alternative.”1

Not so fast.

A paper presented a couple of days later at a policy conference in Miami finds that teacher observations suffer from many of the same flaws that plague value-added measures.

For instance, a teacher’s observation score may be significantly biased by the students she teaches. Specifically, teachers of students with higher test scores tend to get higher ratings. The researchers, including lead author Matthew Steinberg of the University of Pennsylvania and Rachel Garrett of the American Institutes for Research, found that math teachers with the highest-achieving students were nearly seven times more likely to get the top observation rating than teachers with the lowest-achieving students. This generally lines up with a 2014 Brookings Institution report that found a similar bias in observations.

Steinberg and Garrett’s paper suggests that how students are placed into classrooms could affect teachers’ observation scores. For example, a principal might assign students with behavioral problems to a teacher with strong classroom management skills. But that teacher, now facing a classroom full of unruly students, could then be rated lower.2

Just as the AERA statement warned against relying too heavily on value-added measures, Steinberg and Garrett urge “greater caution… when making high-stakes personnel decisions based largely on teachers’ classroom observation scores.”

Criticism of the value-added approach has gotten much greater attention — both from researchers and the media — than the weaknesses of teacher observation, which could mean that value-added is taking an unfair beating. After all, since both approaches have flaws — some of them similar — it’s difficult to decide which measure is better.

“I think [value-added] opponents tend to set unreasonably unattainable targets for what [it] has to achieve in order to be used at all,” says Morgan Polikoff, a University of Southern California professor.

Researchers actually have a better grasp of the strengths and limitations of value-added than of observations. There is evidence, for instance, that strong value-added is positively related to long-run student outcomes like income and college attendance. No such evidence, one way or the other, exists for observations.

So where do these conflicting findings leave policymakers? If both measures are flawed, can any high-stakes decisions be made based on them?

Before trying to answer that question, consider one important caveat: high-stakes decisions in education are unavoidable. School districts either grant a teacher tenure or they don’t; they either give a teacher a raise or they don’t; they either dismiss a struggling teacher or they don’t.

These important decisions can’t be ducked; the key question, rather, is how they are made.

Steinberg and Polikoff agree that the best bet is using both measures. If the two point in the same direction, particularly year after year, that likely says something meaningful about a teacher’s performance.

Combining imperfect but useful measures — rather than ignoring either one because of its faults — might be as much as we can hope for. That may be cold comfort to teachers who face evaluation based on flawed data, but the only alternative to an imperfect evaluation system is no system at all.


Footnotes:

1. It’s not clear why AERA refers to observations as an “alternative,” since it appears that every district that uses a value-added model for teacher evaluation also uses some form of observation. (Return to story)

2. Some have raised similar concerns with value-added, though more recent research has suggested non-random sorting may not be a major issue for value-added. (Return to story)