2 New Reports Show That We Really Don’t Have a Great Way to Evaluate Teachers

November 17, 2015

Talking Points

Both test scores and teacher observations are flawed — but the best bet might be to use both of them.

Research highlights faults with common measures of teacher performance — so where does that leave policymakers?

Two recently released reports find serious flaws in the two most common methods used to evaluate teachers: observing them in the classroom and trying to pinpoint how much they affect individual student achievement. Together, the findings could leave policymakers wondering where to turn next.

A Nov. 10 statement released by the American Educational Research Association (AERA) and picked up by national news outlets calls into question the use of what are known as value-added models. The group cites studies that show flaws in value-added — statistical measures that attempt to isolate a teacher’s impact on student growth — including inconsistency from year to year and the shortcomings of standardized tests in gauging student learning.
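
To make the idea concrete, here is a minimal, illustrative sketch of a value-added-style calculation in Python. It is our simplification with simulated data, not the specification any district or the AERA statement actually uses: current scores are regressed on prior scores plus teacher indicators, and each teacher’s coefficient is read as an estimate of his or her contribution to student growth.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_teachers, class_size = 20, 25
    true_effect = rng.normal(0, 3, n_teachers)  # simulated "true" teacher impacts

    rows = []
    for t in range(n_teachers):
        prior = rng.normal(50, 10, class_size)  # last year's scores
        current = 5 + 0.9 * prior + true_effect[t] + rng.normal(0, 5, class_size)
        rows.append(pd.DataFrame({"teacher": t, "prior": prior, "current": current}))
    df = pd.concat(rows, ignore_index=True)

    # Teacher fixed effects serve as the value-added estimates,
    # holding prior achievement constant.
    fit = smf.ols("current ~ prior + C(teacher)", data=df).fit()
    print(fit.params.filter(like="C(teacher)").head())

Even in a toy setup like this, the estimates carry sampling noise, which is one source of the year-to-year instability the AERA statement highlights.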

Using value-added models to evaluate teachers and principals, or the programs that train them, comes with “considerable risks of misclassification and misinterpretation,” the statement says. On the other hand, it points to teacher observation as “a promising alternative.” [1]

Not so fast.

A paper presented a couple of days later at a policy conference in Miami finds that teacher observations suffer from many of the same flaws that plague value-added measures.

For instance, a teacher’s observation score may be significantly biased by the students she teaches. Specifically, teachers of students with higher test scores tend to get higher ratings. The researchers, including lead author Matthew Steinberg of the University of Pennsylvania and Rachel Garrett of the American Institutes for Research, found that math teachers with the highest-achieving students were nearly seven times more likely to get the top observation rating than teachers with the lowest-achieving students. This generally lines up with a 2014 Brookings Institution report that found a similar bias in observations.

Steinberg and Garrett’s paper suggests that how students are placed into classrooms could affect teachers’ observation scores. For example, a principal might assign students with behavioral problems to a teacher with strong classroom management skills. But that teacher, facing a classroom full of unruly students, could then be rated lower. [2]
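
To see why placement matters, consider a toy simulation (ours, not the researchers’). Every teacher below has identical true skill, but the simulated rating is allowed to drift with the class’s incoming achievement, so teachers handed lower-achieving classrooms end up rated lower through no fault of their own:

    import random

    random.seed(1)

    def simulated_rating(class_prior_mean, true_skill=3.0, composition_pull=0.05):
        # Rating on a 1-4 rubric, nudged by classroom composition plus noise.
        raw = true_skill + composition_pull * (class_prior_mean - 50) + random.gauss(0, 0.2)
        return max(1.0, min(4.0, raw))

    for prior_mean in (35, 50, 65):  # low-, average-, and high-achieving classes
        ratings = [simulated_rating(prior_mean) for _ in range(1000)]
        print(prior_mean, round(sum(ratings) / len(ratings), 2))

The size of the composition_pull parameter is an arbitrary assumption; the point is only the direction of the distortion.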

Just as the AERA statement warned against relying too heavily on value-added measures, Steinberg and Garrett urge “greater caution… when making high-stakes personnel decisions based largely on teachers’ classroom observation scores.”

Criticism of the value-added approach has gotten much greater attention — both from researchers and the media — than the weaknesses of teacher observation, which could mean that value-added is taking an unfair beating. After all, since both approaches have flaws — some of them similar — it’s difficult to decide which measure is better.

“I think [value-added] opponents tend to set unreasonably unattainable targets for what [it] has to achieve in order to be used at all,” says Morgan Polikoff, a University of Southern California professor.

Researchers actually have a better grasp of the strengths and limitations of value-added than of observations. There is evidence, for instance, that strong value-added is positively related to long-run student outcomes like income and college attendance. No such evidence, one way or the other, exists for observations.

So where do these conflicting findings leave policymakers? If both measures are flawed, can any high-stakes decisions be made based on them?

Before trying to answer that question, consider one important caveat: High-stakes decisions in education are unavoidable. School districts either grant a teacher tenure or they don’t; they either give a teacher a raise or they don’t; they either dismiss a teacher who may be struggling or they don’t.

These important decisions can’t be ducked; the key question, rather, is how they are made.

Steinberg and Polikoff agree that the best bet is using both measures. If the two point in the same direction, particularly year after year, that likely says something meaningful about a teacher’s performance.

Combining imperfect but useful measures — rather than ignoring either one because of its faults — might be as much as we can hope for. That may be cold comfort to teachers who face evaluation based on flawed data, but the only alternative to an imperfect evaluation system is no system at all.
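
As a rough sketch of what using both could look like in practice (our illustration, with made-up numbers and arbitrary 50/50 weights, not a method endorsed by either report), the two measures can be put on a common scale and averaged:

    from statistics import mean, stdev

    def zscores(values):
        m, s = mean(values), stdev(values)
        return [(v - m) / s for v in values]

    def composite(value_added, observation, va_weight=0.5):
        va_z, obs_z = zscores(value_added), zscores(observation)
        return [va_weight * v + (1 - va_weight) * o for v, o in zip(va_z, obs_z)]

    # Hypothetical scores for five teachers.
    value_added = [1.2, -0.4, 0.1, 0.8, -1.0]   # student-growth units
    observation = [3.4, 2.1, 2.9, 3.6, 1.8]     # 1-4 rubric
    print(composite(value_added, observation))

How much weight to put on each measure, and how many years of data to require, are exactly the policy choices districts would still have to make.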


Footnotes:

1. It’s not clear why AERA refers to observations as an “alternative,” since it appears that every district that uses a value-added model for teacher evaluation also uses some form of observation.

2. Some have raised similar concerns with value-added, though more recent research has suggested non-random sorting may not be a major issue for value-added.