It’s a common refrain in education that research isn’t used wisely, or at all, to inform policy. As states have to redesign their accountability systems under the Every Student Succeeds Act (ESSA), the new federal K-12 law, policymakers have the opportunity anew to use evidence to help guide their decisions.
That was the topic of a panel at the Association for Education Finance and Policy conference in Washington, D.C., earlier this month. The discussion featured representatives from the Louisiana Department of Education, the Tennessee Department of Education, and a group of eight California districts — including Los Angeles, Oakland, and San Francisco — known as CORE. (A member of the Rhode Island Department of Education was also present, but his comments were off the record, so he cannot be quoted.)
The trio discussed the benefits — and challenges — of using research to inform ESSA accountability design. A few important themes emerged.
A preference for ‘growth’ but political pushback
Like many states, Tennessee is wrestling with how to weigh students’ absolute achievement against their growth when rating schools. Researchers generally say that measuring the progress students make is the better way to isolate a school’s impact and hold it accountable.
That doesn’t mean that such a move is politically easy, noted Mary Batiwalla, the Tennessee Department of Education’s executive director of accountability.
“The lowest-achieving school could receive an ‘A’ under our [proposed] system — very low-achieving, but showing what we consider to be remarkable and life-changing growth,” she said. “It’s a tough conversation to have with folks because there is this very accepted notion that ‘If you say that school that is very low-performing is an A school, you are lying to parents.’ ”
In line with research, Batiwalla pointed to potential unintended consequences of judging schools by absolute performance or students’ raw standardized test scores.
“When we think about the downstream impacts of labeling schools that are doing remarkable things for their students, even when those students start at very low places — [it’s] retention of high-quality teachers, recruitment,” she said.
A focus on growth may also lead to pushback from districts previously deemed high-performing. “Tennessee has decided we no longer want to reward simply having high absolute achievement,” Batiwalla said. “If you have students who come in at a certain level, the expectation is that you grow those students.”
“We haven’t released any of this modeling [of the new ratings] publicly, so I don’t think there’s been as big of an outlash as there might be if we were to actually release these lists,” she said. “There have been some case studies that we’ve provided that some of our higher-performing districts have been able to back into and say, ‘Hey, that C school, that’s me! Wait a second — get on the phone with the state legislator.’ ”
“We’ll see how this plays out,” she added.
Are schools ready for non-academic measures? California thinks so
California’s CORE districts have been among the pioneers in evaluating schools by measures that go beyond test scores and high school graduation rates. These include social-emotional or non-cognitive skills, such as motivation, focus, and confidence. States nationwide now have the opportunity to expand accountability systems along these same lines using the “fifth indicator” of performance under ESSA.
“We know these factors are incredibly important. The research tells us that, and so does our own felt experience,” said Noah Bookman, chief strategy officer for CORE, a coalition of some of California’s largest districts that joined forces to try to better measure and improve schools.
In addition to using chronic absenteeism and suspension rates for measurement, the districts have incorporated surveys that ask students various questions to gauge their “growth mindset, self-efficacy, self-management, and social awareness.”
The practice has proved somewhat controversial. Angela Duckworth, a University of Pennsylvania researcher who developed the idea of “grit” — passion and perseverance in reaching a goal — specifically criticized CORE in a New York Times op-ed.
“We’re nowhere near ready — and perhaps never will be — to use feedback on character as a metric for judging the effectiveness of teachers and schools,” she wrote. “We shouldn’t be rewarding or punishing schools for how students perform on these measures.”
The CORE districts don’t measure grit specifically; they see it as falling under the self-management measure.
One issue Duckworth raised is “reference bias,” the tendency of students to judge themselves relative to their peers rather than against an absolute standard. That means a school could improve all of its students’ skills, yet surveys might not capture the gains because students don’t see themselves getting better compared with one another.
Indeed, one study, co-authored by Duckworth, showed that students in Boston charter schools made large gains on standardized tests and had high attendance rates but actually saw dips in their self-reported non-cognitive skills. The researchers posit that this is due to reference bias.
Duckworth also worries that attaching stakes to such measures will distort their accuracy, encouraging cheating and other forms of gaming.
Bookman of CORE says that the point isn’t to create a high-stakes system.
“Our philosophy [is] the data is used for good, supposed to help you get better — a flashlight, not a hammer,” he said.
One analysis of the California districts found that their measures of non-cognitive skills were correlated with schools’ achievement, attendance, and suspension rates, suggesting that they capture useful information.
“We’re still, to be candid, learning how that works. How do people respond to the information? Is it helpful? Is [it] not helpful? Is it causing the perverse consequences we’re worried about? Do the data stand up over time?” Bookman said. “The only way we’ll find out how well they’re working in this kind of a context is to put them there and see how it works.”
School turnarounds are hard; who should lead them?
As evidenced by the disappointing recent study on the federal government’s $7 billion school turnaround plan, figuring out how to improve a struggling school is difficult — and research doesn’t provide definitive answers on this front. The panelists’ comments reflected this reality, particularly on the question of whether states should take control of low-performing schools.
“State takeover in certain contexts looks very different and I think has struggled, where in other places it’s been very successful,” said Jessica Baghian, an assistant superintendent of the Louisiana Department of Education.
“Is the state the right entity to do turnaround work?” wondered Batiwalla of Tennessee. “Based on the research, we don’t have a lot of good evidence that we are doing this very well, and we have some evidence that in Shelby County specifically the district itself has had a lot of success [leading its own turnaround].”
Indeed, research on this question has come to mixed conclusions. In New Orleans, the state-driven expansion of charter schools and infusion of new resources post-Katrina have yielded large test-score improvements.
“I think part of the reason why New Orleans has been so successful is because we have a very strong [charter] authorizer in our state board who has made tough decisions and built real urgency around, ‘You have x amount of time to deliver, and if you don’t, we have higher-quality providers who can step in,’ ” Baghian said.
One study found that New Orleans’s closures of low-performing charter and district schools led to big gains for students.
But in Tennessee, as Batiwalla suggested, the state-run district has not produced any improvement in achievement, though a locally driven turnaround effort did. Still, Batiwalla noted, “The threat of state takeover perhaps energizes the district to respond in a way that leads to improvements.”
“We’ve got to get better at getting better,” she said. “It’s really, really hard work.”
Bookman of California was largely silent on the issue of specific interventions for struggling schools. Last year, The 74 reported that the state had not spelled out what would happen in those cases.
That has not changed, apparently.
A recent Los Angeles Times article describing California’s new dashboard of school ratings across a variety of measures noted, “It remains unclear how the dashboard will be used with regard to those schools that need help.”