Opinion | Every Student Succeeds Act

Javurek: ESSA’s Innovative Assessment Initiative Was a Great Idea, But the Execution Was Flawed. Here’s How the Ed Department Can Fix It

By Abby Javurek | April 6, 2021


When the federal government created the Innovative Assessment Demonstration Authority (IADA) in 2015 as part of the Every Student Succeeds Act (ESSA), it was widely hailed as a positive and hopeful development. Educators and policymakers had been talking for years about the pitfalls of treating end-of-year state tests and classroom assessments as disconnected events that don’t work together. Under the IADA, states were encouraged to think outside the box when it came to summative assessments, while still meeting the law’s accountability requirements.

That sounded good to education experts who had long advocated for balanced assessment systems, in which periodic classroom tests and end-of-year state exams are aligned to measure student outcomes while also providing data that can inform classroom strategies and improve instruction.

On paper, the IADA enabled states to start considering truly different approaches to assessment and accountability that were better connected to teaching, learning and schoolwide improvement plans. But in practice, it limited states’ options and imposed a significant burden on their fiscal and staffing resources. The result? Only a handful of states have pursued the kinds of innovations that the IADA called for, leaving the law’s potential largely unmet.

Today, as a new presidential administration gets underway and seeks to make its mark on education policy, the IADA is set to be examined by Congress and the U.S. Department of Education for reauthorization. Now is the time to take stock of where the law went wrong and reimagine it for the future.


First, because no federal funding was provided under the IADA, the cost of piloting innovative assessment systems was simply too high for many states facing chronic budgetary and staffing shortfalls — not to mention widespread pushback against new exams for already overtested students. States were required to maintain their existing assessment and accountability systems in parallel with any new systems they rolled out, which meant double testing of students and a significant additional administrative burden for staff. In short, even if the desire to innovate was there, the funding needed to act on it was often lacking.

Second, states have generally understood the IADA’s requirements around “comparability” of assessment systems to mean that the results of any new tests should be largely identical to, or able to replace, the results of existing assessments. In other words, the IADA seems to favor variations on systems states are already using. This discourages states from piloting systems that pursue new goals, such as making tests more efficient, measuring critical thinking or better capturing the complete range of learning targets for a grade.

Finally, the compressed timeline laid out by the IADA allowed only five years to move from piloting a new assessment system in a handful of districts to statewide implementation. This is a daunting mandate, considering the huge amount of time and effort involved in setting a vision, building a new statewide assessment aligned to that vision and getting districts ready to appropriately implement it so the rollout is a success. Even if states had the desire and will to try something new, the IADA timeline strongly favored preexisting efforts.

Still, there’s much to be learned from states that have already found ways to develop innovative systems for assessment and accountability. Louisiana, Nebraska and Kentucky, for example, have approached the challenge in different ways but with common themes: putting teaching and learning first, dedicating the time and energy to focus on what needs to change and starting from a robust theory of action that is grounded in the research about learning sciences.

To give states a true pathway to innovation and flexibility, the federal government will need to reimagine the IADA and provide the necessary supports to make it work. Specifically, the U.S. Department of Education should:

  • Fund the IADA. Creating new systems of curriculum, instruction, assessment and accountability is expensive and time-consuming. Some states have been able to find philanthropic dollars to get them started, but this is hardly a long-term solution — especially when states need to continue supporting their old systems as they roll out the new ones. With proper funding, states will have the support they need to move forward.
  • Give it time. The word “innovation” may connote speed, but it takes years to get all the components of a system up and running. To help states succeed, the IADA should relax its requirements that states begin showing results right away, and should recognize the planning years as an integral part of the process. Pressuring states to move quickly only discourages them from innovating, unless they are already well along that path.
  • Expand the standard of comparability. When states reasonably interpret the IADA’s requirements around comparability to essentially mean “the same score or measurement as your existing assessment,” systems that take a fundamentally different approach to teaching and learning are disincentivized. If this self-defeating problem is properly addressed in the next iteration of the IADA, states will have an easier path to pursuing true innovation.
  • Keep the guardrails protecting equity, but challenge what “same” means. The IADA contains provisions to ensure that concerns about equity are addressed. It should keep these guardrails, but with the understanding that comparability of assessment systems need not mean “the same,” as discussed above. States can innovate in meaningful ways without losing the important equity goals in the law. Assessments that look different will naturally produce data that looks different from that of traditional multiple-choice end-of-year tests. States can partner with technical experts and research organizations to evaluate the quality of their new tests against the goals of their innovation, rather than solely against traditional measures of what makes a quality multiple-choice test.

As the federal government looks toward changing and improving the IADA, states have their own role to play in paving the way for a more effective and sustainable version. Specifically, they should start with the end in mind: How do they want to see students’ learning reflected in their communities? States should discuss assessment and accountability not as isolated concepts, but as part of a broader conversation about how to define quality curriculum, teaching and learning, and how to set their vision around these definitions.

If successful, states will avoid the unproductive path of prioritizing models that simply collect data for collection’s sake in the hope that it leads to systemic change. Instead, they will effect change by prioritizing support for students and student outcomes.

Although the IADA has not yet fulfilled its promise, its core premise — that innovation is the key to moving the needle on academic achievement — remains true and is more relevant than ever. Even before the onset of the COVID-19 pandemic, we were due for some hard conversations about assessment and accountability systems. The urgency of the current situation has accelerated the need for innovation and flexibility as schools, districts and states grapple with the impacts of the pandemic on their communities — while recognizing how acutely they need high-quality data that provides real recommendations for how to meet students’ needs today and into the future.

Abby Javurek is vice president, solution vision and impact, government affairs & partnerships at NWEA, a not-for-profit provider of assessment solutions.
