Analysis: How Do You Measure a Year? How States Should Assess Student Learning in a Pandemic

By Laura Slover and Michael Cohen | March 8, 2021

Annual state end-of-year tests have been the mainstay of federal and state accountability systems for nearly 30 years. In spring 2020, given pandemic-related disruptions in learning, the federal Department of Education waived its requirement for annual testing in reading and math. There has been a lively discussion in the field about the pros and cons of resuming testing this spring, given the continuing disruption to schooling. The department effectively ended the debate with its recent letter to chief state school officers. In short, it has maintained the requirement that states administer end-of-year tests to all students and report the disaggregated results publicly. That’s the right decision. Understanding the full impact of the COVID-19 disruption and developing plans to mitigate its long-term effects require up-to-date information on academic performance in every school. While this year’s assessment data will be imperfect, some data are better than no data at all. Another year without assessments would set back the goal of advancing equity.

The department has offered states flexibility to shorten the tests, administer them remotely or give them over the summer or at the beginning of the 2021-22 school year. Here’s what we think states should do:

  • Administer assessments this spring. Administering tests should pose no new issues in states and districts where classes have been held in person. In hybrid situations, schools can safely rotate students into classrooms, as they are already doing; in remote situations, many schools are already using virtual proctoring to monitor test-taking. Delaying the testing window to the summer or to when school begins in August or September would be a mistake. A group of New England states tried this approach, and leaders of that effort concluded it was not the right one: a variety of logistical challenges undermined the usefulness of the results. Students and teachers will derive greater benefit if fall testing is reserved for diagnostic assessments that can inform instruction.
  • Do not incorporate the results of state assessments into accountability ratings or otherwise use them to rate schools or apply sanctions. The department has invited states to request waivers of key accountability requirements. States should take the department up on this offer. As the Aspen Institute and the Center for Assessment advised in an October policy brief, drawing conclusions about school effectiveness based on exams administered this year is not feasible, and therefore, the results should not be used in state accountability systems. There should be no punitive consequences, such as state takeover or loss of autonomy for personnel decisions, nor should the results be used for teacher evaluation. In fact, states should not even produce accountability metrics this year, as they could be misused. Instead, test results should be used to target additional resources to services such as high-dosage tutoring and extended learning time through a longer school day and/or year, including summer programming.
  • Collect school- and district-level data on students’ opportunity to learn. This includes data regarding access to the internet and devices for remote/online learning; availability of a rich, culturally responsive curriculum (and resources to adapt highly rated curricula to meet students’ needs); extended learning time; and high-dosage tutoring. Even the most basic and limited information about whether students were learning in person or virtually, and how many hours of instruction they had each day or week, will be helpful in understanding the impact of the pandemic.

  • Invest in assessment literacy training. Assessment data are useful only if they can be put to good use. This requires a deep level of understanding among teachers and school leaders about how to interpret data from multiple sources and use them to make decisions about things like curriculum adjustments, instructional strategies and groupings of students. It also requires some assessment-specific training to understand the metrics associated with interim/diagnostic tests used at the local level.
  • Attend vigorously to students’ (and teachers’) social-emotional needs. Attention to students’ social-emotional needs is always important. This pandemic — and the health risks and social isolation it has brought — has made doing so all the more urgent. Schools should nurture all members of their communities.
  • Use high-quality diagnostic assessments to target support to each student. While state end-of-year tests are aligned to standards and are key levers for equity, assessments that are aligned to the local curriculum will offer the greatest value for evaluating students’ needs in the fall. Teachers need timely and detailed information about individual students’ strengths that can be built upon, as well as areas where students will need additional support to tackle grade-level work. These are the details that diagnostic assessments provide.

Diagnostic assessments should be aligned to the content, scope and sequence of the curriculum in order to be fair, useful and a good investment of time and money. Those that aren’t aligned to the curriculum may provide confusing information, or even worse, may falsely represent student learning. It does no good to test things out of sequence or at a different level of complexity than is required. And tests that simply provide a score, a percentile ranking and a predicted end-of-year result don’t provide teachers with the information they need to make critical instructional choices.

The use of diagnostic assessments should help schools avoid the trap of assuming students have learned nothing since March 2020. Good diagnostics should reveal precisely what students do know, as well as what they haven’t learned yet. It would be a mistake to think that all of last year’s content must be retested and retaught. Instead, using diagnostic data, schools should focus on the most essential prerequisite skills for grade-level work.

Finally, diagnostics are not just one-and-done exercises. States should support school districts in the use of high-quality diagnostics throughout the year. They should develop selection criteria that prioritize quality, provide training and subsidize costs when districts select tests that meet that quality bar. States could go as far as to provide diagnostics to all districts or let districts select from a list of vetted assessments. Regardless, they should ensure that districts use diagnostics that align with both state standards and their local curriculum.

As important as statewide assessments are, this moment provides room for states to take the long view. Once they have addressed today’s challenges, states should explore approaches that can improve end-of-year assessments – indeed, assessment systems as a whole – moving forward. Stated simply, we need a new paradigm for assessment and related accountability policies that focuses on what matters most: challenging academic standards and attention to social-emotional learning, a high-quality and culturally responsive curriculum, and aligned instructional materials (including classroom tests) that define and drive student learning. This will be a long-term effort. Now is the time to start.

Laura Slover is CEO of CenterPoint Education and former CEO of Parcc Inc., which led the multi-state Partnership for Assessment of Readiness for College and Careers. Mike Cohen is a senior fellow at CenterPoint Education. He was the president of Achieve from 2003-20 and served as assistant secretary for elementary and secondary education in the Clinton administration.
