Beyond the Scantron: Inside Our New Series on Why We Test Students, How Testing Can Lead to More Equitable Schools Amid the Pandemic & the Push to Build Better Assessments
This piece is part of “Beyond the Scantron: Tests and Equity in Today’s Schools,” a three-week series produced in partnership with the George W. Bush Institute to examine key elements of high-quality exams and illuminate why using them well and consistently matters for serving all students equitably. Read all the pieces in this series as they are published here. Read our previous accountability series here.
The role of testing dominates many education policy conversations, from classroom practice to state accountability systems to college admissions. Stakeholders — including educators, parents, policymakers, and district leaders — can get lost in the types of tests and the science behind their design. What makes an assessment high quality? Which tests should be used for which purposes? Why do they matter — and to whom? And how has COVID-19 complicated it all?
State tests were suspended in the spring of 2020 as COVID-19 brought most in-person schooling to an abrupt halt. How do we know whether students learned much after March? Which children are on track — and who are we leaving behind as the system pivots rockily to a virtual model?
In May, the University of California system also suspended use of the ACT and SAT in admissions, with the stated aim of building its own assessment of college readiness by 2024, an enormously complex task. Will other higher education institutions follow?
Standardized tests were a surefire way to stir up controversy before the pandemic, and that debate has only intensified in today’s environment. Some claim tests are racist. Others claim that high-quality tests are one of our best tools to push for equity in the system. Some claim it’s not fair to measure public schools by how well their students are learning. Others say there is simply nothing more important to measure.
Why have tests — essential and long-standing education tools — become so controversial and misunderstood? And why are stakeholders, including educators, so often poorly informed about testing?
In this series, the expert voices of parents, educators, researchers and policymakers discuss how assessments are built, how they are used, how they can be improved and the critical role they play for students during the pandemic and beyond. Below you’ll find links to the new interviews as we publish them, as well as a quick glossary of terms that commonly occur throughout the series.
Newest Interviews in the Series
Ed Tech CEO Larry Berger: Why the Pandemic Is No Excuse to Abandon Accountability and ‘Disruptions Are Great Opportunities to Try Something New’ (Read the full interview)
Former School CEO and Dallas Superintendent Mike Miles: Why testing is essential to knowing where students are — and is a key tool in preparing them for the 2030 workforce (Read the full interview)
The Bush Institute’s William McKenzie: How a STAAR is born — In Texas, those closest to daily instruction play a major role in developing the state exam (Read the full piece)
Former Massachusetts Education Deputy Jeffrey Nellhaus: Back-to-school 2020 should start with assessments to determine how the COVID-19 shutdown affected student achievement (Read the full interview)
Former Louisiana School Chief John White: The next monumental task in education is to create tests that build knowledge (Read the full interview)
National Parents Union’s Keri Rodrigues: What tests reveal about the quality of an education — and why student advocates must take more than a book report to a knife fight over higher standards (Read the full interview)
Learning Heroes Co-Founders Bibb Hubbard and Cindi Williams: Two parent leaders on how to make state tests and results understandable for families (Read the full interview)
Texas History Teacher Cindy Riney: Why kids attending a class that is tested ‘are going to learn more than one that’s not’ (Read the full interview)
Special Education Expert Sheryl Lazarus: How fair, high-quality tests have led to improved instruction for students with disabilities and English learners (Read the full interview)
Education Researcher Mark Dynarski: Translating assessment jargon for parents — A family-friendly primer on testing (Read the full interview)
Harvard’s Andrew Ho: The 3 Ws of testing — and how to figure out what students have lost academically to COVID-19 (Read the full interview)
Brightbeam’s Chris Stewart: How the pandemic will widen America’s achievement gaps — and how a “data vacation” could leave us clueless about the crisis (Read the full interview)
College Board’s Auditi Chakravarty: The pandemic’s big ‘A-Ha’ moment on assessments was less about technology and more about the challenges of equitably testing from home (Read the full interview)
Center for Assessment’s Scott Marion: How large-scale testing in the fall could set us up for a ‘remediation mindset’ (Read the full interview)
The Bush Institute’s Anne Wicks: 6 ways educators and policymakers can ensure equity in testing — and across schools — through the pandemic (See the full list)
Beyond the Scantron Glossary of Terms
Advanced Placement: Courses offered at the high school level that allow students to earn college credit.
Advanced Placement or “AP” tests: Standardized exams that allow students to show mastery of content and skills in an Advanced Placement course.
Reliability: The degree to which a test produces consistent results across repeated administrations and groups of test takers.
Validity: The degree to which a test measures what it is intended to measure.
Comparability: Scores, standards, and structure of one assessment are aligned with another. For example, some districts align their local exams with the state standards and exam.
Psychometrician: Someone who devises, constructs, and standardizes exams.
Cut scores: Scores that separate levels of student achievement on a scale. Multiple cut scores on a scale create bands of individual student performance, such as advanced, proficient, needs improvement, or unsatisfactory.
Diagnostic assessment: A pre-evaluation that helps teachers determine a student’s skills and knowledge. The results allow teachers to adapt materials to students’ current knowledge in advance of a unit of study.
District assessment: A test developed or adopted by a district to evaluate student achievement against specific grade-level expectations.
Formative assessment: Data that teachers collect daily in their classroom to help inform future instruction; examples include weekly quizzes and exit tickets (quick end of day/class assessments).
Summative assessment: An evaluation of student learning at the end of a unit, semester, or school year that is usually high stakes; examples include a state test, midterm/final exam, or a final research paper.
Standards: Learning goals, determined by a state agency, that describe what each student should master at each grade level.
Standardized assessment: Any form of a test that requires all test takers to answer the same questions in the same way. It is scored uniformly.
Test prep: Materials focused on preparing a student to master an exam.
ACT: A standardized test used primarily for college admission. The different sections include English, math, reading, science, and writing (optional). The highest score a student can receive is a 36.
SAT: A standardized test administered by the College Board used primarily for college admission. The different sections include math, reading and writing, and an optional essay portion. An overall score can total up to 1600.
NAEP: The National Assessment of Educational Progress, also known as the Nation’s Report Card. The test is administered to students in fourth, eighth, and 12th grades around the country, primarily assessing reading and mathematics. Results are reported as percentages of students performing at or above achievement levels at the state level (select district level scores available). It is comparable across states, making it an important policy and research tool.
Anne Wicks is the Ann Kimball Johnson Director of the George W. Bush Institute’s Education Reform Initiative.
Alex Dowdy is program manager of education reform for the George W. Bush Institute.