Louisiana Pilot Program Tests New Kind of Reading Exam That Could Be a Model

Aldeman: The U.S. is about the only country that tries to assess reading comprehension without taking students' background knowledge into account.


Imagine you have a test coming up. Wouldn’t you like to know what might be on it?

That may seem like a fair question, but millions of kids are sitting down for reading tests this spring with no idea what they will cover. 

This approach to reading tests makes the U.S. an international outlier. According to David Steiner, a professor at Johns Hopkins University, most countries test kids on a core body of knowledge that’s widely communicated in advance. American-style tests do the opposite. Their content is a surprise to everyone, so no one can get a leg up and prepare unfairly. But this effectively treats reading comprehension as a separate, isolated skill apart from background knowledge, and researchers like Hugh Catts have pointed out that this isn’t aligned with the research on how kids learn to read.

Comprehension is a function of the ability to decode the letters on the page, combine them into words and then understand what those words mean. 

But whether people can understand what they’re reading is also tied up with their knowledge of the topic. Writers like Natalie Wexler and Robert Pondiscio have helped popularize what’s informally known as the baseball study, in which researchers divided junior high students into four groups based on their reading ability and their knowledge of baseball. The students were then evaluated on their comprehension of a new passage about a baseball game.

As expected, those who were strong readers and had some familiarity with baseball did the best. But critically, kids who weren’t particularly strong readers but knew about the sport were able to understand the unfamiliar passage better than the supposedly strong readers who didn’t have much prior knowledge. For these students, background information was more powerful than incoming reading ability.

Unfortunately, the American approach to literacy tests attempts to measure reading comprehension alone. This disconnect is a major reason why it’s so much harder to move the needle in reading than in math: math scores respond readily to instruction, while reading scores do not.

But worst of all is the effect that the tests have had on reading instruction in schools. As Doug Lemov, managing director of Uncommon Schools and the author of Teach Like a Champion, pointed out in a recent Education Next essay, English classes are increasingly devoted to reading a series of short, unrelated passages and asking students to find the main idea behind them.

There’s a derogatory term for this type of instruction — teaching to the test. But what if states had tests that were worth teaching to?

A pilot program in Louisiana could present an alternative model for the country. The state has been experimenting with a new kind of exam that is closely aligned to a state-created curriculum called Guidebooks. The test is administered three times during the year, in fall, winter and spring.

Other states and districts may use such a staggered sequence, but Louisiana is unique in specifying which books will be covered on each test for each grade level. For example, seventh graders can expect to be asked about The Giver, by Lois Lowry, in the fall, and other clearly identified books are assigned to the other grades and testing windows.

The test itself is also unique. It contains three sections, asking students to read an unfamiliar passage that connects in some way to a book they’ve covered in class; answer questions about it; and then compare and contrast it with the familiar text.

The point is not to drill students on the plot details of a given book — it’s to immerse them fully in a piece of high-quality literature and then ask them to explain what they learned. That helps students benefit from what psychologists call the testing effect, in which the act of taking an exam helps deepen the learning process. In this way, the Louisiana pilot functions more like Advanced Placement or International Baccalaureate programs than like the typical state assessment, which asks students to find the main idea of a text they’ve never seen before.

Teachers also seem to appreciate Louisiana’s approach. In a focus group, one reported, “We love these assessments, they are FAIR to our kids and our teachers and we are excited about the future of [English Language Arts] instruction with these in place.” 

The test has also shifted classroom instructional practices. One teacher told John White, the former state chief of Louisiana behind the pilot program, “We used to devote time to test prep, and we would just do practice [state] tests. We don’t do that anymore. We devote our time to diving into the unit and making sure that students have a strong understanding, as much background knowledge as we can possibly give them.”

It’s unclear whether Louisiana will be able to expand the test beyond its current pilot period, but it still presents lessons for state and federal policymakers.

First, state leaders should take a hard look at their literacy tests; those that serve more as general IQ tests than as true measures of student learning may inadvertently encourage teachers to focus on low-level comprehension skills. Critics of all political stripes might worry about a state endorsing a particular curriculum or point of view, but without taking a stand, states are left with content-free tests like they have now.

Second, even states like Louisiana that understand the instructional benefits of a high-quality curriculum still face transition issues. The feds granted Louisiana flexibility to test out its new model as a pilot, but the waiver came with no additional funding, and the state had to apply separately for a competitive grant to build the test. Moreover, the state was required to operate its old test and the new one simultaneously.

Worse, the feds mandated that the new model produce results that could be compared to the old one’s. A group of civil rights organizations led by Education Trust pushed back and suggested the rules allow for “an alternative method for demonstrating comparability that … will provide for an equally rigorous and statistically valid comparison.” The Biden administration has signaled its willingness to help states be more innovative with their tests, but tethering too closely to an old system is a great way to stifle innovation.

These challenges are large but not insurmountable. In the midst of a national push to reshape how reading is taught, state leaders should take a closer look at how their tests can nudge schools to invest more effort in building students’ background knowledge and to give children more time immersed in reading and discussing whole books.

Disclosure: Chad Aldeman is an occasional consultant for NWEA, which is one of the organizations behind the Louisiana testing pilot.
