
Testing Anxiety, Boredom & Guesses: What Expert Steven Wise Has Learned About Exams and ‘Rapid-Guessing Behavior’ — and What That Tells Him About Your Child’s Score

Steven Wise (NWEA)


Quick — without looking it up on Google, can you define “edge-aversion”? Here’s a hint: It’s a decision-theory term describing what’s also known as middle bias. That is, a test-taker’s tendency to pick anything but the top or bottom option on a multiple-choice question.

To a psychometrician, it’s a tell that the answer was a guess. Give testing experts an entire exam, and you might be surprised what they can discern.
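As a toy illustration of how such a tell can be quantified, the sketch below measures how often a test-taker's answers avoid the first and last options, compared with what uniform random answering would produce. The position coding and the helper itself are hypothetical, not a method NWEA describes.

```python
from collections import Counter

# Toy illustration of "edge aversion" as a guessing tell. The 0-indexed
# position coding and this whole helper are hypothetical, not NWEA's method.
def middle_bias_share(chosen_positions, num_options=4):
    """chosen_positions: the answer positions (0 = first option) a test-taker
    picked. Returns the share of picks that avoided the first and last
    options. Uniform random answering on a 4-option test gives about 0.5;
    a markedly higher share on missed items hints at middle-biased guessing."""
    counts = Counter(chosen_positions)
    middle = sum(n for pos, n in counts.items() if 0 < pos < num_options - 1)
    total = sum(counts.values())
    return middle / total if total else None
```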

Meet Dr. Steven Wise, a senior research fellow at NWEA’s Collaborative for Student Growth and an expert on so-called rapid-guessing behavior. Wise recently released a report on the phenomenon, the student disengagement it illuminates and possible responses.

NWEA, a nonprofit, pioneered MAP tests: computer-based assessments that serve harder questions after correct answers and easier ones after misses, pinpointing the limits of a student's knowledge. Because the adaptive tests, previously known as the Measures of Academic Progress, both chart students' fall-to-spring growth and identify specific missing skills, schools frequently use them as diagnostic tools that help teachers support individual children.
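To make those adaptive mechanics concrete, here is a minimal sketch of the general idea. It is not NWEA's item-selection algorithm (operational adaptive tests like MAP rely on item response theory, not a fixed step size); it just shows how correct answers push the test harder and misses push it easier, so the levels administered home in on a student's limit.

```python
import random

# Minimal sketch of adaptive testing: a fixed-step "staircase," not NWEA's
# actual algorithm. Correct answers raise the difficulty level; misses
# lower it, so the levels administered converge on the student's limit.
def run_adaptive_test(answer_item, num_items=20, start_level=50,
                      min_level=1, max_level=100, step=5):
    """answer_item(level) -> True if the student answers a level-`level`
    item correctly. Returns the (level, correct) pairs administered."""
    level, history = start_level, []
    for _ in range(num_items):
        correct = answer_item(level)
        history.append((level, correct))
        level = min(max_level, level + step) if correct else max(min_level, level - step)
    return history

# Hypothetical demo: a student who usually answers items at or below
# level 65 correctly and usually misses items above it.
student = lambda level: random.random() < (0.9 if level <= 65 else 0.15)
print(run_adaptive_test(student))
```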

When students guess their way through a test, the results aren't valid, which lowers the value of the information their teachers might use to support them. But understanding why a student is guessing can have broader implications.

This interview has been edited for length and clarity.

The 74: What are some reasons that people guess on a test?

Wise: The most obvious one would be that they guess because they don't know the answer. If it's a multiple-choice test, we often say, “Well, if I guess, maybe I'll get lucky”: you try to work out the answer, and if you don't know it, you guess and hope for the best. But there's another time people guess: when they're not engaged in the act of taking the test. In that case, people generally want to get the test over with, and so they just guess, fairly rapidly, to get through and move on to the next item.

If it's the second type, the behavior we call rapid guessing, we can detect that. When we give computer-based tests, which we're doing with greater frequency, we can gather information about how long people spend on items. A characteristic behavior when a person is disengaged is that they'll start answering rapidly. And by rapidly, I mean faster than people generally could even read and understand the challenge posed by the item.

I'll give you an example. Items that people spend, on average, say, 40 seconds on, they may answer in two or three seconds. In that time you don't even have a chance to really read the item, much less figure it out. We can infer they're guessing partly because the accuracy rate of those answers is just about what would correspond to random chance.
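In code, that detection logic might look something like the minimal sketch below, assuming per-item response times are logged: flag any response faster than a small fraction of the item's average time, then check whether the flagged responses score near chance. The 10 percent threshold is one rule proposed in this research literature, used here illustratively rather than as NWEA's exact operational method.

```python
# Sketch of rapid-guess flagging, assuming per-item response times are
# logged. The 10%-of-average-time threshold is illustrative, not
# necessarily the rule NWEA uses operationally.
def flag_rapid_guesses(responses, avg_times, threshold_fraction=0.10):
    """responses: (item_id, seconds_spent, was_correct) tuples.
    avg_times: item_id -> average response time for that item.
    Returns the flagged responses and their accuracy rate, which for
    genuine rapid guessing should sit near chance (0.25 on a 4-option item)."""
    flagged = [r for r in responses
               if r[1] < threshold_fraction * avg_times[r[0]]]
    accuracy = (sum(1 for _, _, correct in flagged if correct) / len(flagged)
                if flagged else None)
    return flagged, accuracy
```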

Now, if they’re guessing for the first reason, because they don’t know, that’s tougher, because if they spend a minute on the item or five minutes on the item and they answer, you don’t know if that was a thoughtful, “I know the answer, here’s my choice,” versus, “I have no idea, I’m just going to guess.”

Beyond not knowing the answer, are there other reasons people would be disengaged?

One of the things that we have found is that, not surprisingly, kids who are generally disengaged from school are more likely to show rapid-guessing behavior. People who are disengaged from school are not going to try particularly hard on tests you put in front of them.

But sometimes you get people who are generally highly engaged but, on a given day, aren't. Maybe they're not feeling well, or maybe their parents had a fight the night before, or their pet passed away, or there's something going on in their life and they say, “You know? I'm just not really into focusing on this right now.” Sometimes you see these more episodic instances.

Students at my younger child’s high school this spring had declining ACT practice scores. I wondered if the extrinsic motivation, whatever it is that gets you to sit up and try hard, might have worn off with repetition.

People get tired of testing. A key aspect of this is, what’s in it for the student? Are there consequences associated with performance? If you’re taking the ACT for real, there are very real consequences. If you don’t try and you don’t do your best, you may not get into the school you want to get into. But if it’s a practice test, a person might go, “You know, this isn’t really that important. I ought to be trying, but I don’t feel like it.”

What does it look like from the student’s standpoint? Are there consequences associated with this that would encourage their engagement? Consider things like the statewide accountability tests. They’re high-stakes for educators. High-stakes for schools. From the student’s standpoint, they may say, “I don’t get a grade for this. If I don’t try, who’s going to know or care, and why should I put forth effort if it doesn’t matter to me?”

Now, I should also add that even in those circumstances, most students try. Most students — I don’t know if it is conditioning or just their interest in achievement — when you put a test in front of them, give pretty good effort. There’s a small proportion of the students who react by saying, “I don’t feel like trying.”

There’s discussion about student disengagement, particularly in high school, as something that’s infrequently addressed in education improvement efforts. You’ve described some natural consequences that matter to students, such as college admissions. But is there a loop to be closed in terms of making school more engaging, or making a particular subject more relevant to a student?

That is the broader question, and the work I've done is fairly narrow. We want to know what [students] know and can do. What is their achievement level at this point in time, and how do we get the best score? I focus on getting a good measure of that student.

But that’s only part of that broader issue of how engaged students are in school, and that affects everything. I mean, that affects not only how they do on tests, but when they’ll go home and do homework, when they’ll pay attention in class — a variety of outcomes that are very important to educators.

I don’t pretend to be able to understand how to address all these needs, but they’re there and they are incredibly important. The work on social-emotional learning, the SEL work, I think, is geared toward attending to those. I think we’re working out how to measure it, but we’re still not sure how to understand it. People are getting a better appreciation of the degree to which the students’ mindset and attitudes are important to their learning.

When you're measuring humans, they won't always do the things you expect them to do. They'll get too anxious to answer, or try to cheat, or a variety of things. The test could adapt to that behavior as well.

You can’t have a valid score if you don’t have an engaged test-taker. While that seems obvious, I think sometimes we lose sight of it. What can we do about it? If disengagement impairs validity, how do we successfully address it?

Is there anything you found that you didn’t expect to?

At NWEA, we’re all about giving information to educators that they can use instructionally. Now we’re asking [them] to no longer think of these scores as unitary. Now we’re going to give you engagement information as well, so sometimes you’re going to see scores that are less trustworthy than others. And you’re going to have to be able to think about that when you interpret a score.

What's striking to me is that it's taken this long to get that point across. NWEA is probably the only organization in the world that's seriously attending to this. Others are starting to, but I find it kind of puzzling that this notion hasn't gotten out there more. Because I think if people who give assessments can detect disengaged test-taking, they have a responsibility to report that to the people they're providing scores to.

Yet we provide varying levels of information, adequate and not, depending on factors that have nothing to do with your research, such as who the state department of education’s vendor is that year.

One thing that always stands out as curious to me is that nobody ever does this work on statewide accountability tests. I'm inclined to believe it's because the state departments of education may not want to know, and their providers may not want to know, because there's an interesting dynamic going on.

NWEA took a risk, essentially, because we're saying to our partners, the people who use our tests, “Oh, by the way, not all of these scores that we are giving you are trustworthy.” It's essentially admitting that not all your scores are perfect. It makes sense, because you can't control what the test-taker does. But I found it odd that for the statewide accountability tests, nobody talks about this.

Disengagement happens. The idea that we can do something about it starts with the people who give tests thinking more carefully about the conditions under which they test: things like time of day, the environment, distractions and noise, the instructions you give your students before they take the test, the attitude of the teacher who's giving it. If the teacher acts like this is not a big deal, the students pick up on that.

There are also things we can do during the test itself. For instance, our test does something called engagement monitoring. It looks for rapid guessing during the course of the test, and if a student becomes disengaged, the test proctor's computer gets notified.
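As a rough sketch of what in-test monitoring might look like, the class below notifies a proctor after a run of implausibly fast responses. The three-in-a-row trigger and the notify_proctor callback are hypothetical; the interview doesn't specify NWEA's actual rule.

```python
# Hypothetical sketch of engagement monitoring. The run-of-three trigger
# and the notify_proctor callback are assumptions, not NWEA's documented logic.
class EngagementMonitor:
    def __init__(self, notify_proctor, run_length=3):
        self.notify_proctor = notify_proctor  # e.g., pushes an alert to the proctor's screen
        self.run_length = run_length
        self.rapid_streak = 0

    def record_response(self, student_id, seconds_spent, rapid_threshold):
        """Call once per answered item; rapid_threshold is the item's
        rapid-guessing cutoff (see the flagging sketch above)."""
        if seconds_spent < rapid_threshold:
            self.rapid_streak += 1
            if self.rapid_streak >= self.run_length:
                self.notify_proctor(student_id)  # possible disengagement
                self.rapid_streak = 0
        else:
            self.rapid_streak = 0  # an effortful response resets the streak
```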

And then what happens?

We just published a study that shows the positive effects of this. Presumably, the proctor does something. If Johnny is disengaged, hopefully the proctor knows Johnny and maybe the best way to intervene. We find that if you can get somebody re-engaged, it matters quite a bit in terms of their performance. When we implemented this feature, after notification was given to the proctor, engagement went up, performance went up.

When we get people who, despite that, aren't particularly engaged, we encourage our partners to retest. I've just analyzed some data focused on kids who were retested within one day, so there's not enough time for them to change how much they know and can do. We found that if you can get them from disengaged to engaged, it can make a huge difference in their test performance.

If we can change the way students are engaged, it will have benefits for both their performance and, consequently, the validity of the scores. We’ll make better inferences about them.

