Exclusive: Phonics? Learning Styles? Teachers Confounded by Education Research May Soon Turn to New AI Chatbots for Help

At least two groups are working on bots that would make peer-reviewed research, buried in expensive academic journals, accessible for everyday use.



As students across the U.S. enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness — and preoccupied with their potential to help kids cheat.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them. 

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. And like many kids faced with questions they can’t answer, it has a frustrating tendency to make things up.

Tapping into curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plan, they’d cite their sources.

The result, supporters say, could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow. Such tools could also help stamp out adherence to stubborn but ill-supported fads in areas from “learning styles” to reading instruction.

So far, the two groups are each feeling their way around the vast undertaking, with slightly different approaches.

In June, the International Society for Technology in Education introduced Stretch AI, a tool built on content vetted by ISTE and the Association for Supervision and Curriculum Development. (The two groups merged in 2022.) ISTE has made it available in beta form to selected users. All of the chatbot’s content is educator-focused, and it’s trained solely on materials developed or approved by the two organizations. 


Now its creators say that within about six months, they expect that the tool will also be able to scour outside, peer-reviewed education research and return “pretty understandable, pretty meaningful results” from vetted journals, said Richard Culatta, ISTE’s CEO.

“There’s this big gap between what we know in the research and what happens in practice,” he said. One reason: Most research is published in a format that “is just totally inaccessible to teachers.”

Case in point: A set of 2019 studies by the Jefferson Education Exchange, a nonprofit supported by the University of Virginia’s Curry School of Education, found that while educators prefer research they can act on — and that’s presented in a way that applies to their work — only about 16% of teachers actually use research to inform instruction.

So Culatta and others are building a digital tool, “purpose-built for educators by educators,” that can translate research into practice, using “very practical language that teachers understand.”

For instance, a teacher could ask the chatbot, “What does the research say about creating a healthy school culture?” or “What’s the evidence for teaching phonics to developing readers?” One could also ask it to suggest activities that are appropriate for middle school students learning about digital citizenship.

Joseph South, ISTE’s chief learning officer, said teachers want the latest research, but are up against formidable obstacles. “They have to find the article in the journal that happens to relate to the thing that they want to do,” he said. “They have to somehow understand academic-speak. They have to have the time to read this, and they have to translate it into something useful.”

While ChatGPT can comb through journals it has access to, translate and summarize the research, he said, it’s not reliable. The typical chatbot — and thus the typical end user — doesn’t know whether the results are from a credible, peer-reviewed journal or not, and it may not necessarily care.


“We do, though,” he said. “So we can do that filtering and let the AI do its magic.”

As with its beta version, the new chatbot will also cite the sources used to generate each response. And it’ll let users know when it simply doesn’t have enough information to return a reliable response.

Developers are still in the early stages of deciding what academic journals to include. For now, they’re experimenting with a handful of key research articles, but will expand the chatbot’s range if initial prototypes prove helpful to educators.

Culatta and South, both veterans of the U.S. Department of Education, have spent years working on the research-to-practice problem, offering, in effect, translation services for research findings. “We’ve spent so much work trying to figure out how to do it and it’s just never really worked,” he said. “It’s just always been a struggle. And we actually think that this could be the first for-real, sustainable, scalable approach to taking research and getting it into language that actually could be used by teachers.”


Daniel Willingham, a professor of psychology at the University of Virginia and a well-known translator of education research, said his limited experience with ChatGPT has shown that when asked about a subject where there’s general consensus, such as “What is the effect of sleep on memory?” it produces helpful results. But it isn’t very good at synthesizing conflicting findings.

It’s also inconsistent in its willingness to reveal, in Willingham’s words, that “‘I really don’t know anything about that.’ And so it, you know, just makes stuff up.”

A paid ChatGPT subscriber, Willingham said he gets “really useful” results only about 20% of the time. “But it requires plenty of verification on my part. And this is all within my area of expertise, so it’s not very hard for me to verify.”

Tapping ‘What Works’

ISTE isn’t the only organization pushing to make education research more widely accessible via chatbot. The Learning Agency, a Washington, D.C.-based consulting firm, is also testing a bare-bones version of a bot designed to offer answers to education research queries.

Unlike ISTE’s, the agency’s tool taps an already existing, if finite, resource: the U.S. Department of Education’s What Works Clearinghouse, or more specifically its Doing What Works Library, a curated collection of materials developed by the department’s Institute of Education Sciences.

“We were inspired to basically create a special version of ChatGPT that was exposed to more high-quality educational data and research evidence on what works,” said Perpetual Baffour, the group’s research director.

In a sense, she said, much of the work had already been done, since the library, though limited, exists to translate research findings into more digestible forms for educators. The result is a prototype that offers what Baffour calls “evidence-based education advice” on topics from adolescent literacy to dropout prevention and school turnarounds.


Baffour noted that the app development tool LangChain allowed the agency to create a “simple but robust” prototype within a day.
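LangChain’s off-the-shelf retrieval components make that pace plausible. The sketch below is illustrative only: the folder path, model choice and retrieval settings are assumptions, not details the agency has shared. A prototype grounded in a local copy of the Doing What Works materials could be wired up along these lines:

```python
# Illustrative sketch of a retrieval-based Q&A bot over a curated corpus.
# Assumes a local folder of Doing What Works text files and an OpenAI API
# key in the OPENAI_API_KEY environment variable; not the agency's code.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Load and chunk the vetted library so queries retrieve only that material.
docs = DirectoryLoader("doing_what_works/", glob="**/*.txt").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Build a searchable index of the chunks.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Answer questions from the retrieved passages and return the sources used,
# which is what lets the bot cite where each answer came from.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=index.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,
)

result = qa({"query": "What does the evidence say about dropout prevention?"})
print(result["result"])
for doc in result["source_documents"]:
    print("Source:", doc.metadata.get("source"))
```

The key design choice is that answers are composed from passages pulled out of the curated library, which is also why, as Baffour notes below, the bot knows only as much as that library tells it.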

At the moment, the version they’re testing is “very basic,” Baffour said. The agency is still considering what it’ll look like in the future. 

“It only knows as much as the Doing What Works Library tells it,” she said. So queries about topics that are in the library return rich results. But asking it about topics that aren’t can be problematic. 

For example, ask it about myths around the aforementioned learning styles and it defaults to a more general knowledge base scraped from Wikipedia articles, transcripts of recorded conversations and materials from “different corners of the Internet,” Baffour said. “And as you can imagine, those sources might not have the most up-to-date and accurate information about education.”

Indeed, a query about the topic returns this: “The myth concerning learning styles is that there is one specific style that works best for everyone. This is not true, as different people learn in different ways and have different preferences for how they learn. Additionally, there is no evidence that suggests that focusing on one particular learning style is more effective than focusing on multiple styles.”

Not exactly accurate or helpful.

In the first place, the widely believed “myth” holds that people with different learning styles learn best when their preferred mode of learning is indulged — not that one style works best for everyone. At a more basic level, while many people may express preferences for ways to take in new information and study — receiving instruction verbally, for example, instead of via pictures — scientists have yet to find good evidence that material tuned to these preferences confers any advantages.

Unfortunately, at the moment the agency’s bot doesn’t confess whether it knows a lot or a little about a topic. Baffour said they want to change that soon. For now, however, that’s just an aspiration.

“I think you’re more likely to get a confident chatbot producing inaccurate information than you are to get a self-aware chatbot admitting its false and incomplete knowledge,” she said. 

Willingham, the UVA researcher, said a useful education-focused chatbot would not just have to incorporate reliable findings, but put them in context. For example, an answer to a query about the evidence for phonics instruction would properly note that, while the record is fairly strong, a lot of mediocre research and “hyperbolic claims” made in support of alternative methods serve to cloud the overall picture — a delicate but accurate detail.

“How is an aggregator going to negotiate that?” he said. 

Asked if he thought a chatbot might soon replace him, Willingham, the author of several books and a TikTok channel that translate learning science into plain English, said he wouldn’t make any predictions. 

“I was never much of a futurist, but I hocked my crystal ball 15 years ago,” he said.

