What’s the Best Way to Tell If an Ed Tech Product Works? Science of Learning Can Help

Kucirkova: Instead of debating randomized controlled trials versus co-design, it’s time to completely rethink the way these tools are assessed.

Educational technologies such as apps and learning platforms are used by millions of children in classrooms and at home. Recent reports suggest that not all ed tech, including some of the most popular tools, supports learning. It’s crucial to evaluate whether these tools are truly effective. But how can we tell what works?

A debate about how best to do this has long centered on two competing approaches: randomized controlled trials (RCTs) and co-design. Kirk Walters from WestEd and Katie Boody Adorno from LeanLab represent the opposing views. Walters argues that RCTs, which test tools in controlled environments, provide the most reliable and objective data on what works in the classroom. Boody Adorno champions co-design, a process that involves teachers and students in shaping technology to ensure it meets their needs. Each believes that relying on the other’s type of evidence leaves ed tech evaluations flawed.

Framing this as a choice between two methods misaligns with the principles of the Science of Learning. The Science of Learning studies how people learn and how teaching methods can be improved through research. It combines expertise from psychology, neuroscience and education to determine the most effective strategies for diverse students and resources. Because learning depends on many related influences — a student’s background, teaching methods, culture and classroom situation — Science of Learning uses various methods to understand what works best. 

When learning scientists gather evidence on whether and how a technology changes education, they select a method based on the goal of the evaluation. If the aim is to understand how a tool can be created to fit teachers’ needs, then co-design methods are the best fit. If the goal is to measure a tool’s impact on specific learning outcomes with statistical precision, then RCTs are more appropriate.

So why doesn’t everyone simply use both approaches? There are both philosophical and pragmatic reasons for this.

Typically, learning scientists specialize: They are either experts in the quantitative (number-based) tradition, where RCTs are considered the highest form of evidence, or they focus on qualitative (descriptive) studies, which emphasize deep exploration of a topic. Similarly, research firms and consulting labs tend to stick to one research method. They train their teams in a specific approach and streamline their reports according to that method. This allows them to deliver quick results, which is great for fast-moving ed tech companies.

But focusing too narrowly on one method means ed tech providers don’t get the full picture of how their tools actually impact classrooms. Important insights, especially those that fall outside the chosen research approach, are often overlooked. As a result, teachers and users are left without a clear understanding of what truly works — and what doesn’t.

This lack of clarity is a major problem. I’ve been in numerous meetings where ed tech companies came with a predefined list of outcomes, asking researchers to conduct a study to confirm only those results. They assumed that by sponsoring the research, they could get the outcomes they needed to boost their marketing. Such a report would then be used to sell the technology to schools, turning what should be objective research into a sales pitch.

This isn’t just happening behind closed doors; it also happens with some public calls for proposals. If you look at research requests from some large ed tech companies, you’ll often find that their terms dictate exactly what the study should find and which positive results they want to highlight. But this is not how real research works, nor is it an accurate reflection of how learning happens in the classroom.

Learning isn’t as simple as either/or. It’s a complex process in which multiple factors work together, with trade-offs among different approaches. What really matters is understanding how learning progresses in different ways over time and in various situations. For instance, studies have debunked the old idea of separating emotions and thinking: The same brain circuits process both. This means that a study focusing only on how a tool supports children’s emotional learning, without considering cognitive factors like memory, won’t give a complete picture of how it truly impacts learning.

If ed tech evaluations are to align with the latest and best science of how learning works, it’s time to completely rethink the way these tools are assessed.

First, studies that follow solid Science of Learning principles must share all findings, whether they are positive, negative or neutral. Transparency is crucial for both good research and the development of better products. If providers worry that being honest about what doesn’t work will hurt their business, they should think again. The market is oversaturated, and schools are tired of marketing hype. Being upfront about both successes and failures can build trust and lead to tools that genuinely make a difference.

Second, all research — regardless of its source or method — must uphold the highest standards of quality. Just as there can be poorly executed co-design studies, there can be flawed randomized controlled trials. What truly matters is applying careful, accurate and reliable methods. This should be the North Star for all researchers.

Third, to truly understand how learning works in different settings and with various tools, researchers and teachers need to accumulate evidence over time. Especially with new tools, like those powered by artificial intelligence, it’s crucial to run studies that not only describe and measure learning, but also help engineer and improve it across different students, grades and settings. Relying on just one study to understand a tool’s effects is like reading only one chapter of a book — it’s just a small piece of an ongoing story.

These three paradigm shifts would bring much-needed accountability to ed tech research, ultimately leading to better learning experiences and outcomes for students.
