
Commentary: 5 Things Educators Can Do to Make Ed Tech Research Work Better for Their Schools


Finally! There is a movement to make education research more relevant to people working in school districts who are asking, “Is this product likely to work in a school like mine?”

At various conferences that feature innovations in education and technology, we’ve been hearing about a rebellion against the way the federal Education Department wants research to be conducted. The department’s approach has anointed the randomized controlled trial as the gold standard for demonstrating that a product, program, or policy caused an outcome.

The problem is that this approach is concerned with the purity of the research design, not with whether the findings are relevant to a particular school, given its population, resources, and other conditions. For example, in an 80-school randomized controlled trial the Empirical team conducted under a federal contract on a statewide STEM program, we were required to report the average effect, which showed a small but significant improvement in math scores. A table on page 104 of the report, however, shows that while the program improved math scores on average, it did not do so for minority students.

The department’s reviewers had reasons couched in experimental design for downplaying anything but the primary, average finding, but this ignored the potential needs of schools with large minority student populations.

The irrelevance of the federal approach to research was highlighted last year at the EdTech Efficacy Research Symposium, a gathering of 275 academic researchers, ed tech innovators, funders, and others convened by an organization now called the Jefferson Education Exchange (JEX). The rallying cry coming out of the symposium was to eschew the department’s brand of research and begin collecting product reviews from front-line educators. This would become a Consumer Reports for ed tech.

Factors associated with differences in implementation are cited as a major target for data collection. Bart Epstein, CEO of JEX, points out: “Variability among and between school cultures, priorities, preferences, professional development, and technical factors tend to affect the outcomes associated with education technology. A district leader once put it to me this way: ‘A bad intervention implemented well can produce far better outcomes than a good intervention implemented poorly.’”

Here’s the problem: Good implementation of a program can translate into gains on education outcomes, such as improved achievement, fewer discipline referrals, and better staff retention. But without evidence that the product itself caused a gain, all you are measuring is the ease of implementation and the level of staff and student engagement. You wouldn’t be able to say whether the educators and students were wasting their time on a product that doesn’t work.

We at Empirical share the concern that established ways of conducting research do not answer educators’ basic question: “Is this product likely to work in a school like mine?” But we have a different way of understanding the problem. From years of working on federal contracts, often as a small-business subcontractor, we understand that the Education Department cannot afford to oversee a large number of small contracts.

When there is a policy or program to evaluate, the department puts out multimillion-dollar, multiyear contracts for research that is labor-intensive because each study is a customized project: the team gathers data and surveys from schools, enters them into a database, designs and runs statistical analyses, and writes up an interpretation of the findings in a report. The process is both expensive and slow, and it ultimately yields a thumbs-up or thumbs-down on that particular policy or program.

But there is still a need for causal research designs that can link conditions such as resources, demographics, or teacher effectiveness to educational outcomes. The rebellion against the traditional federal approach could increase the number of studies by lowering their cost and turnaround time. For example, by taking advantage of data routinely collected by ed tech products, researchers can see how many times a teacher assigned tasks or students opened the program, providing an accurate measure of implementation without expensive and burdensome surveys. With this and other efficiencies afforded by technology, the cost and time of a study could drop by a factor of 100.

Instead of one study of an ed tech product that costs $3 million and takes five years, think 100 studies that cost $30,000 each and are completed in less than a month. If five to 10 studies of each product are combined, they would provide enough variation, and enough students and schools, to detect differences across kinds of schools, kinds of students, and patterns of implementation, and so to find where the product works best. As each new study is added, our understanding of how and with whom the product works improves.

It won’t be enough to have reviews of product implementation. We need an independent measure of whether — when implemented well — the intervention is capable of a positive outcome. We need to know that it can cause a difference, and under what conditions. We don’t want to throw out research designs that can detect and measure effect sizes, but we should stop paying for those that are slow and expensive.

Educators can join this movement by expecting any ed tech product pitched to district staff to come with at least a rationale for why it should work for them (as required by the Every Student Succeeds Act’s base level of evidence). Beyond expecting some level of evidence, educators should take these actions.

● Develop a community with other districts or with professional organizations to share results, since one study can never provide a clear picture of the product’s effectiveness. An example of this is Digital Promise’s League of Innovative Schools.

● Invite companies to run studies in your district (in exchange for a discount on the product), even if the results of spring testing may not be available in time to support a local decision.

● Consider participation in a pilot, in which a representative set of schools (not just the lowest-performing) or teachers (not just the most experienced) uses the product.

● Keep in mind that research organizations may meet privacy requirements for data sharing more easily than the product companies themselves.

● Be prepared to allow a research organization to survey teachers, but ensure there is a purposeful and convincing rationale for the additional burden.

Nothing in these proposals breaks the rules and regulations of ESSA’s evidence standards for research. The fact that ESSA embraces the federal approach doesn’t prevent educators, developers, and researchers from going beyond the requirements. They can collect data on implementation, calculate subgroup impacts, and use their own data to generate evidence sufficient for their own decisions.

Denis Newman is chairman and CEO of Empirical Education. Hannah D’Apice is research manager at Empirical Education.
