Analysis: For Ed Tech That Actually Works, Embrace the Science of Learning
But as curricula and learning tools are prepared for rigorous evaluation, we should think about how existing research on teaching and learning has informed their design. Building a movement around research and impact must include advocating for products based on learning research. Otherwise, we are essentially taking a "wait and hope" approach to development: wait until we have something built and hope it works.
When we make a meal, we want to at least have a theory about what each ingredient will contribute to the overall dish. How much salt do we put in to flavor it perfectly? When do we add it? Similarly, when creating a curriculum or technology tool, we should be thinking about how each element contributes to and optimizes overall learning. For example, how much and when do we add in a review of already-learned material to ensure memory retention? For this, we can turn to learning science as a guide.
We know a lot about how people learn. Our understanding comes from fields as varied as cognitive and educational psychology, motivational psychology, neuroscience, behavioral economics, and computer science. There are research findings that have been replicated across dozens of studies. If we want to create educational technology tools that ultimately demonstrate efficacy, these learning science findings should serve as the foundation, with insights from decades of research on how people learn and how teachers teach integrated into product design from the beginning.
So what do we know about how people learn? You could turn to foundational texts like Clark and Mayer's e-Learning and the Science of Instruction, Dan Schwartz's The ABCs of How We Learn, and Hattie and Yates's Visible Learning for the details. Or you could look to the excellent summaries compiled by Deans for Impact, LearningScientists.org, and Digital Promise Global.
Here are a few examples:
Spaced practice: We know that extending practice over time is better than cramming all practice into the few days before an exam. Spaced practice strengthens information retention and keeps it fresh over time, interrupting the “forgetting curve.” Implementing spaced practice could be as simple as planning out review time. Technology can help implement spaced practice in at least two ways: 1) prompting students to make their own study calendars and 2) proactively presenting already-learned information for periodic review.
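To make the second approach concrete, here is a minimal sketch of how a tool might proactively resurface already-learned items on an expanding schedule. The interval sequence and item fields are illustrative assumptions, not recommendations from the research; any real product would need to pilot and tune them.

```python
from datetime import date, timedelta

# Illustrative expanding review intervals, in days. The exact spacing an
# effective product uses is itself something to pilot and tune.
REVIEW_INTERVALS = [1, 3, 7, 14, 30]

def next_review_date(first_learned: date, reviews_completed: int) -> date | None:
    """Return when an already-learned item should next resurface for review.

    Returns None once the item has cycled through all planned reviews.
    """
    if reviews_completed >= len(REVIEW_INTERVALS):
        return None
    return first_learned + timedelta(days=REVIEW_INTERVALS[reviews_completed])

def items_due_today(items: list[dict], today: date) -> list[dict]:
    """Select previously learned items that are due to be re-presented today."""
    due = []
    for item in items:
        review_on = next_review_date(item["first_learned"], item["reviews_completed"])
        if review_on is not None and review_on <= today:
            due.append(item)
    return due

# Example: an item first learned on Sept. 1 that has been reviewed twice is
# scheduled to resurface 7 days after it was learned, so it is due by Sept. 15.
items = [{"id": "fractions-1", "first_learned": date(2017, 9, 1), "reviews_completed": 2}]
print(items_due_today(items, date(2017, 9, 15)))
```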
Retrieval practice: What should that practice look like? Rather than rereading or reading and highlighting, we know it is better for students to actually retrieve the information from memory, because the act of retrieval changes the nature of the memory itself. It strengthens and solidifies the learning and provides more paths to access it when you need it. Learners who make flashcards have known about this strategy for a long time. RetrievalPractice.org offers useful information and helpful applications building on this important principle. There is a potential danger point here for designers unfamiliar with the learning literature: since multiple-choice activities are easier to score with technology, it is tempting to rely on these easy-to-build questions for retrieval practice. However, learning will be stronger if students practice freely recalling the information rather than simply recognizing the answer among the choices.
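The contrast between recognition and recall can be seen in a small sketch. The free-recall scorer below uses a deliberately crude heuristic (counting expected key terms in a typed answer); it is our own illustrative assumption about one way to score open responses, not a description of any particular product.

```python
def score_multiple_choice(selected: str, correct: str) -> bool:
    """Recognition: the learner only has to pick the answer out of the options."""
    return selected == correct

def score_free_recall(response: str, key_terms: list[str]) -> float:
    """Free recall: the learner must generate the answer from memory.

    Crude illustrative heuristic: the fraction of expected key terms that
    appear in the typed response. A real product would need something more
    robust (synonyms, partial credit, teacher review).
    """
    words = set(response.lower().split())
    hits = sum(1 for term in key_terms if term.lower() in words)
    return hits / len(key_terms)

# Free recall is harder to score automatically than multiple choice, but the
# act of generating the answer is what strengthens the memory.
print(score_free_recall("spacing practice over time improves retention",
                        ["spacing", "retention"]))  # 1.0
```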
Elaboration: Taking new information and expanding on it, linking it to other known information and to personal experience, is another way to improve memory for new concepts. Linking new information to what is already known makes it easier to recall later. In addition, simply expanding on information and explaining it in different ways can make retrieval easier. One way to practice this is to take main ideas and ask how and why they work. Another is to have students draw or fill in concept maps, visually linking ideas and experiences together. A number of online tools have been developed for creating concept maps, and current research is focused on how to provide automated feedback on them.
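As a simple illustration of the concept-mapping approach, a map can be represented as labeled links between ideas, and one naive form of automated feedback is to point out expected links a student has not drawn. The data structure and the feedback rule here are illustrative assumptions, not a description of how any existing tool works.

```python
from dataclasses import dataclass, field

Link = tuple[str, str, str]  # (concept, relationship, concept)

@dataclass
class ConceptMap:
    """A concept map as a set of labeled links between ideas."""
    links: set[Link] = field(default_factory=set)

    def add_link(self, source: str, relation: str, target: str) -> None:
        self.links.add((source, relation, target))

    def missing_links(self, expected: set[Link]) -> set[Link]:
        """Naive automated feedback: expected links the student has not yet drawn."""
        return expected - self.links

student_map = ConceptMap()
student_map.add_link("spaced practice", "interrupts", "forgetting curve")

expected = {
    ("spaced practice", "interrupts", "forgetting curve"),
    ("retrieval practice", "strengthens", "memory"),
}
print(student_map.missing_links(expected))  # the link the student still needs to add
```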
So how many educational technology products actually incorporate these known practices? How do they encourage students to engage in these activities in a systematic way?
There is also significant research about how technology supports teaching practices, which should inform how a product is designed to be used in the classroom.
For example, there is a solid research base on how to design activities that introduce new material prior to formal instruction. It suggests that students should initially be given a relatively difficult, open-ended problem to solve. Students, of course, tend to struggle with this activity, and almost none are able to generate the "correct" approach. However, the effort students put into it has been shown to lay a stronger foundation for subsequent instruction to build on, because students come away with a better understanding of the problem to be solved (e.g., Wiedmann, Leach, Rummel & Wiley, 2012; Belenky & Nokes-Malach, 2012). It is important that this type of activity be presented to students as a chance to explore, and that failure is accepted, expected, and even encouraged. In contrast, an activity meant to be part of practice following direct instruction would likely include more step-by-step feedback and hints. So, if someone wants to design activities to be used prior to instruction, they might 1) select a fundamental idea from a lesson, 2) create multiple cases for which students must find an all-encompassing rule, and 3) situate those cases in an engaging scenario.
Schwartz of Stanford University tested this idea with students learning about ratios, without telling them they were learning about ratios. Three cases with different ratios of objects to space were created and translated into the number of clowns in different-sized vehicles, and students were asked to develop a "crowded clowns index" to measure how crowded the clowns were in each vehicle. Students were not explicitly told about ratios; they had to uncover that concept themselves.
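The numbers below are invented for illustration (the original materials are Schwartz's), but they show the kind of discovery the cases are designed to provoke: the only index that lets students compare crowdedness across very different vehicles is a ratio of clowns to space.

```python
# Hypothetical cases in the spirit of the "crowded clowns" activity: each
# vehicle holds some number of clowns in some amount of space.
cases = [
    {"vehicle": "mini car", "clowns": 6, "space_units": 2},
    {"vehicle": "van", "clowns": 9, "space_units": 6},
    {"vehicle": "bus", "clowns": 12, "space_units": 12},
]

for case in cases:
    crowded_clowns_index = case["clowns"] / case["space_units"]  # clowns per unit of space
    print(f'{case["vehicle"]}: {crowded_clowns_index:.1f} clowns per unit of space')

# mini car: 3.0, van: 1.5, bus: 1.0 -- the mini car is most crowded even though
# the bus holds the most clowns, which is exactly the insight a ratio captures.
```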
Product developers should consider research like this when designing their ed tech tools, as well as when they’re devising professional development programs for educators who will use those technologies in the classroom.
Product makers must consider these questions when designing ed tech: Will the activity the technology facilitates be done before direct instruction? Will it be core instruction? Will it be used to review? How much professional development needs to be provided to teachers to ensure the fidelity of implementation at scale?
Too often, designers think there is a singular answer to this series of questions: “Yes.” But in trying to be everything, we are likely to end up being nothing. Existing research on instructional uses of technology can help developers choose the best approach and design for effective implementation.
With this research as a foundation, though, we still have to cook the dish and taste it. Ultimately, applying learning science at scale to real-world learning situations is an engineering activity. It may require repeated iterations and ongoing measurement to get the mix of ingredients "just right" for a given audience or a given challenging learning outcome. We need to carefully understand and tweak our learning environments, using good piloting techniques to find out both whether our learners and teachers can actually execute what we intend as we intended it (Is the learning intervention usable? Are teachers and students able to implement it as intended?) and whether the intervention gives us the learning benefits we hoped for (effectiveness).
The key is that research should be informing development from the very beginning of an idea for a product, and an evidence-based “learning engineering” orientation should continue to be used to monitor and iterate changes to optimize impact. If we are building from a foundation of research, we are greatly increasing the probability that, when we get to those iterated and controlled trials after the product is created, we will in fact see improvements over time in learning outcomes.
Follow the conversation on social media with the hashtag #ShowTheEvidence.
Authors:
- Kristen DiCerbo, Vice President, Education Research, Pearson
- Aubrey Francisco, Chief Research Officer, Digital Promise
- Bror Saxberg, Chief Learning Officer, Kaplan
- Melina Uncapher, Assistant Professor, Department of Neurology, UC San Francisco