Analysis: Ed Tech Decision Makers Are Under Pressure in Higher Education

Brittany Shine, 25, completes work for her anatomy and physiology studies in an eLearning computer lab at Arapahoe Community College on January 27, 2016, in Littleton, Colorado. (Getty Images)
This is the fourth in a series of essays about the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, and third pieces.
In higher education, ed tech decision makers are in the hot seat. They face the demands of end users, ranging from Luddites to technophiles, and the pressures of vendors who have an answer for everything, even when there is no question to begin with. Ed tech tools and applications have proliferated in an environment that demands higher education keep up with the 21st century, serve a wider audience, and better prepare students for careers.

At the same time, we now expect decision makers to ensure that their ed tech choices lead to better student outcomes. These outcomes might include higher grades, greater course completion rates, or a faster time to graduation.

These standards are not imposed on many other decisions in higher education. Faculty tenure is not based on solid evidence that students have learned anything from their courses. Ed tech is expected to be the silver bullet for many of the challenges of higher education, so decision makers are under pressure to deliver on these expectations.

As part of the EdTech Efficacy Research Academic Symposium, we wanted to know how ed tech decisions are being made and what can be done to support and improve that process. To that end, over the past year we interviewed 52 decision makers in higher education, ranging from presidents and chief information officers to directors of digital and eLearning.

We found a community buffeted by a variety of influences, facing a complex decision-making process that could be improved by tailoring the research gathered to match the context and magnitude of the decision.

Here are just some of the things we discovered.

Decision makers struggle to process an excess of information on ed tech products and trends

Decision makers diligently and constantly gather information, largely from colleagues, whether at their own university or at other institutions of higher education, and from ed tech–related networking events such as conferences and consortium meetings. While there is safety in being a “near-follower,” there is also a risk of becoming trapped in a higher education echo chamber. The challenge is that the information gathered through these informal connections is rarely grounded in rigorous evidence of ed tech effectiveness.

Institutions identified as ed tech opinion leaders, change makers, and innovation leaders were also the ones most likely to step outside higher education circles and talk to startups and other organizations about how to solve challenges with technology and how to overcome impediments to productive implementation.

If we want to see more of this kind of innovative culture in higher education, people will need incentives to take risks, support in collecting good-enough evidence to make decisions, and room for trial and error.

We found that improved decision-making should focus on needs, involve multiple stakeholders, and look for solid evidence

Tension exists at many institutions between starting the decision-making process with needs and starting with solutions. In some instances, institutions follow a (more or less) rational model of decision-making, first identifying needs and then looking for appropriate ed tech tools to address them. Others start with the ed tech tools and try to match them to unsolved problems, whether or not there is evidence to suggest they are an appropriate solution; this is the garbage can model of decision-making, in which solutions go looking for problems to attach themselves to. Still other institutions work from both ends, keeping track of ongoing needs while monitoring available solutions.

A common theme that arose in our interviews is the need to obtain buy-in from all those who will be involved in implementing and using the product. The challenge is balancing buy-in with efficiency and focus. Nonprofits aim to build buy-in during the decision-making process, sometimes spending excessive amounts of time, money, and effort building consensus for choices between only marginally different product options. For-profits are more likely to make a decision centrally, but sometimes too swiftly to allow for adequate involvement of stakeholders or anticipation of implementation challenges.

We have seen some decisions deferred to departments and individual faculty members, particularly for items that cost little but facilitate the work of researchers and educators. This shift brings positives and negatives. Freedom of choice and freedom from “red tape” are welcome, but they lead to redundant functionality among the tools acquired. IT struggles to support countless products without having a chance to vet them. And buyers unknowingly click through license agreements that violate regulations on issues such as data privacy. Finding ways to standardize and streamline ed tech acquisitions is a priority.

There is little doubt that ed tech decisions should be made collaboratively by a mix of administrative and academic leaders and IT experts. Adequate attention must be paid up front to the potential demands of scaling up desirable applications. These include change management and estimating total cost of ownership. It’s not just the purchase price of the product that needs to be considered; there’s also ongoing support, training, and expanding digital infrastructure. Currently, ed tech decision makers rarely ask for evidence that an ed tech product will improve student learning. A culture of continuous improvement needs to be built through iterative ed tech decision-making cycles.

While research is happening, it should match the context and magnitude of the decision

All ed tech decision makers conduct research, loosely defined, to inform their decision-making. Most commonly, this involves gathering input from faculty, staff, and students about their ed tech–related needs and experiences, and reviewing student outcomes after implementing an ed tech strategy or product. But the emphasis is more on user experience and whether the technology is well implemented than on whether it improves student learning.

An abundance of digital data may yield the perception that ed tech decisions are being made based on evidence, but, as many researchers would argue, data are only as useful as the questions that are asked of them.

Scientifically based research relevant to learning through ed tech is rarely consulted. This is partly because so little exists, but also because there appears to be a strong preference among higher education decision makers for locally produced information.

Duplication of effort occurs, with many of the same ed tech products being piloted at multiple institutions. There is clearly room for an online repository for sharing the results of ed tech pilots and studies. A set of guidelines for robust design of pilot studies would also be helpful, for example, recommending the inclusion of comparison groups and an emphasis on measuring actual student learning rather than only grades earned or courses completed.

Institutions should also collaborate to conduct multi-site pilots. These efforts could collect common indicators of success at large scale and across diverse users and contexts. To streamline the ed tech selection and procurement process, the online repository could be combined with a platform that facilitates ed tech acquisitions, along the lines of the University of North Carolina’s Learning Technology Commons.

Funders could support the production of better research evidence to inform ed tech decision-making by establishing tiered levels of funding for ed tech research, with the degree of methodological rigor mirroring the level of higher education investment in the product. For example, the acquisition of a software package that costs $20,000 might merit a few faculty and student tests in a user experience lab. On the other hand, adaptive learning systems in which universities might collectively invest hundreds of millions of dollars would merit a large-scale, multi-site randomized controlled trial to assess their impact on student learning. Large investments should also be optimized by a commitment to iterative evidence-gathering to inform continuous product improvement.

Our interviews with these 52 key stakeholders were just the beginning of our work to better align ed tech efficacy research with the decisions being made by practitioners and institutions. Our hope is that those involved in these critical decisions, which affect students, institutions, and ultimately student success, will engage in our ongoing work and help figure out how to continuously improve the decision-making process for these important tools.

Note: The interviews referred to in this article were part of a study, “EdTech Decision-making in Higher Education,” conducted by Working Group B for the EdTech Efficacy Research Academic Symposium held in Washington, D.C., in May 2017. The full report for this study can be found here.
