An Ed Tech Insider Pleads for More Equitable Tools
In her new book, Anne Trumbore pushes for innovations that provide ‘more returns to learners than to ed tech investors’

As much as anyone writing about education technology today, Anne Trumbore has had a front-row seat for its development.
As a young person living in the San Francisco Bay Area in the early 2000s, she stumbled into teaching at Stanford University’s experimental Online High School, working with Patrick Suppes, an early innovator in ed tech. His work, reaching back to the 1960s, popularized the notion of computers as “automatic tutors,” a vision now playing out with AI tutors from Khan Academy, Amira Learning and others.
Trumbore openly admits that she “kind of backed into this business,” working in the entertainment field when she landed a summer job teaching writing at Stanford.
“I didn’t have preconceived notions of what technology could or should do in education,” she said. “The animating question was, ‘How can we use this tool?’”
Trumbore eventually joined the Stanford-led team that launched Coursera and ushered in Massive Open Online Courses (MOOCs) in the 2010s.
In her new book The Teacher in the Machine: A Human History of Education Technology, Trumbore calls herself an “ensemble player in the transformation of online education from experimental and low status to ‘innovative’ and ‘disruptive.’ I have also helped make wealthy institutions, venture capitalists, and more than a few professors even wealthier,” she writes.
Trumbore introduces readers to Suppes and to two other key ed tech pioneers of the mid-20th century: MIT’s Seymour Papert and Don Bitzer of the University of Illinois.
Bitzer created PLATO, a groundbreaking, networked course distribution system that could educate up to 1,000 people at once. He and his team developed interactive touch screens and learning management systems, among other innovations.
Papert, a South Africa-born mathematician who studied with the Swiss child psychologist Jean Piaget, applied the latter’s groundbreaking work to education, popularizing the idea that children should learn about computers by programming them. A co-founder of MIT’s Artificial Intelligence Laboratory, he created LOGO, a programming language for children that served as an inspiration for the computer language Scratch and countless young people’s robotics programs.
Despite all the hype surrounding new, AI-enhanced products, most can be traced back to these three pioneers and their teams, Trumbore said. “We make the same discoveries about online education decade after decade because we do not acknowledge — or know — the history of the field,” she writes. “There is evidence that this ignorance is not an accident.”
Trumbore, who worked at the University of Pennsylvania and now creates professional certificate and degree classes at the University of Virginia, recently talked with The 74’s Greg Toppo. She warned that 17 years after the first MOOCs appeared, they’ve failed to bring education to many of the students who most need it: low-income, nontraditional students who could use extra help focusing and mastering difficult material.
Trumbore sees the same dynamic playing out with generative artificial intelligence, warning that universities must be “more clear-eyed about their business partnerships with technology companies, more thoughtful about their motives in distributing education ‘to the masses,’ and ultimately take inspiration from the past’s successes and failures in order to create more equitable educational experiences that provide more returns to learners than to edtech investors.”
This interview has been lightly edited for length and clarity.
I want to actually start with talking about Stanford’s Online High School. So you were an actual teacher?
Yes.
Live classes, obviously projected over many thousands of miles?
Yes.
Was the original conception that it was a high school that just happened to have students throughout the world? Was everybody there at the same time for class?
It was synchronous. The idea came in 2004. We got a grant from the Malone Family Foundation to start thinking about this. There’s a series of schools still called the Malone Consortium, and they share content because not every private school can have a Chinese literature course. And this idea of providing access, connecting students to great instructors, was something Malone had been thinking about and servicing for a while. And in its model there were similarities to what the Education Program for Gifted Youth [now Stanford’s Pre-Collegiate Studies program] was doing with its calculus courses. That whole unit evolved to provide access to students who couldn’t take a calculus course because they lived on Martha’s Vineyard or in Alaska, or someplace where they didn’t have access to it. And so this just was an extension, in some ways, to see if we could do it.
You say the technology wasn’t very good at the time.
It was early, and folks didn’t even have broadband. So that was also a really interesting challenge. Actually before we did the Online High School we started teaching synchronous college classes in the summer, and so that was our beta test case, and I did that for a while. The success of those programs became the basis for the grant application to Malone, which became the basis for, “Let’s do a high school.” And then we formed the full high school, and then we went and got accredited, and it’s thriving today.
That led, in short order, to the phenomena of Coursera and edX. And as you say, you were there as it was taking shape. More than a decade later, what does the MOOC space look like to you? Has it fulfilled its purpose, or has it got the same illness as a lot of ed tech, which is that it sort of lost its way?
I think the answer to both questions is “Yes.”
If you were an idealistic, super empathetic, early proponent and ed tech evangelist, we were opening up the gates of Harvard and Yale and Princeton and Stanford and Penn and all these wonderful places. It is true that today, which was not true 15 years ago, anybody with an Internet connection can log in and see what’s being taught, or a pretty close version of what’s being taught, at some of these schools. And we forget how amazing that was to have access to that. So in that case, the answer is Yes.
The answer is also “Yes” that it’s lost its way, because the business model behind it is a traditional free-market business model: Scale quickly, make profit, follow users, drive engagement. Don’t worry about making it the best learning experience. Worry about making it the biggest, most appealing learning experience and let the customer decide what it is they’re interested in — let the customer drive the content.
I guess that’s where I’ve ended up: I believe in the promise of ed tech. I don’t think that the promise of ed tech and the free-market business model are compatible.
I’m taking a Coursera course. It seems perfectly fine to me. It sounds like you would make the case that it could be better if there wasn’t a focus on profit?
There’s a lot that gets lost when we focus on frictionless delivery of content and not on an education experience. Education is difficult. It’s expensive to provide. That’s why we invented Coursera. I think that for educated folks, or for super agentic, bright kids, it’s wonderful. You don’t need much else. I think the problem comes when we say that Coursera is sufficient as an education. And the folks at Coursera would say it’s not sufficient as an education.
Early in the book, you say, “Just beneath the shiny surface of the latest ed tech marvel is the work of Suppes, Papert and Bitzer and many people on their teams who’ve worked, sometimes unknowingly, to extend the past of educational technology into the future.”
I mean, these were, by our standards, primitive forays. The computers, literally you were having to dial into the mainframe on campus. I guess I want to just make sure I understand the lessons those three have to teach us.
It’s two things at the same time. One is that the goals they had are very similar to what we have today. So one, they’re worthy goals. And two, we haven’t invented the technology to achieve those goals.
We’ve been talking about individual tutors for almost 70 years, 60 years for sure, quite publicly. So this idea that these are necessary pursuits, that this is going to improve education, I think, is a foundation no one questions. No one questions why we would want a computerized tutor. Believe me, I’ve searched. I did find, I think, three articles that are like, “Hey, hold on here.” Folks are all in on this.
For Bitzer, what was amazing about him — and he was the true engineer among the group — was using, literally, duct tape sometimes, putting together this system. But the vision of that system and what it enabled, that it enabled communication among students who were learning at the same time — that’s how we now measure success. “How many people do you have on the platform?” That did not exist. That was like, “What, are you kidding me? In 1962?” Really revolutionary.
And with Papert, this idea that we shouldn’t let the computer program the child, but the child should program the computer, I think, is probably the most relevant to the tidal wave or avalanche of ChatGPT in education, that we need to train all these kids to have AI skills. I’m not saying we shouldn’t, but what are we teaching them? Why are we teaching them this? And is this really the right thing to do? And are they using it as a tool, or are they being used as a tool?
I think these questions are highly relevant, but we forget to ask them, because there is this cycle of funding and social capital for being an innovator and all this stuff that gets really exciting if you’re brand new and chasing the bright, shiny object. Nobody wants to hear about the past.
I want to drill down on Papert. I think he’s the most interesting of the three in a lot of ways, mostly because he had this fascinating background, studying with Piaget. Explain to me how his thinking about “the child programming the computer” is playing out now with things like ChatGPT.
It’s different, obviously, school to school. In some schools, students are using ChatGPT as a tool to create things that are useful to them, not to create an assignment: “Hey, you guys need to get together and design a water pump.” Or “Design a Pixar character and build it with our 3D printer.” And it’s a group of kids working together in a team. And they use ChatGPT to come up with some models. They do that, then they send it to the 3D printer. I think that’s a great use of these tools. And there’s someone guiding them. That’s the children using it as a tool. “Kids, today we’re going to do Lesson 4 of OpenAI Academy,” that’s less good. That’s not using it as a tool.
I firmly believe in what Papert was saying, and I think technology is used best when you use it to empower a learner to be more human and to unlock creativity, and that is very possible with these tools. But that really isn’t how we deploy them. The way they’re being marketed, sold and consumed, it’s faster to just say, “Watch out: You’re not going to get a job if you can’t master these tools. You’ve got to get ahold of these tools and watch some videos and learn some stuff,” rather than the more time-consuming, “Hey, try this thing. Try this thing. Use it to make this. Use it to make that.”
It strikes me that the public conversation around ed tech has a weird format. It’s binary, which maybe is appropriate: People want to talk about things like MOOCs and such as either the most amazing thing ever or a total failure. I’m curious if you have a thought on why that is, and how we can emerge from that?
That dialogue always cracked me up as well. Part of it is innovation’s “violence of forgetting.” I love that phrase, and I end with it in the book. In order for this to have been brand new and amazing, there’s this narrative that’s fueled by cash and investment and people’s eyeballs and all this stuff, that it was a huge success, and then the binary of that, to your point, is that it was a huge failure. Once you add that much money and power into these things, people are not interested in a more nuanced answer, I would argue, to anything. But it’s especially true here.
Again, I think MOOCs have done a lot of good. It’s amazing that they exist. It’s awesome. But they didn’t cure cancer. They didn’t lift the continent of Africa out of the educational attainment that it currently has. At the time, when people were layering on these hopes for MOOCs, we were like, “This is hilarious.” I mean, we were more stunned and like, “Oh my God, more work.” But in retrospect, it was like, “This is kind of nuts that these folks are flocking to Coursera’s offices, and we’re having lunch with Tom Friedman, and he’s writing about it, and you’re so busy making it.” This label gets attached to it, and it’s like, “Is this what we’re doing?” And it’s intoxicating to believe that.
I have to apologize, maybe not on behalf of Tom Friedman, but on behalf of journalism. I think we’re part of the problem. I take your point that, on the one hand, it’s amazing that these things exist, but on the other hand, they didn’t cure all these ills. What was the accomplishment? I think there was a lot of hope during the pandemic that this was a world that was going to save us from that catastrophe, and it didn’t turn out to be true, mostly.
So two parts: One, I do think certainly in America we really love to give our power away to technology. We just love it. It’s over and over and over again, starting with the camera. You can look back to anything: the washing machine, cake mix — any of these efficiency-solving technologies that come out, particularly during the course of the 20th century and into the 21st. We tend to take a pretty passive stance toward them and imbue them with all these characteristics that they’re almost God-like. They’re going to save us from ___. And that’s great marketing copy. And those two things are interlinked. But we love doing it, because look at any article about ChatGPT on any given day, and it’s the same idea. So there’s something in our national consciousness that really wants to believe that there’s this amazing technological solution just around the corner that is going to cure everything. Twitter was supposed to democratize democracy. So I think that’s part of the problem.
But what MOOCs did, and why they’re great, is that if I want to know about something, and I want to know more than just asking ChatGPT or doing a Google search or looking at Wikipedia, I can log on and for free, or for a relatively modest fee, I can learn about this stuff. I mean, just even seeing the modules mapped out, if you’ve never taken Python, and you’re like, “I don’t even know what Python is,” and then you look at Chuck’s [University of Michigan professor Charles Russell Severance’s] course, and you see, “Oh, now I kind of know what a computer science course looks like.”
Right.
That’s amazing. And I think the nearness of it is also really interesting, that people feel like it’s so much more approachable now, and not as exclusive. I think that was some of the good that came out of it; and the fact that companies are offering these to employees so that they can learn how to do things better or use them for their own self-actualization or to make themselves better at work, I think is great.
And then from some work I did when I was at Penn: People were something like 600% more likely to say “I think I’m going to apply to a higher ed program” after completing one of these courses. People’s conception of themselves changed after they were able to complete a course.
I don’t want to rain on that parade, but I do want to bring up a point you make in the book, which is that a lot of times the people who take advantage of this or benefit from it are people with a lot of agency already.
One hundred percent. They give additional agency to those who have it.
How do we solve that?
I think that providing education in a format that worked for people in, say, the top 20% of the intellectual distribution and of access to higher education — I mean, the first MOOCs were modeled on courses at Princeton, on courses at Stanford — that is not, by definition, accessible education for everyone. Letting everyone into the classroom doesn’t mean they’re going to get it or understand it. And that, I think, underlies that idea, because the inventors, the funders, many of the initial employees, are all part of that group. They all went to the same collection of colleges. They all know people who know people from there. So this is a way of education that works for them. It’s great to scale that.
So maybe you catch some people who truly are excluded only because of geography or finances, but they know how to learn that way. That doesn’t begin to address the vast number of people who don’t learn that way, who by the time they’re in the third video, they’re texting or asleep or bored or checked out. It can’t possibly solve the problem. One thing I often say in talks is that if access to education were to solve the world’s education problems, we wouldn’t have all these institutions of higher education, because libraries would have solved everything. Andrew Carnegie had the answer.