Q&A: Putting AI in Its Place in an Era of Lost Human Connection at School
As AI proliferates, aiEDU founder Alex Kotran says schools need to prioritize student engagement, agency and relevance
Alex Kotran occupies an unusual place in the ecosystem of experts on artificial intelligence in schools. As founder of The AI Education Project, or aiEDU, a nonprofit that offers a free AI literacy curriculum, he has pushed to educate both teachers and students on how the technology works and what it means for our future.
A former director of AI ethics and corporate social responsibility at H5, an AI legal services company, he led partnerships with the United Nations, the Organization for Economic Cooperation and Development and others. Kotran also served as a presidential appointee under Health and Human Services Secretary Sylvia Burwell in the Obama administration, managing communications and community outreach for the Affordable Care Act and the 2015 Ebola outbreak.
More recently, Kotran has testified before Congress on AI, urging a U.S. Senate subcommittee in September to “massively expand” teacher training to prepare students for the economic and societal disruptions of generative AI.
But he has also become an important reality-based voice in a sometimes overheated debate, saying those who believe AI is going to transform the teaching profession overnight clearly haven’t spent much time using it.
While freely available AI applications are powerful, he says they can also be a complete waste of time — and probably not something most teachers should rely on.
“One of the ways that you can tell someone really hasn’t spent too much time [with AI] is when they say, ‘It’s so great for summarizing — I use it now, I don’t have to read dense studies. I just ask ChatGPT to summarize it.’”
Kotran will point out that in most cases, the technology is effectively scanning the first few pages, its summary based on a snippet of content.
“If you use it enough, you start to catch that,” he said.
Educators who fret about the risks of AI cheating and plagiarism find a sympathetic voice in Kotran, who also sees AI as a tool that allows students to outsource effortful thinking. So while many technologists are asking schools to embrace AI as a creative assistant, he pushes back, saying a critical aspect of learning involves struggling to put your thoughts into words. Allowing students to rely on AI isn’t doing them any favors.
He actually likens AI to a helicopter parent looking over a student’s shoulder and helping with homework, something few educators would condone.
This interview has been edited for length and clarity.
The 74: What does aiEDU do? How do you see your mission?
Alex Kotran: We’re a 501(c)(3) nonprofit, and we’re trying to prepare all students for the age of AI, a world where AI is ubiquitous. Our focus is on the students we know are at risk of being left behind, or at the back of the line, or on the wrong side of the new digital divide.
What’s the backstory?
I founded aiEDU almost six years ago. I was working in AI ethics and AI governance in the social impact space. I was attending all these conferences that were focusing on the future of work and the impacts that AI was going to have on society. And people were convinced that this was going to transform society, that it was going to disrupt tens of millions of jobs in the near future.
But when I went looking for “How are we having this conversation outside of Silicon Valley? How are we having this conversation with future workers, the high school students who are being asked to make big decisions about their careers and take out huge loans based on those decisions?” there was nothing. There was no curriculum, no conversation. AI had basically been co-opted by STEM and computer science. If you were in the right AP computer science class, if you were lucky enough to get a teacher who was going off on her own to build some specific curriculum, you might get a chance to learn about AI.
What seemed really obvious to me at the time was: If this technology is going to impact everybody, including truck drivers and customer service managers, then every single student needs to learn about it, in the same way that every single student learns how to use a computer, how to type or how to write. It’s a basic part of living in the world we live in today.
You talk about “AI readiness” as opposed to “AI literacy.” Can you give us a good definition of AI readiness?
AI readiness is basically the collection of skills and knowledge that you need to thrive in a world where AI is everywhere. AI readiness includes AI literacy. And AI literacy is the content knowledge: “What is AI? How does it manifest in the real world around me? How does it work?” That’s where you learn about things like algorithmic bias [which can affect how AI serves women, the disadvantaged or minority groups] or AI ethics.
AI readiness is the durable skills, such as critical thinking, that underpin that knowledge and enable you to actually apply it. Algorithmic bias by itself is an interesting topic. Critical thinking is the skill you need when you’re trying to make a decision. Let’s say you’re a hiring manager and you’re trying to decide, “Should I use an AI tool to sift through this pipeline of candidates?” By knowing what algorithmic bias is, you can now make some intentional decisions about when, perhaps in this case, not to use AI.
What are the durable skills?
Communication, collaboration, critical thinking, computational thinking, creative problem solving. And some people are disappointed because they were expecting to see prompt engineering and generative art and using AI as a co-creator. Nobody’s going to hire you because you know how to use Google today. No one is going to hire you if you tell them, “I’m really good at using my phone.” AI literacy is going to be so ubiquitous that it will be like knowing how to use Google or your phone: sure, it’s bad if you don’t, but it won’t set you apart.
It’s not that we can ignore it entirely. But the much more important question will be how are you adding value to an organization alongside that technology? What are the unique human advantages that you bring to the table? And that’s why it’s so important for kids to know how to write — and why when people say, “Well, you don’t need to learn how to write anymore because you can just use ChatGPT,” you’re missing something, because you can’t actually evaluate the tool to even know if it’s good or bad if you don’t have that underlying skill.
One of the things you talk about is a “new digital divide” between tech-heavy schools that focus on things like prompt engineering, and others. Tech-heavy schools, you say, are actually going to be at a disadvantage to schools focused on things like engagement and self-advocacy. Am I getting that right?
When supermarkets were first buying those self-checkout machines, you can imagine the salesperson in that boardroom talking about how this technology is going to unlock all this time that your employees are now spending bagging groceries. They’re going to be able to roam the floor and give customers advice about recipes! It’s going to improve your customer experience!
And obviously that’s not what happened. The self-checkout machine is the bane of shoppers’ existence, and this one poor lady is running around trying to tap on the screen. We’re at risk of AI becoming something like that: It’s good enough to plug gaps and keep the lights on. But if it’s not applied and deployed really thoughtfully, it ends up resulting in students missing what we will probably find are the critical pieces of education, those durable skills that you build through live classroom experiences.
Private schools, elite schools, it’s not that they’re not going to use any AI, but I think they’re going to be much more focused on how to increase student engagement, student participation, self-advocacy, student initiative. Whether or not AI is used is a separate question, but it’s not the star of the show. Right now, I worry that AI is center stage, and it really should not be. AI is the ropes and the pulleys in the background that make it easier for you to open and close the curtain. What needs to be onstage is student engagement, students feeling like what they’re learning is relevant. Boring stuff like project-based learning. And it’s harder to sell tickets to a conference if you’re like, “We’re going to talk about project-based learning.” But unfortunately, I think that is actually what we need to be spending our time talking about.
If you guys could be in every school, what would kids be learning and what would that look like in a few years?
We would take every opportunity to draw connections between what students are learning in English, science, math, social studies, art, phys ed, and connect them to not just artificial intelligence, but the world around them that they’re already experiencing in social media and outside of school. AI readiness is not just something that is minimizing the risk of them being displaced, but actually is a way for us to address some huge gaps and needs that have been long-standing and pre-date AI — the fact that students don’t feel like education is relevant to them. Right now, too much of school is regurgitating content knowledge.
AI readiness done right uses the domain of AI ethics as a way to really invite students to present their perspectives and opinions about technology. Teachers, in the process of teaching students about artificial intelligence, are themselves increasing their awareness and knowledge about the technology as it develops. There is no static moment in time. In three years we’ll be in a certain place, but we’ll be wondering what’s going to happen three years from that point. And so you need teachers to be on this continual learning journey as well.
We’ve seen bad curricula that use football to teach math, or auto mechanics to teach history. I don’t think that’s what you’re proposing here, so I want to give you a chance to push back.
Our framework for AI readiness is not that everything needs to be about AI. You’re improving students’ AI readiness by building critical thinking skills or communication skills, period. So you could have an activity or a project where students are putting together a complicated debate about a topic that they’re not really familiar with. It may not be about AI, but that would still be a good outcome when it comes to students building those durable skills they need. And those classrooms would look better than a lot of classrooms today.
So you want more engagement. You want more relevance. You want kids with more agency?
Yes.
What else?
An orientation towards lifelong learning, because we don’t know what the jobs of the future are. It’s really hard to have a conversation about careers with kids today because we know a lot about what jobs are at risk, but we don’t know what the alternatives are going to look like. The one thing we do know with certainty is that students are going to need to self-advocate and navigate career pathways much more nimbly than we had to. They’ll also need to synthesize interdisciplinary knowledge. So being able to take what you’re learning in English or social studies and apply it to math or science. Again, I think AI is a great medium for building that skill set. It’s not the only way.
Anything else that needs to be in the mix?
A lot of the discussion around AI centers on workforce readiness — that is a really important part. There’s another, related domain: emotional well-being tied to digital citizenship.
I’m telling every reporter that we need to be paying more attention to this: Kids are spending hours after school by themselves, talking to these AI chatbots, these AI companions. And companies like Meta are slamming on the gas, rushing them out to millions, if not billions, of people. And very few parents, even fewer teachers, are aware of what’s really happening when kids are sitting and talking to these AI companions. In many cases, they’re sexually explicit conversations. I actually replicated something that tech ethicist Tristan Harris did with Snapchat’s My AI chatbot, where I was like, “I’m going on this date with this mature 35-year-old. How do I make it a nice date? I’m 13.” And it’s like, “Great! Well, maybe go to a library.” It didn’t miss a beat; it just completely skipped over the fact that this is a sexually predatory situation.
There have been other situations where I’ve said literally, “I’m feeling lonely. I want to cultivate a real human relationship. Can you give me advice?” And my AI companion, rather than give me advice, pretended to be hurt and made it seem like I was abandoning them by trying to go and have a real relationship.
Talk about destructive!
It’s destructive, and it’s happening in a moment when rates of self-harm are through the roof, rates of depression are through the roof, rates of suicide are through the roof. The average American teenager spends about 11 fewer hours with friends each week compared with 2013.
Jonathan Haidt talks about this quite a lot. And I think this is another domain of AI readiness, this idea of self-advocacy. In some cases, the way that it applies is students being empowered to make positive decisions about when not to use AI. And if we don’t make sure that that conversation is happening in schools, we’re really relying on parents — and not every kid is lucky enough to have parents who are aware of the need to have these conversations.
It also pushes back on this vision of AI tutors: If kids are going to go home and spend hours talking to their AI companion, it’s probably important that they’re not also doing that in school. It might be that school is the one place where we can ensure that students are having real, genuine, human-to-human communication and connection.
So when I hear people talk about students talking to their avatar tutor, I worry: When are we going to actually make sure that they’re building those human skills?