
Podcast: The Premortem on AI in Education

Rebecca Winthrop joins Class Disrupted to discuss anticipating the negative impacts artificial intelligence may have on students before they occur.


Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.

In this episode of Class Disrupted, hosts Michael Horn and Diane Tavenner chat with Rebecca Winthrop, a senior fellow and director at the Brookings Institution, about the impact of AI on education. The conversation kicks off by highlighting Rebecca’s idea of a premortem approach, which involves anticipating the negative impacts of AI before they occur and strategizing how to mitigate these risks. They identify key concerns such as offloading critical thinking, manipulation, and the effects on socialization — and consider how this technology might catalyze a rethinking of the purpose of education.

Listen to the episode below. A full transcript follows.

Michael Horn: Hi everyone, this is Michael Horn. And what you’re about to listen to on Class Disrupted is the conversation Diane and I had with Rebecca Winthrop. Rebecca is the coauthor of a terrific new book, The Disengaged Teen. She is the head of the Center for Universal Education at the Brookings Institution, and she has helped stand up a global task force there on AI and education, which forms the basis for our conversation today. Rebecca brings forward a couple of interesting perspectives that I want to highlight here. Number one, the importance of doing a premortem on the impact of AI in education. And as she said, a premortem doesn’t focus on the optimistic case for AI. It fast-forwards the story to say: knowing what we know now, let’s get ahead of this, imagine the negative impacts from AI and then guard against them.

Second, in her mind, the big premortem risks to worry about are three things. Number one, we can offload cognitive tasks to AI, but as she said, the child development people don’t know what kids have to do on their own and what actually can be offloaded to AI without harmful consequences. Second, she worries about manipulation. And third, she worries about the impact of AI on socialization. One thing I’m leaving this conversation with is Rebecca’s hope, I guess I would say, that AI can be the thing that spurs us to have a national dialogue around the purpose of education so that we can really rethink what schooling looks like. Is that the way that this happens? Is it such a big shock that we’ll all come together and have these conversations? Or is it more likely that the real action around system reinvention or system transformation will occur from the grassroots? That is, in individual communities, education entrepreneurs create new forms or systems of schooling that gain traction over time as more and more people migrate to them, and we are left with a series of different systems with a series of different purposes. That’s the question I’ll leave thinking more about from this episode that you’re about to hear. I hope you enjoy.

Michael Horn: Hey Diane, it is good to see you in a school as well. That is probably pretty energizing. And I will say, on this show, the hits keep on rolling. I’m loving all that our guests, who have such different vantage points on the question around AI and education, are bringing, and I am very certain today will be no different.

Diane Tavenner: I couldn’t agree more, Michael. And as those interviews start to become public, we are now hearing from our listeners, which we love. Honestly, it’s one of the best parts of doing this podcast, besides getting to have really fun conversations with you and geeking out.

Michael Horn: I’m okay taking a backseat to the listeners.

Diane Tavenner: But I hope we keep hearing more questions and suggestions, especially at this time in the season when we start to think about what’s next. But before I get too far ahead of myself, we have a real treat here today. I think we do.

Michael Horn: Indeed. We have my friend Rebecca Winthrop on the show. Rebecca is a senior fellow and director of the Center for Universal Education at the Brookings Institution. Her research focuses on education globally; that’s how I got to know her most deeply. She pays a lot of attention to the skills that young people need to thrive in work, in life and as constructive citizens. So really big, weighty questions. She’s also the co-author, with Jenny Anderson, of a very highly acclaimed new book, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better. Definitely check it out.

AI’s Impact on Education

Michael Horn: It’s obviously sort of a zeitgeist at this moment, sadly. And the book does a great job, I think, tackling it, helping people put in perspective and sort of think about where do I want my kid on these different journeys as they’re learning? And it’s not necessarily what you think the answer might be for those listening. So definitely check it out. For our purposes in this conversation, I will say not only does the book talk a lot about the the themes that we talk a lot about on this podcast, but Rebecca is also spearheading the Brookings Global Task Force on AI and Education, and we will link to that and the book in the show notes. But suffice to say, she’s been thinking a lot about the questions were most interested in, Diane. And I feel lucky we get to record with her because Rebecca has been like getting to hang out with like people like Drew Barrymore. And I think Hoda was at one of your book events, Rebecca, so you are rolling. The book has definitely hit a nerve.

Thank you so much for joining us. It’s great to see you.

Rebecca Winthrop: Oh, it’s a total pleasure to be here. It’s a treat for me, too.

Michael Horn: You can lie if you say that, given all the folks you’re getting to hang out with. But before we get into the approach of your thinking around AI and education and some of the questions that you’re asking, I would love to hear how and why you got interested in this topic in the first place and how you’ve gone about learning about, you know, AI in general and AI in education specifically.

Rebecca Winthrop: Maybe in reverse order: how I’ve gone about learning about it. I think all of us, I would assume all of us (though maybe I shouldn’t make that assumption), are out trying stuff in our own lives. That’s how I’ve gone about it. You know, when something new hits, I just want to check it out. So I’m now a steady user of GPT-4, paying my little, you know, subscription. And it is so much better.

And I’ve tried, you know, the DALL-Es and this and that. Like, PowerPoints. Make an illustration. Do this. What can it do? It’s experiential learning, right? You get a little bit more of a sense of its power and its limitations. Well, maybe that’s just how I learn, rather than just reading the text. So in terms of going about learning about it, the first thing I’ve done is just been playing around with it. And I’m no expert by any means, but it certainly has helped me wrap my head around the massive seismic shift that generative AI is.

And this gets to the first part of your question. The thing I was most, you know, almost emotionally struck by was how crazy it is to be able to interact with a machine in my own words. Before, we had to learn a different language; we had to learn code to interact with machines and make them do things. And now it’s in our own language. And that right there, to me, is a huge fundamental shift that we need to take incredibly seriously. And so from there I started getting really interested in it, because who cannot be interested, if you’re in education and everyone’s talking about it? But also I started being really worried.

I was initially very worried about it because I’d just come out of all this book research Jenny and I had been doing for The Disengaged Teen. And the big highlight message there is that kids are so deeply disengaged in school. And Diane, this has been your life’s work, to find a new way of doing school where they’re not disengaged. So this is not new. And Michael, you have been on the forefront of how to use tech well for a long, long time; I’ve been learning from you for years. So it’s not news to either of you. But this book is a sort of broad audience book.

And we found there are four modes of engagement that kids show up in. Passenger mode: we partnered with Transcend, and for 50% of kids, that’s kind of their experience in middle school and high school. Achiever mode: they’re trying to be perfect at everything that’s put in front of them and end up actually being very fragile learners. Resistor mode: these are the quote-unquote problem kids. That’s who we think is disengaged.

We as a society, broadly. And they’re avoiding and disrupting, but they have a lot of agency, a lot of gumption. And if you can switch their context, they can get into explorer mode. And the thing that I thought about: GPT-3 launched sort of right as we were towards the end of writing the book, and I was so worried that it would massively scale how many kids were in passenger mode if we didn’t do it right, if we didn’t figure it out. Lots and lots of people are doing incredibly good work in different pockets around the globe, and that’s why we launched our Brookings Global Task Force on AI, to try to bring those questions together and bring a slightly different methodology.

The Premortem Approach

Diane Tavenner: Rebecca, that sort of leads into the first place I’d love for us to go, which is, you know, one of the ways that you approach this work is through premortems. For people who don’t know what a premortem is: oftentimes we do postmortems after something, to dissect what went wrong and what went right. The premortem is when you try to do that before you’re even in it, to really visualize and imagine the potential negative impacts that could materialize so we can do something about them before we get there. It’s conceptually a more empowering way of thinking about things. And so I’d love to unpack your premortem thinking about this. And we’re going to start with the positive. So talk us through, if you will, the positive case for AI in education, as you’ve done this premortem forward thinking.

What are you excited about? What’s the possibility?

Rebecca Winthrop: Yeah, well, Diane, I’ll get there on the positives, but I want to talk a little bit about the premortem piece, because what you just did is exactly what everyone in education has done when we started this premortem exercise. In a premortem, you do not start with the positive, which actually has been a problem. The people in education, our people, all of us in our community, are sunny optimists. We believe in the potential of human development. And every time we did the proper premortem, starting with the risks (there’s a whole science behind premortem thinking), people rebelled.

They didn’t like it, they felt uncomfortable. So anyways, that’s an interesting observation. But the idea of the premortem came out of discussions we’d been having internally. It actually came out of a meeting almost a year ago, last February, a great meeting with our leadership council. We have a leadership council at our center, and HP hosted us. We were in the Hewlett garage and it was amazing. And then we did a broader conference, and we were just around the table trying to figure out how to wrap our hands around how different Gen AI is and what it means for education, knowing that there are incredible conversations happening in a range of other pockets. And one of the things that I believe strongly in is that we should always look broadly across sectors, not just our own; a solution can come from anywhere.

So we looked even outside of our sector: the health sector, and in this case cybersecurity. The premortem is a typical thing done in other sectors, cybersecurity being one. And your listeners might know better, but we can’t find a single instance where it’s been done in education. I actually think we should do it for every tech product before we roll it out. And it basically is: let’s figure out how it could all go wrong.

And then put that all on paper, and then figure out how to mitigate those risks so it doesn’t all go wrong. And we should have done this with social media 10 years ago. If we’d had child development folks, educators, teachers, therapists and counselors sitting around the table designing social media with developers, I am sure we could have avoided at least 70% of the harms. Now, would companies have gone along with it? That’s a different, you know, question. Let’s parenthesize that. These are things you can do if you go through a very systematic thought process, and we have an incredible colleague, Mary Burns, working with us and leading this. It’s a very systematic process for thinking about the risks. Yeah, you want to speed up and go straight to the benefits.

Diane Tavenner: Flip it. We don’t have to follow that. Like, let’s flip it. And so let’s start with that. Like, I mean the worst case scenario of a premortem is the patient dies.

Rebecca Winthrop: Right.

Diane Tavenner: And so what’s the equivalent of the patient dying for AI in education? Make that case for us, and yeah, let’s do it in that order.

Rebecca Winthrop: Yeah, the premortem is like moving the autopsy forward: how could they die? So I want to caveat this (you guys have thought about this deeply, so please chime in with your own versions) by saying we are in the midst of the premortem research on the risks side, which includes lots of focus groups with educators, with kids, with ed leaders, our steering group members, etc. So this is going to be the Rebecca version, not the entire task force’s. A few of the things on the risks side that give me pause come from talking to a number of colleagues on our team who are learning scientists and neuroscientists, and to other colleagues outside of Brookings who know child development, know brain science, know brain development.

And as far as I can tell, we do not know. We, the royal we, the people in child development, do not know what are the things that kids have to do on their own to develop critical thinking, agency, key skills, and what could be offloaded to AI. And just saying that, I’m like, oh my God, I’m so nervous. I’m really nervous. I’m nervous for my kids, I’m nervous for the students of the world, because obviously Gen AI can do so much for us. One of the main ways that kids develop critical thinking through education at the moment, let’s pretend, is learning to write an essay: forming a thesis statement, picking evidence that supports their argument, putting it in logical order. And let’s be honest, what seventh graders produce as essays is not a great contribution to humanity. The value is not the product of the essay.

Critical Thinking in the Age of AI 

Rebecca Winthrop: It’s the process that they have to go through: that logical thinking process, understanding how you parse truth from fiction. It’s as basic as that. Where is data? What is evidence? How do you analyze it to make arguments? There may be another way to develop that critical thinking skill, but at the moment that’s one of the main ways, and until we come up with another way that all kids can do, it makes me very nervous that kids will basically offload critical thinking development to Gen AI. That’s the thing I’m most worried about. And the second thing is, I mean, we are at the tip of the iceberg with what this technology can do. I am sure we’re going to have all sorts of incredible things in the next seven years that we can’t even imagine. Things that are straight up Star Trek.

Right. With neural links, you know, being able to talk to technology; we can already do that. And, you know, robotic, R2-D2-type scenarios. So I do worry about manipulation, and I do worry about socialization, interpersonal socialization, because we see what just a phone, a flat screen, text message interaction does to kids’ ability to interact face to face. So those, to me, are the three things that I’m most worried about. But the first one is what makes me really worried.

Are you guys worried about that? Like how do you, how are you thinking about this?

Michael Horn: Oh, I love when you turn it back on us. We’re asking all you folks so we can develop a point of view on this. The quick answer for me is yes, I am nervous about it, given that with the current way schooling is designed, we have not thought about how to mitigate it. Which maybe is my chance to turn it back into a question for you, which is: part of the premortem is identifying mitigations. All three of these risks, I think, are big. Manipulation is big. Socialization: we had an entire episode on that question and what relationships look like in the future, forget about schooling for a moment, with AI bots.

Yeah. Right. And so I guess, having identified those as three big ones: what should we do? What’s the mitigation piece, structurally, assignment-wise? How do we think about this so that we don’t live right into those risks?

Rebecca Winthrop: Yeah, we haven’t gotten there yet in the task force. So this again.

Michael Horn: Yeah, just speculation.

Yeah, well, but let me sharpen the question, actually, Rebecca, because you just wrote this big book, or I should say important book, The Disengaged Teen, where you thought a lot about the negative implications of being in passenger mode and the sort of listlessness which I think could be a byproduct of maybe all three of these. Certainly two of the three. And so how have you thought about that?

Rebecca Winthrop: Yeah, well, I’m going to take your question broadly, Michael. For me, there’s a sort of sequence of levels of things we have to think about. The biggest thing, and you guys have talked about this on your podcast, is really thinking through and being very clear when we’re talking about adult-mediated use of AI (particularly Gen AI, less so predictive AI) versus student-mediated or child-mediated use. And I mean that for right now. We’re in a massive point of transition; we will eventually come to some new normal. But in our current transition, the discourse around AI and education is so fuzzy and flimsy and unrigorous. You guys are great because you’re surfacing that.

And so often we hear, you know, AI can transform education, it’ll be great. And people reference things, and I think, you know, it depends. Certainly in the discourse from technologists, it’s true that AI can transform many, many things. It’s unbelievable. Protein folding: incredible. Spotting viruses in wastewater: amazing.

Just rapid breakthroughs that are incredible. And all of those are run by adults who have deep critical thinking and subject matter knowledge and are using the AI as a tool. That’s very different. And then the discourse goes: and then we’ll just give it to schools and it’ll be great and kids can blah, blah, blah. And it’s like, no. Give it to schools to do what? Let’s be very clear: is it helping teachers massively teach better, or is it helping them do the same thing more efficiently? Diane, you’ve made this point: those are two different things.

And it’s very different from just blanketing Gen AI across pedagogy for students to use. Take the example of the essay. First of all, kids don’t have the content knowledge to understand what it produces. I’ve spent my whole, you know, 20 years talking about academic skills plus, and now I’m like, oh my God, let’s not forget about the content knowledge. How will kids know how to assess, the sniff test, does this seem right?

Michael Horn: Actually, can we put a pin in that just for one sec? Because that’s interesting. You’ve been pushing us to be like, okay, not knowledge just for its own sake, but to do these skills. And now you’re worried we might all sort of blow past it and forget that the knowledge actually is an important base. Am I hearing you right?

Rebecca Winthrop: 100%. I’ve been absolutely pushing, which you know you both have too, the bringing together of knowledge acquisition with knowledge application. And I do think if we do it right, that’s to me the sunny possibility with Gen AI: maybe it could bring those two things closer together in a more scalable, systematic, education-system-wide effort. But I am very worried that people will forget about the knowledge acquisition piece, and that is very scary.

Learning Systems

Diane Tavenner: Can we stay here for a minute? Because I keep asking people to think about the system, and no one seems to want to go there with me. You’re the first person, so sorry, I can’t help myself. I’m so excited that someone wants to actually talk about a system, especially in this space, because, you know, I love this space. So you’re thinking there’s this process of acquiring knowledge, and I think we’re aligned on this: knowledge for knowledge’s sake is not super useful if you don’t have skills. What are you doing with that knowledge? Are you analyzing it? Are you making an argument? So paint me a picture of how AI might help bring those closer together in a learning system, if you will. Can you imagine that?

Rebecca Winthrop: I’m not sure I have a clear vision at the classroom level, but I have a clearer vision at the system transformation level.

Diane Tavenner: Okay, okay, that’s great.

Michael Horn: Let’s go there.

Rebecca Winthrop: So in system transformation theory, there’s the real shifting of the purpose of a system, which is the hardest lever. This is straight up Donella Meadows systems theory, which argues, for listeners who aren’t familiar, that there are different levers to shift systems sustainably. Some of them are shifting how we measure things, shifting how we allocate resources. Those are all important and good, but people who shift systems, certainly in education, tend to get stuck there. Which means: let’s shift our assessment, which is important, we need to do it; let’s shift how we allocate money. But it’s much harder to really shift a system that way than if you shift the shared vision and purpose of what an education is for. And that’s a cultural shift. It’s a mindset shift. And underneath it, it includes shifts in power dynamics. So to me, Gen AI provides an opportunity to be a lever to shift the purpose of ed. If ChatGPT and any other Gen AI tool can pass all the exams we’re gatekeeping our systems with, can do most of the assignments, and if it can’t do it now, it will, you know what I mean? It’s going so fast. So then we have to. It will force us.

It is forcing us, which is part of the big discussion in this and why we did this Brookings task force, to think deeply about what is the purpose of education. I mean, it’s a massive freaking logistical enterprise getting all the kids in a jurisdiction to a place at the same time of day. It’s incredible what schools do logistically. So what are we doing with that? It might be hard to break that up until we have a different world of work, because schools are also doubling as childcare in every single country in the world. It’s the largest nationalized, government-supported child care system. So I’m not sure we’re going to just have kids roving around the world.

Reevaluating Education’s Purpose

Rebecca Winthrop: But if we have something we’re doing with kids at certain hours of the day, what is the purpose of it? Is it to identify a problem in their community and then start working backwards on what needs to be fixed, and they need to fix it and try to learn the stuff, the content knowledge they may need, that would inform how to fix it? And teachers are scaffolding and, you know, curating problem-solving expeditions, and that’s the core of what we do. And you learn knowledge along the way, and you’re using Gen AI as a dialogue agent. I mean, I think Khanmigo is really interesting, and I think it’s a useful use case of how a student-facing interface could be helpful. But more than that, does it free up teachers’ ability to teach differently? Because I don’t think we will get away from teachers, nor do I think we should get away from teachers, because the human connection piece is so crucial. So to me, it really requires deep thought about what’s the purpose of education now.

Like, we can’t just keep going along, assigning the same tests and trying to ban cheating, which is a short-term, totally understandable emergency response, because we don’t know what we’re doing and we haven’t got our hands around this. And boy, I wish, you know, tech companies would have given school districts a heads up.

Diane Tavenner: Yeah, maybe. I’m not sure that would have mattered. I must say, I do love what you’re saying. You know, years ago we created this whole experience for educators to go through: how do you create an aligned school model, sort of an elegant model? And literally, step one is to determine the purpose of education. So you’re speaking my language here. And it’s an interesting thought that this could be the lever that forces us to rethink, because the purposes it’s serving right now are so obviously met in some other way that we don’t have a choice. We have to revisit that. It’s a fascinating way to think about how it could drive system change.

Rebecca Winthrop: Just on that, Diane: Jenny and I, in our book The Disengaged Teen, make a meta argument around why engagement matters. And really we’re focused on explorer mode; we all need more time in explorer mode, which is agentic engagement, the marriage of agency and engagement. Our big argument is that it’s really time to move from an age of achievement to an age of agency in education. And we are seeing the age of achievement fraying. We’re seeing it in mastery and competency-based approaches, in the College Board shifting up its ways of assessing with new AP test versions. We’re seeing it fraying, and Gen AI, I think, just accelerates the fraying of the age of achievement, which is all about content acquisition and synthesis, and skills within that, and sort of repetition back out, but really following instructions.

Diane Tavenner: Yeah. Talk for a moment about the benefit of an age of agency. What does that look like? Why is that a direction we would want to go? And how does maybe AI support it?

Rebecca Winthrop: Right. I’m not sure; I think it could go either way at the moment. I think it really depends on how we use it. But when we talk about an age of agency, the piece that we are really leaning into is all the evidence around agentic engagement, which, you know, Diane, Summit, you designed for agentic engagement. This is the idea that kids have agency over their learning and an opportunity to influence the flow of instruction in little or big ways. Summit is on the extreme end. That’s a total redesign.

But you can do it in schools. Educators can do it in their classrooms by giving choice, by asking for feedback, by asking kids before starting a lecture: where do you want to start? Do you have any questions about this topic? We’re doing the solar system, where do you want to start? Just that shifts the entire mindset of a learner, right? Much more engaged. So, A, they’re more engaged; B, they’re developing skills to really be able to independently chart their learning journey, which is what they’re absolutely going to need when they leave school. No one will be, you know, spoon-feeding them. And we see that in the kids who knock it out of the park in the age of achievement. We found so many kids in our research who were excellent achievers in school and fell apart in college because no one is there, you know, spoon-feeding them.

And so for us, the other piece is they’re more engaged, they’re getting agency over their learning, they’re building much better skills and they’re much happier. It’s so much more fun to have some autonomy and ownership over your life and to try to be the author of your own life. Those are all the reasons why we think it is really imperative, and Gen AI has accelerated this need, because more than ever now, kids are going to have to navigate this world where you’ve got Gen AI, you’re going to have advanced robotics, you’re going to have neural links; soon we’re going to be, I’m sure, interacting with, you know, new robotic people. It’s a wild world that’s coming down the pike, and our kids need to lead it rather than be led by it.

Diane Tavenner: Yeah. Michael, I feel like I’m hogging all the time. Do you have a question?

Michael Horn: Well, maybe a last question before we wrap up, Rebecca, which is: let’s say we have the purpose conversation, if not nationally, at least in strong pockets of communities; we commit to an age of agency and we start to think about what that is. Where does AI fit? You’ve been impressed by it in certain cases. What’s the positive case to be made for it in this rethought purpose of schooling with a coherent design?

Rebecca Winthrop: I mean, the thing that I am most potentially optimistic about, and I know, Diane, I think you disagree with me: in the age of agency, if we’re rethinking the purpose, a huge barrier to that is teacher expertise, practice and prep. And we’ve got a ton of teachers who’ve been trained for the age of achievement, and it is not their fault. They’re teaching their heart out and they’re doing their job. And, you know, we’re very clear in the book that this is not a problem with teachers. They’re squished from above by the system and squished from below, frankly, by parents pressuring them. And so could Gen AI really unlock teachers’ ability to be experts in a new model? Let’s pretend the school is organized around solving problems; I think a huge piece of that needs to be around citizenship and civics, sort of personal, collective and community-wide problems.

But I feel like, if done well, it could really be a massive boost for educators, so it isn’t so scary and they’re not thrown into an entirely new purpose of education, an entirely new system with different ways of succeeding, without some serious support.

Michael Horn: No, that’s super helpful. I like the vision in general. What I’m taking from this conversation is that, whereas it’s kind of hard to have these national dialogues, or dialogues even in communities, around purpose, maybe AI is such an abrupt, big shift that it actually brings us to the table to say, what the heck are we doing here? Because every single one of the stakeholders is like, this ain’t working. So let’s talk about what we’re actually trying to accomplish here. Maybe we’ll leave it there, Diane, and shift to the last part. Rebecca, we have this tradition that our listeners enjoy. Yep. For better or worse.

They keep lists, apparently, of what Diane and I have read or watched. But we want to hear yours. What are you reading, watching, or listening to, often outside your day job? But it’s okay if it intersects with it.

Rebecca Winthrop: Well, I don’t watch much, I must say, except for Shrinking, which I rushed through. Loved it, loved it, loved it. That was the best.

Michael Horn: Incredible.

Rebecca Winthrop: I can’t wait for the next season. But I actually don’t watch a lot of stuff. I do love to read, though, so I have two things here. One is Unwired: Gaining Control over Addictive Technologies by Gaia Bernstein. It’s awesome.

She’s a lawyer at Seton Hall, and it’s a really good book; I’m not all the way done. And then the other one is a novel called Dust by Josh Classy that just came out. It’s sci-fi, like a new Lord of the Rings.

Michael Horn: Oh, cool.

Rebecca Winthrop: Wow. Wow.

Michael Horn: All right. I like that.

Diane Tavenner: Yeah, I like that too. That’s fun. Well, I have one this week. I was telling Michael, you know, he’s not the only author fanboy; I can be a fangirl too. This week I met a woman named Samara Bay, who has authored a book called Permission to Speak: How to Change What Power Sounds Like, Starting with You. She’s fascinating. I got to have coffee with her last week, and we did a joint book club: we swapped books and then got to sit down and talk about them.

I know, super, super fun. She’s had this incredible journey. She wanted to be an actor, became a dialect coach, and worked with tons of famous people like Gal Gadot, et cetera. Now she has turned her passion for helping people toward those who are really trying to drive impact in the world, helping them find their voices in public speaking. And, you know, here’s the inside secret: figuring out how to get out of your own way is really the secret to it. It’s a beautifully written book.

It’s also a super practical guide in many ways, so I highly recommend it. I’m really enjoying it.

Michael Horn: Awesome. Awesome. Diane, I realized the podcast recordings are starting to outpace my ability to keep up with the reading and so forth. And like Rebecca, I’m not a huge TV person outside of sports and Shrinking. So yes, there we go.

But I’m almost done with a book, Task versus Skills: Squaring the Circle of Work with Artificial Intelligence by Mark Stephen Ramos. He was the chief learning officer at Cornerstone, is no longer there, but has been starting to do some writing and thinking about how AI changes our learning organizations, or organizations where people need to be upskilling and reskilling. So far it has been interesting, deeply technical, and I’ve kind of enjoyed it. And I’m not at all getting outside of work, so apologies on that, but no apologies for having Rebecca here. This has been fantastic.

Diane Tavenner: Thank you.

Michael Horn: Yeah, thank you so much for joining us. And thank you again to all of you, our listeners. A reminder to check out Rebecca’s book with Jenny Anderson, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better. Check it out, read it, digest it. We’ll have more conversations about it, I suspect. And let’s all stay curious together. We’ll see you next time on Class Disrupted.
