
LISTEN — Class Disrupted S4 E2: What Does a Real Pilot Look Like in a School?

Horn and Tavenner break down the key points that make pilot programs work… or not



Class Disrupted is a bi-weekly education podcast featuring author Michael Horn and Summit Public Schools’ Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid this pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher (new episodes every other Tuesday).

In this episode, Diane Tavenner and Michael Horn reflect on how, all too often, educators tell them that they're piloting something, but when they dig in, what they're doing doesn't actually sound like a pilot. To make this crystal clear, they put one of Summit Public Schools' current pilots under the microscope to break down just what a pilot is and how to do one well.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey Diane. I feel like my world has snapped back to pre-COVID right now. It happened literally nearly overnight: I'm teaching at Harvard, as you know, and that combined with all the plane travel and being in-person with so many people has me just trying to keep up with a pace that we've said we were going to try to avoid. And, I will say, it's a lot of fun, but it's super dizzying right now.

Tavenner: Michael, I hear you on the travel. I hear you about being in-person with people and we did make promises to ourselves and to each other that we were going to be really discriminating when we were deciding, quite frankly, what we would get on a plane to do. But like you, I’m finding it really hard to say no to seeing people. And in many cases, it’s been such a long time since I’ve seen folks and the in-person connection, it just makes a really big difference on so many levels. And that said, I think we should hold ourselves and each other accountable to a reasonable balance. And so I’m going to get on your calendar so we can do a calendar check with each other.

Horn: Yes to calendar checks. And maybe let's add that as something we can do before we record each podcast. It'll be there to make sure that we have intentionality behind what we're doing. Because as we're now in our fourth season of Class Disrupted, while the massive disruption COVID caused spurred us to originally create this podcast series, because we hoped COVID would be the impetus to bigger changes, we're now intentionally continuing the podcast because, frankly, we want to help educators and school communities intentionally disrupt the way their classrooms have always done learning. See how I worked that in there, Diane?

Tavenner: Yep. I registered.

Horn: So we launched at the start of the pandemic because we both believed that the massive disruption COVID caused to our schools could be that catalyst for the changes that we need to make, Diane.

Tavenner: I'm going to hold my commentary on how you landed that line, but I will amplify that we're bringing a lot of intentionality. Last season was a lot about following our curiosity. And I think this season we're really trying to be intentional so that we can drill down and get into, "What does it take to make the changes that are required in schools?" And I'm feeling like there's some space and some momentum for that for the first time. We'll get into topics this season to explore if that's really true. But today, we want to dive right in and get into a topic that is at the heart of what schools need to do if they're really going to change in the way we want them to. And so…

Horn: Yeah, totally agree. Let’s go. Where are you going, Diane?

Tavenner: Well, I want to talk about pilots, not airplane pilots, despite all our silly humor at the top of this, although there’s a lot to discuss about that given how much we’re traveling. Now, do you see what I did there, Michael?

Horn: Yeah, I see it. Yep.

Tavenner: OK. Enough with the silly. What I really want to talk about today is pilot testing as a means for meaningful change and redesign, because it's something that very often just isn't done well in schools.

Horn: Yeah. Look, as much as I was ready for you to fix my travel schedule in this episode, and forget about the weekly check-ins, we're clearly all about parallelism and puns today. But seriously, this is an incredibly important topic for a few reasons. First, as you know, I write a lot about how schools ought to start big changes with small steps as pilots, where they do a lot of testing and learning. But in practice, like you, I find that schools really struggle to do pilots well or correctly for a whole host of reasons. And that leads into the second thing that I'm interested in, which is that it feels like, as a result of this, pilots are sort of a four-letter word in education circles, both in the schools and among educators, but also, interestingly enough, among the ed tech companies that sell to the schools.

Tavenner: Hmm. Fascinating. There might even be some more groups there that aren't very excited about pilots either. What you're saying resonates a lot with me, Michael, because I can't tell you how often I talk with school people who say they're piloting something, but when I dig in, it doesn't sound as if they have any of the key elements required for a true or effective pilot. But this is also really topical for me and for us at Summit right now, because we are conducting a few pilots. And I'm really curious to get your feedback on our process and approach, because honestly, I think you might be able to make our work better.

Horn: Well, I’d love to dig in Diane, not just because it’s an area of interest for me, but as Clay Christensen loved to say, “Never is it the case that a fully formed idea or innovation comes out of someone’s head and you just implement it whole hog.” It’s always the case that you end up putting something out there, you iterate with others on it, it’s half baked, you flesh it out, you put it into place, you tweak it and so forth. And so to help folks wrap their head around this, I think we really ought to break it down. Your point is an excellent one. People, often what they say is a pilot really isn’t one. So let’s start with the basics. What is a pilot? And perhaps we can make this real. Let’s take one of the pilots you’re doing right now, if you’re okay with it as a case study and just walk through it.

Tavenner: Michael, that sounds perfect. I think the pilot we're doing related to our school principals (we call them executive directors, or EDs) will work really well for what you just suggested. So maybe we can try that one.

Horn: That sounds excellent. So why don't you just start by describing a bit about what you're piloting and why you're doing it. And I'll foreshadow that in a future episode I want to break down how you all at Summit choose the pilots that you choose to work on, but that'll be for another day.

Tavenner: OK, that sounds great. So everyone knows… I think everyone knows just how critical teachers are to student experiences, but I'm not sure everyone realizes that both the research and practice point to school principals as being, I'm going to say it, even more important, quite frankly. It turns out that the school principal makes a huge difference when it comes to the experience for teachers and students, which really has a huge impact. And Michael, I can tell you from my personal experience, and from being really proximate to this, the job is incredibly demanding. It is very often not rewarding. And quite frankly, during the pandemic it became almost unbearable.

And we're talking a lot about teacher shortages and things, and we'll come back to that, I'm sure, in a future episode. But we should be talking about principal shortages and exits as well. All of that to say: at Summit, we are really deeply invested in supporting our teachers and our deans to become school principals and executive directors, as the pipeline, and hoping we can prepare them for success in this super impactful role. And we also are really focused on ensuring that they stay in that role for at least four to eight years and are successful there.

Our network, or district, however you want to think about it, has a lot of supports for these school leaders, these principals and EDs, and like most networks and districts, we've historically had a role that sort of manages or coaches the school principals. And so for over a decade we've tried different configurations of this role, different people, processes, and quite frankly, we've just never been satisfied. It just doesn't feel like it's effective in a lot of ways. And so this year we want to try a pretty new and different approach, and all of Summit's school leaders agreed to this. And so we're piloting what we're calling the cooperating ED model. The basic concept is that in teaching, there's a very successful cooperating teacher model.

Essentially, an experienced teacher is paired with a developing or onboarding teacher, or a teacher in training. Both teachers are teaching, and they enter into basically a peer coaching relationship that most people agree improves the practice of both teachers. New teachers learn side by side from a trusted peer who's doing the same work they're doing. And the experienced teacher improves because, in order to engage with the newer teacher, they have to be really metacognitive and reflective about their practice. And so our hypothesis is that this approach to principal coaching would be more effective than a more hierarchical model, where you've got this principal supervisor-coach, someone who likely did the job previously but is no longer in it.

Horn: Super interesting, Diane. I cannot wait to dig into what you're doing. A couple quick clarifying questions, if you will, just because I'm curious and I think others will be as well. In the teacher training model, are your novice teachers essentially co-teaching with the expert teachers in the same learning environment? Because you all obviously have done away with traditional classrooms, so that may work better for you all than, say, for others in a traditional classroom.

Tavenner: Yeah. And I'm not sure we've completely done away with traditional classrooms, but that's a topic for another day. But yes, the cooperating teachers are working in the same environment as the teacher in training, or in our case we call them residents. So that's true.

Horn: Perfect. OK. So then to fill out what you’re doing in the pilot then, is that experienced principal or ED actually working alongside the new principal in the same school or are they in charge of two separate schools, in essence?

Tavenner: In this case, our principals are both leading Summit model schools, but they are leading different schools, which is not exactly parallel, but we do think it's close enough given the commonality.

Horn: Super. No, that's helpful, and apologies for taking us down that road, but I think it's easy to focus on the what rather than the how, obviously. I thought I would just double-check my understanding so that we could now focus more on the how. In a future episode, as I said, I want to dig into the "why?" Like, why you chose this pilot specifically out of all the things that I know you, as a perfectionist, would probably want to do better. So rather than go in those directions, I'll just name it: we're going to ignore Simon Sinek's advice and stay with the "how?" As in, "How do you actually do this pilot well?"

Because it's so important. Schools, like so many other places, are very resource constrained, and you're working with students to whom you don't want to do harm in your efforts to make things better. So let's unpack this concept of a pilot. And I think the first question I have is this: "How do you think about the scope of a pilot, or how do you bound it in some way so that it's really a pilot?" As opposed to, "Oh, we're no longer doing this old thing, we're all doing this new thing."

Tavenner: Yes, this is such a good place to start, so let me try to stay with the how and not get too much into the content. It’s so tempting to go there, which I think is honestly one of the ways people go wrong in pilots and maybe we can point out a bunch of ways that pilots go wrong as we talk through this. So in our case, we agreed that we would pilot a cooperating principal ED model for up to one year. So we started by bounding a year, and the next thing we did is we said we’re going to use six-week cycles. And I know you’re going to ask.

So what I mean by a six-week cycle is that we will build or design the initial thing that we want to pilot, and then we'll test it with fidelity for six weeks. At the end of the six weeks, we'll come together to learn. And then, if we believe there is more to be learned, we will design for the next six-week cycle and repeat. If we don't think there's more to be learned, then honestly we'll pull the plug, or end the pilot. And our hope is that we won't have to do that, and that we will end the year having successfully iterated on our initial idea and gotten it to a place where we can make a really strong evidence-based decision about whether we want to adopt that change into our model permanently.
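For readers who find pseudocode clarifying, the rhythm Tavenner describes can be sketched as a bounded loop: a pilot capped at one school year, broken into fixed six-week cycles, each ending in a step-back where the team decides to iterate or end. This is a minimal sketch in Python; every name in it is an illustrative assumption, not Summit's actual process or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Cycle:
    """One build-measure-learn cycle (illustrative structure only)."""
    number: int
    design: str                                   # the thing being tested with fidelity
    findings: list = field(default_factory=list)  # what the data and step-back surfaced
    more_to_learn: bool = True                    # decided at the step-back, not assumed

def run_pilot(initial_design: str, year_weeks: int = 36, cycle_weeks: int = 6) -> str:
    """Cap the pilot at one school year of fixed-length cycles."""
    design = initial_design
    for n in range(1, year_weeks // cycle_weeks + 1):
        cycle = Cycle(number=n, design=design)
        # ... six weeks of faithful implementation and data collection happen here ...
        if not cycle.more_to_learn:               # step-back says: nothing left to test
            return "end the pilot early"          # pull the plug rather than drift
        design = f"iteration {n} of: {design}"    # redesign informed by the evidence
    return "make an evidence-based adopt-or-reject decision"  # the year is up
```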

Horn: So what you've set up then is six-week rhythms. And I want to make this super explicit for folks listening, because what's critical here is that you have a rhythm where there will be explicit, what I would call, checkpoints. It's not that those checkpoints have to be exactly six weeks in duration, but you have to have a rhythm, and it has to be a long enough time to truly test out the assumptions or the hypotheses that you've explicitly set out to test and learn from, but also short enough that the timeframe is less than a full school year. That allows you to do a few cycles during the course of the year so you can adapt and iterate as you learn, make some real progress, and build momentum. And if it's too long, what I find is you've sort of skipped the minimum viable notion that's so important to a pilot.

And instead, what a school is really aiming for is perfection out of the gate, which, as we said upfront, is just not going to happen when you're doing something radically new. And all too often, the educators I talk to don't create these strict timeframes to test ideas in, nor do they bound the pilot by less than a school year. So the pilot either loses momentum or, frankly, they never have a set time when they're all coming together to ask, "Are our hypotheses proving true? What are we actually learning?" And then make a decision, like an actual decision, off of that. But there's another implication, Diane, which is that you have to start up front with a clear set of hypotheses that you're actually testing, right?

Tavenner: Yes, absolutely. And again, this is a place where pilots go terribly wrong. We're big Lean Startup fans, and so we like to use the build, measure, learn cycle. It's an approach that we combine with best practices from learning science, and we've sort of mashed them together in our own little version. And it's critically important. Again, this is just where pilots go wrong right out of the gate: you have to start your pilot by clearly and crisply identifying what you expect to learn from whatever you're going to test. And that's the key thing. The biggest mistake people make is they don't articulate and record what they expect to learn. And so when they try something new, they don't actually know if it worked or not, because they never articulated what they thought was going to work in the beginning.

And Michael, I am not a big baseball fanatic, I’m sure you can attest to that, but I do have in my mind what I always think of as the Babe Ruth moment, for some weird reason. Don’t laugh at me because I have this… As far as I’m aware, Babe Ruth used to call his shots, if you will, and there’s this black-and-white photo image in my mind of him pointing I think to right field from the batter’s box and literally calling a home run shot.

And so I always think of that when we're starting a pilot. This is exactly what you must do in a pilot: you have to call your shot. The best way to do that, in our opinion, is by using an if-then statement. And it goes like this: "If we do X, then we believe Y will happen." And that is what you're testing in your pilot. In the absence of that level of clarity and discipline, all you'll be doing later is making up a story about what happened. And you can make up any story you want. It's not learning.
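One lightweight way to make that "call your shot" discipline concrete is to write each if-then statement down as a structured record before the test begins. Here is a minimal sketch; the field names are our own illustrative assumptions, and the example instance paraphrases the first-cycle hypothesis Tavenner describes later in the episode.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """An 'if X, then Y' statement recorded before the test starts."""
    if_we_do: str        # X: the change we will implement with fidelity
    then_we_expect: str  # Y: the observable result we are predicting
    measured_by: str     # how we will know, decided up front, not after the fact

first_cycle = Hypothesis(
    if_we_do="hold weekly 60-minute cooperating-ED meetings",
    then_we_expect="principals will access universal supports",
    measured_by="weekly meeting logs: met? how long? supports discussed?",
)
```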

Horn: This is so, so, so important, Diane, because it's another big thing that I see: schools are just not clear about what their hypotheses are upfront. And you just said it brilliantly, I think. "If X, then Y" makes sure you're clarifying what you're going to do, what you expect to see, and how you'd know. And to be clear, in my view, it doesn't just have to be one hypothesis. You could have a few assumptions you're testing, but you have to take the scientific method, that, by the way, we're supposed to be teaching students, and turn it on ourselves, because when we're innovating, we are running an experiment. So this raises the question, Diane, in your mind as an educator on the ground running schools: What makes for a good hypothesis?

Tavenner: I'm so glad you asked this, and again, I want to make sure I'm checking my work with you, because now I really get to test the quality of what we're doing. So to begin with, we think about two hypotheses, or groups of them, and you'll see that in a moment. The first is what we call the north star. This is what we expect to see by the end of the pilot; in our case, for this pilot, at the end of this year, a full school year, if we make it that far. And the second set of hypotheses is what we test in the first cycle. So we set those big north star ones and then we figure out literally what the first six weeks are going to be about. And that first six weeks should put us on track to the north star, so we should be aligning those two things, but it needs to be something we can actually test in the first six weeks. So let me share ours with you and then you can tell me what you think about them.

So for this pilot, we have three north star hypotheses. The first is that if our principals engage in a cooperating ED relationship, then both EDs will access appropriate leader supports, exhausting universal supports before tier two and tier three. This is going to get a little wonky here; bear with me. The second one is that if EDs engage in the cooperating ED relationship, then more EDs will be retained in '23 than in '22. So year over year, we'll have more retention.

And the third is that if EDs engage in this cooperating relationship, then more of them will persist in their role for four or more years. That north star is obviously beyond the single year, but we can track where they are in their tenure and whatnot. So just a quick note on that first one, because it was super wonky: I realize it requires some knowledge about multi-tiered systems of supports. I suspect some of our listeners will have that and some won't. I'm not sure we want to go there today, but I think at some point we might; it's a really foundational element, not only of this pilot but honestly of how we develop people, and it's a big part of schools' and students' experiences. So I just want to put a pin in that and note it.

Horn: No, that makes sense, and it's super helpful, frankly, for you to lay this out in this way. I will share, I think that that north star hypothesis, or set of hypotheses, is so imperative because you're basically asking the question, "What are you really after by putting this theory of action to work, and how would we know?" Right? And so to me, this is where SMART goals (specific, measurable, attainable, realistic, time-bound) can also be helpful. But it's all in service of making the theory of action crystal clear, not just to you but to all the team members who are going to have to evaluate this, so that the team really knows in a very clear way, "Were we successful or not in this pilot?" And you have that in your north star hypotheses: you have a time-bound goal, and it's clearly something that is attainable, based on more EDs persisting.

It's also therefore specific. It requires you to know your baseline data, by the way, which a lot of schools don't; they don't know what they're checking against or what they're measuring against. Now, you didn't share, I guess, what percentage above your current or historical baseline you'd want, relative to the resources you're expending on this, for it to be successful. I'll say I think it's okay given this particular intervention, but in others, for those listening, you might want to make that even more specific, so you can ultimately answer the question: Is the juice worth the squeeze? For the efforts we're putting into this, we don't want to just see maybe a marginal improvement. We want to see something more significant than that. And then I suspect some people listening are going to jump in with a question, which is, "But wait a second. Her ultimate timeline was for four or more years. How can they answer that with a six-week cycle?"

And you already alluded to this: you can't with precision, but you can start to see if your first cycle of tests produces the results that you anticipate, as measured by something more interim, which is where your first set of hypotheses comes in. Is this an innovation worth continuing? Should we tweak it or shelve it? And I think that raises the question, which is, "OK, what's the first cycle of the pilot you did, Diane, and what were those hypotheses, those interim hypotheses if you will, that you were testing?"

Tavenner: Yeah, that's great, and super great feedback. I appreciated that, and as you know, we will take it back and incorporate it. So our first-cycle hypothesis, and we only had one, is that weekly 60-minute meetings will enable principals to access universal supports. We began testing it right at the start of the school year. That's it. And let me just share a bit about what's behind this one.

So first, given that this is new and principals are extraordinarily busy, we wanted honestly to first figure out if they would even, or could even, make time for a weekly meeting. 60 minutes was a guess around the amount of time; we weren't sure, so we just decided to test that. And then second, we have this belief that most of what EDs need in order to be successful in their roles is available in a pretty extensive support system that we've built. But the hard thing for a new ED is knowing where to look for help and support, or even knowing what you need and what that support looks like. And so that's what's behind that first hypothesis.

Horn: Love this one. We'll get into it more in a moment, but I also love it because if it's wrong, then the whole theory of action is game over. So you're testing the most critical hypothesis up front, which I think a lot of people would just take for granted and say, "Well, of course we build a meeting and people go to it," but that's not how life is lived. So I love that you stated it as an assumption, stated it as a hypothesis, and that we can now test it. And this is where we get into my desire and my prior comment about specificity, because we all need to be clear now as a team: How would we know if the test was successful or not?

Tavenner: Yes, yes, yes. This is the cycle, the build, measure, learn cycle, and measure is bolded in my head because this is another super common mistake: people don't actually measure what they're trying to learn as they go. And then how do you know if the test is working or not? Sometimes they try to measure at the end, but then it's like reconstructing knowledge, which I think falls again into constructing a story versus really doing something scientific.

So in our case, here's how we approached our data collection. First of all, we paired a cooperating ED with an onboarding ED, so we made up seven total pairs. Quick note here: we were really clear that we didn't know if these pairings would stay the same after the first six weeks or not. When we pilot things, we find it really important to keep people really open, so we don't want them to get locked into any particular specific element of the pilot. A little bit of an aside, but that's important.

We created a standard meeting agenda template for the 60-minute meetings, and the experienced ED is responsible for tracking the following quantitative data in the agenda: Did the pair meet this week? How long was the meeting scheduled for? How long did it last? Did you discuss supports, yes or no? And did you discuss the tier of the support, yes or no? You will notice those are very simple things; it takes under a minute to answer them. Then the cooperating ED also keeps notes on the conversation they have, assuming they have one. And we have four basic questions and a pretty simple framework to guide their conversation. And again, this comes from our belief about what EDs and principals should really be focused on.

The first one is: What are your long-term priorities? What are your short-term priorities? What barriers are you experiencing in reaching your priorities? And what supports are you accessing? And to honor the peer nature of this, the experienced ED answers the questions as well, looks for opportunities to model and practice leadership skills, and gives honest, actionable and timely feedback. We call that HATF in our culture.

And so each week our pilot project manager collects and shares the data with the cooperating EDs. So for the six weeks, she collected the data and shared it back, we reviewed the data, troubleshot anything that was preventing us from implementing the exact model that we set out to test, and then we repeated it the next week. One last element: in that weekly meeting, the EDs used a very short-form consultancy protocol to help them calibrate with each other and reflect on how they were engaging in their role. And that sounds like a lot, but really that's it. That's the whole initial design. Pretty simple, not expansive.
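To make the measurement side concrete, here is a minimal, self-contained sketch of the quick quantitative fields Tavenner lists and the kind of roll-up a pilot project manager might share back each week. The field and function names are illustrative assumptions, not Summit's actual instruments.

```python
from dataclasses import dataclass

@dataclass
class WeeklyMeetingLog:
    """The under-a-minute quantitative fields tracked in each pair's agenda."""
    pair_id: str
    week: int
    met: bool                     # did the pair meet this week?
    scheduled_minutes: int        # how long was the meeting scheduled for?
    actual_minutes: int           # how long did it last?
    discussed_supports: bool      # did you discuss supports? yes or no
    discussed_support_tier: bool  # did you discuss the tier of the support? yes or no

def cycle_rollup(logs: list) -> dict:
    """The simple summary a project manager might share back with the pairs."""
    met = [log for log in logs if log.met]
    return {
        "meeting_rate": len(met) / len(logs) if logs else 0.0,
        "avg_minutes": sum(l.actual_minutes for l in met) / len(met) if met else 0.0,
        "supports_rate": sum(l.discussed_supports for l in met) / len(met) if met else 0.0,
    }

# Example: two weeks of made-up logs for one pair
logs = [
    WeeklyMeetingLog("pair-1", 1, True, 60, 45, True, True),
    WeeklyMeetingLog("pair-1", 2, True, 60, 55, True, False),
]
print(cycle_rollup(logs))  # {'meeting_rate': 1.0, 'avg_minutes': 50.0, 'supports_rate': 1.0}
```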

Horn: Well, it sparks a lot of thoughts for me, Diane. First, I want to say this, because clearly I hadn't thought about it until you said it: success is not that the test was successful, success is that we have an answer to whether the test was successful. And so your data collection process means we're not storytelling and justifying something that someone wants to do, but actually have a real read on, "Did the behavior change, or whatever it is we're measuring, actually in fact happen?" That's success. So it's not defending my pet idea, it's really, "Did we get an answer?" That's success. The second thing is, one of my tweaks sometimes to the Lean Startup approach is that not everything actually needs to be a build upfront. It could be just a test. So it doesn't have to be a pilot to test our core hypotheses, or what the Lean Startup method would call a minimum viable product.

It could be simpler tests; it could even be desk research, or checking with other schools or industries that have done something similar, or interviews, whatever. It's whatever allows you, in the clearest, quickest way, to check your most critical assumptions or hypotheses. And in this case, you needed to do this pilot to see: Can people in fact dedicate time to this? And the idea is that if these hypotheses prove true, then you would start to build more and more advanced prototypes and so forth and operationalize the work, in each case testing the new most critical hypotheses that could derail success, with greater and greater precision each time you go through the checkpoints. Now, when I hear what you've done in this instance, I think you've naturally done this, because you've prioritized, again, I said it up front, the most critical underlying hypotheses: the ones that, if we're wrong, are going to derail the whole theory of action.

And in this case, I'm just reiterating it at this point, but I think it's so important: you're saying, do the EDs even have time for these meetings? But not just that. I also hear you saying, do the meetings address the things that we think are derailing these new EDs in their own progress? Meaning, do they find these supports that people don't know are out there, or don't know to ask for, or don't know where to find? And I think Lean Startup would say, "Test the things that are most impactful to the efficacy and sustainability of the effort."

Discovery-driven planning, which is the framework I like to use, as you know, and which Lean Startup is built upon, would say a similar thing: test those things that are most risky and most uncertain. And I would just add to both that this is a really important process when you're doing something that's a big departure from the way you've always done things. If this isn't a new process or a new offering, if it's an incremental advance, like you're just tweaking the lesson plan in class or something like that, then you just implement it and don't overthink it. But this is a process for doing something that really moves the ball forward in leaps and bounds.

Tavenner: Thanks for making visible something that I was taking for granted. We certainly had a lot of desk learning that went into us actually trying something out, and it brought us to this moment of wanting to actually try something in real life, if you will. And I love the crispness of "test the thing that could crater the entire effort," which in this case is that the ED, for whatever reason, can't make time to meet or doesn't make time to meet. And so I love the insight, because how many brilliant, fancy systems do we have in schools that fail miserably because people just don't participate in them, and we never test that? We just roll out this whole big system. And some people might say, "Well, did you really need six weeks to figure that out?" And I would argue, yes. Look, I can get people to go to a meeting once, and generally maybe haphazardly a few times, but will they make it a routine and habit week over week? Especially if they aren't meeting with their boss? This was a big thing we had to figure out.

Horn: It's such a good point, Diane. And it also gets around some people's criticisms of pilots, which is that people are so energized that they sort of goose the results from the initial pilot. But look, this is all well and good. You're testing hypotheses, you're measuring the results of these tests as you do the pilots, or whatever it is, even the initial desk research you set up that convinced you this was worth going forward with. And then you're learning, and at each checkpoint the team must come together and say, "Based on the evidence, based on what we've learned, do we keep forging ahead to our next checkpoint? Do we tweak our implementation based on the available evidence? Or should we just shelve this now and save us all the headache for something that isn't working?" Which is the beauty of this method, by the way. You don't waste time on something that's not working.

And I think this raises the question I'd love to ask, which is: How do you move beyond a pilot so that it then becomes just the way the work is done and in fact replaces the old with the new? And I imagine, in fact, I know, we're going to get more into this at the end of the season depending on how your pilots go, but I just think it might be helpful to address this upfront, because the other thing I sometimes see, Diane, is people just do perpetual pilots. They never replace the old, and it's just another project that lives on forever.

Tavenner: Oh my gosh, pilot hell. Yeah. It's terrible. This is such an important point, because from our perspective, we've given ourselves a year to design, iterate, and codify a model, and then we will decide if it is going to be implemented fully next year. We think the approach we are taking will get us to the best possible model and give us good data and information to make an active decision, which is another thing we should talk about at some point, because I think a lot of people don't really make decisions on these things.

Horn: I agree.

Tavenner: They just slide into stuff. So let me share what we're doing that I think makes this a pilot versus just implementing a new thing. We weren't comfortable with totally removing the principal supervisor-coach role while we were piloting. And I suspect this happens in a lot of cases, and I think that's part of being a pilot: you're not just ripping out the old system, you're actually doing some side-by-side stuff here.

And so we also knew that if we just kept it in place as-is, we couldn't possibly know if the pilot was working, because how do you know what is having the effect or not? So specifically, what we did is we kept the principal coach available, I would argue almost as a safety net, but here's the key: we're tracking in detail every single thing that she is doing, who she's working with, when, why, how, and who initiated it. So we will actually have data to see where that crossover may have happened, and we can figure out what was impacted. Not perfect, but I think helpful.

Horn: Yeah, I think it's such an important point actually. And just to state it again: pilots initially run alongside what you've always done. You don't just replace the old with something risky and unproven; it's just not smart. And you only replace the old thing if and when the new is proved out. It raises another point for me that I think we'll tackle on another episode maybe, which is how you resource your pilots effectively when you're running a school. But for now, I guess the other piece of this is that if the new fails, then you go back to that old way of doing things. But now I'm super curious, because I kind of want to know: Have you finished your first cycle yet, and what did you learn? What happened?

Tavenner: Yes, we finished it. It's so exciting. I love this stuff, as you know. So this is a really important element of the pilot: you have to come together at these checkpoints, we call them step-backs, at the end of the six weeks. Essentially what we do in that step-back is we bring all of the data from the six weeks. Our goal is to use it to measure if what we thought would happen actually happened in the first six weeks, and then to decide if and how we would iterate, or what we want to learn and how we're going to do that in the next cycle. So Michael, these are such robust and meaningful conversations when you have been really clear about what you want to learn, collected the data and measured, and now you're coming back together to iterate or make a decision. And it's also such a disciplined method for how a design improves in the way you intended it to. And a big point here is it's inclusive of all these voices. It's not some person designing something and throwing it out there. Everyone comes together.

So this is what happened for us. The short story is we learned that our first hypothesis was true. Our EDs did meet weekly, they did talk about supports, they talked about them more and more as the weeks progressed, and they did access the supports. Check. We learned what we expected to learn. Now we ask ourselves: What do we want to learn next? And again, we've got our eyes on the big prize, the north star of better retention and longevity. But in the short term, what can we learn that leads us in that direction? And so here is where we decided to go. Our next hypothesis is that if our cooperating EDs believe, and this is key, that universal supports are sufficient to do the job, then both EDs will access appropriate leader supports and exhaust them before moving to higher-level supports.
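As an illustration of the kind of check a step-back can run against the collected data, here is a small, self-contained sketch that asks whether the first-cycle hypothesis held: did the pair meet every week, and did supports come up more as the weeks progressed? The data and the trend test are invented for the example, not Summit's actual analysis.

```python
# Hypothetical six weeks of data for one pair: (met_this_week, discussed_supports)
weeks = [(True, False), (True, True), (True, True),
         (True, True), (True, True), (True, True)]

met_every_week = all(met for met, _ in weeks)

# Crude trend check: supports discussed at least as often in the back half of the cycle
first_half = sum(discussed for _, discussed in weeks[:3])
second_half = sum(discussed for _, discussed in weeks[3:])
trend_up = second_half >= first_half and second_half > 0

hypothesis_held = met_every_week and trend_up
print(f"met weekly: {met_every_week}, supports trending up: {trend_up}, "
      f"hypothesis held: {hypothesis_held}")
```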

Horn: Super interesting, Diane. I'm struck by a number of things as you talk about that, but also how inclusive it is, because the other thing we haven't said is that when you bring the team together, one of my favorite steps is to ask the team members to surface the hypotheses that they want to test and then to rank them to figure out what they think we ought to test next. Because you often figure out in that process where your blind spots are and where the team isn't all with you. Some team member thinks, "Actually, this thing you think is going to happen, this theory of action, I think it's incredibly risky." What a great way to get that feedback without them having to stand out on an island and voice it by themselves. That, I guess, builds to the next question for me, which is, "So what are you going to go build and test? How are you going to test that?"

Tavenner: OK, great. As a result of this data and this really robust conversation and other things that we noticed, we asked ourselves: What should we continue doing? What should we stop doing? And what should we start doing to learn if our new hypothesis is correct? And so that led us to decide the following things that are shaping our next six weeks.

So we are going to stop mandating the 60-minute meeting, because what the data showed us was there were variable lengths of meetings, and no one identified a benefit to a specific length of meeting. So we did away with that. We are going to continue with the same pairs, because what we noticed was relationships were starting to form and connections were being built, and we think that that's beneficial and will help move things forward. We're going to continue to collect the data we've been collecting, and we are going to start a survey that asks both EDs and the team members in our organization who provide a lot of the supports to report on which EDs are using the supports, which supports, and with what frequency. We're also going to ask if the support is helping them to do their job. So this became sort of a big thing: we want a fuller view, with multiple perspectives, of which supports are being used, by whom, and to what end. And then finally, we're going to evolve our weekly mini consultancies with the cooperating EDs, and we just made a few tweaks to the standard weekly agenda. So we're all committed to implementing this iteration for the next six weeks and repeating the cycle.

Horn: So much in here. And as we wrap up, let me just try to point out a couple things, Diane, because I think there's a lot you do as an expert in this that isn't always codified and that people probably don't know; you've presented at Lean Startup conferences and things like that, if I remember correctly. So in essence you're using a pilot not necessarily based around a problem, by the way, but based on the progress you want to make. And we'll talk more in a couple episodes about how you choose what to tackle. But the idea is that your pilot could be around something you're doing actually relatively well but think there's a way to do much better, and other times it's a legitimate problem. Either way, it's a priority. From there you're putting in place a theory of action. You have an initial design for what you want to do differently. And you have that north star statement of what will change as a result of doing that and how you will know, and there's specificity in that statement.

And from there you're building a logic model, essentially: What are all those little hypotheses that have to prove true for the big one to happen? And your first cycle is in essence to test the most critical of those little hypotheses or assumptions, the ones that have to prove true or else the game is over and the pilot just isn't going to do what you want it to do. And look, it turned out it didn't matter whether the meeting was 60 minutes or not. Love that time was variable; little side note there. But it did matter that the EDs met, and it did matter what they did during that meeting. So you tweaked accordingly. And that just shows you could have a robust conversation because you had collected all that data. And now you're going to run the next six-week cycle to further improve the precision, test that next set of hypotheses, or maybe more finely test some hypotheses that you only tested before at a high enough level to say this is worth pursuing.

And each round, you're just going to keep getting more and more precise, if and only if the tests are proving largely true. And if so, then you're going to rip out the old, and this will become the new way of doing things. Now, to be clear for those listening, we're going to dig into a bunch of these facets more throughout the season, because there is a lot here and a lot to unpack. And I think how you do this in schools is really tricky; it's different from other environments in some ways. And so I just want to thank your team, Diane, for being willing to let us put your work under the microscope of this public podcast.

Tavenner: Michael, well summarized, and thank you for calling us experts. In reality, I think we see ourselves as learners; that's how we got where we are, and we continue to try to improve. And I just want to pile on the thanks for the team. We have an incredible team of Summit EDs, and they are always willing to make their work public, which I think is just so admirable. They're amazing in every way, but their eagerness to grow and improve in public is something I really appreciate. And a huge thank-you to Malia Burns, who's leading this pilot at Summit. Malia brings a really deep understanding of multi-tiered systems of supports, school leadership, continuous improvement, and leading with curiosity and discipline. So I just appreciate them all very much and feel really honored that I can share their work.

Horn: It's all well said. And I will also acknowledge, for those listening, that was a lot; we've gotten through a lot. People can tell that we love to read books like The Lean Startup or Discovery-Driven Growth, or read a McGrath Substack newsletter, for example, or whatever it is. Guilty as charged. But as we wrap up, let's pivot, another word. What are you reading or, excuse me, reading or watching or listening to outside of all that right now?

Tavenner: Yeah. Folks who listened last season might remember I'm working my way through a big reading list as I prepare for a visit to India. And at the moment I'm deep into the novel Midnight's Children by Salman Rushdie. This was a late add to my list, and maybe a little bit controversial, but that's for another conversation. It's 40 years old, Michael. But my, what a story and writer. Wow. I understand why it's considered by some people to be a masterpiece and one of the world's greatest novels.

Horn: Love it. Love it.

Tavenner: How about you?

Horn: Yeah, well, I'm proud actually, because after breaking my fast on Yom Kippur, my wife and I finished the first season of The Bear on Hulu, which means I'm kind of up on something current. So you're reading something 40 years old and I'm watching something current, Diane, which we should all pause on. But it's also a series that I think you'll love, because it's all about good food, which we both appreciate. Interesting choice, I suppose, to finish up Yom Kippur, but it's also about team dynamics and culture, in this case in a dysfunctional kitchen, and turning it around. If those listening haven't figured out that team dynamics and culture are things you care a lot about, Diane, they should re-listen to this episode to hear your passion for it. But it's an incredible series. Highly recommend; I enjoyed finishing the first season of it. And with that, I'll just say, as we're really into season four, thank you for joining us on this ride on Class Disrupted.

Michael B. Horn strives to create a world in which all individuals can build their passions and fulfill their potential through his writing, speaking, and work with a portfolio of education organizations. He is the author of several books, including the award-winning Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns and the recently released From Reopen to Reinvent: (Re)creating School for Every Child. He is also the cofounder of the Clayton Christensen Institute, a nonprofit think tank.

Diane Tavenner is CEO of Summit Public Schools and co-founder of the Summit Learning Program. She is a life-long educator, innovator, and the author of Prepared: What Kids Need for a Fulfilled Life. 

