In Edtech, You Either Bet On Teachers Or You Have To Build One

Khanmigo activates and says “Plz click me!”

I was invited to teach a class at Stanford University recently, one led by AI researcher Dora Demszky and attended by a lot of computer science students interested in using AI to do good in education. I tried not to be too much of a downer here, even as I pointed out the dampened teacher uptake of AI and the reality that lots of edtech companies, in spite of their renown, are not helping 95% of students.

During Q&A, a student asked me about the key choices that define a company’s direction in edtech. Three came immediately to mind but I mentioned this one first: “You either bet on teachers or you bet on software, and if you bet on software, you eventually learn you have to build a teacher.”

From its earliest designs, Khan Academy made a big bet on software-as-a-teacher and a very small bet on teachers-as-a-teacher. I’m speaking descriptively here. With Khan Academy, the teacher’s main work is to get their students logged into Khan Academy. That’s the bulk of it.

Khan Academy is not an unusual edtech company in this regard. But something interesting happens whenever these companies try to broaden their appeal from opt-in students, from nerds looking to get nerdier on their own time, to students who can't opt out, students who are required to go to school and learn stuff they don't necessarily want to learn: the companies realize that they have to build a teacher.

A before & after image with “before” Khanmigo sitting static in one corner and “after” Khanmigo asking “Need help?”

Here is what I mean. Previously, the icon for Khan Academy’s AI helper Khanmigo sat in the lower-right corner of every practice screen waiting patiently to be activated by kids. Now, something that kids do in Khan Academy will occasionally trigger Khanmigo to ask more directly, “Need help?”

When does Khanmigo decide to intervene like this? Under what conditions? What does it all mean? These questions have transfixed me recently.

Let me tell you what I think is going on behind-the-scenes at Khan Academy: kids aren’t activating Khanmigo. My guess is that Khanmigo gets called up off the bench in fewer than 10% of student sessions.

This isn’t because Khanmigo is a bad AI tutor. (Though sometimes it’s too helpful and other times it isn’t helpful enough.) But Khanmigo asks kids to do something most of them do not want to do—read academic writing about math. What kids would rather do in Khan Academy (and software like it) is toss answer after answer into the automatic feedback slot machine to see if they get their computer confetti.

A student clicks “Check” on their work and sees digital confetti fly into the air.

Generally, schools deploy teachers to encourage students to do the things they don’t want to do. But Khan Academy has imagined a very limited role for teachers. So with this Khanmigo update, they have tried to build a human teacher, or at least build human teacher-like features, out of software.

An image from Polar Express with 3D kids who are in the uncanny valley—real enough to be recognizable but unreal enough to feel creepy.

I sincerely hope Khan Academy is happy in this work because, to me, it sounds like hell. Incredibly difficult. Very low odds of success. A one-way trip into the Uncanny Valley.

Human teachers intervene into student thinking all the time, triggered into action 100 different ways by 100 different students. A student is working faster than expected. Another student is working slower. Another student isn’t working at all. A different student has an error, but stands at the precipice of a revelation. Another has every correct answer and would benefit from a question to deepen their thinking. Two students have different answers but are thinking similarly enough that they should talk to each other. Enough students have the same question to necessitate a whole-class response. Like Khanmigo, a teacher will sometimes say “Need help?” but far more often they’ll say, “Let me help,” and start helping.

Khanmigo has exactly one intervention here—asking “Need help?”—and it intervenes under exactly one condition, which I think I have figured out after trying to trigger it about a dozen times. Khanmigo asks, “Need help?” after:

  1. You enter an answer into an input.

  2. You click out of the input.

  3. One second elapses.

An animated GIF showing how to activate the “Need help?” icon.
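Those three steps are simple enough to sketch in a few lines of JavaScript. To be clear, this is my guess at the logic, not Khan Academy's actual code: the names, the `createHelpTrigger` helper, and the injectable scheduler are all hypothetical.

```javascript
// A guessed sketch of the "Need help?" trigger described above.
// All names here are hypothetical; the scheduler parameter just
// makes the one-second delay easy to test.

const HELP_DELAY_MS = 1000;

function createHelpTrigger(showPrompt, schedule = setTimeout) {
  let pending = null;

  return {
    // Student types in the answer input: cancel any pending prompt.
    onInput() {
      if (pending !== null) {
        clearTimeout(pending);
        pending = null;
      }
    },
    // Student clicks out of the input: if anything was entered,
    // ask "Need help?" one second later -- right or wrong,
    // checked or unchecked.
    onBlur(value) {
      if (value === "") return; // nothing entered yet
      pending = schedule(showPrompt, HELP_DELAY_MS);
    },
  };
}
```

Notice what the condition never consults: whether the answer was right, whether the student already clicked "Check," or anything else about their thinking.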

It doesn’t matter if your answer is right or wrong. It doesn’t matter if you have just clicked “Check” on your work. Khanmigo will always ask if you need help one second after you click out of the input. This leads to interesting situations where Khanmigo asks if you need help at the same moment it tells you “Good work!”

An image of Khanmigo asking “Need help?” above feedback that says “Good work!”
Three calls to action!

I am not here to critique this particular trigger or intervention, but I will note for emphasis: there is only one of them where human teachers have 100. This is what happens when companies realize, perhaps belatedly, that they need human teachers and try to build one out of software. They get 1% of a teacher.

Maybe it’s possible. Maybe you can program a computer to intervene with the sensitivity and skill of something like the median teacher. To me, it sounds like hell, working to create something that is a smidge more teacher-like when real teachers are standing right there.

The way out of this hell is to bet on teachers instead, asking yourself, “What do teachers need that technology uniquely provides?” Asking yourself, “How do I use technology to bring out the best in these humans?”

With the Desmos Activity Builder, we made several specific bets on teachers and technology.

A teacher clicks “Sync to Me” among a panel of teacher features.

Teachers are uniquely effective at launching activities, at developing “collective effervescence” among students, setting the stage, and getting everyone locked in. So we bet on technology that would help teachers pause and pace their lessons, that would help them get students on and off computers quickly.

A question asking students: “2 pizzas bake at 800° F. What should the temperature be for 4 pizzas?” A student responds: “This was my first answer. We might need to increase the time for the amount of pizzas, but doubling the temperature will violently scorch the pizzas.”

Teachers are uniquely effective at monitoring student thinking, at seeing students operating below their best and drawing their best out of them, at finding the right in the wrong, at having conversations that transform the ways kids think about math, about themselves, and about each other. So we bet on technology that would let teachers see student thinking in something close to its raw form—like “violently scorch the pizzas” above—rather than (for example) class-wide aggregations of multiple choice responses.

A student responds: “This was my first answer. We might need to increase the time for the amount of pizzas, but doubling the temperature will violently scorch the pizzas.” A teacher clicks a “Snapshot” button and the words “Snapshot captured!” appears.

Teachers are uniquely effective at responding to student thinking, at knowing when and how to intervene, so we bet on technology that would let them send written comments and snapshot student work for use in whole-class discussion.

If you bet on software, the good news is that novice teachers will find your software difficult to screw up. The bad news is that expert teachers won’t get much more out of your software than novices. And year after year, you will learn new lessons about the human dimensions of teaching and struggle to turn those lessons into software.

If you bet on teachers-as-a-teacher, your work is different. You will likely see greater variance in outcomes. Veteran teachers will likely see much greater results than novices but some of the novices might not see better results than if they had used a software-as-a-teacher.

If you bet on teachers, you are betting on humans, including their assets and liabilities. If you send your work through humans as an amplifier, some will amplify it in uninspiring ways. If you choose this path, your work is to develop teachers as much as their work is to develop students.

I understand why edtech founders so frequently choose software over humans. Software compiles. Humans don’t. When software defies your expectations, you debug and re-compile. When humans defy your expectations, you have to talk to them and talk to them and talk to them some more. You have to imagine yourself happy here.

This is one of the most consequential questions you will answer if you decide to build software for learning. It defines your ceiling. How much do you think teachers matter?

Do I know how to file a support ticket or what? 24 hours later and Khan Academy has changed Khanmigo’s behavior here. It is an objective UI improvement that Khanmigo no longer asks you if you “Need Help?” while also congratulating you on your good work.

But they are still in hell! Khanmigo now asks if I “Need Help?” a few seconds after the page loads, whether or not I have entered anything into the input.

Maybe I have walked away to sharpen my pencil. Maybe I am deep in thought. It doesn’t matter. Khanmigo has decided now’s the time to jam its cute little green hat into my process.

They’ll never get this right because it isn’t possible to get this right. It isn’t possible to take what little this software-as-a-teacher knows about students and respond with the sensitivity of a teacher-as-a-teacher.
