Last year around this time, I made a bet with Alex Sarlin, the Global Edtech Lead of the venture capital firm ASU+GSV and co-host of Edtech Insiders, an edtech industry podcast.
Alex made this bet on LinkedIn:
I think by the end of [2025], it’s going to be hard to find an educator who doesn’t use Generative AI on a weekly basis.
I admired the specificity of that prediction, especially stacked against gauzy, non-falsifiable predictions like these, each one waving its hands and declaring 2026 “finally for real the year of the agent.”
I commented that I would take his bet and give him better than even odds. Not only wouldn’t we see anything close to 100% weekly usage, I said, but we wouldn’t see more than 80% AI usage of any type—casual, committed, or weekly.
Let’s check out both of those bets.
To Alex’s prediction of ~100% weekly usage, Gallup reported:
Only 32% of teachers report using AI at least weekly, while 28% use it infrequently and 40% still aren’t using it at all.
Yikes! 32 is a lot less than 100, as far as numbers go!
Gallup also confirmed my prediction that fewer than 80% of educators would use AI in any capacity. You can triangulate Gallup with other sources too. Education Week has asked a national panel of teachers the same question every year for the last three years:
Which of the following best describes your current use of artificial intelligence-driven tools in your classroom?
The percentage of people who used AI at all in their 2025 survey (a little + some + a lot) was just 61%, almost identical to Gallup’s finding.
FWIW, Alex didn’t take me up on the bet I proposed. Nevertheless, I am bound by the Substacker’s oath to report to you that I would have soaked him.
What will happen in 2026?
AI use is certainly increasing among teachers, though that usage still seems confined to the dabblers in “a little” and “some.” The teachers using AI “a lot” are not exactly leaping off the axis. What kind of education technology is AI shaping up to be? High usage and high impact like slide software or an LMS? Low usage and low impact like the review games that teachers use every couple of weeks before a quiz?
I’m sticking with my predictions from early 2023 that AI will offer “quality-of-life improvements for teachers and students” but stop short of any “two sigma”-style changes to student learning or school staffing. Those predictions felt heterodox in 2023, but now I’d bet my house on any of them.
In 2026, you won’t see committed AI usage (“a lot”) rise above 10%. AI professional development will increase and you’ll see the dabblers swell in numbers—80% usage wouldn’t surprise me. But AI has not yet found its way to the core of teaching or learning. Chatbots and lessonslop still dominate the landscape. AI does lots of things faster and cheaper for teachers, but not enough things better. In 2026, you won’t find a room of educators where more than 10% of them would give up their slide software for MagicSchool. There are levels here.
What are the stakes of this bet?
More than I want to be right here (though I’ll take it), I want to be clear-eyed about the needs of teachers and learners, about the complex cognitive and social relationship they construct over the course of a school year, about the ways technology can and can’t support that relationship. The edtech industry is less sober than I have ever seen it, which turns your sobriety into a superpower.
My plan for edtech sobriety in 2025 involved substitute teaching monthly and helping a teacher out weekly. When 2025 rolled over to 2026, I immediately reset that goal. If you have perhaps misjudged the landscape of teaching and learning, if everything seems a bit blurry and you missed your predictions by 70%, I cannot recommend more highly that you shake your 2025 hangover by getting into a school. Try to teach something to someone who doesn’t really want to learn it, who doesn’t really think they can learn it. The experience will shape you and sharpen your senses and sober you up almost immediately.
Odds & Ends
¶ Eedi’s Chief Impact Officer Bibi Groot answered a question I raised in my review of their chatbot tutor study. Why did their chatbots help weeks later more than in the moment of tutoring? Groot speculates that the chatbot tutors drew more reasoning out of students than the human tutors, which supported a form of learning more amenable to transfer. I don’t know about that, but it sure is a thought! (I loved the graph above, indicating the human tutors eventually start cutting the chatbot tutors off, saying, “Hey—we’re done here.”)
¶ What innovation has re-energized this teacher?
“I feel like I’m coming home every day less exhausted, and I just have more energy to reinvest in the relationships with students,” Hesseltine said. “I’ve been teaching for over 20 years, so I really cannot stress this enough that this has made the biggest impact on education that I’ve ever seen.”
These are the kinds of results AI guys have been promising for three years now. So what is the teacher talking about, and what stops us from getting it to more teachers?
¶ Just before the new year, Sal Khan wrote an op-ed in the New York Times considering employment in the age of AI. His solution to the problem of job loss is to pass a tin cup around to the companies profiting from AI and ask them to kindly donate 1% of their profits to programs dedicated to reskilling workers for AI fields. I can only imagine who Khan imagines would put those donations to work and lead the reskilling effort.
The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training.
In any case, the New York Times readers are absolutely scathing:
We have a mechanism for managing change equitably. It’s called the government. We don’t beg corporations for 1%. We tax them appropriately and use the tax funds to improve the lives of citizens and keep production and consumption within the limits that the biosphere has established.
¶ “Friction-maxxing” has entered the chat.
Friction-maxxing is not simply a matter of reducing your screen time, or whatever. It’s the process of building up tolerance for “inconvenience” (which is usually not inconvenience at all but just the vagaries of being a person living with other people in spaces that are impossible to completely control) — and then reaching even toward enjoyment. And then, it’s modeling this tolerance, followed by enjoyment and humor, for our kids.
¶ That article connected me to a recent post from Dylan Kane where he concluded:
That’s a tough reality of teaching: often the path of least resistance is the path of least learning.

















