Ground Work
Part One: On Building AI Literacy That Lasts, for Educators
If you work in education right now, you already know the feeling. The tools that were revolutionary last September are obsolete by January. The resource you bookmarked in the fall has been updated twice since you saved it. The breathless enthusiasm and alarm tend to arrive in the same news cycle, sometimes the same article, and sorting signal from noise has become its own exhausting part-time job. We've always had to curate. We've always had to evaluate sources. But the digital ephemera of our professional lives seem to shift and update faster than we can keep up, always in the name of improvement, always with the implicit message that if you blinked, you missed something important.
That feeling wasn’t born from AI, but AI certainly intensifies it.
AI has done so in ways that are sometimes genuinely encouraging: a rural teacher with three grade levels in one room using AI to build differentiated materials in the time it used to take to build one, or a struggling student getting patient, targeted practice at 9 p.m. when no one else is available. At other times, it's deeply concerning. AI accelerates you toward whichever end of the spectrum you're already heading for. In the hands of educators with training, shared language, and intentional leadership, the gains compound. Without those things, the risks compound just as fast. Same technology. Wildly different outcomes. The difference is almost never (okay, sometimes it is) the tool.
Here's the thing, though. The skills that determine those outcomes aren't new: critical thinking, ethical reasoning, the ability to evaluate a source, interrogate an output, and decide what deserves your trust. We've been building and reinforcing them for a long time (hello, library and media specialists) in media literacy. In digital citizenship. In every framework educators have developed over the past three decades to help students navigate a world where content streams faster than wisdom. AI literacy isn't a departure from that work.
What we need now is a floor. We don't know where AI's ceiling will be. Our students will be doing things with AI we can't even imagine right now, and that's not a threat. That's the whole point. The pace of change isn't the whole problem. The problem is trying to educate for a ceiling nobody can see yet. When we don't know where we're headed, we need shared understanding. A shared language. A growing fluency and skill set that travels with educators and students, no matter what else changes. That's what AI literacy actually is: not a checklist, not a policy document, not a single professional development session. A floor we all stand on together.
At the Frontier Learning Lab, this is where we start every conversation about AI in Montana schools. Before we talk about tools, before we talk about policy, before we talk about what’s allowed or not allowed, we talk about what AI is. What it isn’t. What it can and can’t do. What it should and shouldn’t do when we’re talking about teaching and learning. And critically: how we can use it well.
That foundation is for everyone. It’s for the superintendent crafting district policy, the high school English teacher redesigning an essay assignment, the fifth grader who just discovered AI can help her brainstorm, and the parent who’s not sure what any of this means for their kid. Different roles, different needs, but the same floor underneath all of it.
Why Call It Literacy?
We've been here before. Every time a new kind of content entered everyday life, whether print, broadcast media, or the internet, we eventually realized that access alone wasn't enough. You needed to be able to read it critically, produce it responsibly, and recognize when it was working on you. That's how media literacy was born. Information literacy. Digital literacy. Each one was us saying: this thing is too consequential to leave to instinct.
AI literacy follows the same lineage. But the word matters for another reason, too. Training teaches you a tool. Literacy builds a relationship with the whole category of things. We don't want educators and students who are trained on ChatGPT. We want people who can navigate AI as a class of technology, whatever form it takes next year or in five years. Training expires. Literacy lasts, and it maps onto whatever comes next.
Safe. Specific. Responsible.
When we move from “understanding AI” to “using AI in educational contexts,” Frontier Learning Lab (FLL) thinks about it through three lenses. Use should be safe. It should be specific. And it should be responsible.
These aren’t separate categories; they work together to evaluate any AI use in a learning environment. Does this keep students and their data protected? Is it tied to a real learning purpose, or is it AI for AI’s sake? Are we being transparent with students, with families, with ourselves about what we’re doing and why?
The answers look different depending on where you sit in the system. Two of the most important perspectives to consider are those of the teacher and the student. First, we'll focus on teachers.
AI as an Augmenter, Not a Replacement
Here’s what we keep getting wrong about AI and teachers: we frame it as a shortcut. A time-saver. A way to do more with less.
That framing undersells it, and it also undersells teachers. The better frame is augmentation. AI can amplify the work educators already know how to do, precisely because they know how to do it.
An experienced teacher who understands their students, their curriculum, and their learning objectives can use AI to go further, faster, and more creatively than any tool could on its own.
The professional expertise is the ingredient AI can’t supply. The teacher brings that.
This is what “specific” means from an educator’s standpoint. Not “I used AI to generate a worksheet.” Specific means: I have a learning objective, I understand what my students need, and I’m using AI as a deliberate instrument toward that goal. The AI doesn’t drive. The teacher drives.
Part of building that fluency is also learning to evaluate AI outputs, not just generate them. This is something we’ve been building into our work directly. The same critical lens we want students to develop, educators need first. Can you look at an AI-generated draft and recognize where it’s vague, or where it’s flattening or ignoring nuance? We’ve been calling this the “AI slop detector” (find a copy of this on our Resource Basecamp), a way of giving educators concrete criteria to evaluate what AI gives back, rather than accepting it at face value. Prompting well and evaluating critically are two sides of the same professional skill.
It's not just us...
This isn't just FLL's frame. The U.S. Department of Labor (DOL) released its own AI Literacy Framework in February 2026, and the through-line is the same one we keep coming back to: human judgment is the non-negotiable ingredient. The DOL frames AI explicitly as an amplifier of human input, naming "building complementary human skills" as a core delivery principle. That's not workforce jargon. That's an acknowledgment that the skills we've always cared about in education (critical thinking, ethical reasoning, and contextual judgment) aren't in competition with AI. They're what make AI worth using.
The framework’s five foundational content areas are: understanding AI principles, exploring AI uses, directing AI effectively, evaluating AI outputs, and using AI responsibly. These will feel familiar if you’ve been in any FLL workshop. Not because we copied them, but because good AI literacy tends to converge on the same things when it’s built from the ground up rather than from the top down.
Our students will enter a professional world where these skills are assumed. Building them in school isn't just preparation for work. It's preparation for a life where thinking clearly still matters, maybe more than ever.
In Part Two, we'll look at what AI literacy actually means for students — not a rulebook, but a real understanding of what AI can offer them and where human thinking still has to lead.
If you’re an educator, school leader, or just someone trying to figure out what AI means for the young people in your community, you’re already doing this work. Come do it with us.
Explore resources, workshops, and tools at the Frontier Learning Lab Resource Basecamp. The frontier keeps expanding. It's better when we're navigating it together. To talk to real people about AI, contact ai.help@mtda.org.




