
After presenting to more than 430 educators this week on responsible AI use in the classroom, I’ve been spending time reflecting on the questions and conversations that followed.
What struck me most wasn’t disagreement about rules or policies. It was a shared recognition that this is the moment when these conversations need to happen. Several educators noted how surprised they were by the number of people who remain unwilling to engage with AI at all, even as it increasingly shapes the world our students are navigating.
Two questions, in particular, stayed with me.
The first focused on the environmental impact of AI. This is a necessary and uncomfortable question. AI systems rely on significant resources: energy, water, and infrastructure. Schools are right to pause and consider those costs; AI simply makes those impacts harder to ignore. I will analyze this important topic in a future blog post.
Responsible use, then, isn’t about using AI more. It’s about being intentional. It’s about teachers deciding when AI is appropriate, when it isn’t, and why. Sometimes the most responsible choice is not using a tool at all, and helping students understand that restraint is part of ethical decision-making.
The second question was more fundamental: Does AI belong in school at all?
My quick answer is yes, of course. We need to develop future-ready students for any work environment. The question also matters because school isn't about chasing trends or replicating the workforce; it is about preparing students to think critically, evaluate information, and operate responsibly in the world they are entering. Avoiding AI entirely doesn't make it irrelevant; it simply removes guided learning and replaces it with silence.
In the framework we shared, teachers determine whether AI can be used, at what level, and for what purpose. Students are not deciding how AI fits into their learning; they are being taught to follow clearly defined expectations. Their responsibility is to acknowledge how AI was allowed to be used for an assignment and to be transparent about how they followed those guidelines.
That clarity matters. It protects learning. It protects trust. And it reinforces that AI is a tool within instruction and not a replacement for thinking, effort, or academic integrity.
What also surfaced in these conversations was something deeper: fear of the unknown. Change is hard, especially when it feels fast, unclear, or imposed. Resistance doesn’t usually come from apathy. Instead, it comes from uncertainty, lack of support, and a sense of not knowing where to begin.
That’s why ongoing, thoughtful professional development is so critical. One session isn’t enough. One document isn’t enough. Educators need time, space, and repeated opportunities to learn, ask questions, and recalibrate their practice as both the technology and our understanding of it evolve.
This is the right time for these conversations, because choosing not to engage is itself a decision with consequences for students.
Responsible AI use isn’t about perfection. It’s about clarity, intentionality, and leadership that recognizes fear without letting it drive the work. And it’s about committing to learning together, even when the path forward feels uncertain. If you’re interested in continuing this conversation or supporting responsible AI conversations in your district, you can find time on my calendar here.