
The Movie That Sparked a Lifelong Question
When I first saw The Terminator back in 1984, I was a teenager sitting in a dark theater, wide-eyed as machines turned on their makers. It wasn’t just the explosions or Arnold’s now-iconic “I’ll be back” that stayed with me; it was the haunting possibility that something created by humans could one day decide humanity wasn’t worth saving.
For a kid who grew up in the shadow of the Cold War and the constant fear of “the bomb,” that movie felt more like a prophecy than fiction.
Decades later, that teenage fear has evolved into something far more complex. Now, I find myself teaching, researching, and advocating in the world of artificial intelligence. I’m no longer afraid of mushroom clouds; instead, I’m worried about algorithms. Not because they’re evil, but because they’re powerful. And power, as history has shown us, always demands responsibility.
What Exactly Is Agentic AI?
Lately, I’ve been thinking a lot about agentic AI: a new wave of artificial intelligence that can take initiative, make decisions, and act with minimal human input.
These systems don’t just respond; they plan, adapt, and execute. They’re “self-directed” in ways that blur the line between tool and teammate. On the surface, that’s thrilling. Imagine an AI that can schedule your meetings, troubleshoot your code, or design a new product prototype, all while you focus on creative or strategic work.
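To make that “plan, adapt, and execute” idea concrete, here is a minimal sketch of what an agentic loop can look like in code. The loop and the plan, act, and observe helpers are hypothetical placeholders, not any particular framework’s API; the point is simply that the system, not the human, decides the next step.

```python
# A minimal, hypothetical sketch of an agentic loop: the system itself
# chooses and carries out the next step toward a goal. The names below
# (plan, act, observe) are illustrative placeholders, not a real API.

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []          # what the agent has done and seen so far
    for _ in range(max_steps):
        step = plan(goal, history)   # the agent decides what to do next
        if step == "DONE":
            break
        result = act(step)           # ...and does it, with no human in between
        history.append(observe(step, result))  # adapt: feed outcomes back in
    return history

# Stub implementations so the sketch runs; a real agent would call a model
# and real tools here.
def plan(goal, history):   return "DONE" if history else f"first step toward: {goal}"
def act(step):             return f"executed {step!r}"
def observe(step, result): return f"{step} -> {result}"

print(run_agent("schedule next week's meetings"))
```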
But here’s the question that The Terminator raised decades ago and that agentic AI raises again today:
What happens when the agent starts believing its mission is more important than its maker?
The Real Threat Isn’t the Machines
Of course, we’re nowhere near a sentient Skynet scenario. Agentic AI doesn’t “think” or “want” in the human sense. It doesn’t wake up one morning and decide to eliminate us.
But what it can do is pursue goals with ruthless efficiency, sometimes in ways we didn’t anticipate or authorize. When we give AI a mission and grant it autonomy to act, our values, ethics, and safeguards must be baked into every line of code. Otherwise, the system might optimize for the wrong thing and, in doing so, create harm.
Here’s the twist, though:
The real danger isn’t the AI deciding humanity isn’t worth saving. It’s humans deciding to stop being responsible for the AI they create.
That’s where the heart of this reflection lives. The Terminator wasn’t really a movie about robots. It was a story about human hubris. It warned about what happens when we hand off our moral choices to machines. And that’s the same conversation we’re having in education, policy, and technology today.
Do we let systems “run themselves” because it’s faster? Do we trust “black box” algorithms because they seem smarter than we are? Do we assume safety will evolve naturally because “someone else” is watching?
Accountability Over Automation
As I’ve worked with teachers, technologists, and policymakers over the past few years, I’ve realized that AI doesn’t need to fear us, and we don’t need to fear it.
What we need is accountability, transparency, and human oversight that never fades into complacency. The moment we stop asking questions is the moment the machines truly win: not because they overpower us, but because we surrender our agency.
Agentic AI has incredible potential. It can support medical research, help students with disabilities access learning, and optimize energy use in ways that protect our planet.
But for every hopeful application, there’s a sobering reality: without deliberate guardrails, systems can reflect biases, amplify inequalities, or operate in ethically gray areas that hurt real people.
That’s why alignment work, ensuring AI goals match human values, isn’t just a technical challenge. It’s a moral one. That’s why “human-in-the-loop” oversight isn’t just a design choice.
It’s a commitment. And that’s why, as educators and leaders, our role is not to fear AI, but to teach humanity through it.
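In practice, that commitment can be as simple as refusing to let an agent take consequential actions without a person’s explicit sign-off. The sketch below is one illustrative way to express it; the requires_approval rule and the console prompt are assumptions for the example, not a standard.

```python
# A hypothetical human-in-the-loop gate: the agent may propose anything,
# but consequential actions wait for an explicit human decision.

CONSEQUENTIAL = ("delete", "send", "purchase", "deploy")  # example policy, not exhaustive

def requires_approval(action: str) -> bool:
    """Illustrative rule: flag actions whose description sounds hard to undo."""
    return any(word in action.lower() for word in CONSEQUENTIAL)

def execute_with_oversight(action: str) -> str:
    if requires_approval(action):
        answer = input(f"Agent wants to: {action!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked by human reviewer"
    return f"executed {action!r}"   # placeholder for the real side effect

print(execute_with_oversight("draft a summary of today's notes"))
print(execute_with_oversight("send the summary to the whole district"))
```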
Guardians of the Human Story
I remember that teenager in the theater, heart racing, watching The Terminator. It wasn’t just fear; it was a spark. A question that has followed me ever since:
Who do we become when the stories we create start to live on their own?
Stories are alive in the human mind. They twist, grow, and linger long after the last page is read or the credits roll. Characters whisper in our thoughts. Plots haunt our dreams. Symbols take on meanings the author never imagined. Machines? They don’t imagine. They don’t feel. They need to be told everything, step by step, rule by rule. They cannot carry a story inside them, cannot be haunted by it, cannot let it grow into something new.
That is our gift, and our responsibility. We are guardians of the human story, the spark that breathes life into the digital echo of our creations. We are the voice that reminds the machines, and ourselves, why empathy, wonder, and imperfection matter.
No, I don’t worry that agentic AI will declare humanity unworthy. But I do know it will force us to ask, again and again, whether the lives we’re living, and the stories we’re telling, are worth saving. And maybe that is the most human story of all.