The Limits of Agency (Human or Otherwise):
Free Will Is Overrated
One of the oldest questions in philosophy asks: what does it mean to act freely? From Aristotle* to Hannah Arendt**, thinkers have wrestled with the nature of agency, not merely as the capacity to act, but as the capacity to act within limits: to choose responsibly, to navigate constraints, to understand what is permitted and what is forbidden.
And yet, in the breathless excitement surrounding AI agents, this question is rarely asked. The conversation tends to orbit grand abstractions: general intelligence, emergent consciousness, the spectre of artificial minds that may one day outthink us. These debates are captivating, but they drift far from the actual point of friction emerging in the real world of AI today.
The problem isn’t whether agents are conscious. It’s whether they are safe.
Already, we’re surrounded by agents: scheduling assistants, automated customer service reps, document summarizers, email drafters. At first, everyone’s entertained. They hallucinate, break in amusing ways, get memed on. No one minds. But most technologies move upstream. What starts as a toy soon finds its way into systems people actually care about: filing tickets, updating dashboards, making API calls, even moving money. These agents are designed to act on our behalf, and yet, they are strikingly naïve about what they’re allowed to do.
Most have no coherent concept of authorization. They will try anything, then wait to be swatted down by an error message or API rejection. But sometimes, there are no error messages when the AI agent accidentally emails payroll records to a customer. Just a PR disaster.
This isn’t simply a technical oversight. It’s a philosophical failure, one rooted in a misunderstanding of what agency actually entails.
Human agency has always existed within structured limits: laws, customs, relationships, responsibilities. Even our most intimate decisions are shaped by invisible frameworks of permission and prohibition: not just formal legal ones, but tacit social ones as well. Without such structures, action collapses into chaos.
As someone from Turkey, I see this all the time. In the US, it’s common for people to walk into homes with their shoes on — something that would cause genuine shock in Turkey, where removing your shoes at the door is an unspoken rule. The action itself isn’t harmful, but without understanding the social norms, it instantly violates trust.
AI agents face the same risk. They’re not magically exempt from the need for authorization. Without it, they’ll cross boundaries they don’t even know exist. And unlike humans, they can’t learn those limits through experience, cultural cues, shared context, or social feedback. Authorization isn’t just a pattern to be inferred from data; it’s a legal, institutional, and social reality. And it’s brittle. It breaks when ignored.
Nowhere is this clearer than in the case of Retrieval-Augmented Generation (RAG), the much-celebrated method for grounding language models in external data. RAG was supposed to be the “safe” option: don’t let the model guess, just retrieve documents from a trusted datastore and summarize them. A tidy solution.
Except it wasn’t. Most RAG systems today sidestep authorization entirely. They retrieve documents indiscriminately, or worse, apply access controls only after the fact, once the model has already seen the data. By the time the output is filtered, the damage is done. The agent pulls confidential legal docs into a marketing summary, or drafts an investor memo based on restricted financials.
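To make that failure mode concrete, here is a minimal sketch in Python. The Document type, the in-memory store, and the role-based ACL are hypothetical stand-ins, not the API of any real RAG framework; the point is only where the check lives.

```python
# Minimal sketch of authorization-aware retrieval for a RAG pipeline.
# Document, the in-memory store, and the role check are hypothetical
# stand-ins, not the API of any real framework.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str]  # the ACL travels with the document itself

def retrieve_unsafe(query: str, store: list[Document]) -> list[Document]:
    # The common anti-pattern: fetch everything that matches, let the
    # model read it, and hope an output filter catches leaks afterwards.
    return [d for d in store if query.lower() in d.text.lower()]

def retrieve_authorized(query: str, store: list[Document], role: str) -> list[Document]:
    # Enforce permissions *before* generation: a document the caller may
    # not read never enters the prompt in the first place.
    return [d for d in store
            if role in d.allowed_roles and query.lower() in d.text.lower()]

store = [
    Document("fin-memo", "Q3 revenue and restricted financials...", {"finance"}),
    Document("launch-recap", "Q3 product launch recap...", {"finance", "marketing"}),
]

# A marketing agent asking about Q3 only ever sees what marketing may read.
print([d.doc_id for d in retrieve_authorized("q3", store, role="marketing")])
# -> ['launch-recap']
```

In retrieve_authorized, a document the caller may not read never reaches the model, so there is nothing for an output filter to catch later.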
But authorization isn’t just about safety. It’s what enables agency in the first place.
An agent that knows its permissions can act with confidence. It doesn’t need constant supervision. If it knows it’s allowed to manage tickets in Project A but not in Project B, it can route requests on its own. It knows where it may tread and where it may not. In this way, authorization doesn’t constrain agency, it enables it. It makes the agent more fluid, autonomous, useful. More agentic.
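As a rough sketch of what that might look like (the permission table, project names, and actions here are invented for illustration), an agent can consult its grants before acting and decline anything outside them, instead of discovering its limits through rejections:

```python
# Hypothetical sketch: an agent that consults its grants before acting,
# rather than attempting everything and waiting to be swatted down.
PERMISSIONS: dict[str, set[str]] = {
    "project_a": {"create_ticket", "close_ticket"},
    "project_b": set(),  # explicitly no rights here
}

def handle_request(project: str, action: str) -> str:
    allowed = PERMISSIONS.get(project, set())
    if action not in allowed:
        # Knowing where it may not tread, the agent declines up front:
        # no API error, no silent boundary crossing.
        return f"refused: not authorized to {action} in {project}"
    return f"executed: {action} in {project}"

print(handle_request("project_a", "create_ticket"))  # executed
print(handle_request("project_b", "create_ticket"))  # refused
```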
Some argue that enforcing permissions will slow down AI development. And indeed, it will add friction. But friction is how we build durable systems. The early internet didn’t flourish because it allowed anything to happen. It flourished because we built protocols for identity, encryption, and access control. That’s what made trust at scale possible.
The same will be true of AI agents. The systems that endure won’t be the ones that can do everything. They’ll be those that can do something safely, reliably, and repeatedly.
If Aristotle were alive today (RIP), he’d probably see such agents not as helpers, but as tools misused — technē without phronesis. Acting without moral structure and judgment. He’d likely say the responsibility lies with the designers, not the agents, to embed the necessary guardrails.
Or, in modern terms:
“You wouldn’t hand a sword to someone who hasn’t learned moderation. Why would you hand an agent your data without teaching it restraint?”
In the end, the lesson is neither technical nor novel. It is as old as philosophy itself: true agency isn’t the absence of constraint. It is the mastery of it.
But what would I know? It’s all Greek to me.
* I’ve always liked Aristotle’s view of agency. He doesn’t see freedom as the absence of limits. To him, meaningful action depends on knowing your place, working within the constraints of reason, social roles, and personal virtue. You learn to act well by practicing, by developing good judgment over time. It’s not about breaking boundaries; it’s about understanding when and how to act within them.
See Aristotle’s Ethics, Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/aristotle-ethics/
** Hannah Arendt argued that action isn’t about control or technical skill. It’s unpredictable, relational, and always constrained by others and institutions. Freedom, for her, isn’t unlimited choice, but participation in a shared world with boundaries.
See Managing Freely Acting People: Hannah Arendt’s Theory of Action, SciSpace:
https://scispace.com/pdf/managing-freely-acting-people-hannah-arendt-s-theory-of-sw90ibikym.pdf