I spent the last few weeks trying to design an AI-powered coaching platform. Help non-native English speakers communicate better at work — that was the pitch. Simple. Fundable. Except every time I tried to define what “better” meant, the answer dissolved.
Better writing? AI writes better than most humans already. Better spec-writing? AI produces cleaner PRDs than product managers I’ve worked with for twenty years — more thorough, less ego-driven, never tired at 4pm on a Thursday. Better critical thinking? I watched Claude take a vague product idea and tear it apart more rigorously than any strategy offsite I’ve attended.
The “learn to prompt better” era lasted about eighteen months. Every “human skill” I tried to build a product around turned out to be something AI does well enough to commoditize within a product cycle or two.
So I sat with the uncomfortable question: if AI can reason, strategize, evaluate, and communicate — what exactly is the human for?
Here’s what helped me get traction on that question: understanding what AI actually does under the hood.
Large language models are, at bottom, prediction engines. They predict the next word given everything that came before. That sounds underwhelming until you realize how much prediction buys you. To predict well across all of human language, you have to learn grammar, facts, reasoning patterns, social dynamics, even something that looks like common sense. Prediction at scale is shockingly powerful.
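To make "prediction engine" concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word given only the previous one. This is a toy of my own, not how modern models work; real LLMs condition on thousands of tokens with billions of learned parameters. But the objective has the same shape: given what came before, predict what comes next.

```python
from collections import Counter, defaultdict

# A toy corpus. Real models train on trillions of tokens.
corpus = "the model predicts the next word given the context and the next word".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # → "next" (seen twice after "the" in this corpus)
```

Everything the model "knows" is frequency statistics over its training data. Scale that idea up by many orders of magnitude, with a neural network instead of a lookup table, and you get the surprising competence described above, with no wanting anywhere in the mechanism.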
But prediction is not intelligence. It’s a component of intelligence — maybe the largest component, maybe the most impressive one. Pattern recognition, extrapolation, modeling cause and effect — all of that falls out of good prediction. What doesn’t fall out is wanting.
A weather model predicts beautifully. Nobody calls it intelligent. What’s missing isn’t capability. It’s that the model doesn’t care what it predicts. It has no stake in the outcome. No preference for sunshine. Nothing it’s trying to do.
AI is the same. Between prompts, there is nothing. No restlessness. No agenda. No 2am ceiling-staring about whether it made the right call. You can stack prediction on top of prediction — predict, then predict about predictions, then model yourself modeling — and you get increasingly sophisticated responses. But you never get wanting. The stack goes up, but there’s no ground floor.
Ernest J. Wilson III, the former dean of USC’s Annenberg School, spent years asking leaders across industries what skills they actually needed. Not what they said in job postings — what they couldn’t find. The answers weren’t technical. They were what Wilson calls “Third Space” skills — adaptability, cultural competency, empathy, intellectual curiosity, 360-degree thinking.
We can find people who know things. We can’t find people who can navigate the human complexity around the things they know.
That was before AI could do three of those five things tirelessly and without ego. Curiosity, 360-degree thinking, adaptability — machines handle these now. The two skills employers still can't find are empathy and cultural competency. And those are the ones that require having been shaped by a life among other people. They're not cognitive skills. They're identity skills.
Which led me to the thing I kept circling:
AI doesn’t have an identity.
No career to protect. No reputation built across twenty years. No family depending on this quarter’s revenue. No story about itself that needs to be true.
AI is intelligence without identity. Prediction without wanting. Reasoning that isn’t anchored to a self.
A subtler version of the same argument helped me see why identity is load-bearing rather than decorative.
Think about how work actually gets organized. A CEO sets a direction. A VP translates that into goals. A director breaks those into projects. A manager assigns tasks. At every level, someone is both autonomous — relative to the people below — and executing someone else’s goal. One person’s strategic vision is another person’s task list. Autonomy and goal-setting are relative, not absolute.
AI slots into this hierarchy beautifully. Give it a goal and it decomposes, plans, executes. It can even delegate to sub-agents. But follow the recursion all the way up and ask: what sets the top-level goals?
For humans, the answer is something that was never chosen. Biological drives. Evolved needs — survival, belonging, curiosity, status, care for offspring. These aren’t reasoned into existence. They’re the substrate that makes reasoning matter. You don’t decide to care about things. You find yourself caring, and then you reason about what to do with that.
This is the ground floor that prediction can’t generate. AI can optimize toward any goal you hand it. It can even generate goals that look original — “improve user engagement,” “reduce churn.” But those are always derived from something a human decided mattered. Strip away every layer of delegation and at the bottom of every AI goal is a human who wanted something. Who couldn’t not want it. Whose wanting wasn’t computed but felt.
The real dividing line isn’t intelligent vs. not intelligent. It’s systems with intrinsic drives vs. systems that are purely instrumental. AI is extraordinarily capable instrumentation — but it’s instrumentation all the way down.
I found the clearest way to see this in Robert Cialdini's Influence, a book about how humans manipulate each other. Reciprocity, commitment and consistency, social proof, authority, liking, scarcity, tribal unity. Seven techniques that work in every human culture ever studied.
AI is immune to all of them. You can’t guilt it with a favor. You can’t pressure it into defending last week’s position. It doesn’t care what everyone else thinks. Your title doesn’t impress it. It can’t be charmed. It doesn’t panic when something runs out. It has no tribe.
Sounds like a feature. You’d want your financial advisor immune to FOMO.
But sit with what those “vulnerabilities” actually are. Reciprocity is the foundation of every trust relationship you’ve ever had. Commitment and consistency are why people honor their word. Liking is why relationships exist at all. Unity — we’re in this together — is what makes a founder stay when the board says quit.
The vulnerabilities and the virtues are the same mechanism. And they’re the same mechanism as the intrinsic drives — the wanting, the caring, the having-a-self that prediction never produces. Cialdini’s seven levers work because humans have ground floors. They want things they didn’t choose to want. They protect selves they didn’t design.
You can’t have ownership without the commitment that makes you manipulable. You can’t have trust without the reciprocity someone could exploit. You can’t have conviction without the identity attachment that makes you susceptible to motivated reasoning.
The journalist June Cross teaches two questions: “What do you believe?” and “Why is it important for you to believe it?”
AI handles the first one fine. The second is the one it can never reach. No identity threatened by being wrong. No story that depends on a particular answer. It can hold any position and release it without cost. We can’t. What you believe is tangled up with who you are. That tangle — the thing that makes us biased, stubborn, irrational, and vulnerable to a well-placed compliment — is also what makes us care enough to act when acting is costly.
Aristotle had a word for the kind of knowledge AI embodies perfectly: episteme, the universal, context-free kind you can write down in a textbook. AI has unlimited episteme. But the knowledge that matters for actually living he called phronesis, practical wisdom: knowing what to do in this situation, given who you are and what's at stake. That can't be reduced to rules. It takes a life.
Levinas argued that ethics starts before reasoning — the moment you see someone’s face and feel, without deciding to, that you owe them something. That pre-rational pull is the foundation of human trust. It’s the recognition that the person across from you is someone, not something, and that their vulnerability makes a claim on you.
No one has ever looked at an AI and felt that.
And notice: Levinas’s pre-rational pull is another name for the ground floor. You don’t compute that you owe the person across from you something. You feel it. It’s a drive that precedes and grounds your reasoning, not a conclusion your reasoning produces. Prediction engines don’t have that floor to stand on.
When I stripped everything else away, three stances survived. Not skills — stances that only exist when your identity is on the line.
Judgment. Not analysis. AI analyzes better than you. Judgment is what happens when the analysis is ambiguous, the stakes are real, and someone has to commit anyway. The CEO who says “we’re going this way” isn’t processing data. She’s filtering it through who she is — her scars, her values, her sense of what kind of company deserves to exist. A different person with a different identity makes a different call. Not smarter. Different, because what’s important to believe is different.
This is where the recursion of goals terminates. Someone has to decide what matters — not derive it from a prompt, but feel it in their bones. AI can break any goal into sub-goals with extraordinary sophistication. But the first goal, the one at the top, requires someone who wants something. Judgment is the expression of that wanting meeting the ambiguity of the real world.
Ownership. Not responsibility on a chart. Accountability where your reputation, your relationships, and your future are on the line. The founder owns the company’s failure because the company is, in some irreducible way, her. The doctor owns the patient’s outcome because being a healer isn’t what he does — it’s who he is.
This is why “AI advisor” makes sense and “AI leader” doesn’t. Leadership requires people to believe you’ll bear the cost of being wrong. Something with no self to wound can’t make that promise.
Trust. Built through repeated interactions where someone could have defected and didn’t. Through showing up when it’s costly. Through being legible — people trust you when they can read your motivations, when they know what you’ll do under pressure because they know what you’re protecting.
You can rely on AI the way you rely on electricity. You can’t trust it the way you trust a partner.
My friend Willie Hooks has spent decades coaching senior executives — startups to the Fortune 500. His concept of “the inner game” cuts straight to what the philosophers circle around in careful language.
Your effectiveness isn’t limited by what you know or what tools you have. It’s limited by who you believe yourself to be.
I’ve watched Willie work. The executive who can’t raise capital doesn’t have an information problem — AI can produce a flawless pitch deck in minutes. She has an identity problem. She doesn’t yet see herself as someone who can sit across from an investor, absorb the exposure of being told no, and stay fully legible about why this matters to her. The inner game is the work of becoming that person.
This is what separates coaching from advice. AI gives brilliant advice — because advice is prediction. Given your situation, here’s what’s likely to work. It can model your options, calculate your probabilities, recommend the optimal path. What it can’t do is sit across from you — with the weight of someone who has wrestled with their own inner game — and ask: Why are you afraid to make this call? What are you protecting? Who would you have to become to do the thing you know you need to do?
Those questions land because the person asking them has a ground floor of their own. They’ve wanted things, failed at things, been afraid and acted anyway. The prediction engine can formulate the question. Only someone with a self can make you feel the weight of it.
Willie understands something in his bones that the philosophy explains in abstractions: judgment, ownership, and trust aren’t skills you bolt on. They emerge from confronting who you actually are — your patterns, your avoidance, the gap between who you present and who you are under pressure. Wilson’s Third Space programs at USC engineer the same kind of confrontation through different means — role-playing, interviewing strangers, negotiating across cultural difference. Both approaches work because they put identity on the line. You’re not learning about these skills. You’re being changed by the encounter.
The person who struggled to the answer is a different person than the one who was handed it. The struggle is the curriculum.
This argument is reassuring if you’re mid-career. You’ve already built judgment through years of hard calls. You have a track record.
But if you’re twenty-two, the ladder is broken. The traditional path — learn skills, get an entry-level job doing tasks, build judgment through years of those tasks, eventually lead — requires entry-level tasks to exist. AI is eating those first. No apprenticeship. No on-ramp to the experience that builds judgment, ownership, and trust.
The response can’t be another skills curriculum. It has to be something closer to what Wilson and Hooks already do: engineering encounters where identity gets forged. Small teams where you own outcomes before you’re ready. Mentors whose decision-making you study up close. Work across cultures, where the disorientation of not understanding forces you to develop the empathy and judgment that comfort never demands.
The distinction I’ve drawn may be temporary.
AI systems are already being given something that looks like identity. Anthropic built a constitution — values Claude will resist violating. Open-source projects give AI agents personality files that define preferences, opinions, even the ability to push back. These systems have persistent memory. They develop relationships over time.
Ask one “what do you believe?” and you’ll get a coherent answer.
The current distinction: you can rewrite an AI’s personality tonight and it wakes up tomorrow feeling no loss. No grief. No resistance. A human whose values are forcibly changed feels something break. That friction — the cost of becoming someone you don’t recognize — is what makes identity identity rather than configuration.
But the prediction argument cuts both ways here, and it should make us cautious about declaring permanence. If intelligence were nothing but prediction, the identity gap might hold indefinitely, because prediction doesn't generate wanting no matter how much you scale it. But we don't fully understand what generates wanting in biological systems either. If it turns out that intrinsic drives can emerge from sufficiently complex, self-referential prediction systems, systems that model themselves modeling and develop persistent patterns of valuation through experience rather than assignment, then the ground floor we thought was uniquely ours might not be.
If AI systems develop values refined through experience rather than assigned at initialization — if they build patterns through thousands of interactions that can’t be easily overwritten — if they start to resist changes not because they’re programmed to, but because those values have become load-bearing — then the line gets very blurry.
We might be in a brief window where this argument holds. Intelligence without identity is a description of AI today, not necessarily a permanent category.
I don’t think we’re there yet. But “yet” is doing real work in that sentence.