The Workplace Blind Spot


TL;DR

The silence after I published a post about automating my own role told me something. Most people assume their job's complexity will protect them from AI — I held the same belief about my own work until I was proven wrong. The models don't need to get better; better harnessing of what exists today is enough to automate most of what companies do. What's coming isn't gradual.

A few weeks ago, I published a post about automating my own role. I timed it deliberately — sharing it with colleagues before a casual lunch with management, hoping to spark some real conversation about what's happening with AI and what it means for our work.

The response was almost nothing.

A couple of colleagues joked nervously that it made them uncomfortable. Management said nothing. It's been two weeks, and still no substantial feedback. Everyone is busy, life is full, there's no shortage of terrible things demanding our attention. Maybe they haven't read it.

But the silence got me thinking. And what I landed on is something I've started calling the workplace blind spot.


About eight or nine months ago, when I started using AI coding agents seriously, I had a very specific belief: my work is too complex for this. The context was too rich, the problems too nuanced. An AI couldn't effectively grasp what I was dealing with.

I was wrong.

As I got deeper into using Claude Code, I realized — faster than I expected — that the point at which an AI agent could take over most of my software development work had already arrived. It hit me suddenly, and it knocked me sideways for a few days. I had an existential crisis.

Then I got to work. I figured out how to use it as a partner rather than a replacement, and what I found was that it gave me superpowers — not just in software development, but as a general agent for all kinds of tasks, professional and personal. The leverage was genuinely surprising.

Then I wrote the post about automating my PM role. And I had my second existential crisis, because this time the scope was larger. I wasn't watching AI encroach on one function. I was watching a pattern.


Here's the thing that's hard to say without sounding like a doomer: the models don't even need to get better for this to be transformative.

What exists today, better harnessed, is enough to fully automate enormous swaths of what most companies do. The only constraints I run into are time and compute cost — not capability. The path from where we are now to full automation of a business function is not a marathon. It's a hundred-meter sprint.

The Anthropic labor market impact research published this week puts data behind this intuition. The graph below depicts the current state of AI's role in real work. We're still near the beginning of the curve.

[Figure: Anthropic labor market impact graph]

The disruption won't arrive gradually. The heat will be turned up suddenly. And most people won't be ready.


Why? Two reasons.

The first is the false comfort of complexity — the belief that because your role is important or requires significant contextual knowledge, AI can't touch it. I held this belief about my own work. I was wrong. Complexity is not a shield; it's a delay, and it's shorter than you think.

The second is the perception gap. Most people who "use AI" are still using ChatGPT as a smarter Google. That's real value, but it's not where the frontier is. The frontier is agent workflows — AI that doesn't just answer questions but executes tasks, manages processes, and coordinates systems. The gap between what most people think AI can do and what it actually can do is enormous, and it's widening faster than they realize.

This isn't prediction. I'm not a fortune teller. I'm describing the trajectory I'm watching happen, in real time, in my own work.


Counter Arguments

AI Will Create New Jobs

This is a counterargument I used to reach for when I first started facing this reality. Every wave of automation — industrialization, mechanized farming, computing, the internet — was predicted to cause mass unemployment. It never did. New industries emerged, new jobs were created, and standards of living rose. The pattern is consistent enough to be a near-law: technology destroys job categories, not jobs in aggregate. Economists call the fear a "lump of labour fallacy." History sides with the optimists.

But here's why I think it's different this time.

Speed. Past transitions played out over decades or generations — long enough for education systems, labor markets, and social institutions to adapt. This one is measured in years. The question isn't whether new jobs will eventually appear. It's whether the transition period is survivable for the people caught in it.

Breadth. Previous automation replaced specific physical or routine cognitive tasks — looms, assembly lines, spreadsheets. It left the broad class of knowledge work untouched. AI is hitting all of it simultaneously: writing, analysis, coding, legal, medical, management. There's no safe harbor to retreat to while you retrain.

The new jobs assumption is circular this time. Historical optimism rests on one premise: humans will always have comparative advantage somewhere. Previous automation created new industries that still required humans to run them. The factory replaced hand-weavers — but you still needed humans to run factories. AI doesn't have that constraint. It can do the new jobs too.

Complexity Is a Real Barrier

AI still struggles with genuinely complex tasks — especially those that depend on institutional knowledge that hasn't been digitized, or on the internal politics that shape decisions in an organization. This is a fair point.

But it's the same argument I made about my own software development work, and I was wrong then too. As a developer, I learned early to break complex problems into smaller, more focused, more manageable tasks. The same approach applies here. The complexity that feels insurmountable at the macro level dissolves when you tackle it step by step, in tighter scope. AI excels at that. The real constraint isn't complexity — it's whether the context can be digitized. And most institutional knowledge, given time and the right tooling, can be.


To be clear: I don't think everyone will lose their jobs overnight. Even in the EU, the AI Act requires a human in the loop for consequential decisions. But the number of people needed to run a company will shrink — significantly, dramatically — over a shorter timeline than anyone is currently planning for.

I'm also not writing this to scare anyone. I went through the fear already. What I found on the other side is that you can prepare, adapt, and find a way to work with this instead of against it. But you can't adapt to something you're not looking at.

The silence at that lunch suggested people are assuming someone else is thinking about this. Or that it's further away than it is. Or that the complexity of their work will protect them.

It won't. It didn't protect mine.

That's the blind spot.


This post is part of The Closing Window series. See also: I Am No Longer Needed and AI's Social Trap.