The Real Reason AI Won't Replace McKinsey (Yet)


Every consultant will tell you the same story. McKinsey and BCG don’t win because they have better frameworks or smarter PowerPoints. They win because their consultants extract the real story during coffee breaks.

And that’s exactly why AI hasn’t disrupted consulting yet. But what if I told you that could change in the most dystopian way possible?

I’ve attended a lot of workshops, and the pattern is always the same: the real information isn’t in the computers.

Sure, you can read the financials, study the dashboards, and get a sense of how a company is performing. But finding out what’s actually happening requires listening to people outside the conference room.

The biggest breakthroughs I’ve had with clients have always happened during informal moments. “Let’s grab a quick coffee” becomes John from Operations explaining why the new inventory system is actually a disaster, despite what the metrics say. It becomes Sarah from Sales admitting that the biggest client is shopping around, even though they just renewed their contract.

AI can’t grab coffee with you. AI can’t make John feel comfortable enough to admit his team is burned out. AI can’t pick up on Sarah’s hesitation when she mentions the Q4 projections.

At least...not yet.

When I dictate my thoughts to AI instead of typing them, something interesting happens. I include more context, share more nuance, ramble through the connections that matter. The friction disappears, and suddenly the AI understands not just what I’m asking, but why I’m asking it.

This got me thinking. AI works best when it has context, not just “data”. And that’s where things get interesting — and terrifying.

[Image: a man wearing the Friend AI hardware neckpiece. Source: https://techcrunch.com/2024/07/30/friend-is-an-ai-companion-backed-by-founders-of-solana-perplexity-and-zfellows/]

There’s a new category of products emerging called “AI wearables”: devices like Friend.com and the Rewind Pendant that listen to everything you say, creating a perfect memory you can query later. “What did Mark say about the product roadmap last Tuesday?”

Instant answer.

And if this sounds like wiretapping, that’s because it is.

Now imagine your company requires everyone to wear an AI pin. Or simpler: imagine every Zoom call, every Slack huddle, every hallway conversation is automatically transcribed and fed into a corporate AI brain.

Suddenly, you have an omniscient system that merges all conversations with all data. A CEO could ask:

“Why did we lose the Johnson account?”

The AI responds: “According to the CRM, it was price. But in three separate conversations last week, your sales team mentioned that Johnson’s CEO was insulted when we sent a junior associate to their annual review. Tom specifically told Linda, ‘We lost it the moment they saw who we sent.’”

“What’s our real runway?”

“Officially, 18 months. However, yesterday your CFO told the head of HR to ‘prepare for aggressive cost-cutting by Q2.’ Your head of Product also mentioned to his team that the enterprise deal you’re counting on is, quote, ‘probably not happening.’”

“Is the new product launch actually on track?”

“The Jira board shows green across the board. However, I’ve detected the word ‘worried’ in 47 conversations about the launch this week. Your lead engineer told three different people that ‘we’re going to ship something, but it won’t be pretty.’”

This is a CEO’s dream. No more information silos. No more finding out about problems when it’s too late. No more “nobody told me.” Every insight, every concern, every brilliant idea that emerges in casual conversation becomes part of the corporate intelligence.
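To make this concrete, here’s a toy sketch of what querying such a “corporate brain” might look like. Everything in it is hypothetical: the Utterance records, the CorporateBrain class, and the naive keyword search are stand-ins for what a real system would do with transcription pipelines, embeddings, and a language model on top.

```python
# Hypothetical sketch, not a real product or API: a searchable store of
# workplace conversations that a CEO-level query could run against.
from dataclasses import dataclass
from datetime import date


@dataclass
class Utterance:
    speaker: str
    channel: str   # "zoom", "slack", "hallway-pin", ...
    day: date
    text: str


class CorporateBrain:
    """Toy 'omniscient' memory: every recorded remark, keyword-searchable."""

    def __init__(self, utterances: list[Utterance]):
        self.utterances = utterances

    def search(self, keywords: list[str]) -> list[Utterance]:
        # Naive substring matching; a real system would use embeddings + an LLM.
        kws = [k.lower() for k in keywords]
        return [u for u in self.utterances
                if any(k in u.text.lower() for k in kws)]


brain = CorporateBrain([
    Utterance("Tom", "hallway-pin", date(2024, 10, 1),
              "We lost Johnson the moment they saw who we sent."),
    Utterance("CFO", "zoom", date(2024, 10, 2),
              "Prepare for aggressive cost-cutting by Q2."),
])

# "Why did we lose the Johnson account?"
for u in brain.search(["Johnson"]):
    print(f"{u.speaker} ({u.channel}, {u.day}): {u.text}")
```

The uncomfortable part isn’t the code, which is trivial; it’s that once every hallway remark becomes a record, questions like the ones above stop being detective work and become lookups.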

This technology sounds dystopian now—and honestly, it is. But here’s the thing about technology: once a path becomes possible, someone will take it.

Startups, desperate for any edge against established companies, will adopt this first. And they’ll move frighteningly fast because of it. While enterprise companies spend six months debating a strategy, a startup with an omniscient AI brain will have already tested, failed, learned, and pivoted twice.

Imagine a 50-person startup with the institutional knowledge of a 5,000-person corporation. Every employee’s insights instantly accessible. Every customer conversation informing product decisions in real-time. Every competitive intelligence snippet automatically catalogued and analyzed.

The companies that embrace this will have such an information advantage that others will be forced to follow.

If it works, it will be inevitable.

But here’s the paradox that might break the whole system: the moment people know they’re being recorded, the real information disappears.

Remember John from Operations who revealed the inventory disaster over coffee? Once he knows that pin is recording, that conversation never happens. Instead, you get: “The new system has some optimization opportunities we’re actively addressing.”

The very act of perfect surveillance destroys the informal information channels that make it valuable. People will find ways to communicate outside official channels — meeting in parking lots, using personal phones, developing elaborate codes.

The company becomes split between the recorded fiction and the unrecorded reality, just like “adjusted” EBITDA.

Then there are the cybersecurity nightmares. Imagine a competitor gaining access to your omniscient AI brain. Or worse: imagine someone poisoning it with false information, making it suggest strategies designed to destroy your company from within. Every conversation becomes a potential attack vector.

And what about the human cost? The stress of knowing every word is recorded, analyzed, and might surface in a CEO query three months from now. The death of psychological safety. The end of thinking out loud, of admitting uncertainty, of being human at work.

We’re standing at a crossroads. Down one path: AI that enhances human connection, helps us make better decisions, and respects the messy, informal ways that real communication happens.

Down the other: omniscient corporate surveillance that promises perfect information but delivers perfect paralysis.

The technology is here. The only question is whether we’ll use it to become more human or less.

McKinsey’s moat isn’t their frameworks or their slides. It’s the trust that allows someone to say, “Can I tell you what’s really going on here?” off the record. The moment we try to put that on the record—to feed it all into an AI—we don’t enhance that trust.

We destroy it.

And that’s the real reason why AI probably won’t replace McKinsey. Not because AI lacks intelligence, but because the very act of trying to capture everything ensures we capture nothing that matters.

What do you think? Would you work for a company with an omniscient AI brain? Or would you run as fast as you could in the opposite direction?
