The AI Panic: What I'm Actually Worried About

5 min read Original article ↗

Every CTO is being asked the same question:
"How is AI going to change everything?"

After 18 years of watching tech hype cycles, here’s my honest answer: I’m not worried about robots taking over. I’m worried about CTOs making expensive decisions based on fear and FOMO.

Before diving into real concerns, let me clarify what doesn’t keep me up at night:

  • "AI will replace all developers":
    I've heard this about every tool from IDEs to frameworks. Good developers adapt and become more productive.

  • "Everything will be automated":
    Production systems still need humans who understand the business context.

  • "Traditional programming is dead":
    I’ve been hearing “code is dead” for two decades. Still writing code.

  • "You need an AI strategy now or you'll be left behind":
    Most companies need basic operational competency before they need an AI strategy.

This one scares me the most from a leadership perspective: AI can make people—and teams—lazier when it comes to thinking through problems, and worse, validating AI-generated answers without scrutiny.

It's frightening how quickly this pattern is becoming normal. Not long ago, I worked at a company where the CEO encouraged the team to “stop thinking and just ask AI.” In my view, that’s the worst version of AI adoption—one where people shut down their own thinking and blindly follow the “AI genius.”

What I’m Seeing:

  • Requirements gathering shortcuts:
    “The AI will figure out what users want.”

  • Architecture decisions avoided:
    “Let’s see what the AI recommends.”

  • Problem-solving atrophy:
    “Why think through this when AI can do it faster?”

  • Validation failures:
    Teams implement AI suggestions without testing edge cases or understanding failure modes.

  • Legal and compliance misuse:
    I’ve seen teams use AI to answer critical legal questions—and take those answers at face value.

This shows up in two dangerous ways:

a) Relying on AI the team doesn’t understand
Teams depend on AI systems they can’t debug, troubleshoot, or modify. I’ve consulted with companies where critical business logic was buried in AI models no one could explain—let alone fix.

b) Using AI to build things the team can’t maintain
This one is subtle but more dangerous. AI tools can generate code faster than teams can absorb it, leading to invisible technical debt. I’ve seen teams ship AI-generated microservices that work—until they don’t. Then they realize no one understands the generated architecture well enough to modify it safely.

AI models are surprisingly easy to attack via adversarial examples, model inversion, and data poisoning. As a CTO, I worry we’ll flip the switch on some AI feature without understanding the threat surface—especially when those models touch sensitive customer or operational data.

Specific Concerns:

  • Adversarial inputs:
    Small, intentional changes to input that cause the model to misbehave.

  • Model inversion attacks:
    Bad actors reconstruct training data from model outputs.

  • Data poisoning:
    Contaminating training data to subtly manipulate future outputs.

  • Prompt injection:
    Manipulating AI through carefully crafted inputs.

MCP (Model Context Protocol) servers are a good example. They’re powerful tools for extending AI, but currently lack basic security frameworks. We’re connecting AI to internal databases and APIs without the same rigor we’d apply to traditional integrations.

The scary part isn’t the sophisticated attacks—it’s the basic security hygiene that gets skipped because “it’s just AI.” Teams treat AI as magic rather than as software that needs proper defenses.

The three things that keep me up—critical thinking decay, “magic box” dependencies, and security blind spots—all stem from one root problem:

Teams stop applying first principles thinking when AI gets involved.

Technology should solve real problems—not create impressive demos.
This counters the critical thinking decay. I’ve sat through meetings where teams reverse-engineer use cases to justify being “AI-first.” If you strip away the AI language and can’t clearly explain the problem or the business value, you’re building a solution in search of a problem.

All systems fail. Failure modes define system design.
AI systems don’t just fail—they fail opaquely. If your team can’t explain how an AI model works or what happens when it breaks, you’ve created an unmaintainable dependency. If your product goes down when the AI does, you’ve introduced a single point of failure you can’t even debug.

All systems can be attacked. Security must be built-in, not bolted-on.
Most teams treat AI security as an afterthought—if they think about it at all. But adversarial prompts, data leaks, and injection attacks are not edge cases. They’re the expected consequences of shipping software that processes untrusted inputs without proper safeguards.

Working AI doesn’t excuse you from engineering discipline—it demands more of it. The teams that succeed will be the ones who keep asking the hard questions:

  • Are we solving the right problem?

  • Can we build and maintain this system long-term?

  • Have we secured it properly?

These aren’t the flashy conversations that end up on tech blogs. But they’re what keep your AI projects from turning into expensive messes.

Does worrying about AI mean I’m avoiding it? Quite the opposite.

I actively experiment with AI as part of my workflow, automation, and tooling. There’s real opportunity here—as long as we treat AI with the same respect and discipline we apply to any engineering tool.

So far, I’ve found valuable use cases in:

  • Quick visual prototyping

  • Research and data collection

  • Reporting and summarization

  • Code reviews and PR documentation

The biggest risk with AI isn’t the tech—it’s decision-making under pressure and uncertainty.

CTOs feel like they need an “AI strategy” because everyone else is talking about it. But in reality, the best AI strategy might be:

Solve real problems really well.
Understand AI’s true capabilities.
Make deliberate, grounded choices based on business value—not hype.

The companies that will win with AI aren’t chasing impressive demos. They’re using AI to amplify their strengths. They understand their business, their systems, and their people—and use AI to make them better.

My prediction?
In three years, the most successful AI implementations will be boring. They’ll solve specific, measurable problems.
Not the flashiest demos.
Not the "AI-first" experiments.
Just focused, grounded, disciplined engineering.

Thanks for reading The Pragmatic CTO! This post is public so feel free to share it.

Share

Discussion about this post

Ready for more?