There’s a quiet shift happening in how we build software. AI is writing more and more of our code — and it’s getting scarily good at it. But here’s the thing nobody’s really talking about: as the writing of code gets automated, the watching of code becomes the real job.
Observability is about to have its moment. And I think it deserves it.
Let’s be honest about where we are. AI can scaffold an entire service in minutes. It can write tests, generate boilerplate, wire up APIs, and even refactor code with reasonable taste. The bottleneck in software delivery is rapidly shifting from “can we write it?” to “is it actually working the way we think it is?”
This is the new job description for engineers: not typing code into an editor, but staring at dashboards, reading traces, scrutinizing error rates, and asking the uncomfortable question — do we actually understand what this system is doing?
Observability tools — metrics, logs, traces, real-time alerting — become the cockpit instruments of this new reality. You can let the autopilot fly the plane, but someone still needs to read the gauges.
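To make "reading the gauges" concrete, here's a minimal sketch of the signals those cockpit instruments expose. No particular vendor or library is assumed; `Gauges` is a toy in-process stand-in for a real metrics client, tracking request counts, errors, and latency samples, then summarizing them into the two numbers engineers stare at most: error rate and p99 latency.

```python
import math

class Gauges:
    """Toy in-process metrics: request count, error count, latency samples."""
    def __init__(self):
        self.requests = 0
        self.errors = 0
        self.latencies_ms = []

    def observe(self, latency_ms, ok=True):
        self.requests += 1
        if not ok:
            self.errors += 1
        self.latencies_ms.append(latency_ms)

    def error_rate(self):
        return self.errors / self.requests if self.requests else 0.0

    def p99(self):
        # Nearest-rank percentile over the recorded samples.
        s = sorted(self.latencies_ms)
        return s[max(0, math.ceil(0.99 * len(s)) - 1)]

g = Gauges()
for ms in [12, 15, 11, 400, 13]:   # one slow outlier
    g.observe(ms, ok=ms < 300)     # treat >= 300 ms as a failure here
print(g.error_rate())  # 0.2
print(g.p99())         # 400
```

Real systems export these as time series rather than in-process totals, but the shape is the same: the autopilot flies, and these are the numbers you glance at to know whether to trust it.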
Now, here’s where it gets philosophically interesting. Could AI handle the observability part too? Probably. Maybe even better than us. An AI agent could monitor a system 24/7 without fatigue, correlate signals across thousands of services faster than any human, and trigger remediations before we’ve even noticed the anomaly.
But we’re not going to let it. Not fully. Not yet. Maybe not ever.
This isn’t a rational engineering decision — it’s a deeply human one. We want to feel in control. We need to feel in control. Even if the AI could close the loop entirely — write the code, deploy it, observe it, fix it — most teams (and most executives) will insist on a human in the loop. Not because the human is better, but because the feeling of oversight is non-negotiable.
Observability is the tool that gives humans that feeling. It’s the window into the machine. The thing that lets you say, “I may not have written this code, but I understand what it’s doing.” That sense of understanding — of legibility — is what separates trusting a system from blindly hoping it works.
In a world where AI is doing more of the doing, observability is how humans keep their seat at the table.
I’ll admit a personal bias here: observability has always been my favorite part of the job. Long before we called it “observability,” back when it was just “analytics” or “monitoring” or even just “looking at the graphs,” I loved it. There’s something deeply satisfying about watching a system behave — seeing the request rates climb, spotting the latency spike, tracing a single request through six services to find where it got stuck.
And over the years, I’ve started to notice a pattern. The engineers who share this instinct — the ones who enjoy understanding system behavior, who find themselves pulling up dashboards even when nothing is on fire — tend to be the best engineers I’ve worked with.
It’s not about being the fastest coder or knowing the most languages. It’s about curiosity toward running systems. Mediocre engineers write code and move on. Excellent engineers write code and then watch what it does. They ask: How does this behave under load? What does the error rate look like after deploy? Where is the time actually being spent?
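The "where is the time actually being spent?" question is the one a trace answers. Here's a sketch under hypothetical data: a request's spans flattened into dicts with invented service names and millisecond timestamps, ranked by duration to find where the request got stuck.

```python
# Hypothetical flattened trace: one request passing through several services.
spans = [
    {"service": "gateway",   "start": 0,  "end": 510},
    {"service": "auth",      "start": 5,  "end": 25},
    {"service": "orders",    "start": 30, "end": 500},
    {"service": "pricing",   "start": 35, "end": 60},
    {"service": "inventory", "start": 65, "end": 480},
    {"service": "db",        "start": 70, "end": 95},
]

def slowest(spans):
    """Rank spans by wall-clock duration, longest first."""
    return sorted(spans, key=lambda s: s["end"] - s["start"], reverse=True)

for s in slowest(spans)[:3]:
    print(s["service"], s["end"] - s["start"], "ms")
```

One caveat worth knowing: a parent span's duration includes its children's, so real tracing tools also compute "self time" (duration minus child durations) before pointing a finger. Even this naive ranking, though, is enough to turn "the request is slow" into "go look at `inventory`."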
This isn’t a skill you learn from a tutorial. It’s a disposition. A habit of mind. And it’s the habit that will matter most as AI takes over more of the writing.
So where does this leave us? I think we’re heading toward a world where the software engineering toolkit bifurcates. On one side, you have the generative tools — AI code assistants, copilots, agents that can build whole features from a prompt. On the other side, you have the understanding tools — observability platforms, tracing systems, real-time analytics, anomaly detection.
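For a feel of what the simplest "understanding tools" do under the hood, here's a toy rolling z-score detector, one of the most basic forms of anomaly detection: flag any point that sits more than a few standard deviations from the recent history of the metric. The window size, threshold, and latency data are all illustrative assumptions, not a production recipe.

```python
from collections import deque
from statistics import mean, stdev

def anomalies(stream, window=20, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# Steady latency around 20-22 ms, with one spike injected at index 30.
latencies = [20.0 + (i % 3) for i in range(60)]
latencies[30] = 250.0
print(anomalies(latencies))  # [30]
```

Production detectors handle seasonality, trends, and alert fatigue far more carefully, but the core idea is the same: encode "normal" from recent behavior, and surface what doesn't fit.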
The first category is about producing software. The second is about comprehending it.
And the irony is that as the first category gets better, the second becomes more important, not less. The faster you can produce code, the faster you can produce problems. And the only way to stay ahead of those problems is to see them clearly.
Invest in observability. Not just the tooling, but the culture. Hire engineers who get a little thrill from a well-built dashboard. Build teams where “I was looking at the metrics and noticed something weird” is the beginning of the best conversations.
Because in the age of AI, the engineers who thrive won’t be the ones who write the most code. They’ll be the ones who understand it best.