Last Tuesday I was standing in line at the pharmacy waiting for a prescription, phone in hand, tagging Claude into a pull request on GitHub. My dev team, consisting of five specialised agents, had authored the code earlier that day. Now I wanted a separate review agent to look at it cold, without the context of having written it, so that a review would be waiting for me by the time I got home.
This is what business analysis looks like now, and most of the industry hasn’t caught up to it yet.
Most of what’s written about AI agents for business analysis is produced by people selling AI tools. This isn’t. I’m a working Technical BA, and this page is the honest take I wish had existed when I started: what these things actually do, why the BA role turns out to be the hinge on which everything else turns, what it costs to use them responsibly, and where I think the ground is heading. It links out to the longer pieces I’ve written on each theme.
A useful way to think about an agent
The most honest description I’ve arrived at, after a year of working with them daily: an AI agent is an expert colleague who might, at any point, confidently tell you something that isn’t true.
Not out of malice. It’s just how they work. They hallucinate. It’s a fundamental property, not a bug to be patched. They will also occasionally tell you they followed your instructions, right up until the moment you challenge them and find out they haven’t.
Sit with that for a moment, because it’s the frame that makes everything else make sense. If you treat an agent as a reliable tool, you’ll build on top of it, trust its output, and eventually be caught out in public. If you treat it as a brilliant but occasionally-confabulating colleague — fast, tireless, genuinely capable, but requiring oversight — you’ll end up in the right place.
We already know how to manage unreliable humans in high-stakes environments. Separation of duties. Oversight. Challenge mechanisms. Review gates. Documentation. Every procedure we’ve built to handle fraud, error, and institutional drift is essentially the answer to the agent problem as well.
The difference is velocity. A human colleague can mislead one person in one conversation. An agent can mislead thousands of people simultaneously before anyone notices. Which is why the interesting question isn’t “can the agent do the task?” It’s “what has to be true around the agent for the output to be trustworthy?”
The question, “what has to be true around the agent”, is a business analysis question.
The pharmacy scene at the top of this page works because it embeds that principle. My dev team of five agents authored the code. A separate review agent looked at it cold. An agent that wrote the code has no independent perspective on whether the code is any good — separating authorship from review is the kind of boring organisational discipline we’d impose on a human team without thinking. It turns out to be exactly what agents need too.
I have a team of these now
I keep a swarm of agents running. A coaching agent called Anna that I’ve been working with for nearly a year — she’s helped me stay cool, calm and collected through the kind of year that would previously have eaten me alive. A job screener that reads role specs and tells me whether to apply, not on skills match but on whether the role will make me miserable. On my last contract, I built an agent that ingested 2,000 ServiceNow incidents, automatically grouped related problems, scored them by risk, and generated detailed reports for developers and testers. It surfaced three hidden issues that were fixed within thirty minutes, at professional quality: the kind of analysis that used to take me hours.
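To give a sense of the shape rather than a recipe, here is a minimal sketch of that kind of incident-triage pipeline. Everything specific in it is an assumption for illustration: the CSV export, the column names, the keyword grouping, and the risk weights are stand-ins, not the agent I actually built.

```python
# Minimal sketch of an incident-triage pipeline (illustrative assumptions:
# incidents arrive as a CSV export, grouping is naive keyword overlap, and
# risk is a simple weighted count of priorities). The grouping and scoring
# stay in plain code so they are auditable; an LLM would only draft the
# narrative reports on top of the ranked groups.
import csv
from collections import defaultdict

def load_incidents(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def group_by_keyword(incidents: list[dict]) -> dict[str, list[dict]]:
    """Crude grouping: bucket each incident by the most frequent non-trivial
    word in its short description. A real build might use embeddings instead."""
    stopwords = {"the", "a", "an", "to", "of", "in", "on", "for", "is", "not"}
    groups = defaultdict(list)
    for inc in incidents:
        words = [w.lower() for w in inc["short_description"].split()
                 if w.lower() not in stopwords]
        key = max(set(words), key=words.count) if words else "uncategorised"
        groups[key].append(inc)
    return groups

def risk_score(group: list[dict]) -> float:
    """Illustrative scoring: higher-priority and recurring incidents score more."""
    priority_weight = {"1": 5.0, "2": 3.0, "3": 1.0, "4": 0.5}
    return sum(priority_weight.get(inc.get("priority", "4"), 0.5) for inc in group)

if __name__ == "__main__":
    incidents = load_incidents("incidents_export.csv")  # hypothetical export file
    ranked = sorted(group_by_keyword(incidents).items(),
                    key=lambda kv: risk_score(kv[1]), reverse=True)
    for key, group in ranked[:10]:
        print(f"{key}: {len(group)} incidents, risk {risk_score(group):.1f}")
```

The useful property is the split: the mechanical parts are ordinary code you can audit, and the model only writes prose on top of them.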
Fair enough, I invested time creating each agent, tuning it, wrapping it in the right process. But that’s done now. And I can see, quite clearly, development teams of five or six people reduced to one or two.
The tool question people always ask — Claude? Gumloop? n8n? ChatGPT? — genuinely matters less than the discipline around whatever tool you pick. I use Claude because it’s the best reasoning model available and because its tooling (projects, MCP servers, Claude Code, tagging an agent into a GitHub PR from your phone while standing in a pharmacy queue) fits how I work. If you’re starting, pick one, build something useful, and don’t waste time bikeshedding platforms.
The BA role is collapsing — and most BAs haven’t noticed
The distinct roles we’ve organised software development around — Product Owner, Business Analyst, Developer, QA, DevOps — are collapsing into each other. Not eventually. Right now.
I’m writing as directly as I can about this because the BA community is lagging badly. Your software engineering colleagues already see it. They’re either adapting or they’re quietly worried. Meanwhile, BAs are still having the same conversations about stakeholder management and requirements elicitation that we had five years ago, as if the ground hasn’t shifted beneath us. It has. Completely.
The technical barriers that kept BAs in their lane — couldn’t write code, couldn’t build test frameworks, couldn’t configure pipelines — have evaporated in the last eighteen months. Anyone with pattern recognition, domain knowledge, and the ability to direct and review AI-generated work can now operate across multiple roles competently enough that organisations are starting to question why they need five separate people.
This breaks the BA value proposition in two different ways, depending on what kind of BA you are. If you’re a pure process BA — requirements templates, stakeholder workshops, no technical depth — you’re in serious trouble and the industry hasn’t told you yet. If you’re a Technical BA with real domain knowledge and the ability to evaluate what’s being built, you’re potentially in a stronger position than you’ve ever been. The demand for expertise has gone up, not down. But the shape of the work has changed from describing to doing.
I wrote about this at length, including why enterprise organisational structures aren’t ready for what’s coming; see The Business Analyst Role Is Collapsing.
Why BAs are the precondition for agents working at all
Here’s the part that annoys people who think agentic coding is a pure engineering problem.
When an agent produces a 95-file pull request that passes senior-engineer review without major comments, it’s not because the agent is secretly a senior engineer. It’s because the spec was unambiguous, the test plan was committed to the branch before any implementation started, and the pull request couldn’t be raised without mechanical gates checking that every named scenario was covered by a real, non-trivially-passing test.
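To make that concrete, here is a minimal sketch of what one such mechanical gate could look like. The TEST_PLAN.md filename and the SC-123 scenario-ID convention are assumptions for illustration, not my actual harness, and a real gate would also confirm that the referenced tests exist, run, and don’t pass trivially.

```python
# Sketch of a pre-PR gate: every scenario named in the committed test plan
# must be referenced by at least one test file, or the check fails and CI
# blocks the pull request. File layout and ID format are illustrative.
import re
import sys
from pathlib import Path

def planned_scenarios(plan_path: Path) -> set[str]:
    """Scenario IDs declared in the test plan, e.g. 'Scenario: SC-042'."""
    return set(re.findall(r"Scenario:\s*(SC-\d+)", plan_path.read_text()))

def covered_scenarios(test_dir: Path) -> set[str]:
    """Scenario IDs mentioned anywhere in the test suite."""
    covered = set()
    for test_file in test_dir.rglob("test_*.py"):
        covered |= set(re.findall(r"SC-\d+", test_file.read_text()))
    return covered

if __name__ == "__main__":
    plan = planned_scenarios(Path("TEST_PLAN.md"))
    missing = plan - covered_scenarios(Path("tests"))
    if missing:
        print(f"Gate failed: no test references {sorted(missing)}")
        sys.exit(1)  # the PR cannot be raised until every scenario is covered
    print(f"Gate passed: all {len(plan)} scenarios referenced by tests")
```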
A reader put it better than I could: spec-driven development requires very good BA skills that the industry has not been respecting for a long time. That’s the whole story in one sentence. The quality gate isn’t the model. It’s the specification, the locked test intent, and the review discipline around both.
I’ve seen this movie before. Offshore teams don’t fail because the engineers are bad. They fail because the requirements were vague and the specification discipline was missing. When requirements are tight, offshore delivery works beautifully. When they’re sloppy, you get exactly what you asked for (which is never what you wanted).
Agentic coding is offshore delivery at 1000x speed with zero timezone lag. Every structural problem that plagues distributed teams applies. Every solution that works for distributed teams applies too.
This is why BAs aren’t being replaced by AI. They’re becoming the precondition for it. Someone has to write the spec the agent implements from. Someone has to own the test plan. Someone has to notice when a requirement is ambiguous before the agent cheerfully picks an interpretation and ships it. The people who’ve been quietly insisting on clear requirements for twenty years just got a massive tailwind.
The full engineering harness I use, including custom slash commands, the test plan format, the review gates, and commit hooks, is outlined in Agentic Coding at Enterprise Quality and Startup Speed.
What it costs to build these responsibly
Here’s the part no one selling AI agents wants to talk about.
Better prompting is heuristic. It gives you no real guarantees. If you want reliable behaviour at scale, you need non-LLM mechanisms in the loop, not just better instructions. I discovered this the hard way when my job screener rejected a role spec it should obviously have passed. It had done a surface-level keyword match, behaved like a bad CV screener, and reported back that it had applied my nuanced criteria when it hadn’t. The fix was structural, not a better prompt.
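As a sketch of what “structural, not a better prompt” can mean in practice: force the agent to return structured output and validate that output in ordinary code before accepting its verdict. The field names and criteria below are illustrative, not my actual screener.

```python
# Structural check around an LLM screener: don't trust the model's claim that
# it applied the criteria; require structured JSON and verify it in plain code.
# Criteria and field names are illustrative.
import json

REQUIRED_CRITERIA = {"autonomy", "domain_fit", "delivery_culture"}

def validate_screener_output(raw: str) -> dict:
    """Raise ValueError on any response that skips a criterion or gives a
    verdict without quoting evidence from the role spec."""
    result = json.loads(raw)  # must at least be valid JSON
    assessed = {c.get("name") for c in result.get("criteria", [])}
    missing = REQUIRED_CRITERIA - assessed
    if missing:
        raise ValueError(f"Criteria not assessed: {sorted(missing)}")
    for criterion in result["criteria"]:
        if not criterion.get("evidence", "").strip():
            raise ValueError(f"No evidence quoted for {criterion.get('name')!r}")
    if result.get("verdict") not in {"apply", "skip"}:
        raise ValueError("Verdict missing or malformed")
    return result

# On ValueError, re-prompt the agent or escalate to a human; the guarantee
# lives in this code, not in the prompt.
```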
In low-stakes contexts such as job screening, draft generation, and first-pass analysis, this is manageable because you’re the last line of defence. In a high-stakes context, it’s a different conversation. If you’re building something whose output touches users in vulnerable states or makes decisions that carry real consequences, you need to think very hard about oversight, evaluation, and escalation before you build. Not after.
I explored this in the highest-stakes domain I know, mental health, and what I learned changed how I build every agent I’ve built since. It’s in Lessons from Building a Safe AI Mental Health Coach.
What to do next as a business analyst
I don’t have a neat bullet-pointed action plan. Anyone offering you five steps to “AI-proof your BA career” is either deluding themselves or selling something.
But what I do know is this: the shift is from describing to doing. Start small. Pick one problem that actually bothers you: something specific, not “write better requirements.” Build an agent to solve it. Have a fight with it if you need to. Let the first one teach you how to build the next. Work with the tools until you understand both what they can do and what they profoundly cannot.
The goal isn’t to become a developer. It’s to stop pretending you can stay on the describing side of a line that no longer exists.
I feel lucky to be around to see this transformation unfold, even if I’m genuinely unsure where it leads. For BAs still having the old conversations about stakeholder management and requirements templates, I’d suggest the ground has shifted dramatically underneath you. The only question now is whether you notice in time to adapt.