Over the last week, the software investment sector has been shaken by a forward-looking 2028 scenario from Citrini Research.
In this article, we challenge their core assumptions about the defensibility of software platforms. Our response, grounded in decades of experience, addresses the complex reality of software platforms deployed at scale.
The Citrini paper begins with the observation that agentic-coding tools had a step-function jump in capability in late 2025. It goes on to assert that by early 2026, a single developer can effectively replicate a mid-market SaaS product in weeks.
This changes enterprise procurement dynamics. CIOs start asking why they are paying six-figure renewals when internal teams can prototype replacements.
The long tail of SaaS gets hit first, but it does not stop there. Even systems of record like ServiceNow face pressure, not from direct replacement but from pricing compression, intensified competition from AI-native challengers, and a mechanical headcount effect: when your customers cut 15% of staff, they cancel 15% of licences.
The fundamental reason software development costs will not collapse is that writing code was never the primary cost driver. Code generation - the phase where AI delivers its most dramatic gains - accounts for roughly 25% of a senior developer’s time, and even the most optimistic independent studies show only 20–55% speedups on that area of work.
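The arithmetic behind these figures is worth making explicit. A minimal Amdahl's-law style sketch, using only the percentages quoted above, shows how little a large phase speedup moves the overall number:

```python
def overall_speedup(fraction, phase_speedup):
    """Amdahl's-law style calculation: overall speedup when only
    `fraction` of total work is accelerated by `phase_speedup`
    (e.g. 1.5 means that phase runs 50% faster)."""
    new_time = (1 - fraction) + fraction / phase_speedup
    return 1 / new_time

# The article's figures: code generation is ~25% of a senior
# developer's time, sped up by 20-55% (factors 1.20-1.55).
for s in (1.20, 1.55):
    print(f"{s:.2f}x on 25% of the work -> "
          f"{overall_speedup(0.25, s):.2f}x overall")
# -> 1.20x on 25% of the work -> 1.04x overall
# -> 1.55x on 25% of the work -> 1.10x overall
```

Even the optimistic 55% phase speedup delivers only about a 10% reduction in total delivery time, because the other 75% of the lifecycle is untouched.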
The rest of the lifecycle is where human experience comes into play - understanding what to build, designing systems that handle real-world complexity, reviewing code for correctness and security, testing edge cases, navigating deployment, and responding to incidents.
AI is helping to develop code across many use cases. But across our work we observe that this has shifted effort downstream into review, testing, and operational readiness. In many cases these phases now take longer, because AI floods them with a higher volume of lower-quality code.
Such speedup statistics also ignore the real complexity of developing most software products: the years, often decades, spent learning the domain knowledge and user-workflow nuances of the field. And half of that time is spent reversing out of deeply learned mistakes.
We note that AI-generated velocity can also create new costs. AI code contains vulnerabilities at a 40–45% rate, generates 1.7× more issues per PR, and produces duplicated code at 8× historical levels - all of which must be found, triaged, and fixed by humans. Acceleration without quality discipline is borrowing against future engineering capacity.
There is also a growing governance overhead that enterprises now require - provenance tracking, IP scanning, EU AI Act compliance, shadow AI detection, and security scanning proportional to AI output volume.
Software remains expensive not because the industry lacks tools to generate code faster, but because the enduring costs are in domain understanding, building for scale, security and accountability.
Did the Industrial Revolution take the cost of physical products to zero? No, rather it allowed the design and release of more complex products and made them more widely available. It automated some jobs, but as the complexity of products increased, it created more opportunities.
An AI agent can replicate the core functionality of a project board or task tracker by looking at the UI. The Citrini piece is partially accurate in this category. But the article glosses over a critical reality: these platforms did not win solely on features; they won on distribution. Building a kanban board is trivial. Acquiring 200,000 paying teams is not. The go-to-market spend, brand recognition, partner ecosystems, and sheer inertia of millions of configured workflows represent a moat that has nothing to do with code complexity.
In addition, as these platforms roll out, the architecture (not coding) challenge shifts to the areas that matter: scale, latency, security, and enterprise readiness. We see these challenges across all our clients who make this transition, coupled with the need to bring deep senior experience into the business to handle the inflection point.
Many software platforms have accreted decades of domain knowledge.
An AI agent can replicate a UI. It cannot replicate a decade of proprietary clinical trial signal data, or a unique feed of satellite imagery, or years of labelled adverse-event reports in pharmacovigilance.
Across our work, covering verticals from clinical data management and statistical signal detection through to agricultural yield modelling and insurance actuarial platforms, we see that the moat is not the software. It is the data asset underneath.
Every customer interaction enriches the dataset, which improves the models, which attracts more customers. An AI looking at the screen sees a dashboard. It does not see the 50 million labelled records behind it, nor the regulatory relationships that granted access to them. These businesses are strengthened by AI, not threatened by it.
Many such businesses are built on reliable, deterministic statistical models whose mathematical fidelity a probabilistic model cannot match.
Enterprise software accumulates thousands of custom rules, approval chains, exception-handling paths, and integration touchpoints with adjacent systems. The value is not the UI or even the feature set; it is the encoded institutional logic that took years to configure.
Replacing it means re-architecting internal operations, not swapping one application for another. We see repeated failures to replace such systems, often spanning many years.
An AI agent can build a ticketing system in a weekend. It cannot reverse-engineer the 4,000 custom workflow rules a Fortune 500 has layered into its IT service management platform over a decade.
More importantly, no CIO wants to reverse-engineer such a critical system.
A system of record is more than a database with a front end that AI could knock together.
It is the authoritative source of truth for audit trails, compliance obligations, reporting hierarchies, and cross-system integrations. An ERP, a core banking ledger, or a clinical trial management system holds not just data but context: the legal entity that is the counterparty, the approvals obtained, and the regulatory submissions that reference the record.
AI can read from a system of record. Replicating the full business logic, referential integrity, and years of accumulated institutional memory that surrounds it is a fundamentally different and vastly harder problem.
The article treats software as interchangeable screens. Systems of record are complex organisational logic.
Products such as databases, ETL pipelines, data warehouses and observability platforms sit beneath the application layer that the Citrini piece fixates on.
You cannot ‘vibe-code’ a distributed database engine that maintains transactional data integrity across global regions. You cannot screenshot your way to a streaming data platform that processes millions of events per second with sub-millisecond latency.
This layer is invisible to the end-user AI agent and operates under engineering constraints such as consistency, durability and fault tolerance, with complex observability embedded throughout.
Moreover, AI depends on this layer. Every AI agent needs infrastructure to run on, data to train on, and observability to monitor it. This sector is a net beneficiary of the AI wave, not a victim.
Building a user interface that lets someone book a train ticket might seem trivial. It is only simple once you have access to all legacy rail platforms and have spent years understanding the fare regulations.
What is entirely different is to build a system that simultaneously queries dozens of rail operators across multiple countries, prices millions of fare combinations in real time, handles dynamic inventory allocation, processes concurrent bookings without overselling, manages payment across currencies, and does all of this at sub-500ms latency when the load spikes by 50x after extreme weather has disrupted 70% of services.
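Even one small slice of that list - processing concurrent bookings without overselling - illustrates the gap between a demo and a production system. A toy sketch of ours (not any real operator's code) of the atomic check-and-decrement such a system needs at its core:

```python
import threading

class SeatInventory:
    """Toy sketch: prevent overselling under concurrent bookings by
    making the availability check and the decrement one atomic step.
    A real system must also handle holds, timeouts, cross-operator
    allocation, and failover - none of which is shown here."""

    def __init__(self, seats):
        self._seats = seats
        self._lock = threading.Lock()

    def try_book(self, n=1):
        with self._lock:  # check and decrement atomically
            if self._seats >= n:
                self._seats -= n
                return True
            return False

inv = SeatInventory(seats=2)
results = []
threads = [threading.Thread(target=lambda: results.append(inv.try_book()))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly 2 of the 5 concurrent attempts succeed; inventory never
# goes negative, whatever order the threads run in.
print(sum(results), inv._seats)  # -> 2 0
```

Without the lock, two threads can both pass the availability check before either decrements, and the seat is sold twice - the single-process version of the distributed consistency problem described above.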
This represents decades of engineering supported by a team with deep rail domain knowledge and experience in scaling distributed systems.
The same applies to payment processing networks, exchange matching engines, real-time bidding platforms, and large-scale marketplace infrastructure. The moat here is not the UI or even the business logic in isolation.
It is the distributed systems engineering, the edge-case handling forged through years of production incidents, and the operational maturity required to run a platform at scale.
CAD/CAE, computational fluid dynamics, molecular dynamics for drug discovery, electronic design automation, reservoir simulation for oil and gas, finite element analysis - such platforms encode decades of scientific research into software. The underlying models are grounded in physics, chemistry, and biology, not business process logic.
An AI agent cannot look at a Computational Fluid Dynamics interface and infer the Navier-Stokes solvers running behind it. These tools require deep domain expertise to build, validate, and certify.
They also carry regulatory and safety implications: if your structural simulation tool gets it wrong, buildings collapse or airplanes fall out of the sky.
Across financial services, healthcare, life sciences, defence, and nuclear, software must meet stringent regulatory requirements around auditability, traceability, explainability, and validation.
A fintech compliance platform does not just detect suspicious transactions. Rather, it must produce audit trails that withstand regulatory scrutiny, maintain cryptographic provenance of decisions, and adapt to an evolving patchwork of jurisdictional rules.
In life sciences, validated systems must comply with FDA 21 CFR Part 11 or EU Annex 11. You cannot just build a functionally equivalent tool; it must be validated, documented, and defensible under examination. Such a regulatory certification process often takes years and requires deep relationships with regulators.
Meeting regulatory obligations is also rarely a static, one-shot process. This is reflected in the latest German requirement for approved digital health applications to submit patient-validated success metrics every quarter.
There is a tendency for AI tools to rewrite or change code not directly related to a feature. This can create significant problems for regulated companies. In many sectors, any change in code requires recertification of the platform. Regulators are also now adding AI-specific governance requirements.
The Citrini piece presents a supposedly simple fictional change to all payments infrastructure: AI agents, optimising ruthlessly for cost, spot the 2-3% card interchange fee as an obvious target and reroute transactions through stablecoins on chains such as Solana, where settlement costs are fractions of a penny.
This makes moving money sound like moving data. Payments are not pure data packets. They are regulated financial instruments carrying legal obligations, consumer rights, and counterparty risk.
The article treats the payment rails as pure plumbing and ignores everything that sits on top of them.
When you pay with a card, you inherit an entire consumer protection framework. Visa and Mastercard’s chargeback systems give you the legal right to dispute a transaction. If a merchant fails to deliver, delivers a defective product, charges the wrong amount, or commits fraud, you can claw the money back. This is codified in regulation.
Stablecoin payments on Solana have no equivalent. Blockchain transactions are final by design. Once USDC (the regulated, USD-linked stablecoin) moves from one wallet to another, there is no chargeback mechanism, no dispute-resolution layer, and no intermediary you can call. If an AI agent sends payment to a fraudulent merchant, the consumer has no standardised recourse. You would need to build an entirely new dispute resolution infrastructure on top of the blockchain - at which point you have recreated much of what the card networks already provide, plus the cost of building and operating it.
The same argument applies to the intermediaries they write about, such as travel agents and e-commerce platforms. The value is not in the technical work of linking buyers and sellers; it is in the extensive legal apparatus of quality guarantees these platforms have built.
Every cross-border payment must comply with an intricate web of KYC, AML, and sanctions-screening obligations that vary by jurisdiction. A platform like Wise holds over 70 licences globally. It conducts a wide range of compliance checks on every transaction. It is a direct participant in instant payment schemes across multiple countries. It maintains relationships with local banking partners, regulators, and clearing houses in each area in which it operates.
Routing a payment via a stablecoin on Solana does not make any of these obligations disappear. The sender still needs to be identified. The recipient still needs to be screened against sanctions lists. The transaction still needs to be monitored for suspicious activity. The GENIUS Act explicitly subjects stablecoin issuers to Bank Secrecy Act requirements. Someone in the chain must bear these compliance costs.
The Citrini piece mentions none of this.
The article also implies that interchange is the dominant cost of a transaction.
A huge proportion of global commerce is cross-border and multi-currency. Here, the dominant costs are not the interchange. They are FX spreads, hedging, liquidity management, and settlement timing.
A platform like Wise does not just move money. It manages FX exposure, offers guaranteed rates, uses intelligent routing to pick the lowest-cost corridor, forecasts liquidity needs using AI, and pre-funds local accounts to enable instant delivery. It operates multi-currency accounts that let businesses hold, convert, and pay in dozens of currencies.
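At its simplest, the corridor-routing decision is a cost minimisation - and even this toy version (with entirely hypothetical corridor names, spreads, and fees, not Wise's actual figures) shows why the cheapest route depends on the transfer size:

```python
# Hypothetical corridors: an FX spread in basis points plus a flat fee.
corridors = [
    {"name": "correspondent bank", "fx_spread_bps": 80, "flat_fee": 25.0},
    {"name": "local partner rail", "fx_spread_bps": 30, "flat_fee": 4.0},
    {"name": "card network",       "fx_spread_bps": 45, "flat_fee": 0.0},
]

def corridor_cost(amount, c):
    """Total cost of sending `amount` through corridor `c`:
    proportional FX spread plus a flat per-transfer fee."""
    return amount * c["fx_spread_bps"] / 10_000 + c["flat_fee"]

best_small = min(corridors, key=lambda c: corridor_cost(1_000, c))
best_large = min(corridors, key=lambda c: corridor_cost(100_000, c))

# Flat fees dominate small transfers; spreads dominate large ones.
print(best_small["name"])  # -> card network
print(best_large["name"])  # -> local partner rail
```

A production router must also weigh settlement speed, liquidity in each corridor, and regulatory constraints, which is why this is a hard operational problem rather than a one-line optimisation.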
None of this is solved by putting USDC on Solana. You have shifted the denomination to dollars, but the moment either party needs local currency, you are back in FX-land, and you need all the infrastructure that comes with it.
Card networks have spent decades building merchant acceptance infrastructure. There are over 100 million merchant acceptance points for Visa globally. The rails are integrated into every point-of-sale system, every e-commerce checkout, every subscription billing platform.
Stablecoin payments have negligible merchant acceptance.
This research article imagines this infrastructure materialising overnight. Merchant adoption of new payment rails has historically taken a decade or more, even when backed by the world’s largest technology companies.
Even Apple Pay experienced slow penetration of in-store payments despite having the device already in the consumer’s hand.
Perhaps the deepest irony in the Citrini argument is that stablecoin regulation is converging toward the very protections and costs the article assumes agents will route around. As stablecoins become regulated payment instruments, they will necessarily accrue compliance costs, capital requirements, and consumer protection obligations that look increasingly similar to the existing financial system.
The low transaction cost on Solana is the cost of moving a token. It is not the cost of making a payment - a legally compliant, consumer-protected, dispute-resolvable transfer of value. Those are very different things, and the gap between them is where platforms like Wise, and indeed the card networks, actually live.
The 2-3% interchange fee does not represent pure profit. Rather, it funds fraud protection, dispute resolution, consumer guarantees, compliance infrastructure, and merchant services. An AI agent optimising purely on transaction cost would be optimising its user into a world with no recourse, no protection, and no legal standing when something goes wrong. That is not optimisation. That is negligence.
The Citrini scenario is provocative, but it is built on a single assumption: that software is simply what it looks like. Strip that out, and the entire chain of argument - from SaaS collapse to private credit contagion to mortgage crisis - loses its first domino.
An AI agent can screenshot a dashboard. It cannot replicate the decades of proprietary data behind it, the regulatory certifications around it, the thousands of custom workflow rules encoded within it, or the institutional trust that took years to earn.
The article asks us to believe that a CIO will rip out a validated system of record - one that touches audit trails, compliance obligations, and cross-system integrations - because an intern built something that looks similar in a weekend.
That is not how enterprises work. It is not how risk-averse institutions behave. And it is not how the software economy will unwind.
The software economy is a spectrum. At one end sit lightweight tools whose functional moat was always thin and whose real defence was distribution and habit. These lightweight examples are the staple of AI doomsday scenarios.
At the other end sit systems so deeply woven into regulated workflows, proprietary data assets, scientific models, and institutional memory that ‘replicating the UI’ is as relevant as photographing an airplane and then claiming you can build it.
The article’s most fundamental error is assuming that the value of software resides solely in the generation of code. For the categories that matter most - those that represent the overwhelming majority of software value - it never did.
Disclaimer. All of this article was written by an 86bn neuron human language model, assisted by highly fine-tuned human language agents across our team.
About Seedcloud: Technology and strategy advisory firm with 14 years of technology advisory experience across a wealth of transactions in the PE and VC space. Deep experience in the transformation of large-scale enterprise platforms across every category in this article.
About the author: Graham is the managing partner of Seedcloud, a technologist, futurist and investor.
Contact me: Please use 2028@seedcloud.com for discussions on this topic. We will visit the future, read and respond.