AI Liability Insurance Arrives

In May 2025, Lloyd’s syndicates began underwriting Armilla Insurance’s new AI liability coverage, one of the first policies explicitly designed to cover the costs of chatbot errors. At the same time, other major insurers moved in the opposite direction, rewriting policies to add “absolute AI exclusions” and avoid the exposure entirely.

This bifurcation in the insurance market raises a pointed question about the deployment of generative AI: who bears the cost when the technology fails? The technology has moved quickly, and legal reality is now catching up.

An Arkansas man recently sued OpenAI after ChatGPT fabricated a criminal conviction in his name. In the now-infamous Mata v. Avianca, Inc. case, lawyers were sanctioned for submitting hallucinated legal citations. Meanwhile, The New York Times and The Authors Guild are pursuing economic harm claims over training data.

As arbitration forums report a rising wave of disputes over AI service performance, the “black box” nature of AI models is colliding with the rigid requirements of actuarial science. The resulting pressure may do more to change how AI models are engineered than any government regulation.

A Great Divergence

The insurance industry is currently splitting into two camps as it struggles to categorize AI risk. For years, insurers relied on “cyber liability” policies, which cover unauthorized access, ransomware, and data theft. However, AI models present a new class of risk: the system itself can be perfectly secure, yet its output can still be wrong.

“The current AI-related harms generating divergence in the insurance marketplace relate to economic loss resulting from relying on false outputs produced by an AI application,” said Marcus Denning, CEO of criminal defense firm MK Law.

Denning noted that when AI produces authoritative-sounding but materially false statements that are then incorporated into business decisions or legal documents, the result can be significant financial losses that standard cyber policies never contemplated.

Venkata Naveen Reddy Seelam, an AI and insurance expert at PwC, agreed. “These aren’t the standard cyber threats, like someone stealing data; this is the model itself messing up,” he said. He noted that the industry is scrambling to distinguish between “the AI acted weird” and “the AI was hacked,” a distinction that is becoming the central factor in pricing or denying insurance coverage.

James Lei, Chief Operating Officer at Sparrow, a platform for class-action claims, suggested a practical taxonomy for emerging risks, sorting claims into three buckets (illustrated in the sketch below).

First are content harms, or hallucinations leading to defamation, false advertising, or copyright issues, which fit under media liability.

Second are decisioning harms, where model outputs influence eligibility or pricing, leading to bias or errors.

Third are data harms, or prompt injection or leakage of secrets, which remain traditional cyber incidents.

“Underwriters are getting better at drawing those lines, but the gaps appear when evidence is thin,” Lei said.
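To make the taxonomy concrete, here is a minimal sketch of how a claims-intake tool might encode Lei’s three buckets in Python. The enum names, keywords, and triage heuristic are illustrative assumptions, not taken from any insurer’s actual system.

```python
from enum import Enum

class AIHarm(Enum):
    """Lei's three buckets of AI-related claims (names are illustrative)."""
    CONTENT = "content"          # hallucinations: defamation, false ads, copyright
    DECISIONING = "decisioning"  # outputs drive eligibility/pricing: bias, errors
    DATA = "data"                # prompt injection, secret leakage: classic cyber

def classify_claim(description: str) -> AIHarm:
    """Toy keyword triage; a real intake pipeline would be far richer."""
    d = description.lower()
    if any(k in d for k in ("injection", "leak", "exfiltrat")):
        return AIHarm.DATA
    if any(k in d for k in ("eligibility", "pricing", "denied", "score")):
        return AIHarm.DECISIONING
    # Hallucination-style harms are the default bucket in this sketch.
    return AIHarm.CONTENT

print(classify_claim("chatbot fabricated a criminal conviction"))  # AIHarm.CONTENT
```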

The End of the ‘Black Box’ Era

In this new landscape, qualifying for coverage means shifting the engineering lifecycle of AI models from “best effort” guardrails to provable controls. The insurance industry is signaling that it will refuse to cover systems that lack traceable operations.

“The black box era has reached its end because organizations now understand that complete system transparency provides better value than speed,” said Andre Disselkamp, co-founder of startup Insurancy. Disselkamp said a model that fails to explain its operations becomes an untraceable entity that insurance companies simply cannot cover.

This pressure is forcing a transition toward what experts call “compliance-grade observability.”

Sophie Nappert, an independent arbitrator specializing in AI disputes, foresees a shift toward regulator-aligned lifecycle governance. “Model builders need to integrate compliance-grade observability, such as dataset provenance, directly into their architectures instead of adding it post-deployment,” she said.

In practical terms, this means adopting a practice from the world of medical devices.

“In the same way that immutable audit trails became a requirement for tracking the history of devices used to treat patients, immutable audit trails will also need to document the origin of the training data, the versions of the model, and the path taken to make inferences,” said Denning.

Lei detailed what this looks like technically. “Expect immutable audit trails with model and dataset fingerprints, versioned prompts and outputs, time-stamped retrieval citations, and signed change logs for safety settings.” He added that “replay harnesses” that can reconstruct an interaction will become table stakes for underwriting.
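As a rough illustration, consider a minimal hash-chained audit log, sketched in Python below. The field names and chaining scheme are assumptions for illustration; Lei’s full list would add signed change logs for safety settings and time-stamped retrieval citations.

```python
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    """Content fingerprint for a model artifact, dataset snapshot, or log entry."""
    return hashlib.sha256(data).hexdigest()

class AuditTrail:
    """Append-only log in which each entry commits to the hash of the previous
    one, so any after-the-fact edit breaks the chain: a tamper-evident diary."""

    def __init__(self):
        self.entries = []      # list of (entry_dict, entry_hash) pairs
        self._prev = "0" * 64  # genesis value

    def record(self, model_fp: str, dataset_fp: str, prompt: str, output: str) -> str:
        entry = {
            "ts": time.time(),
            "model_fingerprint": model_fp,
            "dataset_fingerprint": dataset_fp,
            "prompt": prompt,
            "output": output,
            "prev": self._prev,
        }
        h = fingerprint(json.dumps(entry, sort_keys=True).encode())
        self.entries.append((entry, h))
        self._prev = h
        return h

    def verify(self) -> bool:
        """Replay the chain; returns False if any entry was altered or reordered."""
        prev = "0" * 64
        for entry, stored in self.entries:
            recomputed = fingerprint(json.dumps(entry, sort_keys=True).encode())
            if entry["prev"] != prev or recomputed != stored:
                return False
            prev = stored
        return True
```

A deployer could call record() on every inference and hand the log, along with verify(), to an underwriter or a replay harness; versioned prompts and outputs fall out of the entries themselves.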

Liability Modes and Routing Risk

What may emerge are distinct liability modes, where certain models operate with stricter guardrails to qualify for lower insurance premiums. Seelam said these modes could keep outputs within a safer, narrower range, and act as a prerequisite for coverage.
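One plausible way to express such a mode, purely as a hypothetical sketch, is a declarative guardrail profile that an underwriter could inspect and price against. None of the parameter names below come from an actual policy.

```python
# Hypothetical guardrail profiles an insurer might tie premiums to.
LIABILITY_MODES = {
    "standard": {
        "temperature_cap": 1.0,
        "require_retrieval_citations": False,
        "human_review_required": False,
    },
    "insurable_strict": {
        "temperature_cap": 0.3,               # narrower, more repeatable outputs
        "require_retrieval_citations": True,  # every claim traceable to a source
        "human_review_required": True,        # sign-off before output is acted on
    },
}

def effective_settings(mode: str, requested_temperature: float) -> dict:
    """Clamp a caller's request to the guardrails of the chosen mode."""
    profile = LIABILITY_MODES[mode]
    settings = {k: v for k, v in profile.items() if k != "temperature_cap"}
    settings["temperature"] = min(requested_temperature, profile["temperature_cap"])
    return settings

print(effective_settings("insurable_strict", 0.9))  # temperature clamped to 0.3
```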

Such standardization will allow liability to be routed, rather than debated. “Liability will flow to the party that controlled the failed guardrail,” said Lei.

That could include the provider, the deployer, or the user, he said. If a loss stems from training-data contamination or from systemic behavior reproducible under replay, it points to a model defect, and the provider would be liable. If the harm arises from how the model was wired into a workflow, or from a missing human review, that is an integration error that makes the deployer liable. And if a user bypasses warnings or ignores approvals, it is treated as misuse.
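Lei’s rule reads almost like pseudocode. A toy routing function, with entirely hypothetical evidence flags, captures the allocation he describes, plus a fourth outcome for the thin-evidence gaps he warned about:

```python
def route_liability(evidence: dict) -> str:
    """Toy sketch: liability flows to whoever controlled the failed guardrail."""
    if evidence.get("user_bypassed_warnings") or evidence.get("ignored_approvals"):
        return "user"      # misuse
    if evidence.get("training_data_contaminated") or evidence.get("reproducible_under_replay"):
        return "provider"  # model defect
    if evidence.get("integration_error") or evidence.get("missing_human_review"):
        return "deployer"  # workflow failure
    return "disputed"      # evidence is thin: the gap underwriters worry about

print(route_liability({"reproducible_under_replay": True}))  # provider
```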

Nappert noted that while the EU AI Act pushes responsibility upstream for high-risk systems, commercial contracts currently tend to shift operational risk downstream to deployers and end-users. However, she said the next few years could see a functional allocation of liability based on role-based obligations.

The consensus among experts is that while the “black box” nature of AI models complicates insurance, it does not make it impossible. Instead, it drives the market toward standardization.

“Model builders are going to have to prove responsibility, not just performance, because capital tends to flow toward what’s traceable,” said Seelam.

Despite the complexities, one thing is clear: the era of moving fast and breaking things is being replaced by an era of logging, tracing, and explaining. To stay insured, organizations will need engineers to turn the black box into “a black box with a tamper-evident diary,” Lei said.

Logan Kugler is a technology writer based in Tampa, FL, USA, specializing in artificial intelligence. He has been a regular contributor to CACM for 15 years and has written for nearly 100 major publications.

© 2026 ACM 0001-0782/26/3