AI Risks: The New World of AI Insurance


Ahmed Fessi

Chatbots, and AI tools more generally, are shaping the future in many ways. However, it’s essential to consider what happens if your AI doesn’t perform as expected. Imagine a scenario where your chatbot misunderstands a customer, or your AI tool gives advice that causes confusion and ends in a financial loss.

This actually happened.

In 2024, Air Canada got into some legal trouble because of a mistake made by its website chatbot.

Jake M., an Air Canada customer, was looking for a discount for a last-minute flight to attend his grandmother’s funeral. He asked Air Canada’s chatbot if there was a special “bereavement fare.” The chatbot told him yes — that he could book the flight now and apply for the discount later.

So, he booked the flight. But when he later asked for the discount, Air Canada said no. The airline claimed that their official policy didn’t allow discounts after booking.

Jake took them to court.

In the case, Air Canada tried to argue that the chatbot gave wrong information and that it wasn’t their fault. But the judge disagreed. The court said that since the chatbot was part of Air Canada’s official website, the airline was responsible for what it said — just like they would be if a real employee gave the same advice.

As a result, Air Canada was ordered to pay Jake the refund he was originally promised — about 650 Canadian dollars.

So, while this is a valid concern, there is also some possible “good” news on the horizon. Insurance companies are recognizing these needs and beginning to offer coverage specifically designed for those unexpected bumps with AI technology. This added layer of protection can help ease worries, knowing you won’t have to face any mishaps alone. Technology holds tremendous promise, and while it can occasionally surprise us, we can now navigate these challenges with a mitigated risk.


Insurance Policy for AI — Generated by the Author using Dall-E on GPT 4o

Among the first companies to announce this kind of offer is Lloyd’s of London, as reported in this Financial Times article: https://www.ft.com/content/1d35759f-f2a9-46c4-904b-4a78ccc027df

The Harsh Truth: AI Makes Mistakes

To be absolutely honest, AI will make mistakes. This is by design: it is a probabilistic system, not a deterministic one. In particular, those talkative chatbots can sometimes get things (really) wrong, producing what we call “hallucinations.” We all know the stories of chatbots being rude or sharing completely inaccurate information. These aren’t just amusing anecdotes; they can cost businesses a lot of money.
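
To make the probabilistic point concrete, here is a minimal sketch, not tied to any real chatbot or vendor, of how temperature-based sampling over a toy answer distribution can return different replies to the exact same question. The candidate answers and probabilities are invented purely for illustration.

```python
import random

# Toy next-answer distribution for the question
# "Is there a bereavement fare discount?" (probabilities are invented).
next_answer_probs = {
    "Yes, you can apply for it after booking.": 0.35,
    "Yes, but only if requested before booking.": 0.40,
    "No, bereavement fares are not offered.": 0.25,
}

def sample_answer(probs, temperature=1.0, seed=None):
    """Sample one answer; a higher temperature flattens the distribution."""
    rng = random.Random(seed)
    # Apply temperature scaling to the probabilities, then renormalize.
    scaled = {a: p ** (1.0 / temperature) for a, p in probs.items()}
    total = sum(scaled.values())
    answers, weights = zip(*((a, p / total) for a, p in scaled.items()))
    return rng.choices(answers, weights=weights, k=1)[0]

# The same question can yield different (sometimes wrong) answers.
for run in range(3):
    print(run, sample_answer(next_answer_probs, temperature=1.2, seed=run))
```

The point is not the exact numbers but the mechanism: because the output is sampled rather than looked up, no amount of testing guarantees the “wrong but plausible” answer never appears in production.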

This is where the new insurance comes in: it is like a safety net for businesses. It gives them the confidence to use AI, knowing they will still be covered if their AI has a hiccup. This will of course come at a cost, along with a lot of contractual and technical “asterisks”.

AI Insurance is Different

You may already have some standard tech (or cybersecurity) insurance in place, but when it comes to issues specific to AI, you might find the coverage limits pretty lacking. This new type of insurance is designed specifically for the challenges that AI can bring; it provides much more comprehensive protection for the unique issues that come with using artificial intelligence.

The devil will be in the details, but it should be possible to tailor the insurance to a given company’s context, use cases, and tools.

When Do You Actually Get Paid?

Not every little hiccup with your AI earns you an insurance payout, though. This insurance comes into play when your AI truly falls short of expectations and causes real damage, despite the proactive measures that should already be in place.

The Air Canada case is interesting. What if the same thing had happened to 1,000 customers? At roughly 650 Canadian dollars each, that would be around 650,000 Canadian dollars in refunds. Insurance might have helped cover that risk, but in return the insurer would almost certainly have required a pre-audit and a prior risk assessment.
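
As a rough back-of-the-envelope sketch of how such a risk might be sized, here is a simple expected-loss calculation. Only the ~650 CAD refund comes from the case above; the customer count and the incident probability are invented assumptions.

```python
# Back-of-the-envelope exposure estimate; all inputs except the
# ~650 CAD refund from the Air Canada case are illustrative assumptions.
refund_per_customer_cad = 650   # from the case above
affected_customers = 1_000      # hypothetical "what if" scenario from the text
incident_probability = 0.05     # assumed chance of such an event in a year

gross_exposure = refund_per_customer_cad * affected_customers
expected_annual_loss = gross_exposure * incident_probability

print(f"Gross exposure:       {gross_exposure:,.0f} CAD")        # 650,000 CAD
print(f"Expected annual loss: {expected_annual_loss:,.0f} CAD")  # 32,500 CAD
# An insurer would set the premium above the expected loss, adding a
# loading for uncertainty, audits, and administration costs.
```

This is exactly the kind of arithmetic an insurer would refine with real data from questionnaires and audits before agreeing to cover anything.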

Smart Insurers Won’t Just Insure Anything

The people behind these insurance policies are no pushovers. They know their stuff. They even have a dedicated discipline, actuarial science, to measure risk accurately. Naturally, they’ll be selective about which risks they’re willing to take on, and I think that’s a sensible approach. Again, this will involve lengthy questionnaires and audits, both to ensure the insurance company is clear about the exact risk it is covering and to price the service correctly.

My Take: This Just Shows Where AI Is At

The fact that we now have insurance to cover AI mistakes says a lot about where we are right now. It underscores the idea that while AI is truly impressive and holds a lot of potential, it certainly isn’t without its flaws. There’s a genuine risk that it might lead you astray with incorrect information, and sometimes those slip-ups can hit you right in the pocketbook.

For those who prefer to err on the side of caution, this development is likely to pique their interest. On the flip side, for insurance companies, if they can set premiums higher than what they end up paying out for AI-related blunders, it seems like a good business move.

In a few years, maybe even months, it might also become a customer requirement for companies selling AI to present insurance against AI risk, as is the case today with cybersecurity insurance.