AI-Driven Drone Surveillance Is Leading to Home Insurance Cancellations
scnr.com
This story appears to be a sloppy, confusing summary of a Business Insider piece by Albert Cahn, the man mentioned in the article. The fact that SCNR makes no reference to this piece is telling:
https://www.businessinsider.com/homeowners-insurance-nightma...
Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP, a New York-based civil-rights and privacy group, so he certainly has a dog in this fight, but he also has a horror story to back it up.
The source article on Business Insider contains important details that the summary leaves out:
> Travelers admitted that it screwed up. It never conceded that its AI was wrong to tag me. But it revealed the reason I couldn't find my cancellation notice: The company never sent it.
> Travelers may have invested huge sums in neural networks and drones, but it apparently never updated its billing software to reliably handle the basics. Without a nonrenewal notice, it couldn't legally cancel coverage. Bad cutting edge tech screwed me over; bad basic software bailed me out.
So basically, this comes down to a dispute over how much moss is too much moss to make a roof structurally unsafe. But it sounds like the process goes straight from "AI detects a problem" to "policy gets cancelled," without human review in the middle. Perhaps a less error-prone way of handling it is for the AI's recommendations to trigger a human to go out to the home and investigate?
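Something like this rough sketch, maybe (all the names and thresholds here are hypothetical, not any insurer's actual pipeline):

    # Hypothetical human-in-the-loop gate between "AI flags a roof" and
    # "policy gets non-renewed". Names and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class RoofFlag:
        policy_id: str
        issue: str          # e.g. "moss", "mold", "missing shingles"
        confidence: float   # model score in [0, 1]

    def handle_flag(flag: RoofFlag) -> str:
        if flag.confidence < 0.9:
            return "discard: too uncertain to act on"
        if not schedule_human_inspection(flag):   # a person actually looks
            return "false positive: no action"
        send_remediation_notice(flag.policy_id)   # notice plus time to fix
        return "awaiting remediation before any non-renewal decision"

    def schedule_human_inspection(flag: RoofFlag) -> bool:
        # Placeholder: in practice, dispatch an adjuster and record findings.
        return True

    def send_remediation_notice(policy_id: str) -> None:
        # Placeholder: mail the homeowner the issue and a deadline to fix it.
        print(f"Remediation notice sent for policy {policy_id}")

The point is just that a person and a written notice sit between the model's output and any cancellation.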
Ironically, it seems the article itself was written by AI
The article says it's an industry-wide practice... but nearly all the examples in these articles relate to Travelers.
> to trigger a human to go out to the home and investigate?
But how can this be outsourced overseas?
Not to worry, we have the technology
Cahn's article is really dumb.
The entire story: His insurance broker said his policy was cancelled because of AI. His policy was never actually cancelled.
Everything else is complete speculation.
The irony is that the most likely culprit in all of this was simple human error with his insurance broker.
We had such a notice, for moss and mold (in a hot, dry climate). The images were accurate enough. The insurance company relented after we remediated and sent photos.
Image recognition is not really "AI-driven", and the number of flagged cases is low enough that humans could do the review. It's the cost and legality of drone roof photos that make this possible.
The risk represented in the photos was relatively small, but it's a risk easily and legally measured. Then the higher cost of fix + verify is shifted to the homeowner.
The real beneficiary is roofing companies, raising the question of illegal tying. Insurance is required by mortgages, so homeowners have no choice but to abate with roofing services, which creates an opportunity for the insurance company and roofers to share value extracted in various ways. Which ways are legal is an open question. The value extracted is bounded by the cost of switching, which involves another company assessing your property in some way; tight home insurance markets thus increase the value extractable.
Insurance mandates and reliance require regulation, as does using private insurance for large social risks like wildfire and earthquakes, but that all makes insurance less competitive by reducing viability of new entrants.
Nothing in the chain of reasoning - from drone pictures to investor decisions - is improper, but boy the resulting homeowner squeeze is painful.
Moss and mold are extremely damaging to roofs - even in dry climates. And roofs are expensive.
If the insurance company convinced you to go up and "remediate" the moss problem it sounds like the system worked and they spared you (and them) from an expensive roof replacement.
> ... creates an opportunity for the insurance company and roofers to share value extracted in various ways ...
This opportunity is naturally enlarged in jurisdictions that drive down the population of business competitors with aggressive misregulation -- California.
Unless the insurance company mandates that you use a particular roofing company to address the problem, I can't see how that would constitute illegal tying.
I would expect the insurance company would say something like, "We noticed this problem with your roof. Please correct it or we'll cancel your policy." And then the homeowner is free to choose any roofing company they want to remediate.
Heck, the insurance company could even say, "Here are some reputable roofing companies in your area, but you're still free to select any company you want," and I doubt that would constitute tying.
And anyway, even if you could make the argument that roofing companies in the aggregate will benefit from these sorts of policies, it's not as if keeping one's roof in good condition is forcing consumers to make unreasonable purchases.
>The real beneficiary is roofing companies, raising the question of illegal tying. Insurance is required by mortgages, so homeowners have no choice but to abate with roofing services, which creates an opportunity for the insurance company and roofers to share value extracted in various ways.
How is the insurance company going to get a share of the "value extracted"?
A “referral bonus” or such would do the trick.
Any evidence this is actually happening?
Expanding on this idea - can health insurance companies fly a drone over my house, see me in my yard eating a cheeseburger, and cancel my coverage?
Can an auto insurer do the same and cancel my coverage because they see me doing burnouts in my driveway?
Seems like a giant privacy violation but I'm no legal expert.
If you think about it, the logical conclusion here is that insurance companies could end up with no customers at all. The ideal customer for them is someone with almost zero risk. But if their algorithms get advanced enough to accurately assess who’s a safe bet, anyone they do insure might as well take it as a sign that they don’t actually need insurance in the first place.
It's a self-defeating cycle. As the models improve, they'll start excluding anyone who might ever need to file a claim. The more they optimize, the less relevant they become. In the end, insurance might evolve into something entirely different or become obsolete altogether. It's the ultimate paradox of trying to reduce risk to zero: You end up with no business at all.
There's every reason to assume that these applications are the ones actually backing OpenAI and co's insane valuations. They want to use these products to lock consumers down even harder. Actively rewrite contracts that already favor them, to favor them even harder. Cancel contracts that look to need payouts soon. Determine risk assessment for insurance policies and mortgage holders, and screw every last customer out of every last nickel they can. And the privacy destruction will be smuggled in under the notion that "well no HUMAN is looking at these photos, only these machiiiiiines!" with the added side benefit of laundering the responsibility for awful decisions onto those machines, and borrowing their credibility too. "The decision is perfect, impartial and unbiased. A computer made it, after all!"
And these applications, crucially, are not pie-in-the-sky, someday-this-will-work type situations, as is comprehension of knowledge by an LLM, or a video generator that can remember what a character looks like, no. This is exactly what ML is already used for: aggregate analysis of massive amounts of statistical data. This is its bread and butter.
That's worth $80 billion. Not shitty melty royalty-free images.
This is my opinion too. The real profitable application of current AI is surveillance and the removal of human empathy from the corporate equation.
An elder care insurance provider definitely sent private investigators after the parents of someone I knew. Their way of getting out of the plan was that he was seen lifting cardboard boxes: something that his condition should not have permitted.
Your car insurance isn't high-value enough to bother with, but yes, they already do this where the stakes are higher.
Your personal insurance isn't, but once the tech is perfected to operate on the ones that are, there is zero reason to assume it won't also be deployed to yours.
Progressive is one company that insures 27 million drivers. If they deployed this tech to all of them and it earned them a paltry $2 more annually per driver, that's $54 million in extra revenue. And any company that dares not do it once it's widely adopted will have shareholders screaming at them to implement it.
Sure, but that’s fine by me. Shareholders are your fellow policyholders in a mutual insurance company. The high risk people can go start their own mutual insurance.
Expanding on this idea - "Data driven decisions are leading to significant change."
No matter the source, all available data is being used for any and all purposes, regardless of morals or ethics. In business the goal is profit; in war, it is to win.
As a founder of multiple fintechs going back to the 1990s, I have experience with the earliest payment breaches, some known, some not. The banks were buying this data from crims to offset losses, proactively changing the impacted account numbers before they could be used. Few complained when the bank changed their account number, unless of course one had a unique sequential card number. ;)
There is an ever-increasing amount of public data being produced daily from countless sensors and devices, now covering our entire world and to some degree space. Given this evolution, does anyone still believe they can hide? Just because data comes from a certain sensor or source does not limit its use in any specific way, and if you believe otherwise I have oceanfront property in Kansas to deed you. Now consider the number of known breaches since the internet went public, and multiply that by some large number N to estimate the total number of breaches ever. The point is that there is a lot of data available if one knows where to look and HOW to apply that information for "profit" and/or to "win".
Just as banks were paying crims for data to proactively offset losses, this mindset works in other businesses today as well. It is certainly not being received openly, given the impact on consumers during these increasingly challenging economic times, which only compounds that impact. Recall that one can change their financial account information, but one cannot change their health records - so why does that matter? Let us propose the hypothetical that health insurance companies are now buying up health records that have been leaked. We all know why they would do this: because these health insurance companies are looking to aggressively target the healthy to sign up with them, correct?
Interesting times ahead as "change" continues.
Stay Healthy!
It depends on the policy you signed. For most insurance, you are entering into a private contract with someone to pay for your stuff if it breaks. How they enforce their policy is mostly between them, you, and how much you want to pay a month.
For the most part though, insurers don't actually have too much incentive to create false positives. Increasing your rates for life insurance or cancelling your policy because you ate a burger when they already have your cholesterol information is really dumb and bad for business.
I would assume they've already purchased your "anonymized" credit card transactions, no need to figure out your diet with anything that complicated.
The answer is absolutely "yes". Your car will snitch on you, your house will snitch on you, your phone will snitch on you, your credit card will snitch on you, on and on.
This nation is built to support corporations, not personal freedom. This is the result of that architecture.
For now, in most domains, insurance is optional. If it smells worthless, they have a problem.
I feel like better transparency about policyholders' attitude to home maintenance leads to better pricing of risk, and that's a good thing - the problem is really the lack of warning from the insurer that they are about to be dropped. Not even giving the policyholder the chance to take corrective action sucks, and hints at a future where you need to be on your best behaviour even in private spaces.
Are these like military-grade aerial drones or something? The one shown in the picture, and those many use to do prelim roof inspections, fly far too low - it seems that would be an encroachment on the owner's property rights, no?
Do the drones need to be directly over the property to capture images? It can easily be done from the streets/sidewalk, which won't infringe on anyone's property rights.
I'm not an aficionado of the latest drone tech, admittedly, but the old DJI my friend had could maybe see the roof of a shotgun house or something right on the street. There's no way it would be able to see mold on my house's roof at that distance without encroaching on the property line, unless it was way worse than the homeowner was claiming here and lit up like a big grassy knoll.
>unless it was way worse than the homeowner was claiming here and lit up like a big grassy knoll.
The author is cagey about the extent of the moss growth. He brings up that "A small amount is largely harmless", but for whatever reason refuses to connect either part of the statement (i.e. "small amount" or "largely harmless") to his roof specifically. The only thing we know about his roof is that "the moss was dying". My totally unfounded speculation is that the moss growth was actually pretty bad, but he didn't want that to get in the way of his 1,500-word article about how companies are using AI to oppress consumers.
The article says they’re also using low flying planes and balloons to do the surveys.
My girlfriend worked for an aerial imaging company for a while, and most of their imaging came from general aviation (small planes).
A commercial pilot's license requires 1500 hours of flight time under one's belt. The company paid them a small sum to strap a camera to their planes and fly certain routes. New pilots make some extra cash and the company gets cheap shots.
Pretty sure the pilot can't get paid until after they get their commercial license.
It's possible the company can however rent hours in the plane at below market costs (a friend did something like this years ago while getting his helicopter license)
You're entirely right, I've got my details wrong.
Looks like a CPL only requires 250 hours of flight time per the FAA, and it's airlines that demand additional time.
https://www.faa.gov/faq/how-do-i-get-commercial-pilot-licens...
Another tech asymmetry that puts consumers at a disadvantage.
I wonder if there exists any kind of service for similarly leveraging technology to help consumers find claims opportunities. E.g. run a drone after a hail storm to look for damage that consumers could file claims for. Ideally the carrier would also be doing this, but most aren't going to volunteer money away.
Normally homeowners do have an information advantage: they know more about what's going on with their property than the insurance company, or could find out. I don't think the occasional drone oversight is going to change that much, other than for the most obvious problems.
It seems like whichever party pays for an inspection (such as drone surveillance), the results should be shared with the other party?
The tech you're thinking of is the tools that now make it easier than ever to compare insurance rates and choose an insurer that best fits your risk profile.
Insurance companies thrive in the margins of customers being overly-conservative, and as the information asymmetry drops, their margins go down.
I think a defining aspect of our time is that we've passed a threshold where technology no longer helps us more than it is used against us.
Consumer groups say they've seen a 'dramatic increase' in homeowners dropped from coverage because of aerial images.
In Florida that would be especially concerning, as home insurance is very pricey and hard to get.
Drone or no drone, insurance companies shouldn't be allowed to suddenly drop a covered customer without working with the customer to correct whatever they are concerned with. There should be a mandatory grace period in which the insurer must provide notice and list whatever corrective actions it requires, and the customer can either dispute or take those actions. "The Market" is clearly failing to provide this on its own. We need some kind of Insurance Customer Bill of Rights.
Same as a “fix-it” ticket for having a busted tail light. You prove to the court that you got it repaired and you don’t have to pay a fine.
I think the headline is backwards.
The drones are feeding data. It's drone-driven AI analysis.
They could cut the AI out of the loop and have humans making bad judgment calls and the result would be the same.
Sounds like the homeowners' complaints aren't with the surveillance itself but with too many false positives in the algorithm that marks surveyed properties "uninsurable". It would be just as bad if an agent came to visit and arrived at the same wrong conclusion.
I would further have issues with any drone flying over my property uninvited... even if I don't strictly have issues with companies automating such inspections conditional on them being agreed to in advance.
Add AI and drones for the dystopian effect. Drones allow insurance companies to review the house and surrounding fire risk quite well. Better than having a human drive around with a tall selfie stick!
Such behavior seems like something that should be trivially illegal as an analogy to trespassing. How is it legal to fly a commercial drone over residential areas without consent?
This entire article is based on a single anecdote, stolen from another article, about a guy whose policy was never actually cancelled.
This is clickbait.
I’m frankly surprised this is legal. I can’t imagine something like this passing in the EU.
Compared to the EU, the US ostensibly relies more on litigation and less on regulation to keep things in check. This has, in turn, led to a more permissive attitude where, in general, anything goes unless and until people can successfully sue over it. Which, in turn, doesn't happen because we have a thing called forced arbitration that prevents most consumers from suing over most things - instead they have to go to an "arbitrator" who is chosen and paid for by the corporation in question, and therefore extremely highly motivated not to decide cases in a way that might cause their client to take their business elsewhere.
And, thanks to our darn near unrestricted version of free speech and a Supreme Court ruling that tore down most restrictions on political lobbying by ruling that spending money also counts as a form of speech, corporations spend an obscene amount of money on getting buddy-buddy with legislators. So the legislatures also tend to be incredibly slow to act on these kinds of things because they don't want to bite the hand that feeds them, either.
That sounds incredibly like a version of hell.
I think the details around this vary a lot from country to country. I work for a Norwegian insurer, and I’m pretty sure that we couldn’t cancel a contract like this short of something like fraud. OTOH, contracts are renewed annually, and I think declining to renew would be an option. But damages due to lack of maintenance are not covered anyways, so I’m not sure if there’s much incentive for this kind of thing in our market.
Damages due to lack of maintenance are not covered, and you don't see the obvious application?
I’m not convinced it’s cheaper to surveil the entire country than to simply assess claims when they are reported and dismiss those that are due to lack of maintenance.
Perhaps more people should use Mutual Insurance companies. Then you’re a shareholder in the structure and if they drop you it’s because all the other shareholders felt they didn’t want you.