US AI giants seem fine with their tech being used to spy on Europeans | Euractiv


US AI giants OpenAI and Anthropic appear untroubled by their technologies being used for mass surveillance of non-Americans, including Europeans, according to recent public statements.

The comments were made in the context of a spat between Anthropic – which has positioned itself as the most ethically minded of the large AI labs, its founders having left OpenAI over safety concerns – and the US Department of Defense (DoD), or the “Department of War” as President Donald Trump prefers to call it.

Anthropic’s Claude chatbot is widely used as a decision-support tool by the US military, most recently for Saturday’s attack on Iran. But the company held to a claimed red line in recent contract talks with the DoD, demanding assurances that its models would not be used for mass domestic surveillance or to power fully autonomous weapons.

The talks reportedly fell apart last week. On Friday, US Defence Secretary Pete Hegseth not only cut his department’s contracts with Anthropic but designated the LLM maker as a supply-chain risk – meaning other companies will not be able to use Anthropic’s models while working on any projects with the US military.

It’s a very strong restriction normally reserved for companies from countries like China. In this case, the motivation for the hard limit appears to be Trump administration officials’ disgust at a company they perceive as stuffed with “woke” liberals.

Europeans aren’t Americans, yo!

Still, as regards the use of AI for spying, the safeguards Anthropic had wanted concerned only domestic mass surveillance – it didn’t object to lawful mass surveillance of non-Americans, including Europeans.

“We support the use of AI for lawful foreign intelligence and counterintelligence missions,” Anthropic’s CEO Dario Amodei wrote last week. “But using these systems for mass domestic surveillance is incompatible with democratic values.”

On Saturday, OpenAI announced that it had stepped in and done a deal with the DoD – which it claimed had better safeguards than those Anthropic had sought. But, on surveillance, OpenAI’s safeguards also focus solely on Americans.

“The AI System shall not be used for unconstrained monitoring of US persons’ private information as consistent with these authorities,” reads an excerpt from OpenAI’s DoD contract, per a blog post on the AI developer’s website.

Following some domestic backlash, OpenAI’s CEO Sam Altman also wrote on Tuesday that the company would amend its deal with the department to add further safeguards to stop its AI system being used for “domestic surveillance of US persons and nationals”.

Altman also wrote that OpenAI had been assured that its services would not be used by US intelligence agencies under the DoD, such as the National Security Agency.

AI supercharges mass surveillance

While AI is not the only tool that can pull together personal data to create detailed profiles of individuals, Robin Staab, who researches AI and privacy at ETH Zürich in Switzerland, points out that modern AI systems can do this task much more cheaply than less cutting-edge approaches.

“You get it for a hundredth – a thousandth – of the financial and time investment, or even less,” he told Euractiv.

Staab is also not convinced it is technically possible to train AI models so that they cannot be used for mass surveillance, especially because logically combining data is precisely what they are built to do.

Technical safeguards will also not stop someone who is motivated to use an AI tool for mass surveillance from doing exactly that, Staab warned.

EU terms and conditions may apply

Of course, Anthropic or OpenAI agreeing in principle that the US government can use their AI models for mass surveillance of non-Americans does not necessarily mean that any US body is doing that.

There are also some rules on how the US is allowed to use the data of Europeans.

Notably, the EU–US Data Privacy Framework (DPF) creates obligations for participating US organisations that handle the personal data of EU citizens when it’s transferred to the United States.

But previous high-level transatlantic data transfer deals came unstuck precisely because of US mass surveillance programmes targeting non-Americans. While, under the DPF, there is scope for US intelligence agencies to access EU citizens’ data, a complaint mechanism included in the framework is intended to create checks and balances, though its effectiveness is disputed.

Max Schrems – a prominent European privacy rights activist whose legal challenges buried the prior EU–US transfer agreements in court – doesn’t think these deals provide much protection against AI surveillance.

“If you are not a US citizen, almost anything is possible as long as the data is ‘relevant to US international relations’,” Schrems told Euractiv.

Other data deals underway

There are also already deals under which US authorities are provided with specific data on Europeans, including data on commercial flights they have taken and on financial transactions. These agreements mostly aim to help counter terrorism.

The EU and US are also currently discussing a new agreement that would involve transferring sensitive data on European travellers to the United States, which could allow automated decisions – potentially involving the use of AI – under certain circumstances.

Euractiv contacted Anthropic, OpenAI and the US representation in Brussels to ask about any use of AI to conduct mass surveillance of Europeans, and what rules would apply, but none had responded at the time of publication.

(nl, aw)