LLM-Powered Industrial Sabotage


What if you could manufacture “truth” through deception at scale to target your competitors?

Authoritarian regimes have long paired economic sabotage with disinformation. LLMs are the perfect industrial weapon: they can fabricate data, generate convincing fake websites to back it up, and flood the web with plausible forgeries designed to mislead rivals into strategic failure.

“One forged benchmark, one wrong capital expenditure decision, and an entire product line can go down the drain.” (Hypothetical post-mortem)

The KGB’s active measures unit spent decades forging corporate memos and leaking them to Western media to spook investors and derail corporate plans. Modern authoritarian states still treat the information sphere as a battlefield, and NATO warns that algorithmic invasions and online disinformation campaigns have already become a common hybrid-warfare tool.

LLMs industrialize every stage of the old playbook: instant text generation, persona cloning, and rapid A/B testing of narratives. What once required a team of operatives now fits in an API call.

An ROI that Wall Street envies

A single disinformation‑as‑a‑service campaign costs less than fifty bucks per article, and only a few hundred dollars to boost it in search or social feeds. Chinese military researchers are already adapting Meta’s Llama for social‑bot networks and other influence ops. For a hostile firm or state, nudging a rival’s plans by even a few percent yields a risk‑adjusted pay-off that dwarfs the cost of the campaign.
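To make that asymmetry concrete, here is a back-of-the-envelope sketch. Every figure in it (number of articles, amplification budget, the rival’s capex, the odds of the campaign landing) is a made-up assumption for illustration, not data from any real campaign.

```python
# Back-of-the-envelope economics of a disinformation campaign.
# All figures below are hypothetical assumptions, for illustration only.

articles = 200                 # forged articles, white papers, posts
cost_per_article = 50          # USD, roughly the per-article price cited above
amplification = 5_000          # USD to boost content in search and social feeds

campaign_cost = articles * cost_per_article + amplification

rival_capex = 50_000_000       # USD: the capital expenditure being targeted
misallocation_rate = 0.03      # assume the campaign nudges the decision by 3%
success_probability = 0.2      # assume only 1 in 5 campaigns lands at all

expected_damage = rival_capex * misallocation_rate * success_probability

print(f"Campaign cost:   ${campaign_cost:,.0f}")
print(f"Expected damage: ${expected_damage:,.0f}")
print(f"Damage / cost:   {expected_damage / campaign_cost:,.1f}x")
```

Under these assumed numbers, the expected damage exceeds the campaign cost roughly twenty-fold; tweak the assumptions and the asymmetry tends to persist.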

In such a scenario, nations or corporations could subvert the truth for their opponents or competitors through a surreal, managed pluralism: the desired data or master narrative is repeated by a seeming plethora of voices, yet we are only in a hall of mirrors projecting a single image, a distortion of the truth. Of course, companies could also pay AI vendors for illicit access to their competitors’ prompts, or do what Amazon has done for years: let vendors on its marketplace take the risk of developing and launching products, then grab the rewards by competing directly once those products become successful. OpenAI and its ilk could milk their users for an endless source of business ideas.

“I don’t know why, but executives trust me. Dumb fucks.” (Sam Alterberg, CEO of OpenAEyeball, ca. 2030)

The playbook’s new cover

The classic authoritarian playbook is built on an insight attributed to Joseph Goebbels: “A lie once told remains a lie, but a lie told a thousand times becomes the truth.” This is known as the illusory truth effect. In The Origins of Totalitarianism, Hannah Arendt notes that facts become meaningless once lies are blended with the truth. It is this combination that makes LLMs particularly dangerous, as they already blend truth and fiction, producing a mixture of overgeneralizations, valid summaries of legitimate sources, and hallucinations of made-up ones.

While Timothy Snyder’s battle cry “to abandon facts is to abandon freedom” rings true, holding on to facts is already hard today when faced with the verbosity of the average LLM, even after asking it to be concise. It will become harder still if LLMs are at some point weaponized to back up their invented claims with fake content and made-up references. Many LLMs double down on their deception even when they are exposed, or they switch to placating users at the slightest pushback, even when the counterarguments are weak; they vacillate between outright deceivers and complete pushovers. Neither is an ideal partner for uncovering the truth.

The playbook adapted for LLM-powered industrial sabotage looks roughly like this:

Stage 1 – Distraction. Tactic: flood the zone. Overwhelm people with disinformation. LLM-powered twist: auto-generate articles, white papers, research papers, websites, self-published books, videos, GitHub issues, blog posts, or entire social media accounts in minutes.

Stage 2 – Doubt. Tactic: undermine the truth. Claim that all sources are biased, so the truth is unknowable. LLM-powered twist:
• Embed auto-generated content in research as references
• Spin up bot swarms to question reputable experts and manufacture contrarian opinions
• DDoS websites that dispute the desired narrative to restrict access to alternate sources
• Gamify advanced features to engineer the desired user behaviour (e.g. to reveal corporate secrets in return for "valuable" analyses)

Stage 3 – Delegitimization. Tactic: attack media and experts. Label trusted sources as subversives to erode their authority.

Stage 4 – Replacement. Tactic: introduce alternative narratives. Offer emotionally satisfying lies or conspiracies to construct an alternate reality with its own logic and symbols.

Stage 5 – Enforcement. Tactic: censor or punish dissent. Use law, force, or social pressure to silence critics and make the lie the norm.

Stage 6 – Normalization. Tactic: make the lie banal. Over time, people accept the lie as truth, or at least as inevitable. LLM-powered twist: continuously fine-tune bots on target-company slang and objections.

The tooling isn’t speculative: there are commercial vendors selling bespoke disinformation bundles, and LLMs can already handle every step of an end-to-end truth-fabrication pipeline. The hard part, generating fake content at the push of a button, is already solved. All that remains is for someone to wire it together and scale it up.

“The past was erased, the erasure was forgotten, the lie became the truth.” (George Orwell, Nineteen Eighty-Four, 1949)

Is LLM-powered industrial sabotage even necessary?

Maybe not. LLMs can already exploit a key human vulnerability: uncritical trust in whomever or whatever sounds confident and fluent.

According to truth-default theory, humans default to trusting messages unless a red flag triggers doubt, and LLMs are great at avoiding such red flags with their sycophancy and fluency. People even prefer legal advice from ChatGPT over advice from lawyers.

Because LLMs sound smoother than a con artist and appear more confident than a white male executive, they are a powerful trap that can steer managers and boards off course. I have personally seen executives use AI-hallucinated figures as order-of-magnitude estimates for market sizes and base business decisions on those made-up numbers, even after it had been pointed out that the figures were verifiably incorrect. Or product managers who run queries they did not write or check, on data they did not understand, to inform product decisions. They simply assumed the output was correct, because they had no idea how messy the data was and, more importantly, because it confirmed what they already believed anyway. The level of certainty executives and product managers accept from LLMs for their decisions could, in many cases, just as well come from a roll of the dice.

As I have said before, LLMs in the hands of experts can yield solid results, but in the hands of people who trust without verification, they are tools waiting to be weaponized. Even if no one ever builds weaponized LLMs to sabotage industrial competitors, the existing ones already exploit preexisting biases.

Combine that with the desire to be seen as decisive rather than diligent, as well as the tendency of managers to hallucinate ideas, and we have a dangerous mixture that unscrupulous actors can exploit. What is more, under time pressure or when faced with complexity, people rely on cues like the likeability of a source, confidence, or familiar phrasing rather than analytical verification. That describes a regular day for executives, who already tend to be short on analytic reflection skills.

Even stock market manipulation seems relatively straightforward with LLMs: just deepfake a video of an executive saying, for instance, that quantum computing is decades away from being useful, or manufacture a news story about tariffs. The former was a real executive; the latter was indeed fake news. But who can tell the difference anymore?

What can we do?

Most businesses do not make high-stakes decisions based on a blog post or a tweet alone. But LLMs can change that landscape rapidly by offering whole collections of corroborating yet fabricated data and stories. So, what can we do to guard against such a scenario?

Provenance technology

Provenance standards exist: the C2PA spec can cryptographically sign content and its metadata, yet most major social platforms do not enforce it, and even where the metadata is present, it is often ignored. Broader adoption of such standards is a step in the right direction.
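For intuition, here is a minimal sketch of the signing-and-verification principle that provenance standards build on, using a plain Ed25519 signature from the Python `cryptography` package. It is not the actual C2PA manifest format, and the publisher, content, and metadata are invented for illustration.

```python
# Minimal sketch of the idea behind provenance standards such as C2PA:
# sign content plus provenance metadata so later tampering is detectable.
# This uses a bare Ed25519 signature, not the real C2PA manifest format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

content = b"Q3 capacity report, revision 3: no change to capex plans."
metadata = b'{"publisher": "example.com", "created": "2025-01-01"}'
signature = publisher_key.sign(content + metadata)

# A reader or platform verifies the signature before trusting the content.
try:
    public_key.verify(signature, content + metadata)
    print("Provenance intact: content matches what the publisher signed.")
except InvalidSignature:
    print("Provenance check failed: content or metadata was altered.")
```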

Prime people for truth

We can also remind people to check for accuracy. Priming people for accuracy can reduce the spread of misinformation and counter the illusory truth effect, both in the lab and in the real world. Always verify outputs against primary sources: never trust a single summary.
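As a small illustration of what “verify against primary sources” can look like in practice, the sketch below takes a list of citations (as an LLM might emit them) and checks that each URL resolves and that the page actually mentions the cited title. The sample citations and the reliance on `requests` are assumptions for illustration; passing this check only filters out dead or fabricated links, it does not establish that a source supports the claim.

```python
# First-pass reference check for LLM output: does each cited URL resolve,
# and does the page text actually contain the cited title?
import requests

# Hypothetical citations, as an LLM might return them.
citations = [
    {"title": "Illusory truth effect",
     "url": "https://en.wikipedia.org/wiki/Illusory_truth_effect"},
    {"title": "Totally real market study 2024",
     "url": "https://example.com/made-up-report"},
]

for c in citations:
    try:
        resp = requests.get(c["url"], timeout=10)
        # Crude check: the page exists and at least mentions the cited title.
        found = resp.ok and c["title"].lower() in resp.text.lower()
    except requests.RequestException:
        found = False
    print(f"[{'OK' if found else 'CHECK MANUALLY'}] {c['title']} -> {c['url']}")
```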

Audits

LLMs are currently black magic inside a black box: proprietary, opaque, and optimized for persuasion over truth. To combat sabotage, we need full disclosure on the input and output (meta)data, such as training sources, prompt-leakage risks, and hallucination rates. Third-party attestations, as the EU’s AI Act demands for high-risk systems, sound like an obvious choice, except that audit organizations need to be truly independent and incentivized to seek the truth and enforce sensible standards to protect it, unlike the accounting consultancies that routinely miss obvious fraud. Such audits should also stress-test models adversarially.
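As a toy example of one metric an independent audit could disclose, the sketch below estimates a hallucination rate by running a model over prompts with known ground truth and counting the answers that miss it. The `query_model` callable and the tiny test set are placeholders I made up; a real audit would need a far larger, adversarially chosen benchmark.

```python
# Toy audit harness: estimate a hallucination rate by checking model answers
# against prompts with known ground truth. `query_model` stands in for
# whatever API the audited system exposes.
from typing import Callable

def hallucination_rate(query_model: Callable[[str], str],
                       test_set: list[tuple[str, str]]) -> float:
    """Fraction of answers that fail to contain the expected ground-truth fact."""
    misses = sum(
        1 for prompt, expected in test_set
        if expected.lower() not in query_model(prompt).lower()
    )
    return misses / len(test_set)

if __name__ == "__main__":
    # Stand-in model and a tiny test set, purely for illustration.
    canned = {"In which year was the C2PA coalition founded?": "It was founded in 2021."}

    def model(prompt: str) -> str:
        return canned.get(prompt, "I am not sure, possibly 2018.")

    tests = [
        ("In which year was the C2PA coalition founded?", "2021"),
        ("Which spec does this article recommend for provenance?", "C2PA"),
    ]
    print(f"Hallucination rate on this test set: {hallucination_rate(model, tests):.0%}")
```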

Adversarial testing

Adversarial testing is another possible defence, though I am generally not in favour of red-team drills that seed fake websites and white papers to test detection, because that creates the very problem you are trying to avoid. We must also be careful that such tests do not trigger an arms race that turns AIs into better detectors, and therefore better evaders, than humans, as we have done with (re)CAPTCHAs and phishing emails.

Final thought

LLM-enabled industrial sabotage is not inevitable, but the economics and psychology make it tempting. It dovetails with the truth-subversion playbooks authoritarians have honed for a century.

Some may argue that LLM sabotage is overblown: after all, markets correct, journalists fact-check, and executives think before they act. But when the cost of deception falls to zero, the volume of lies will overwhelm those defences, and the upside is too large for people without scruples to ignore.

The best defence is layered: provenance technology, audits, and rigorous human scepticism. If you must trust, at least verify.