AI attacks demand a mental shift


On November 13, 2025, Anthropic reported that a Chinese state-sponsored hacking group had used its Claude Code tool to run an espionage campaign.

Bleeping Computer compiled a fair share of the criticism that followed. Security researchers pointed out that the report is too vague and contains no indicators of compromise. Prominent voices called it a PR stunt.

Unfortunately, the key message was lost in the emotion. Communication is a tricky art, and pushing your product too hard can distract people from the core issue.

I understand the urge to dismiss it as a marketing trick. But hold your horses! The danger does not come from the ML models themselves. The real power lies in their novel application in combination with existing tools.

Every time something like this happens, people argue that we need to restrict this and that: build higher walls, censor knowledge, track everything. But patching holes as a default response is a very short-sighted reaction.

To better understand this phenomenon, I recommend the monumental investigation by mnemonic and NRK into the Magic Cat and Darcula phishing operation (link at the end). TL;DR: the key innovation was an automated website generator with a fancy admin panel that made stealing credit cards much easier.

Breakthroughs happen when friction is removed from a process. An AI model acting as an orchestrator can hold context and execute actions faster than a human. To secure data, we need maintainable, auditable tooling, and more people who can build and review it.

Reactive by nature, we humans will eventually recognize that better education and open ecosystems facilitate economic growth. The mental shift is to act before we’re forced: fund education and open source like your life depends on it.

Links