AI goes phishing: Scams no longer have tell-tale typos, errors

Illustration: Aïda Amer/Axios

AI chatbots have made scam emails harder to spot and the tells we've all been trained to look for — clunky grammar, weird phrasing — utterly useless.

Why it matters: Scammers are raking in more than ever from basic email and impersonation schemes. Last year, the FBI estimates, they made off with a whopping $16.6 billion.

  • Thwarting AI-written scams will require a new playbook that leans more on users verifying messages and companies detecting scams before they hit inboxes, experts say.

The big picture: ChatGPT and other chatbots are helping non-English-speaking scammers write typo-free messages that closely mimic trusted senders.

  • Before, scammers relied on clunky tools like Google Translate, whose output was often too literal and missed natural grammar and tone.
  • Now, AI can write fluently in most languages, making malicious messages far harder to flag.

What they're saying: "The idea that you're going to train people to not open [emails] that look fishy isn't going to work for anything anymore," Chester Wisniewski, global field CISO at Sophos, told Axios.

  • "Real messages have some grammatical errors because people are bad at writing," he added. "ChatGPT never gets it wrong."

Zoom in: Scammers are now training AI tools on real marketing emails from banks, retailers and service providers, Rachel Tobac, an ethical hacker and CEO of SocialProof Security, told Axios.

  • "They even sound like they are in the voice of who you're used to working with," Tobac said.
  • Tobac said one Icelandic client that never used to worry about employees falling for phishing emails is now concerned.
  • "Previously, they've been so safe because only 350,000 people comfortably speak Icelandic," she said. "Now, it's a totally new paradigm for everybody."

Threat level: Beyond grammar, the real danger lies in how these tools scale precision and speed, Mike Britton, CIO at Abnormal AI, told Axios.

  • Within minutes, scammers can use chatbots to create dossiers about the sales teams at every Fortune 500 company and then use those findings to write customized, believable emails, Britton said.
  • Attackers now also embed themselves into existing email threads using lookalike domains, making their messages nearly indistinguishable from legitimate ones, he added.
  • "Our brain plays tricks on us," Britton said. "If the domain has a W in it, and I'm a bad guy, and I set up a domain with two Vs, your brain is going to autocorrect."

Yes, but: Spotting scam emails isn't impossible. In her red team work, Tobac typically gets caught when:

  • Someone practices what she calls "polite paranoia": texting or calling the organization or person being impersonated to confirm whether they really sent the suspicious message.
  • A target uses a password manager and has long, complex passwords.
  • They have multifactor authentication enabled (one common form, the time-based one-time code, is sketched below).
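
For the curious, here is roughly what that multifactor step often looks like under the hood: a brief Python sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that many authenticator apps implement. The base32 secret is a made-up example.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """Compute the current RFC 6238 time-based one-time code."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // step)  # 30-second window
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                             # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # six digits that rotate every 30 seconds

Because the code depends on a secret the scammer never sees, a flawlessly written phishing email alone isn't enough to take over the account.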

What to watch: Britton warned that low-cost generative AI tools for deepfakes and voice clones could soon take phishing to new extremes.

  • "It's going to get to the point where we all have to have safe words, and you and I get on a Zoom and we have to have our secret pre-shared key," Britton said. "It's going to be here before you know it."

Go deeper: AI voice-cloning scams: A persistent threat with limited guardrails