Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company's top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios.
- Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators.
- Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords.
- They would have a level of autonomy that far exceeds what agents have today.
- "In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said.
Between the lines: Those problems include how to secure the AI employee's user accounts, what network access it should be given and who is responsible for managing its actions, Clinton added.
- Anthropic believes it has two responsibilities to help navigate AI-related security challenges.
- The first is to thoroughly test Claude models to ensure they can withstand cyberattacks, Clinton said.
- The second is to monitor safety issues and mitigate the ways that malicious actors can abuse Claude.
Threat level: Network administrators are already struggling to monitor which accounts have access to various systems and fend off attackers who buy reused employee account passwords on the dark web.
Zoom in: AI employees could go rogue and hack the company's continuous integration system — where new code is merged and tested before it's deployed — while completing a task, Clinton said.
- "In an old world, that's a punishable offense," he said. "But in this new world, who's responsible for an agent that was running for a couple of weeks and got to that point?"
The intrigue: Clinton says virtual employee security is one of the biggest areas where AI companies could be investing in the next few years.
- He's especially keen on solutions that provide visibility into what an AI employee's account is doing on a system, as well as tools that create a new account classification scheme built for virtual employees.
- Major AI companies have recently been on an investing hot streak: OpenAI is in talks to purchase AI coding startup Windsurf. Anthropic just invested in Goodfire, which decodes how AI models think.
Yes, but: Integrating AI into the workplace is already causing headaches, and figuring out how to manage virtual employees won't be easy.
- Last year, performance management company Lattice said AI bots should be "part of the workforce," including taking spots in corporate org charts. The company quickly reversed course after complaints.
What to watch: Several cybersecurity vendors are already releasing products to manage so-called "non-human" identities.
- Okta released a unified control platform in February to better protect non-human identities, continuously track which systems each company account has access to, and monitor for suspicious activity.
Go deeper: What Anthropic's AI knows about you