Exclusive: OpenAI says future models likely to pose "high" cybersecurity risk


OpenAI says the cyber capabilities of its frontier AI models are accelerating, warning Wednesday in a report shared first with Axios that upcoming models are likely to pose a "high" risk.

Why it matters: The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks.

Driving the news: OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models become able to operate autonomously for longer periods, paving the way for brute-force attacks.

Catch up quick: OpenAI issued a similar warning about bioweapons risk in June, then released ChatGPT Agent in July, which did indeed rate "high" on its risk scale.

Yes, but: The company didn't say exactly when to expect the first models rated "high" for cybersecurity risk, or which types of future models could pose such a risk.

What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview.

The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.