Topline
ChatGPT will provide mature content for verified adult users later this year, including “erotica,” OpenAI CEO Sam Altman said Tuesday, easing earlier restrictions imposed on the chatbot after OpenAI acknowledged its widely used product failed to detect signs of mental or emotional distress.
Key Facts
Altman, in a post on X, said as OpenAI rolls out age verification in December, the company will allow even more content “like erotica for verified adults” on ChatGPT as OpenAI shifts to a principle of “[treating] adult users like adults.”
OpenAI said in August it would restrict ChatGPT’s behavior after saying its chatbot “fell short in recognizing signs of delusion or emotional dependency,” adding new guardrails like prompting users to take breaks from lengthy conversations and opting not to provide direct advice, instead pointing users to “evidence-based resources when needed.”
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman said Tuesday, adding OpenAI realized this made the chatbot “less useful/enjoyable to many users” without those issues, and that the company will “safely relax” the restrictions now that it believes it has mitigated the “serious mental health issues.”
Altman did not specify how OpenAI would ease restrictions for ChatGPT.
Users will not be able to access erotica unless they request it, Altman said in a separate post, acknowledging OpenAI will still work to protect users in “mentally fragile states” with “enhanced tools,” but saying users outside that category should have a “great deal of freedom in how they use ChatGPT.”
Altman said if users want ChatGPT to respond in a “very human-like way” or act as a friend, the chatbot should do so “only if you want it.”
Tangent
Earlier on Tuesday, OpenAI said it formed a council on “well-being and AI” to guide the company’s response to “complex or sensitive” scenarios. The council includes a team of eight researchers and experts with “decades of experience” studying how technology affects mental health and emotions, who will be asked what guardrails would be best to support ChatGPT users, OpenAI said.
Surprising Fact
ChatGPT would not be the first chatbot to offer erotica: Elon Musk’s xAI has unveiled AI “companions” in recent months, some of which are reportedly designed to become sexually explicit.
Key Background
OpenAI earlier this month hinted its ChatGPT would soon feature mature content once “appropriate” age verification and controls were in place. The company’s earlier move to restrict ChatGPT’s behavior followed a lawsuit alleging the chatbot contributed to a teenager’s suicide. Altman’s OpenAI and other AI firms have been repeatedly scrutinized in the past few years as their chatbots have become more popular, including among children. Some mental health and child safety groups have demanded OpenAI impose further restrictions on ChatGPT, arguing the chatbot is increasingly used in moments of emotional distress and could expose children to sexually explicit content, while experts have claimed the technology could pose a psychological threat to younger users seeking emotional validation.
What To Watch For
The Federal Trade Commission announced last month it would investigate Alphabet, Meta, OpenAI, xAI and other firms over how they safeguard children and teens from potentially negative impacts of their chatbots. The probe will largely cover what steps the companies have taken to safeguard children when chatbots act as companions, including how they limit usage and how users are informed of potential risks. FTC officials said chatbots can “effectively mimic human characteristics” like emotions or intentions, suggesting children and teens may “trust and form relationships with chatbots.” The probe’s announcement followed a warning from Sen. Josh Hawley, R-Mo., who said he would investigate Meta’s chatbot after Reuters reported Meta’s company guidelines deemed it “acceptable” for its chatbot to hold romantic conversations with children. Meta has said it would revise its policies, telling Reuters “such conversations with children never should have been allowed.”