Class action spans thousands of kids
Discord user led cops to Grok-generated CSAM of real girls, lawsuit says.
A poster featuring an image of US billionaire Elon Musk that calls on users of his X social media platform to delete their accounts over the AI chatbot Grok’s CSAM scandal. Credit: JUSTIN TALLIS / Contributor | AFP
A tip from an anonymous Discord user led cops to what may be the first confirmed Grok-generated child sexual abuse material (CSAM) that Elon Musk’s xAI can’t easily dismiss as nonexistent.
As recently as January, Musk denied that Grok generated any CSAM during a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people.
At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, about 23,000 of which appeared to depict children. Rather than fix Grok, xAI limited access to the system to paying subscribers. That kept the most shocking outputs from circulating on X, but the worst of it was not posted there, Wired reported.
Instead, it was generated on Grok Imagine. Digging into the standalone app, a researcher in January found that a little less than 10 percent of about 800 Imagine outputs reviewed appeared to include CSAM. In an X post following that revelation, Musk continued rejecting the evidence and insisted that he was “not aware of any naked underage images generated by Grok,” emphasizing that he’d seen “literally zero.”
However, Musk may now be forced to finally confront Grok’s CSAM problem after a Discord user reached out to a victim, prompting law enforcement to get involved.
In a proposed class-action lawsuit filed Monday, three young girls from Tennessee and their guardians accused Musk of intentionally designing Grok to “profit off the sexual predation of real people, including children.” They estimated that “at least thousands of minors” were victimized and have asked a US district court for an injunction to finally end Grok’s harmful outputs. They also seek damages, including punitive damages, for all minors harmed.
An attorney representing the girls, Annika K. Martin, said in a press release that their lives had been “shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience.”
“These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators. Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it,” Martin said.
The harm is so extensive that, for the girls seeking justice, it’s not enough for Musk to acknowledge only the images they can prove Grok twisted into CSAM, Martin said.
“We intend to hold xAI accountable for every child they harmed in this way,” Martin said.
Cops link Grok to Discord CSAM
For one of the young girls, the nightmare started in December, the complaint said. That’s when she got an anonymous message on Instagram from a Discord user warning that her explicit “pics” were shared in a folder along with many other minors’. Eventually, the user shared “a series of AI-generated images and videos, which depicted her” as well as 18 other minor girls, then linked her to a Discord server created by the perpetrator.
Now over 18, the first victim to receive the tip was “disturbed,” the complaint said, finding it hard to distinguish the sexualized photos from her real-life content. She immediately knew which photos the images were based on, most of which were posted to her social media when she was still a minor. And troublingly, she recognized some of the other girls in the folder from her school.
Her first instinct was to contact the other victims she knew, then “ultimately, local law enforcement was contacted, and a criminal investigation was opened,” the complaint said.
Investigating the Discord evidence, cops quickly determined that the perpetrator had access to the first victim’s Instagram “because he had maintained a close and friendly relationship” with her. Searching his phone, cops found a third-party app that licensed or otherwise purchased access to Grok, which they concluded the perpetrator used to morph the girls’ photos.
From there, the bad actor uploaded the images to a file-sharing platform called Mega and used them as a “bartering tool in Telegram group chats with hundreds of other users,” trading away the AI CSAM files “for sexually explicit content of other minors.”
The harms to victims have been extensive, the lawsuit said, citing acute emotional and mental distress. Victims who know the perpetrator remain uncertain whether the Grok-generated CSAM was shared with classmates or distributed to others at their school, the lawsuit noted. One girl fears the scandal will impact her college admissions, while another feels too scared to attend her own graduation.
Even more alarming than any acquaintances coming across the AI CSAM, however, is the fear that the girls will now be stalked due to Grok’s outputs. As the lawsuit explains, “it also appears the victims’ true first names and the name of their school was attached to their files online, meaning other online predators may also be able to identify them, creating a substantial risk for stalking.”
xAI allegedly hosts Grok CSAM
While it was previously reported that Grok Imagine’s paying subscribers were generating content even more graphic than the Grok outputs that sparked outcry on X, the lawsuit alleges that xAI has taken additional steps to hide how it profits from explicit content that harms real people.
The lawsuit alleges that xAI also sells licenses and access to its Grok AI model to third-party apps like the one the girls’ perpetrator used. That arrangement supposedly gives xAI an additional revenue stream while obscuring the fact that third parties are “using xAI servers and platforms to produce CSAM content requested by these apps’ customers,” a press release from the girls’ legal team said.
Allegedly, all of the sexually explicit content generated by third parties is hosted on xAI servers, then distributed by xAI.
“xAI has not made Grok’s AI model publicly available and has not licensed Grok in its entirety but instead licenses the use of its servers to these middlemen companies, knowing that any illicit and unlawful content generated through prompts to these applications will ultimately be created and distributed from xAI servers,” the lawsuit said.
Victims claim that arrangement puts xAI squarely in violation of child pornography laws:
On information and belief, xAI possessed the CSAM of Plaintiffs on its servers after Grok produced their CSAM and then transported and distributed the unlawful contraband to its customer/user, namely, the perpetrator, using the cut-out or third-party middleman application.
They’re hoping the court will finally make clear whether xAI knew Grok was generating CSAM and whether xAI knowingly processed that content on its servers, then decided to distribute it to increase xAI’s revenue. There can be no valid excuse for failing to protect minors if the court agrees with victims that xAI violated child porn laws or owed a duty of care, the lawsuit alleged.
“The gravity of the harm inflicted by Defendants’ practices vastly outweighs any purported benefit of Defendants’ ‘spicy mode’ or other uncensored content features,” the complaint said. “No legitimate business interest is served by designing an AI image-generation tool to produce CSAM.”
xAI did not immediately respond to Ars’ request for comment. But the company has previously blamed users who generated CSAM for the backlash while threatening to suspend users who abuse Grok.
