“I was naive,” he told POLITICO in a rare interview. “I don’t want any more negative consequences because I was stupid enough to think that I could just put an idea out for people to look at in today’s world.”
His proposals would have created new state agencies to enforce commitments that certain AI companies make to benefit the public good, address job displacement and ensure that advanced models are released safely. POLITICO first reported the withdrawals this week.
Simply qualifying for the ballot in California typically requires millions of dollars to hire fleets of petition-carriers to gather signatures — a process the state attorney general cleared Oldham to start in early February. But Oldham said his goal was only to draw attention to his causes, with no plan to mount a serious campaign and not enough personal money to try.
“I thought basically, it gets seen by people, and they’d like it, or it just wouldn’t … and it’d just be whatever,” he said. “My main thing is, I’m afraid that a big world of AI is a big world of zero accountability,” he added, pointing to recent examples like an AI-generated video posted by President Donald Trump that depicted former President Barack Obama and former First Lady Michelle Obama as apes.
The measures quickly drew attention, with Oldham saying he now gets “calls literally nonstop.” Part of the intrigue was his sparse public profile, lack of history with California campaigns — and that not even advocates in the AI safety space appeared to have heard of him. Oldham also filed his paperwork for the proposal on the same day that another ballot campaign launched, with the explicit goal of reversing OpenAI’s corporate restructuring.
This week, OpenAI went on the attack. The AI company called Oldham’s background into question by lodging a complaint with the Fair Political Practices Commission, California’s top campaign finance watchdog, the New York Post first reported.
OpenAI lawyer Brian Hauck asked the agency to investigate Oldham while drawing connections between him, industry competitors and organizers of the other ballot campaign.
By his own telling, Oldham says he’s a “nobody,” having worked at his family’s small boat chartering business for years. Before that, he aspired to be a filmmaker.
He claimed to have written the proposals himself, ironically turning to AI tools like ChatGPT for help with the legal particulars. He has spoken about them with friends, he said, but has not enlisted any lawyers, consultants or other professionals, describing his exposure to tech as that of a hobbyist — tinkering with computers, once gaming avidly, reading science fiction and living in the AI-obsessed Bay Area.
Initially, Oldham repeatedly evaded interview requests from POLITICO and other media outlets, later explaining that he was unprepared for the surge of interest.
His withdrawal has thinned the field of AI ballot campaigns for 2026. The one remaining measure was filed by Poornima Ramarao, the mother of a deceased OpenAI whistleblower, who is fundraising for it with an anonymous group known as the Coalition for AI Nonprofit Integrity. Ahead of Oldham’s withdrawal, OpenAI shelved its own kids chatbot safety measure with Common Sense this month.
CANI has declined to comment on how much progress it’s made, and neither the group, nor Ramarao, nor Oldham has registered a committee to fundraise for their measures. In its sworn FPPC complaint against Oldham, OpenAI also accused CANI of failing to report likely contributions for its campaign.
OpenAI’s lawyer referenced prior reporting from the Post to back the company’s suspicion that the two efforts were teaming up to single out the company from rivals in the AI industry.
“Recent reports questioning the personal ties and motivations of other AI ballot measure proponents are concerning,” Hauck said in a statement for this story. “Measures that can’t be defended openly don’t belong on the ballot. We respectfully ask the FPPC to encourage full candor and transparency so the public can evaluate these efforts on their merits.”
The complaint cited that Oldham’s stepsister, Zoe Blumenfeld, is a senior employee at competitor Anthropic and that a court document previously described his mother as a friend and past investor of Guy Ravine, an entrepreneur who lost a legal trademark fight against OpenAI.
Oldham disputed that the family connections had anything to do with his initiatives. He said he had never heard of CANI, met Ravine only a few times at least a decade ago and has not been close with his stepsister since his stepfather died in 2006. He added that he had forgotten Blumenfeld worked at Anthropic, having last seen her at Thanksgiving about two years before the story.
“I didn’t even think of her,” he said. “It is just a pure coincidence that she works for Anthropic, like I honestly didn’t even clock that.”
OpenAI did not address Oldham’s claims of intimidation in the statement from Hauck.
An Anthropic spokesperson said the company rejects “what appears to be a personal attack on one of our employees,” adding that it was not involved in Oldham’s proposals and did not support either of them. Blumenfeld did not respond to an inquiry.
Ravine also maintained that there was no coordination with Oldham, confirming they had not been in contact for about a decade and dismissing the link to his mother as “tenuous.”
Oldham’s measures did not mention OpenAI, and he said his intention was to get more oversight for the entire sector, not target a particular company. As written and summarized by the attorney general’s office, one of his initiatives applies to “companies that develop or control advanced AI systems and meet other specified criteria,” and the second covers all AI companies that were “incorporated as ‘public benefit corporations’ or nonprofits under California law.”
OpenAI may be the most notable example of the latter group, with the ChatGPT maker completing a high-profile conversion from a nonprofit to a hybrid public benefit model last fall. But Anthropic is also structured as a public benefit corporation, as was Elon Musk’s xAI until 2024.
While Ramarao’s measure does not name OpenAI either, her separate campaign website states it is targeting the company.
OpenAI has used aggressive tactics before to unmask outside critics and draw parallels between them.
It sent subpoenas to multiple groups that objected to its restructuring and initially tried getting the FPPC to investigate CANI last July. The agency dismissed the complaint that fall, as POLITICO first reported, responding that OpenAI’s lawyers did not offer enough evidence to support the violations they had alleged. Ann O’Leary, a former chief of staff to Gov. Gavin Newsom and lawyer for OpenAI, met with FPPC staff afterwards about the decision, according to the complaint.
If Oldham is to be believed, his experience shows the challenges that ordinary people run into with California’s initiative process, which, despite being designed as a tool of direct democracy that gives voters power on par with the state legislature, is used most successfully by special interests.
Oldham’s situation also highlights a broader challenge in AI policymaking, where proposals developed in secrecy or by opaque actors often blur the line between genuine inexperience and hidden agendas.
Many nonprofits can shield their anonymous donors and choose which ones to disclose, which companies like OpenAI argue makes it impossible to determine whether competitors are behind such activities. The ChatGPT maker previously alleged that CANI propped up a New York-based LSAT tutor as a sham leader for its organization.
The issue of anonymity has frustrated AI industry players and safety advocates alike.
The pro-industry super PAC Leading the Future, which counts OpenAI President Greg Brockman among its donors, has also described its opponents as a “dark money network” working to “advance one company’s business and ideological agenda.” The critique is a reference to Anthropic, which recently donated $20 million to a nonprofit countering the super PAC.
When Oldham’s proposals first emerged, some advocates of stronger AI regulations said they wouldn’t evaluate his ideas — however worthy they may be in principle — due to uncertainty about his identity.
In its past subpoenas and other legal filings, OpenAI has pointed to CANI’s secrecy to question the motives of advocacy groups that have disclosed their donors and voiced concerns about the company’s direction.
Oldham said the probing pushed him to rethink the proposals.
“The collateral damage of me not withdrawing is higher than anticipated,” he said over text. “It’s actively making other [people’s] lives much more difficult and that wasn’t the goal here.”
A version of this story first appeared in California Decoded, POLITICO’s morning newsletter for Pros about how the Golden State is shaping tech policy within its borders and beyond.