OpenAI told DC company it can’t pitch using ChatGPT for politics


This is the first known instance of OpenAI policing how the use of its technology is advertised. The company last updated its policies in March; the rules now ban people from using its models for, among other things, building products for political campaigning or lobbying, payday lending, unproven dietary supplements, dating apps, and “high risk government decision-making,” such as “migration and asylum.”

OpenAI told Semafor that it uses a number of methods to detect and enforce violations of those policies. On politics specifically, the company revealed it is building a machine learning classifier to flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying.

The dispute between OpenAI and FiscalNote stemmed from how the latter described two intertwined products. One is “VoterVoice,” which uses AI to help well-funded Washington interests send hundreds of millions of targeted messages to elected officials in support of, or opposition to, legislation.

OpenAI did not object to a second product, SmartCheck, which uses ChatGPT to coach grassroots advocacy groups on improving their email campaigns by assessing factors such as subject lines and the number of links they include.