Fearing “loss of control,” AI critics call for 6-month pause in AI development


“The risks and harms have never been about ‘too powerful AI,’” Bender wrote in a tweet that mentions her paper, On the Dangers of Stochastic Parrots (2021). “Instead,” she continued, “They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”

Still, many on both sides of the AI safety/ethics debate agree that the pace of change in the AI space over the past year has been overwhelming, giving legal systems, academic researchers, ethics scholars, and culture little time to adapt to the new tools, which are poised to kick-start radical changes in the economy.

A screenshot of GPT-4’s introduction to ChatGPT Plus customers from March 14, 2023. Credit: Benj Edwards / Ars Technica

As the open letter points out, even OpenAI urges slower progress on AI. In a statement on artificial general intelligence (a term that roughly means AI with human-equivalent capabilities or greater) published earlier this year, OpenAI wrote, “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The Future of Life Institute believes that the time to limit that growth is now, and that if “all key actors” don’t agree to slow AI research soon, “governments should step in and institute a moratorium.”

However, there might be some difficulty on the government-regulation front. As The Guardian points out, “The call for strict regulation stands in stark contrast to the UK government’s flagship AI regulation white paper, published on Wednesday, which contains no new powers at all.”

Meanwhile, in the US, there appears to be little government consensus about potential AI regulation, especially in regard to large language models such as GPT-4. In October 2022, the Biden administration proposed an “AI Bill of Rights” to protect Americans from AI harms, but it serves as a set of suggested guidelines rather than a document backed by the force of law. Nor does it specifically address potential harms from the AI chatbots that emerged after the guidelines were written.

Whether the points laid out in the open letter present the best way forward or not, it seems likely that the disruptive power of AI models—whether through super-intelligence (as some argue) or through a reckless rollout full of excessive hype (as others argue)—will land on regulators’ doorsteps eventually.