The splendidly named "OpenSlopware" was, for a short time, a list of open source projects using LLM bots. Due to harassment, it's gone, but forks of it live on.
"OpenSlopware" was a repository on the European Codeberg git forge containing a list of free software and open source projects which use LLM-bot generated code, or integrate LLMs, or which show signs of "coding assistants" being used on the codebase, such as pull requests created or modified by automated coding tools.
However, its creator – who we are intentionally not naming or tagging here – received so much harassment from LLM boosters that they removed the repository, and indeed their Bluesky account, stating that they would withdraw from social media for a while. Now, if you try to visit the original URL, you will receive only a 404 message.
All is not lost, though. Although it contained human-readable text, it was a Git repository, and so it was possible to fork it – to clone its contents into another repository. Several people did so before the original OpenSlopware creator deleted it, such as this Small-Hack version, also on Codeberg. The Register has contacted the maintainer of this fork to ask if they'd talk to us about it, but so far, they say they're still thinking about it. Others were planning to maintain their own copies, but have decided to join forces with this one.
Notably, the fork persists despite some of those involved in the original project apologizing for their part in it and saying it should not be revived.
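For anyone wondering how the list outlived its creator's delete button: a bare mirror clone captures every branch, tag, and ref of a Git repository, and pushing that mirror to a fresh, empty remote recreates the whole thing elsewhere. Here's a minimal sketch of the idea in Python – the repository URLs are placeholders, since the original is gone:

```python
import subprocess

# Placeholder URLs for illustration only -- the original repository
# no longer exists, so substitute real source and destination remotes.
SOURCE = "https://codeberg.org/someuser/openslopware.git"
MIRROR = "https://codeberg.org/your-account/openslopware-fork.git"

# A bare mirror clone copies every branch, tag, and ref, not just
# the default branch -- which is why forks made before deletion
# preserved the list in full.
subprocess.run(["git", "clone", "--mirror", SOURCE, "openslopware.git"], check=True)

# Pushing the mirror to a fresh, empty remote recreates the
# repository, history and all, under a new home.
subprocess.run(["git", "-C", "openslopware.git", "push", "--mirror", MIRROR], check=True)
```

A plain `git clone` followed by a push would mostly work too; `--mirror` simply guarantees that nothing – tags, notes, non-default branches – gets left behind.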
OpenSlopware is one of a growing number of sites, groups, and communities that exist to criticize the increasing use and promotion of LLM bots and their output, for which the word "slop" is becoming the standard term. Some merely spell out their criticism, such as this open letter to those who fired or didn't hire tech writers because of AI. Others go further and name and shame those responsible. For instance, we recently saw the blog post "Authors" using AI slop in their books: a small list.
One example is the AntiAI subreddit, but there is also a Lemmy instance devoted to the cause, called Awful.systems. (For those unfamiliar with it, Lemmy is a tool for creating news aggregator and discussion sites – think Reddit, or the recently revived but LLM-infested Digg – based around the same ActivityPub protocols used by Mastodon and the rest of the Fediverse.)
One of the Awful.systems site admins is Unix sysadmin and former Wikipedia press officer David Gerard, who previously took an ultra-skeptical view of the cryptocurrency world on Attack of the 50 Foot Blockchain (which also inspired a book and a sequel). He now publishes an equally skeptical blog on the subject of the LLM bot industry, Pivot to AI. In a post on the Lemmy instance, as well as on his Mastodon feed, he says that Awful.systems also plans to curate and maintain a list along the lines of OpenSlopware – but they're looking for a comparably catchy name.
Those still on the fence about the merits of LLM bots and their output may be surprised by the level of vitriol the subject inspires, but it is one of the most contentious issues in the entire computing world today.
- IceWM soldiers on while Budgie jumps the Wayland ship
- Brussels plots open source push to pry Europe off Big Tech
- Debian goes retro with a spatial desktop that time forgot
- Linus Torvalds: Stop making an issue out of AI slop in kernel docs – you're not changing anybody's mind
In its Why not LLMs? section, the OpenSlopware continuation mentions the copyright and licensing implications of LLM-generated code, and goes on to cite the Wikipedia article Environmental impact of artificial intelligence.
These are legitimate concerns, but there are many more. As The Reg reported back in July, in the only test of its kind that we know of, the LLM-promoting research group Model Evaluation & Threat Research found that although using coding assistants made programmers think they were working faster, debugging the bots' code actually slowed them down by roughly as much as they believed it had sped them up. The implications of this for code quality are obvious. What long-term use does to programmers' analytical faculties is as yet unmeasured, but the effects on social media look frankly terrifying. Its effects on hiring looked dire early last year, and even when companies rehire those they laid off, the returning workers are paid less. Claimed productivity gains are nowhere to be seen.
Along with objective, verifiable measurements, such as performance testing of people and code alike, open criticism is needed – no matter how much it upsets some of those being criticized. ®