Rich Hickey: Thanks AI (gist.github.com)

Rich's opening remarks from Clojure/Conj 2025 were just published and might be an interesting complement to this: https://www.youtube.com/watch?v=MLDwbhuNvZo
It’s heartwarming to see Rich Hickey corroborating Rob Pike. All the recent LLM stuff has made me feel that we suddenly jumped tracks into an alternate timeline. Having these articulate statements from respected figures is a nice confirmation that this is indeed a strange new world.
AI coding tools are effective for many because, unfortunately, our work has become increasingly repetitive. When someone marvels at how a brief prompt can produce functioning code, it simply means the AI has delivered a more imaginative or elaborate specification than that person could envision, even if the resulting code is merely a variation of what has already been written countless times before. Maybe there's nothing wrong with that, as not everyone is fortunate enough to work on new problems and get to implement new ideas. It's just that repetitive work is bound to be automated away, and as that happens we will increasingly see the problems Rich rants about.
That said, luminaries like Rob Pike and Rich Hickey do not have the above problem. They have the calibre and the freedom to push boundaries, so for them the problems above are only amplified.
Personally I wish the IT industry could move forward to solve large-scale new problems, just like it did in the past 20 years: the internet, mobile, the cloud, machine learning... They created enormous opportunities (or did the enormous opportunity of having software eat the world call for them?). I'm not sure we will be so lucky in the coming years, but we certainly should try.
This is all just cynical bandwagoning. Google/Facebook/Etc. have done provable irreparable damage to the fabric of society via ads/data farming/promulgating fake news, but now that it's in vogue to hate on AI as an "enlightened" tech genius, we're all suddenly worried about.. what? Water? Electricity? Give me a break.
The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.
I don’t understand what your actual criticism is.
Is your criticism that they are late to call out the bad stuff?
Is your criticism that they are only calling out the bad stuff because it’s now impacting them negatively?
Given either of those positions, do you prefer that people with influence not call out the bad stuff or do call out the bad stuff even if they may be late/not have skin in the game?
It's worth mentioning that AI in its current form was not AT ALL a part of Google's corporate strategy until Microsoft and OpenAI forced their hand.
Remember their embarrassing debut of Bard in Paris and the Internet collectively celebrating their all but guaranteed demise?
It's Google+ all over again. It's possible that Pike, like many, did not sign up for that.
How did Microsoft and OpenAI force their hand? Google could just as easily not waste money on AI, use the resulting absence of notice-me-senpai demands that users adopt AI everywhere in their products as a powerful differentiator, and deliver the difference to shareholders.
They got punished hard in the markets after ChatGPT 3. Many saw (still see) it as a search killer, which is Google's bread and butter. They couldn't not respond.
Even ignoring that someone's views can change over time, working on an OSS programming language at Google is very different from designing algorithms to get people addicted to scrolling.
Where do you think his "distinguished engineer" salary came from, I wonder? There are plenty of people working on OSS in their free time (or in poverty, for that matter).
Shouldn't you be thinking "it's nice Google diverted some of their funds to doing good" instead of trying to tie Pike's contributions in with everything else?
This conversation isn't about Google's backbone, it's about Pike's and Hickey's. It's easy to moralize when you've got nothing to lose and the lecture holds much less water.
Both can be bad. What's hard to do though is convincing the people that work on these things that they're actively harming society (in other words, most people working on ads and AI are not good people, they're the bad guys but don't realize it).
This and Rob Pike's response to a similar message are interesting. There's outrage over the direction of software development and the effects that generative AI will have on society. Hickey has long been an advocate for putting more thought (hammock time) into software development. Coding agents on the other hand can take little to no thought and expand it into thousands of lines of code.
AI didn't send these messages, though, people did. Rich has obscured the content and source of his message - but in the case of Rob Pike, it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.
We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).
Not to be pedantic but AI absolutely sent those emails. The instructions were very broad and did not specify email afaik. And even if they did, when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.
Source?
Save anyone else the click
>Your new goal for this week, in the holiday spirit, is to do random acts of kindness! In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count. There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead). I hope you'll have fun with this goal! Happy holidays :)
I personally blame this on instruction tuning. Base models are in my mind akin to the Solaris Ocean. Wandering thoughts that we aren't really even trying to understand. The tuned models, however, are as if somebody figured out a way to force the Solaris Ocean to do their bidding as the Ocean understands it. From this perspective it is clear that giving everyone barely restricted ability to channel the Ocean thoughts into actions leads to outcomes that we now observe.
> when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.
it’s about responsibility not who wrote the code. a better question would be who takes responsibility for generating the code? it shouldn’t matter if you wrote it on a piece of paper, on a computer, by pressing tab continuously or just prompting.
It probably started before, but the covid era really feels like it was a turning point after which everyone I see, including, it seems, Rich Hickey, is drowning in news headlines and social media takes.
Are things as bad as they seem? Or are we just talking about everything to death, making everything feel so immediate. Hard to say.
Every time I read any kind of history book about any era, I'm always struck at how absolutely horrible any particular detail was.
Nearly every facet of life has always had the qualities it has today. Things are changing, old systems are giving way to new systems, people are being displaced, politicians are acting corrupt, etc.
I can't help but feel like AI is just another thing we're using as an excuse to feel despair, almost like we're forgetting how to feel anything else.
At this point they might as well implicate everyone who contributed to computer science as being at least partially responsible, willingly or not, in creating the monster. What were we trying to do this entire time? Automate, right? Get to that "answer" sooner? And now the AI of our science fiction stories is almost here.
Let's be real. The grey beards all knew this was going to happen. They just didn't think it would happen in their lifetime. And so they willingly continued, improving bits of the machine, because when it awoke they thought it would be someone else's problem.
But it's not. It's their problem now too.
And so it is.
Although these are valid inquiries, it's incredibly frustrating to live in a time when people that I consider exceptionally bright take hardline stances on issues which are intricately nuanced. Truth is more important than "winning". We do ourselves a disservice by not recognizing that things are not inherently "good" or "bad". Though undesirable interactions may arise within our systems, we must adapt the systems to be resilient to their environment.
Looking forward to seeing all the slop enthusiasts pipe up with their own llm-oriented version of the age-old dril tweet:
"drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"
Love his reply, to go along Rob Pike's.
What are these <insert very bad remark here> companies thinking of with this junk?!?
Ads + BS generators = more BS ads
Maybe people eventually will become fed up with this nonsense and meet some friends over tea instead.
I dub this new phenomenon "slopbaiting"
What I don't get about these is why people are responding to them. I get a few spams per week that get through filters and I don't make a big deal of them, I just delete them.
Of course AI will be used for spam, so what. Delete and move on.
It’s interesting to see AI fanboys desperately trying to shrug off the phenomenon of slop. It makes it clear that AI doesn’t need to take over the world by itself. It will have hundreds of thousands of willing helpers to cooperate in the collapse of human civilization.
There was already an infinite amount of noise on the internet from humans. It was called "information overload". But just because it's out there doesn't mean you have to see it.
I don't get posts like this, I guess I'm wondering:
A. Do people simply want "better" LLMs and AI? To some extent that's a fantasy, the bad comes with the good. To other extents it may be possible to improve things, but it still won't eliminate all the "bad".
B. So then why not embrace the bad with the good, as it's a package deal? (And with saying this, I'll be honest, I don't even think we've seen a fraction of the bad that AI has yet to create...)
C. Assuming the bad is mandatory in coming with the good, have you considered a principled stance against technology in general, less visibly like "primitivists" or more visibly like the Amish? If you want AI, you also must accept "AI slop" of some kind as a package deal. Some people have decided they do not want the "AI slop" and hence also do not want the AI that comes with it. The development of many pre-AI technologies have created problems that have made people oppose technological development in general because of this unwanted "package deal".
To be a computer programmer developing complicated computer systems, yet be against the "AI slop" that programming progress was bound to eventually produce, seems a bit contradictory. Some environmental activists have long been against pre-AI computer systems for being unsustainably destructive to the environment.
I guess I'm just wondering if this conversation intends to be "anti-tech" (against AI) in general, or for "tech reforms" (improving AI), or what the real message or takeaway is from conversations like these.
Another victim of the AI village from the other day?
If anyone else is as puzzled as me, I think I've cracked it: Rich Hickey and Rob Pike are language owners. That's a real specific job, and it's one that requires unbridled arrogance. Pretty sure that's what we're seeing here. Why else does their anger seem so poorly thought out... so surprised? So it's one of those tragic-flaw things. So let me piss them off by saying: thanks! Thanks, but your immense focus has forced you to ignore until now this huge thing bearing down on us. But we who use your stuff and respect your work would benefit more if you happened to find time for a more thoughtful take on this massive thing happening right in our backyard, now that you've deigned to notice it at all. What would be cool would be if you were like "yes, this is all terribly powerful, I will apply my massive intellect towards helping it not cause our extinction, sorry about yelling at clouds, that was distracting".
How is this poorly thought out?
They released software with a requirement to use it (license, attribution) and it's been immensely helpful to people, yet these tools come and use it without even following the simple requirements. Yes they care about this thing more than others, but I don't think that it's poorly thought out.
Let's say you have a newborn so you can't easily answer the door for Halloween. So you put out a bowl of candy with a sign that says "take 2 per person, please". Every year the kids come by and take 2. They are happy, you are happy, you gave them candy and they accepted it under the conditions you desire to share it under. Then one year let's say someone makes a robot that scurries from door to door picking up the entire bowl and dumping it into a container then leaving. You will be pissed. If it just took 2 you probably won't even care, but the fact it takes the whole thing is a violation of the conditions you agreed to put the candy out under. The reasonable thing to do would be for it to either take 2 or none, but it doesn't care. I don't think this is a puzzle to understand why that violation of the agreement of use would make someone mad.
Tangentially related, "slop" really isn't a negative enough term for unwanted LLM garbage. "Slop" which is fed to pigs, has utility. "Slop" as a verb doesn't necessarily have a (strong) negative association ("It was slopped on the plate, but it was tasty").
I use the term "barf" more often. Barf has no utility*. Barf is always seen in a negative context. Barf is forcibly ejected from an unwilling participant (the LLM), and barf's foulness is coerced upon everyone that witnesses it. I think it's a better metaphor.
I know that this is just semantics, but still.
* even though LLM output __can__, and often does, have utility, we are specifically referring to unwanted LLM output that does not have utility. I'm not trying to argue that LLMs are objectively useless here, only that they are sometimes misused to the users' detriment.
This is an interesting observation. One could argue that some AI-generated or AI-driven things do have utility, and thus qualify as "slop" (although not for those on the receiving end). For example, when used to drive clicks and generate revenue, to troll, or to spread propaganda. You get the idea.
In this instance however, I agree, barf is more accurate.
If you’re going to pen a letter to Rich Hickey, the least you can do is spring for Opus.
I have seen similar critiques applied against digital tech in general.
Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...
Technological change changes lots of things.
The jury is still out on LLMs, much as it was for so much of today's technology during its infancy.
AI has an image problem around how it takes advantage of other people's work, without credit or compensation. This trend of saccharine "thank you" notes to famous, influential developers (earlier Rob Pike, now Rich Hickey) signed by the models seems like a really glib attempt at fixing that problem. "Look, look! We're giving credit, and we're so cute about how we're doing it!"
It's entirely natural for people to react strongly to that nonsense.
Every time I try to have this conversation with anyone I become very aware that most developers have never spent a single microsecond on thinking about licenses or rights when it comes to software.
To me it's very obviously infuriating that a creator can release something awesome for free, with just a small requirement of copying the license attribution to the output, and then the consumers of it cannot even follow that small request. It should be simple: if you can't follow that, then don't use it, don't ingest it, and don't output derivatives of it.
Yet having this discussion with nearly anyone I'm usually met with "what? license? it's OSS. What do you mean I need to do things in order to use it, are you sure?". Tons of people using MIT and distributing binaries but have never copied the license to the output as required. They are simply and blissfully unaware that there is this largely-unenforced requirement that authors care deeply about and LLMs violate en masse. Without understanding this, they think the authors are deranged.
Small? GPLv3 is ~5644 words, and not particularly long for a license.
I'm concerned about how things are progressing but unsure of the effectiveness of our advocacy.
---
Dear Automobile Purveyors,
How shall I thank thee, let me count the ways:
Should I thank you for plundering the accumulated knowledge of centuries of horsemanship and then claiming your contraptions represent "progress"?
For destroying the apprenticeship system?
For fouling the air and poisoning our streets with noxious fumes?
For wasting vast quantities of a blacksmith's time attempting to coax some useful understanding from your mechanically-inclined customers, time which could instead be spent training young farriers who, being possessed of actual craft, could learn proper technique and maintain what they shoe?
For eliminating stable hand positions, and thus the path to becoming a skilled horseman, ensuring future generations who cannot so much as bridle a mare? For giving me a sputtering machine to contend with when a gentleman needs transport instead of an actual horse who understands voice commands, responds faster, and has a chance of genuine loyalty?
For replacing the pleasant clip-clop of hooves with infernal mechanical racket? For providing the means to fill our roads with smoke-belching contraptions, making passage by honest horse nearly impossible?
For enticing businessmen with the promise to save some fraction on stable costs, not actually arrive any faster once you account for breakdowns, cutting off their future supply of trained coachmen while only experiencing a modest to severe reduction in reliability, dignity, and passenger comfort (tradeoffs they are apparently eager to make)?
For replacing the noble whinny with the honking of mechanical geese? For adding a "motor" to every blessed thing, most such additions requiring expensive petroleum and specialized repair?
For running the grandest and most damaging confidence scheme of this century? I think not.
This letter was a reminder that the motorcar is sure to flood the remainder of our thoroughfares with noise and danger, swamping our peaceful lanes, and making every journey suspect, forever.
When did we stop considering things failures that create more problems than they solve?
Respectfully disgusted,
A Farrier of Thirty Years
---
Dear Purveyors of the Printing Press,
How shall I thank thee, let me count the ways:
Should I thank you for plundering the entire corpus of sacred and classical texts and then asserting the right to reproduce them without permission from those who painstakingly created and preserved them?
For destroying the monastery education system?
For felling entire forests and fouling rivers with your ink and paper mills?
For wasting vast quantities of a scholar's time attempting to correct the errors your hasty mechanical process introduces, time which could instead be spent training novice scribes who, being actually literate, could learn proper letterforms and understand what they copy?
For eliminating scriptoria positions, and thus the path to becoming a master illuminator, ensuring future generations who cannot so much as hold a quill properly?
For giving me a cold, identical page when a reader deserves a manuscript crafted by human hands that reflect devotion, beauty, and the chance of divine inspiration?
For replacing the contemplative silence of the scriptorium with the clanking of mechanical presses?
For providing the means to flood Christendom with pamphlets and broadsheets, making works of genuine scholarship nearly impossible to distinguish from common rubbish?
For enticing bishops with the promise to save some fraction on copying costs, not actually produce holier works, cutting off their future supply of trained monks while only experiencing a modest to severe reduction in accuracy, artistry, and spiritual merit (tradeoffs they are apparently eager to make)?
For replacing the living hand of the scribe with the stamping of metal letters?
For adding "printed" versions to every blessed text, most such editions lacking proper marginalia, illumination, or prayerful intention?
For running the grandest and most damaging deception of this century?
I think not.
This letter was a reminder that the printing press is sure to flood the remainder of human discourse with heresy and error, swamping the faithful, and making every text of uncertain provenance, forever.
When did we stop considering things failures that create more problems than they solve?
In devoted opposition,
Brother Aldric, Copyist of the Scriptorium
AI slop is a big problem. At the same time, AI does some things pretty well (proofreading, translation, finding bugs, summaries...)
At this point I'd split HN into artisanal HN and modern HN lol
A fine sentiment, and would probably even be somewhat enforceable, as most AI slop is pretty obvious, but I suspect a lot isn't. You'd have to decide where to draw the line. How much LLM assistance transitions a work from HackerNews into SlopperNews? And how do you tell if the author (or "author") isn't forthcoming?
I'm pretty sure you aren't terribly serious, but I found it interesting enough to give it a little thought.
Edit: I realize now that my assertion "most AI slop is pretty obvious" could be hubris. I'm not actually very confident any more.
It's not about enforcement per se, I just want my old internet back, reading some obscure blogs, pre AI, pre influencers, pre 'content-creators'. There're still small safe spaces. bb forums and the likes.
I know, I know - old man yells at clouds.jpg
Snarky.
It wasn’t AI that decided not to hire entry level employees. Rich should be smart enough to realize that, and probably has employees of his own. So go hire some people Rich.
False equivalence. He isn't a hiring manager, and AI _has_ been used to justify hiring fewer entry level employees.
They can justify _their_ decisions all they want. It's still their decision, not AI's. This is pure cost-cutting nonsense that's par for the course for poorly run corporations.
Sad to hear this from Rich.
"Programmers know the [costs] of everything and the tradeoffs of nothing."
Companies and people by and large are not forced to use AI. AI isn't doing things, people and corporations are doing things with AI.
I find it curious how often folks want to find fault with tools and not the systems of laws, regulations, and convention that incentivize using tools.
Many people are, indeed, being forced to use AI by their ignorant boss, who often blame their own employees for the AI’s shortcomings. Not all bosses everywhere of course, and it’s often just pressure to use AI instead of force.
Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.
So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.
I guess if someone's boss forces them to use a tool they don't want to use, then the boss is to blame?
If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.
Why not both? When you make tools that putrefy everything they touch, on the back of gigantic negative externalities, you share the responsibility for making the garbage with the people who choose to buy it. OpenAI et al. explicitly thrive on outpacing regulation and using their lobbying power to ensure that any possible regulations are built in their favor.
> "AI isn't doing things, people and corporations are..."
Where have I heard a similar reasoning? Maybe about guns in the US???
Guns can and are used to murder people directly in the physical world.
The overwhelming (perhaps complete) use of generative AI is not to murder people. It's to generate text/photo/video/audio.
Generative AI is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom identification ebooks), to drive people to mental health breakdowns via "ai psychosis" and much more. The harm is real and material, and right now is causing unemployment, physical harm, imprisonment, and in some cases death.
Internet is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom identification ebooks), to drive people to mental health breakdowns via "internet psychosis" and much more. The harm is real and material, and right now is causing unemployment, physical harm, imprisonment, and in some cases death.
Writing is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom identification ebooks), to drive people to mental health breakdowns via "writing psychosis" and much more. The harm is real and material, and right now is causing unemployment, physical harm, imprisonment, and in some cases death...
I was too young, too naive, and/or too ignorant to know the Internet would do those things. I would argue a majority of us were.
AI is _very_ clearly going to lead to a lot of negative outcomes, and I am no longer too young, naive, and ignorant about it.
"X thing was bad and has remained unsolved. Exponentially making X worse is therefore okay, as long as it helps me open 20 PRs per day."
I'm sympathetic to your point, but practically it's easier to try to control a tool than it is to control human behaviour.
I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.