Knitting bullshit
katedaviesdesigns.com

Increasingly, my reaction to AI-generated content of basically all types is simply a deep, resonant sadness.
The growth of AI feels a little like losing a limb - there is an initial shock of sadness, an initial dose of loss, an initial sense of what has been taken away.
But then for months and years afterwards, some little humdrum experience recurs daily, and only at the moment of the encounter does one think, "Ah yes, this too is forever changed."
Like sounding the depths of a dark well, where every day you lower the rope a little further, but every day there is nothing to feel but a pointless swinging in a vast, unquantifiable emptiness.
To me it had, in a way, the opposite effect - I started appreciating non-AI content more.
Good art has something that is difficult to reproduce if one isn't already an artist who is just using AI as a medium: intentionality.
Take for example Floor796[0]. Every little detail counts and while you could use AI to generate single characters or even the whole thing, you'd inevitably find details which have no reason to be there. You could then remove them manually or modify your prompt or input image so that those you know about won't appear, but AI being AI will keep sneaking in new ones.
The longer your prompt, the more intentional everything becomes, effectively making it the art piece.
I don’t think it’s intentionality.
It’s style.
A lot of people regard technical measures as the signal of quality. The most realistic painting, the most expensive purse, the most technical flip on a skateboard, the most well drawn AI art.
It’s a cheap way to judge quality because you don’t have to understand what makes something good.
AI is really showing this divide.
AI might actually help in this regard, where you may have someone who has good taste and can create a unique style, but lacks the skill to execute the technique. Kind of like a songwriter who can't play an instrument, but can hum a tune to the band and articulate the subtle changes they want.
Of course, current AI is not even close to that yet, but decoupling creativity from technical ability could actually be a good thing in the long run. Though to be honest, I am generally pessimistic on it.
But then some people recognize that technical excellence is not the most important thing, and extend that to assuming that technique does not matter at all. And so we get this constant drip feed of absolutely terrible conceptual art (with an AI-generated artist statement, can't leave that out!) in every single local art scene.
If there's anything more tragic than wealth without taste, it is technique without vision.
And what makes a style good (as objectively as it can get, anyways)? Why would it be the defining factor of what makes art good?
Yeah, but you have to realize we are on the losing side of this war. The armies of bullshit now have an incredible advantage that actual art can never have. At least in the past, there was some equilibrium: bullshit was cheaper to make than art (or any other quality product), but now it has become infinitely cheaper to produce, and much more expensive for us to separate the art from the bullshit.
Think about this for a moment - it takes a company of 8 people to make 3000 podcast episodes a week. It would take far more than 8 people to listen to that many podcasts. How can we possibly hope to separate the wheat from the chaff? What happens when it's 30,000 episodes per week? 300,000? What possible hope do art and craft have against an army that is effectively infinite?
We can hope that the cream will rise to the top, but I am not optimistic. I genuinely believe we are watching the end of art and human creativity as it is absolutely drowned in mass slop.
tl;dr - we're fucked.
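A back-of-the-envelope check on the listening side of that asymmetry (the 30-minute episode length and 40-hour listening week are assumptions, not figures from the article):

```python
# How many full-time listeners would it take just to hear
# 3000 weekly episodes once? Episode length is an assumption.
episodes_per_week = 3000
minutes_per_episode = 30  # assumed average length
hours_of_audio = episodes_per_week * minutes_per_episode / 60
full_time_listeners = hours_of_audio / 40  # 40-hour listening week

print(hours_of_audio)        # 1500.0 hours of new audio per week
print(full_time_listeners)   # 37.5 people doing nothing but listening
```

Even under these assumptions, merely auditing the output takes several times the headcount that produced it, and the ratio only worsens at 30,000 or 300,000 episodes.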
>How can we possibly hope to separate the wheat from the chaff?
Categorize, curate, and share. The war is only for your attention. I have favorite creators now, and they would cease to be favorite if they suddenly started sloppin' it up. The best of them recommended cool things made by other people, who in turn recommended more things, and so on.
If instead you peddle bullshit, it won't take long to be identified as a bullshit vendor, even if you have 1000x the bullshit of the next leading brand.
Not everyone will get the message especially if you mainly consume algorithmic feeds - we all seem to have that relative who thinks you would enjoy being sent an AI Jesus image every other week.
It feels like I will forever mourn the totally self-inflicted loss of the Internet. I feel like I will never get over it, so much so that I wish I had never experienced its (brief) moment of brilliance. I feel sorry for my younger self for thinking it was here to stay.
It was a very special time when the Internet was full of people's open, personal gardens. I feel fortunate for having experienced that because it showed me what's out there if I look, and I want to cultivate the pleasure of finding such things and sharing them with people I care about.
It's not self-inflicted at all. Users didn't enshittify the sites we used to enjoy.
"The growth of AI feels . . . like losing a limb"
Indeed. Figuratively, generatively, and of course, generationally.
> The growth of AI feels a little like losing a limb
Or gaining a new, oddly misshapen and inexplicably placed limb of no apparent purpose or utility.
That will randomly and unpredictably try to take over tasks from your other limbs, hijacking your somatosensory system so you can't tell when it's doing so without actively looking at what you're doing.
I think you just described my cat's tail
It seems like Black Mirror's "Joan is Awful" is here, but instead of a quantum computer generating personalized content, we just have an endless parade of meaningless slop.
>an endless parade of meaningless slop
This is, increasingly, the front page of HN. Direct slop is uncommon, but not rare. I skip any headline that mentions AI. But sometimes you get baited, you start reading, and it's about AI anyway. A few days ago there was an article about someone hacking some device, and it was just the author vibe-hacking with AI.
It is not interesting.
I have intense AI fatigue. Make a containment board for AI sloppers. It's so much worse than all the previous fads combined, like blockchains and Rust rewrites. I'm not even anti-AI, but the exposure to it is just overwhelming and unrelenting.
It reminds me of how movie special effects making-ofs got super boring when most of the work started being done with computers end-to-end.
But with everything.
> the front page of HN
I think I've "flagged" more links this year than my last 13 years on this site combined. I'm sure it's unproductive and doesn't really do anything, but it makes me feel a touch better. I'm so over the slop I think I'm actually visiting HN markedly less because of it.
On the plus side, there has been a (predictable) uptick in slop-flagging browser extensions over the last few months. Once a good locally-hosted version exists, I think it'll take its rightful place alongside ad blockers for tech-minded folks.
>slop-flagging
I would love something like SponsorBlock for YouTube, but for AI slop. Crowd-flag channels, and banish them from my sight.
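A minimal sketch of that crowd-flagging idea, assuming a hypothetical shared flag-count list (SponsorBlock itself has no such API; every name and threshold here is made up for illustration):

```python
# Hypothetical crowd-sourced blocklist: hide feed items from any
# channel the community has flagged as slop often enough.
from dataclasses import dataclass

FLAG_THRESHOLD = 5  # flags needed before a channel is hidden

@dataclass
class FeedItem:
    title: str
    channel_id: str

# In a real tool this would be fetched from a shared flag database.
flag_counts = {"UC_slopfarm": 12, "UC_borderline": 3}

def filter_feed(items, flags, threshold=FLAG_THRESHOLD):
    """Drop items whose channel meets or exceeds the flag threshold."""
    return [item for item in items
            if flags.get(item.channel_id, 0) < threshold]

feed = [
    FeedItem("Top 10 Knitting Secrets (AI voice)", "UC_slopfarm"),
    FeedItem("Blocking a hand-knit sweater", "UC_human"),
    FeedItem("Honest yarn review", "UC_borderline"),
]

visible = filter_feed(feed, flag_counts)
print([item.title for item in visible])
# → ['Blocking a hand-knit sweater', 'Honest yarn review']
```

As with SponsorBlock, the hard parts are the shared database and abuse-resistant voting, not the client-side filter.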
I think at this point the containment board is the entire Internet. I have no idea how, but we all need to "Atlas Shrugged" this shit and start over somewhere else with something else.
The economist in me immediately asks: Where is the financial incentive to do this? Just the same way the programmer would ask what the stack is. Some possibilities:
1) Money laundering - a large content farm that someone can claim makes xyz in revenue, to hide an alternate source of income.
2) Ad fraud - gaming podcast charts or SEO results to attract clicks and sell ads. Bot farms could also be generating fake clicks to simulate ad engagement.
3) Attempt to dominate the niche for sale of knitting products. Or to pretend to dominate it so they can sell the business later at a larger multiple.
4) Test the waters of a much bigger engine for doing 1-3 above in an innocuous hidden subject, before they do it with elections or some other more profitable field. Regulatory waters as well - seeing what they can get away with.
Feel free to brainstorm more incentives for making something like this.
I don't understand your question, are you asking what's the financial incentive to AI-generate thousands of podcasts a week? Isn't it obviously the income from streams and/or ads?
Did you read the article? Headcount went from 300 to 8, the number of podcasts per day went up, and apparently listenership went up.
This only works if there are people willingly listening to crap.
Perhaps there are.
Are they? Or do they think they are listening to something real?
I enjoy reading alt-history at times. However, I can only enjoy it when it is clear that it isn't real history. Often one of the more enjoyable parts is the author's notes on how real history differs.
I have heard some human-written songs that really sounded real and tugged at the heartstrings - until I found out it was fiction, and then I was offended. The key here is that it portrayed someone good (by modern ideals - everyone back then considered themselves good Christians) in a timeline where we know almost nobody was good.
the bitch of it, though, is that it doesn't only work if people listen to it. it also works if a bunch of AI bots can convincingly fake people listening to it. and, of course, those types of bots exist and have financial incentive to continue faking it, too.
at some point, these two competing interests are going to find out that they're paying each other to stare at each other's dwindling profits, but my bet is that it's going to be a while yet before that wake up call. and it will be an even longer churn into something else because no one is going to admit that they were funneling money into nonsense for years. they're going to "adjust strategies" to "modernize against changing markets" for "new potential growth". all shit that takes a long time to do because it's a half measure aimed at saving face to investors. so it'll work for a long time just based on the momentum of bullshit. =/
They said the podcasts had 12 million downloads, with 750k weekly at the moment.
They get people listening. And when you download you don't know it will be crap AI slop.
I now get a bunch of this in youtube - just endless drivel about some theme I am interested in. They create so much crap it's hard to see which one is real. I started banning the accounts that are making AI crap, but there are so many now.
I think the question he's asking is this: is it an ad ouroboros, or is there some other (nefarious?) intentionality behind it?
My hot take: ¿por qué no los dos? (Why not both?)
The summary but no content thing is interesting. I’ve seen it in many forms and I’m not sure why it plays out that way. Maybe the summary is tied to the prompt tightly? The rest not?
I saw some bots on Reddit that were very odd, in that if anyone asked a question related to something like a news article, some account would respond with a non-answer but sorta-summarized bit of the article. If you responded "that's not really what I asked", you got an even odder response.
This isn’t that strange as people will do that in a way… but i noted it because I saw a flurry of those accounts in Reddit and then they vanished.
> The summary but no content thing is interesting. I’ve seen it in many forms and I’m not sure why it plays out that way.
I would guess that it's because the incentives and goals are different.
The point of a summary is to entice a listener to begin the podcast. So it has to offer the promise of interesting depth.
Once they've started listening, all the body of the podcast has to do is be soothing enough to get the user to keep listening until the next ad comes on. It has no need to actually keep the promise unless the listener is paying enough attention to hold it accountable.
It's scandalous that no-one has yet posted Gary Larson's Far Side cartoon "Bullknitters".
https://www.instagram.com/p/C2OQtokvzCa/
(or google image search)
Related: Four Yorkshiremen: https://www.youtube.com/watch?v=ue7wM0QC5LE.
Personally I think it's scandalous that the now-top comment is an off-topic reference to something tangential to the title, with nothing to do with the article, which isn't really about knitting at all - knitting is just the hook by which the author was pulled into the world of AI podcasts, where she found the output rather lacking in content.
You could substitute the word knitting for almost any hobby, and the article would read almost the same.
It's an article about the soulless content-free world of AI podcasting, and about how AI output is about validating the emotions of the listener rather than meaningful content.
That wasn't meant to be the top comment, it was meant to be buried somewhere round the bottom!
I did read the whole article and have some thoughts about it. But they are pessimistic and difficult, so I'd rather share something fun.
My on-topic thoughts are that I just spent a long weekend in good company playing music and chatting. Returning to quotidian life made me think the solution is to get as far away from computers as possible, and back to the in-person interaction that we're evolved for.
A big reason IMHO that we're susceptible to phony bullshit (whether it's knitting podcasts or broadcast propaganda) is that we're not evolved for it, and it misses many of the contextual clues present in in-person interaction for which we are evolved.
> All of the images in this post were generated by an ai in response to the simple two-word prompt “lovely knitting”
Touché.
Wow I’d never have expected Kate Davies to show up on Hacker News. I think it’s important to understand her background a bit when she talks about knitting as a matter of life and death. She was a scholar of 18th century literature before she suffered a stroke young[0]. She focused on knitting as a means of recovery and never looked back. She built a business and a community and attributes a lot of her physical and mental health to knitting.
So while this post hopefully hits a chord for anyone in a creative field she embodies a particular type of person for whom slop is a genuine risk to their being. Not their job; their whole personhood. In a world where slop has chased out the humanity of things and the bullshit machines fill all content what are the chances someone like her could build a second life better than her first?
0: https://katedaviesdesigns.com/2015/01/28/five-years-on-part-...
Wild. This kind of empowerment and long lived effort is the type of story we should be sharing.
I'm concerned that we've taken an amazing character like this and turned the world against her for frankly a bet against human intellectual development.
> Not their job; their whole personhood
I hate so much that, while I read your reply, this particular phrasing was grating to me.
There's nothing wrong with it, and I have no doubts you, whoever you are, wrote it.
What annoys me to death is that a perfectly fine language construct is tarnished to a point that a mere glance reflexively caused me to wince, and I had to actively interpret it as fine.
Interesting, sounds like a similar experience to my reading of this reply
The root comment brings insight and value and stakes out a human position. I don’t see a need to snipe from the hedges
Am I to believe that those 700K+ downloads are organic traffic? Who's listening to all this stuff?
HN sends tens of thousands of views to AI-farmed articles about why AI is good or why AI is bad. These articles get upvoted to the front page literally every day. They don't say anything interesting, but many of us just like having our existing beliefs recited back to us.
So to answer your question, I think we all do, it's just that different audiences have different sets of topics for which they let their guard down.
There is a huge market for content that makes you feel smart without requiring thinking and makes you busy without requiring work. I'm not saying it's inherently bad. I'm listening to music on my daily commute and it's the same thing: just enjoyable filler so that you can do something other than getting angry at other drivers. The internet just weaponized the formula, and now AI is the equivalent of nuclear weapons, I guess.
How is a listen to a podcast counted?
If someone listens to a couple of minutes of a 30 minute slopfest and nopes away, is that counted as a listen?
Your example of HN sending views to shit is interesting, because I presume a lot of people sometimes click on a link expecting something insightful and are greeted by bullshit. A view is counted, but no meaningful interaction happened.
By McHealy's logic, we ought not be concerned about that. After all, it's low-stakes content.
My podcast app downloads way more podcast episodes than I actually listen to.
I occasionally put on a (human-made) podcast for the word-sounds rather than the content. I can imagine others do the same without caring whether it is human-made.
If the sonic quality of a human voice is what you're seeking, then I imagine a generated voice will still be less appealing, no?
No, but to misinform people you have two main strategies: limiting through tailored scarcity, or diluting in extra-generic overabundance. Don't get it wrong: both can be combined and can even sometimes overlap.
It doesn't matter if no one is listening. Equally saturating all channels, metrics and indicators is enough to create hindrance, preventing relevant information from spreading in meaningful time.
Attention is all you need, so distraction is all that will be given.
Also, fracturing audiences to infinity.
I listened to a podcast a while back (human authored I'm pretty sure) about low-quality gutter level streamer content and how popular it is, speaking of personalities like asmongold and a vast number of even worse imitators.
This content is made by humans but is pointless grindingly stupid filler spiced with a dash of obviously performative offensiveness. You're basically listening to a complete loser (or someone LARPing as one) telling you about their boogers and then being racist and then playing video games for 6 hours.
But it's wildly popular. Millions of people stream this kind of shit for hours every day.
There's a lot of people out there who just want to numb their brains, and there seems to be no floor. You can just keep making it dumber. The stuff people stream (and doom scroll) on the Internet makes 1980s daytime soaps look like high art from a lost golden age.
So it's not at all surprising that millions of people listen to low-quality un-curated AI slop podcasts.
I actually unsubbed from the podcast I heard. Meta discussion of crap like this isn't much better than the content itself. Keep driving. Do not look at the car accident.
I had kind of an epiphany like that in the last year. The Information Age means information is free. It costs $0 and is produced to infinity. That means you are not missing anything. Your attention is actually 100% yours, and if you choose to ignore the car wreck that's fine. There are infinity car wrecks. There are infinity everything. Keep driving.
The problem is I want to live in the "correct information age" - that qualifier is hard to find. I suspect that correct will cost money. Unfortunately I don't know how to pay for it. Many of the major publishers are also using AI with questionable fact checking. Where I most need correct information is my local small town news, and there isn't even a newspaper anymore. (there is the nearby big city newspaper, but they don't cover my local issues well)
"No one ever went broke underestimating the intelligence of the American people."
--H. L. Mencken (or at least attributed so.)
One of the real costs of the end-game attention economy is that when your "car" crashes, no one is going to stop to help. When the market you engage in gets swallowed up, everyone will buy the swill that outcompetes you on perceived surface-level value. Communities get fractured. Organizations that used to be community pillars (church) become self-serving. All these things create a positive feedback loop of intellectual degradation.
Other bots?
Dead Internet Theory.
AI produced, AI downloaded. No humans in the loop.
There was one of those "memes" a few years ago that is just a screenshot of someone's Twitter post that was essentially:
"My wife is a teacher, she used AI to help create an assignment, all the kids used AI to complete it, and now she's using AI to grade it. Nobody learned anything, nobody really did anything. What's happening?"
Playing the devil's advocate: it sounds like no one involved sees any value in that exchange, therefore they don't care.
In that sense AI slop is a symptom, not a disease. But perhaps also a catalyst.
I really wonder if there is a sort of silver lining here, and in the long term low value activities will be filtered out of society. Though that borders on the AI maximalist view which I don't fully agree with.
Of course the glaring question is what value even is.
"silver lining"
Agree. If the internet is so filled with slop, will people move away from it, start to read books, walk, hang out with each other again?
not when all the world capital is pushed into that bubble while eventually eroding the freedom to do all these things
can't move away from internet when you can't earn money without it, and all the services require you to participate
can't read books if no one is going to be publishing those, after they get out-competed by cheap endless slop
no point in walking when cities are built for cars and businesses, and public spaces continue to dwindle and be defunded
can't hang out when you're too tired trying to survive
Or maybe Ms. McHealy was simply lying.
> one of the most pernicious things about this particular kind of bullshit is the way it casts any form of critical scrutiny as a terrible failure of sensibility.
What a great line. And you'll probably notice this technique being used by very skilled bullshitters and master manipulators: any request for rigor or scrutiny is met by something like genteel condescension. You're treated as if you've committed a breach of etiquette, and that's one of the reasons the technique is powerful -- you're likely to feel embarrassed and, following that, to back off.
Great point. This happens on forums too. For example if Kate opposes knitting bullshit, a common strategy is to characterize Kate as 'hostile', 'overheated', 'overreacting', etc. Kate's actual argument doesn't need to be addressed. We just rule that Kate isn't posting content, she's causing conflict or experiencing an unfortunate emotional state.
This strategy also indirectly helps overworked moderators by penalizing disagreement, which in turn discourages flame wars.
Kate's critics can even say they support Kate. They just want to help her deal with her emotional overload.
> but, Anne tells Jamie blithely, this really doesn’t matter because the topics under discussion are so low stakes
Put differently-- it matters so deeply that the genre itself will inevitably become an unfathomably sad parody of what it could otherwise be.
I can absolutely recommend the book On Bullshit, it's a tiny read and makes an excellent gift. Kate's article summarizes it very well.
https://en.wikipedia.org/wiki/On_Bullshit
This reddit comment puts it perfectly:
"What’s it about? Frankfurt tries (successfully) to define bullshit (rather academically). In short, a bullshit artist is solely focused on persuasion and making an impression, not caring about truth. Paradoxically, bullshit can be true.
What makes it bullshit is how it is created - shoddily, hastily and without regard for fine work. A gifted liar does their thing carefully so that the truth cannot be found out. A bullshit artist just flings it out, overwhelming skepticism with sheer volume, until something sticks with the audience."
https://www.reddit.com/r/books/comments/1pidpb2/on_bullshit_...
A 1980s take on something that has taken over our 2020s digital airwaves, indeed.
I'm from Texas and we use bullshit and tall-telling as an entertainment art (this is my background; I don't think it's specific!). This definition of bullshit is spot on; I'm pretty proud of my ability to bullshit-for-entertainment, so AI bullshit really grinds my gears... it's so bad.
I like how the pictures got more and more sloporific through the essay.
It doesn't mention an important group being harmed: the creators who make high-quality, sincere podcasts about knitting. Their genuine content gets buried under a mountain of slop. In theory, recommendation algorithms ought to surface the best stuff, but that doesn't seem to align with incentives. Sad.
Yes, I noticed and appreciated the sloporific (great word!) quality, too :) I stopped midway for a sec to try and figure out an image, then eventually realized they were just getting more nonsensical on purpose.
Or even worse, it gets fed back into the AI slop machine
I wonder if (or, more accurately hope that) this kind of slop will eventually die out as people realise how little care is put into it. I am more and more convinced that if the devil existed he'd take care of the bigger stuff, but have an army of little devils that encourage people to do things like make unsupervised automated podcasts about knitting, relentlessly chipping away at the messy joys of living.
At the start of Good Omens, there’s a scene where demons are sharing their recent misdeeds. A couple of them are sharing “classic” demon stuff like killing and possessing, but Crowley (the protagonist demon) shares more modern evil deeds, such as creating traffic jams.
https://en.wikipedia.org/wiki/Good_Omens
I’d link to a clip of it, but to your point some devil is making it frustratingly hard to find.
It's been years, but I seem to recall that Crowley specifically is very proud about making sure some motorway project got botched, because the continual drip of suffering from the accumulated jams and road rage makes him look really good in the spreadsheets even though he's not much for the classical showy stuff. Millions of little instances of suffering adding up year on year, instead of a handful of incidents of really intense suffering.
I thought it was that he altered planning documents and even went and moved physical markings to make the M25 the shape of the ancient evil sigil Odegra (this is from memory; I just read it a lot as a teenager), so every angry drive round it powers that sigil.
Yes, I think you’re right. And if I recall correctly, near the end he’s trying to get somewhere but gets stuck in traffic by the same problem he caused.
In all fairness, he gets stuck trying to do something good, which is not the standard "evil trapped by its own design" moral.
Ha, that's right! I forgot about that bit.
Man. I do miss Terry Pratchett.
https://youtu.be/M0S3a32RzEo with David Tennant as Crowley.
It's a well done scene that is properly faithful to the original.
Whoever decided to add silly audio effects to an operating system is surely one of these lesser devils. Just think of how many people have been aggravated by a colleague's laptop when it "wakes up" every day, or an inappropriate notification sound during a presentation or something. On any desktop PC I interact with, I do my bit by disabling all sound effects before I continue.
The real genius was whoever decided to add fake typing sounds to virtual keyboards on touch screens.
Blessedly everyone around me has disabled these, I had forgotten how enraged that makes me. Even though I don't think physical keyboards bother me at all.
For a long time I thought that the AdSense business model was ultimately doomed because I assumed that people hate ads as much as I do. It turns out I was just wrong about what most people are willing to put up with.
I remember visiting a friend over a decade ago, and for some reason I had to use their computer for a bit. I was immediately taken aback by all the ads everywhere and installed an ad blocker before anything else. They were very grateful, but the part that surprised me was that they were annoyed by the ads but never thought to look for some way around it. It never even crossed their mind that it could be done, or to search for it.
All human progress in history has been due to a VERY small handful of people who think “this is bullshit, things could be better”.
The vast majority of people accept what they see as the way things are and it never occurs to them that things could be different.
It's always absolutely shocking using a regular person's computer. How can they live like this? I have lived in this ad-free bubble for so long that I forget that's not the real world. If I had to live without adblockers, I don't think I'd ever visit the internet.
Similarly, when my partner moved in I told her about the network-level adblocker and she kinda scoffed at it saying ads don't bother her. A few years later she started complaining that when she's out of the house she gets ads.
I really doubt it's going to die out.
I think a lot of the value in these AI Podcasts is just the self-validation of the listener. It really doesn't matter to the listener if there's nothing between Egyptian socks and Revelry because the point was to feel good not to learn.
But also because I've had a long-standing pet peeve with news articles that include random-ass stock footage. If humans can get away with including a picture of _any_ ship when talking about a specific ship (one that may never have been in the harbor the picture shows), then why does the AI need to be correct?
I'm afraid it'll lead to a weird music-ification of content.
Music can make you feel good and keep you engaged just purely out of engaging our pattern recognition.
AI videos and photos seem to have a similar effect. Even if it's not real, they encode enough patterns from good human work to be able to engage our attention.
Just providing people with an attentional escape is valuable on the internet.
Boomers love slop. Even when they know it's AI (and it can be increasingly difficult to tell, even for people who don't struggle to send an email), they love it almost as much as they love political ragebait, and they love political ragebait more than their own families. My ancient grandpa is on some Facebook groups and they share bottom-of-the-barrel AI videos and images all day. If it can be consumed with zero effort, it's great. They couldn't care less whether any care or effort was involved in its creation, whether it has any value whatsoever, whether it's made-up bullshit. Not a whit.
They also have money and can vote, so there will be an endless avalanche of slop being generated every single day, enough to bury organic content ten times over.
Eventually they will all die, and then the upcoming Gen-Z and -Alpha will save the world with their well-documented refined tastes for artisanal, purely human short-form video slop.
It's definitely the sort of thing that Crowley from Good Omens would be working on.
Just like Big Tobacco moved onto greener pastures in the developing world, Big Slop is not targeting specifically us, but the billions of new internet users who connected over the past decade:
https://data.worldbank.org/indicator/IT.NET.USER.ZS
There's this (now old) meme called "Italian brainrot" - AI generated characters with vaguely Italian-sounding names like Bombardiro Crocodilo (note the incorrect spelling of the Italian word for crocodile).
One character stands out - Tung Tung Tung Sahur. Not only does it not sound Italian at all, that last word rang a bell.
Sahur (or Suhur) is the meal eaten before dawn during Ramadan.
After some digging I discovered this whole category originated in Indonesia. The country experienced an absolute explosion in the number of internet users in recent years and is home to internet phenomena which spread globally, but few in the west seem to realise that.
Yeah, people will reflexively filter out the slop, eventually, but they'll do it by leaving the places that have been rendered worthless by its persistent presence.
The particular type of innovator ghoul that's enabled by generative AI dreams of filling the entire internet with bullshit content. Aggregators (media and content) should be actively pushing them out for their own long-term survival, IMO.
Unfortunately, I think it is here to stay. But humans are very adaptable; I think things may play out fine.
See, I have some hobbies. You probably have yours too. The thing about hobbies is that in many ways they are niche.
To use running as an example: of course a lot of people enjoy it, but very few enjoy taking it seriously, delving into it beyond an occasional thing. Replace running with any activity that may be approached as a hobby and that can be slopified.
And as with any niche thing, there will be a separation between the masses that consume it as slop and those that engage with the real thing. They don't cancel each other out; they just coexist as different things entirely.
I didn't have the words to explain to my mother why those AI health/advice/story/etc videos she shares are harmful.
> While a liar displays an underlying respect for the truth in the very act of intentionally distorting it, “the essence of bullshit”, Frankfurt writes “is not that it is false but that it is phony.” For Frankfurt, then, bullshit, is discourse from which incidental matters like truth and reality have been completely hollowed out and replaced by performance and simulation.
She would often say, "but I happen to know that some of the underlying information is true." The answer is that the videos are phony, even when part of them happens to be true.
I think AI is the final nail in the coffin of meaningless slop that started pervading our lives after covid.
I see people tiring of this brain rot, especially Gen Z: there are more offline events (music festivals, daytime raves, running events), people are appreciating analog things again (LP records, cassettes), and younger people are getting turned off by social media.
I remember this kind of slop from times well before the LLM explosion.
I'm specifically thinking of a print magazine that was designed to make you feel like you are a smart reader of science articles, without any useful information about the actual science or technology.
Yes, the article acknowledges this in the first paragraph by citing Harry Frankfurt’s “On Bullshit” (1986). Of course bullshit (as well as even more insidious misinformation/propaganda) has always been around, but the incredible advances in its production and dissemination are worth considering. At some point, sheer quantity turns into its own quality. Indeed I would argue these issues have always been underconsidered. The article is a kind of inoculation against bullshit that every generation requires again and again. People aren’t born nearly skeptical enough, and the game keeps ever changing.
I actually don’t think the article is sufficiently vehement in calling out just how brain-frying this is. And how destructive on a societal level. The razor’s edge between being too uncritical and too cynical is hella narrow.
> I remember this kind of slop from times well before the LLM explosion.
Even if that were true (which I don’t think it is, this is a different kind of worthless content), you most definitely don’t remember it at this scale, and that’s a major point.
There's something about AI-generated content that my instincts somehow recognise as not the creation of a human.
I can't seem to state the exact properties, but it's hollow, out-of-norm, or, as the author describes it -- bullshit.
Great article, thanks for sharing.
I didn't know (but should have assumed) AI-generated podcasts existed. That's depressing.
I imagined that if mankind had the ideal machine, one that could automate anything, we would get rid of dull office work and back-breaking physical labor, but not the things that are actually enjoyable: sharing with each other, entertaining each other, making art. I imagined a lively world of live performance and creation, since all subsistence work had been taken care of. Instead we might end up in the world of "Fifteen Million Merits".
It seems people don't mind letting their minds be hacked by machines that can create the form of what they find enjoyable, if not the substance. But I guess there's always been slop and the public for it. To imagine actual people wasting their limited time on Earth listening to these GPT logorrhea podcasts is truly depressing. The unchemical soma.
What are we even supposed to spend our days doing in this bright future of the AI champions'? Stop automating away the things that give people purpose, tackle real problems instead.
The incentives are at odds. In this capitalist landscape, you create podcasts and blogs (or have them created) to attract an audience which then attracts those fat advertising dollars.
Ironically, these are both incredibly common, LLM-able takes:
Lament: Oh why did we automate art?
Answer: Capitalism.
It's superficially true, currently. We've had generative AI for a few years and people are using it to make a quick buck. But even if the world had been taken over by communism, or if the Western Highlands of Papua New Guinea had got imperial ambitions and now we all lived in a gift economy, people would still be using generative AI to gain attention and status. This will work until it wears thin. Thinner.
It's really disheartening that so much YouTube content is now AI generated.
Their dialogue on "substituting truth and validity with a register of emotional validation" is pretty prevalent across the entirety of US culture right now. The first thing that comes to mind for me are Christian groups that do a lot of celebrating during services or events with absolutely no goodwill, volunteering, or donating at all. They're real good at making you feel righteous, but awful at actualizing it. Hate the hollowing out of traditions that used to make communities and people great.
If the kind of AI slop the article talks about entertains/infuriates/depresses you and you want more, you will definitely like the "kroshay" subreddit: http://reddit.com/r/kroshay
Interestingly, Inception AI seem to have pivoted from content slop for "gardening, [...] knitting, cooking" - or "things we can afford to be wrong" - to "AI Immigration Drafting Software for Law Firms": https://www.inceptionai.co/
I'm somewhat curious how that'll work out. Hint: I'm not.
EDIT: My bad, wrong company, it's "Inception Point AI": https://www.inceptionpoint.ai/
There's also https://www.inceptionlabs.ai - it's not confusing at all.
Why does this site want to access apps and services on my local network?
On topic, I do wonder how "the market" is going to sort this out. At this moment I'm leaning towards just banning this shit, but maybe there is a better way?
We can already see the market in action. Increasingly people are more hostile to online content and influencers, except for the few people they follow, just like everyone was already defensive against unsolicited email. Authenticity will become valuable in a sea of slop, and making high budget productions (think Mr Beast) will be worth nothing since it can be easily faked and hard to distinguish.
> I do wonder how "the market" is going to sort this out
Unlikely to do a better job than it did with anything else.
For someone complaining about slop, I found this unreadable.
TL;DR: there are brainrot farms with help from AI.
But I saw this one coming three or four years ago.
Actually, I've been listening to AI-generated brainrot music. I prefer it to some human-generated brainrot music (there's "I Hate Boys" from Christina Aguilera. Sorry if you are a fan).
Brainrot serves a specific social purpose: relieving stress, incoherently winning elections. It's a kind of drug that dulls the dangerous part of the brain while leaving the he-is-a-good-tool and she-is-blonde brain hemispheres in working order.
In fact, I do believe that if there were to be an uprising in a couple of decades against AI, and the human side were to rise victorious, the aftermath's social order would be studiously anti-AI and anti-science, but they would make a carve-out for AI brainrot (yes, I published a short fiction story with that premise, because I'm brainrot-vers).
Are you serious when you connect anti-AI sentiment to anti-science sentiment?
To me, they are opposite sentiments, and my experience discussing AI with others supports this. The most pro-AI people I meet are very far removed from science, and my research colleagues are definitely more critical of AI than not.
AI's tendency to emit unsourced, untrue statements with authority is about the most unscientific thing you can get.
AI is scientism: presenting science-flavoured things as a cultural marker.
> Are you serious when you connect anti-AI sentiment to anti-science sentiment?
I don’t believe that the current state of things represents peak-AI problems. AI is for now weak in both its capability and its impact, and also just new. Speculatively, if things go really bad, in a couple of decades there will be a huge swath of the population without jobs or high-flying education. They, perhaps rightly, will blame AI for the situation, but they’ll also, perhaps rightly, blame capital and the “snobbish elite” that is propping up AI today and in the near future. That “snobbish elite” is well-paid engineers and researchers. That’s because people tend to like having somebody to blame for their problems. But even without making it about bad guys, the heart of the thing that is pouring billions into AI is a relentless ethos of profit deriving from progress and disruption. You can’t stop AI without stabbing that heart.
Calling current AI "weak in its capability" is very disconnected from the reality. Their capabilities in many areas and on many tasks are incredibly strong. The disconnect seems to come from completely unrealistic expectations, e.g. imagining the AI as a sort of omniscient oracle which should never make mistakes.
I think he is making the point that scientist built the AI.
The whole "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should,"
Ummmm, WOW! Hey, that clicks: your brainrot/drug description is good. Making a choice for zero human content and therefore interaction.
The full suite of options would include perfectly artificial scents. Personally, I am way over in the analog/organic direction, but I get the need to disconnect from the "whatever this is™" that passes for a society. The question remains whether AI can scale to meet the demands and desires society has always placed on individuals.
The audible exasperated noise coming from the person in line with me, seeing me pull out cash, thereby breaking their own perfect little automated world merely by being subjected to witnessing such a primitive ritual (not behind me, I might add; the person leaving in front of me), is the prime example of someone who will violently reject AI and the rest when it inevitably fails to "fix" everything.
Extremely long winded. I think this person is trying to throw stones at someone else’s work, but their own is so elliptical I lost the will to find out.
Maybe. But at least she gave a shit enough to actually write something you didn’t like.
The sloplings don’t even bother.
Not taking away the right to your opinion, but I couldn't disagree more; I found it an excellent sociological article. One, it takes the formal concept of "bullshit" and applies it to knitting in a very methodical and strict manner. I found it novel and convincing, and the examples were great; not contrived or forced at all. IMO it was much better than many academic books or articles; an immediate share.
Two, the turns of logic are clearly laid out, in a conversational way, which would make it easy to stick a wrench in and form a polemic if you found any of her arguments or logical implications specious. That said, that does make the article quite long. But then, it is anything other than "elliptical", which I think you used as "runs in circles and repeats itself often", while it actually means "omits parts and thus is difficult to understand" (like the ellipsis sign: …).
Also: what the heck is wrong with that podcast farm founder. I hope they have a bad year.
Yeah, well good thing that LLMs are good at summarizing articles, unlike generating believable knitting images.
I was a couple of images in before I sussed it. Bullshit images, but pleasing enough to look at. Without the images, it would have been a big wall of text, which would have put me off reading; as it was, I gave up about 25% of the way through, after sussing the images and thus the incoherence in the argument. The images bring something to the article. They were cheap/quick to generate. They increase the potential payoff (more readers) without significantly increasing the cost. Without the images, the payoff (readers) would likely have been lower, below the cost of actually writing the article. Same goes for a history-of-knitting podcast or that video. Production costs would not be worth it for a very niche viewership.
Reading that made me feel like you wanted to be contrarian from the get-go and dismiss the article with the least effort possible. The whole point of the images is that they're low-effort AI slop, it's part of what she's trying to point to when someone is generating unsupervised automated podcasts about knitting.
I came in indifferent, but it doesn’t take much to make me give up on an article linked on Hacker News. I use it as bubblegum while waiting for a compile/prompt, intentionally for stuff that can be dropped easily. I saw her disclaimer at the end. My point was that the slop images make a more appealing article than if they were absent.
So you're saying you can spot AI generated bullshit, but not spot a deliberate and hilarious contrivance that the author uses to reinforce their point?
The AI images were deliberate and part of the narrative. Ie, you can generate slop with zero effort.
from TFA: "All of the images in this post were generated by an ai in response to the simple two-word prompt “lovely knitting”
Edit: ps: Kate Davies is an actual creator who has been creating knitting patterns for years.
Yes, I saw. By giving up I meant I skimmed to the end. The images improve the article
You only had to reach the second paragraph to find the example of an 8-person company that uses AI to generate “about 3000 podcast episodes per week, hosted by AI personalities.”
I feel like the alt captions for the images, although diligent and thorough, don't really capture the most important aspects.
I like the blog, but its premise is an engineering/epistemological perspective on the craft. The writer clearly cares more about the process, technique, and history than about the feeling and validation.
It could be that a big part of the future of hobbies and entertainment in this way is the feeling and validation over the actual performance. Or it could be that a massive number of people find their value in this content.
So .. I think we need to ask a deceptively simple question here, which is: is knitting real?
I'll add in an aside to this, which is not only are there fake knitting podcasts there are fake knitting and crochet patterns, which is a problem because people get a substantial way through making them only to discover that they don't work. In some cases the giveaway is that the supposed final image isn't physically possible, like the images in this article, but the fakers can use a real stolen image and just spam a pattern underneath it.
So: what is the knitting that is real? It has to be the use of your hands, needles, and yarn to produce a physical object, right?
The podcasts work towards something else. The identity of "being a knitter". This is a form of "hobby" that was already not unusual, that of discussing a thing without ever bothering to actually do it. Photographers are especially bad at this: too many lenses, not enough photographs. They've also got comprehensively run over by AI, because you can just generate the photographs now. Same for "authors".
But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
The AI approach is "wireheading": the logical conclusion of all of that would be to find a means of inserting a wire in your head that provides constant pleasant sensations. Achieving happiness through a constant feed of generated images is less effective, but it's the same order of things.
(see also: authenticity in food, which could easily turn into another ten thousand words)
I'd also say a few things: if knitting takes a long time, consider how long it takes to make a good, clear pattern so that others can replicate it.
People who make patterns are already dealing with a saturated market. This includes historical/vintage patterns: for many years, patterns were primarily given away free to incentivize yarn sales, or the market was dominated by publishers. It wasn't until recently (the internet, Etsy, Ravelry) that designers actually had the means to sell directly to consumers. People making an effort to produce usable patterns are now being dwarfed by AI nonsense in the speed of its output. It was already a difficult market. That everybody's images of real objects (along with AI-generated ones) are being used to peddle and market patterns that will never work can be really demotivating.
One last thing: how many of the 8 people in this podcast company are actually generating slop, and how many are just doing marketing?
I am with you until you make this assumption:
> But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
If the real is the feeling you get from listening to the podcast or identifying with a subculture, then that is the real for that person. Factual, grounded information is just one take. If it were not this way, we would have had far fewer myths, religions, etc. historically.
People will feel the same degree of joy and completion when the final word of the podcast is read as you feel when you finish a really complex piece of work.
If you genuinely believe this, there is no point to doing anything at all except heroin. Every moment that you aren't dedicating to being on heroin or getting more heroin, to heroinmaxx if you will, is a net loss.
'But what if I run out though' I hear you ask? Simply finish off on a truly heroic dose and sail into oblivion on a wave of bliss that's much better than all your relationships and hopes and dreams. It's real for you, right? If it makes your friends sad, they could just do some heroin about it. More real than real!
Do not willingly become a lotus eater.
Look, I get your comparison, and while extreme, it's funny. I just have very little faith that the average person cares this deeply about physically grounded reality. It's kind of a luxury of the well-off to be able to sit and think about what content to engage with when you just want to relax after an 8-hour shift followed by picking up kids, getting groceries, etc. If someone sees an AI video that makes them happy or laugh, and they send it to a friend who also laughs about it, that's their reality.
We happen to have time to argue about the philosophy of the ontology of information at the downvoted bottom of an HN thread today; most people don't.
The idea that we could create a world where 'a big part of the future of hobbies and entertainment' is people listening to meaningless words made up by machines that help them feel good about themselves sounds horrifying. How could anybody feel ok about that? What would it say about the society we've built?
It would say that society changes, and people who were not used to a new world get upset about it, as it has always been throughout the entire history of humanity.
We were used to having psychologists and doctors in person; now the most common form is to have them through apps, and the younger generation does not care. It's in fact more efficient to get a prescription you like than to spend time going places and having in-person meetings. But the older generation finds it hollowing and horrifying.
You need to accept that society moves on, and it can look different from your perspective.
The problem is: who is moving "society on", and what is their agenda?
I don’t think it’s healthy to encourage an attitude to just accept all change without any sort of reflection or push back.
> the younger generation does not care, [...] more efficient to get a prescription that you like [through apps]
Absolutely
> people listening to meaningless words made up by machines that help them feel good about themselves sounds horrifying
Yes
> Every ... person ... craves authenticity, connection, and meaningful work.
Right
> to find a means of inserting a wire in your head that provides constant pleasant sensations.
https://psycnet.apa.org/record/1955-06866-001
> Factual, grounded information is just one take.
Absolutely
A looooot of assumptions here. We have yet to see any of these brave new ideas actually work.
Therapy has never been more available, yet mental health is through the basement.
I’m also not seeing any evidence that young people are the driving force behind turning the world to shit. Every Gen Z person I know craves authenticity, connection, and meaningful work. All of this is the opposite.
It's interesting how every time this argument is made, it's about subjective experiences of 'craving'. If this were the objective reality, we would have a majority of Gen Z engaged in movements, social groups, and other things that would help them fulfill their 'cravings'.
However, that seems not to be the case; it seems like they prefer to spend their free time doomscrolling or sitting at home, and to engage more in parasocial relationships that can be more on their terms, on their timeframes, and with their opinions.
That’s one explanation. The other explanation is that young people feel powerless to change anything, and that they are hooked against their will on deliberately addictive ad delivery platforms.
The more alarming conclusion here happens to be backed by a lot of science, unfortunately, so it’s not easy to dismiss.
You could justify basically anything with that logic. Change isn't always about progress.
In this case, the user is deciding that they choose what progress is. I am saying that the people who use the tool and value its utility decide what progress is. If people listen to the podcasts or use doctors on the phone because it provides them any value, it will be a change and a perceived progress for them.
If the generated podcasts did not bring any value to the users, such as validation, or engagement, they would not use them, and there would be no change.
"But how does the collapse of truth and meaning in society affect you personally?"
https://knowyourmeme.com/photos/2565163-smugjak-but-how-does...
Your meaning and your truth, not necessarily other peoples who find their meaning and truth in other things.
Go to China or Congo and you will find that the public might hold a different version of some truths than you do.
We had religions dominating the world order for thousands of years, which projected their versions of the truth onto their societies.
If we extrapolate that to today and to your opinion, it would mean that everyone in the Middle Ages actually had it all figured out: they knew that the religious texts about splitting oceans or the moon were fake, and were all just playing along for the social structure.
Maybe it just happens that the LLM-generated stuff is the next thing in this iteration.
> Your meaning and your truth, not necessarily other peoples who find their meaning and truth in other things.
The makers of those AI podcasts explicitly stated they were unconcerned with whether their content was factual, so this is not comparable to people that actually thought they were right. But if you're arguing that listeners of those podcasts will believe that made-up slop is truth, that that's the "their truth" you're talking about, then yes, that is exactly what I meant by "collapse of truth".
Can't wear feelings and validation...
If you only care about the material and physical utility of the product, you can order the sweater from AliExpress for 5% of the cost and no time spent.
Seriously? You can't get the feeling of satisfaction of wearing something, or of having someone wear something you made, from AliExpress. My point is that your sense of feeling and validation is extremely distorted if you have no knitted material to show for it.
Completely subjective take by you with similar epistemology around value as the blog author.
People might not care. I might identify as a runner because I bought a little jacket, expensive shoes, and wide purple-tinted sunglasses; do I have to run? Not necessarily, if the objects and my identity give me the feeling of completion and satisfaction.
If your premise were true for all people, and the sense were distorted, we would not see these phenomena, and people wouldn't listen to or engage with AI content. But biological reality and the path of least resistance seem to prove otherwise.