Since circumstances in my life require that I spend way more time than I want thinking about AI, I figure at least I can maybe get a blog post out of it.
It’s Not “AI”
Okay, so first off: I don’t even like to refer to the recent in-vogue text and image generation tools as “AI.” I think it’s a poorly-defined term that is used more to evoke associations in people’s brains than to be an accurate representation of the technology we have in front of us. Throughout this rant I will refer to things like ChatGPT as “LLMs,” for “Large Language Models.” Because if LLMs are AI, so is autocorrect on your phone. So is every other “chat bot” that uses keyword matching to try to direct you to a website’s help pages to run interference on letting you talk to a human.
Its Boosters Are Misanthropic
Here’s Sam Altman, OpenAI’s CEO, being a dunce:
These quotes are a reference to a 2021 paper about LLMs (Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜." In Conference on Fairness, Accountability, and Transparency (FAccT '21), March 3–10, 2021, Virtual Event, Canada. ACM, New York, NY, USA. https://doi.org/10.1145/3442188.3445922). In it, co-authors Emily Bender and Timnit Gebru define "stochastic parrot" thus:
…an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.
In this context, what Altman is saying is clear: when you have thoughts and feelings and use them to express yourself with language, all you’re really doing is putting one word in front of another in a way that’s probabilistically consistent with the vast amount of other language you’ve been exposed to in your life.
Here’s some text I probabilistically generated with Sam Altman’s claims and the definition of “stochastic parrot” as input: not only is this wrong, it fucking sucks, and I hate it. I know it is wrong, and I know I hate it, because I read it and I feel disgust as a somatic experience not unlike mild nausea, along with an involuntary twist in my face that I have to consciously suppress and involuntary tenseness in my muscles that I have to consciously relax. My reaction is not just the result of neurons firing in response to linguistic entities. I feel it in my body. And even if I don’t choose to assemble my thoughts into words as I’m doing now, the feeling persists. What about dance? Sport? Can these bodily experiences, and all the intuition required to achieve them, be reproduced synthetically by consuming enough training data? (This paragraph is a patchy, inadequate survey of a small portion of the ground covered in Siri Hustvedt’s tremendous essay “The Delusions of Certainty,” which can be found in her book A Woman Looking at Men Looking at Women. If you are unconvinced by my single paragraph of rebuttal here, please read that essay and consider how many of the delusional certainties discussed in it the humans-as-stochastic-parrots claim rests on.)
I hate not just that Altman’s unsupported claim debases the richness of actual human experience, but that it does so in such a transparently self-serving way. If humans are just stochastic parrots, of course, anything they can do can be done instead by a language model, such as those conveniently produced by Altman’s company. This minimization of the sophistication of human experience is the counterpart of AI fearmongering, which posits AI as a possible “existential risk” in order to make it seem more powerful than it is. I don’t think Sam Altman really believes what he’s saying here, but whether he does or not, it’s incredibly cynical, and it sucks.
The Hype Sucks And It’s Encroachingly Ubiquitous
The hype around LLMs has been at fever pitch for months that feel like years. It’s the most significant invention since the Internet. No, since the printing press. No, since electricity. No, seriously. In an editorial for the Wall Street Journal, Henry Kissinger (really? this Henry Kissinger?), Eric Schmidt, and Daniel Huttenlocher say:
A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.
Marc Andreessen says:
The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.
This doesn’t make any fucking sense! You can’t run a so-called AI without electricity and microchips! How can AI be more important than the things its very existence depends on? But then, of course, Andreessen is a venture capitalist who invests in these technologies. So perhaps he’s not a wholly objective and impartial observer here. But because what we call a “free” country is actually largely controlled by the whims of people like Marc Andreessen, it’s apparently become mandatory for every company on earth to cram some “AI” functionality that no one asked for into their already rapidly-enshittifying product, in order to…? Attract VC funding? Get a bump in their stock price? If I were a more inquisitive or cynical person I might have a better understanding of whatever asinine capitalist incentives are at play here. But we’re just coming off yet another record-breaking heatwave where I live, so I just can’t bring myself to give a shit about capitalist incentives, because look where they’ve gotten us.
On the other side of the coin, now that LLMs are in the hands of The People, they’re rapidly using them to flood the entire internet with dubiously-accurate laundered plagiarism. The uselessness of the internet for obtaining actual information has been intensifying for years, what with Google gradually turning their search engine into an advertisement delivery engine and the proliferation of SEO spam. But LLMs are going to make this a hundred times worse. Indeed, they’re already doing so.
Its Obsequiousness Is Annoying And Its Prose Is Vapid
Maybe I wouldn’t mind seeing LLM-extruded text everywhere if it were actually pleasant or interesting to read. But it’s not. It’s invariably a flavorless beige paste, as dead on the page as the entity that produced it.
Insofar as ChatGPT can be said to have a personality, it is deferential to the point of obsequiousness. ChatGPT backs down immediately and totally when challenged. Recently I used it to get an rsync command to transfer some files to my NAS, and it gave me a working command with no issue. Pretty cool! (I never had any doubt that this would work. GPT is trained on the internet, which is replete with nerds talking about their nerd shit. Nerds love nothing more than to explain things. As evidence, see: this whole-ass web site.) The network connection got interrupted, and I asked how to resume the transfer. ChatGPT gave me another command with the --partial flag, and that worked too. Also pretty cool! Then I told it that the --partial flag that I had just successfully used doesn’t actually exist. It readily agreed and “corrected” itself:
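For the curious, the command it gave me looked something like this (the paths and hostname here are illustrative stand-ins, not the actual ones from my chat):

```shell
# Resumable transfer to a NAS.
# --archive   preserve permissions, timestamps, symlinks, etc.
# --partial   keep partially transferred files, so re-running the
#             command picks up where the dropped connection left off
# --progress  show per-file transfer progress
rsync --archive --partial --progress /home/me/photos/ nas:/volume1/photos/
```

The --partial flag, to be clear, is real and has been documented in the rsync(1) man page for decades, which is what makes the groveling retraction that follows so absurd.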
It’s hard to credit ChatGPT with a lot of new or original inventions, considering that it’s built completely off the stolen text of the actual humans who preceded it, but it seems to have invented a new kind of rhetorical technique which I’m pithily naming the “apology sandwich.” I view anyone who actually wants to be groveled at like this with extreme suspicion, but it does seem to be designed to take advantage of our tendency to anthropomorphize, to lead us to think these things are actually intelligent and even sentient. They’re not.
Or see this chat where Jason Briggeman presses it on why it fails to generate a limerick that scans, when a poetic form with very strict and clear parameters for meter and rhyme should be right up a piece of software’s alley. One half expects it to respond to a correction with “pwease i am just a smol chatbot who is twying its vewwy best.” I would almost prefer Bing Chat’s amoral, hostile intransigence if not for fear that it would spawn a thousand hysterical thinkpieces about “alignment” for every day it was allowed to continue running. (“Alignment” is the term for making sure that some imaginary sentient AI’s incentives are “aligned” with humanity’s so that it doesn’t end up, like, launching nukes or turning us all into paperclips or whatever. Pure science fiction, but that’s the mode the kind of people who write about “alignment” want you to be operating in.)
Its Boosters Don’t Give A Shit About Consent
It seems like barely a day goes by when I don’t read about some company adding a clause to their user agreement saying they’re going to be training LLMs on your data. This training is never opt-in. It’s enabled by default, and it will be happening unless you (a) find out about it and (b) wade through the inevitable thicket of dark patterns lovingly grown between you and the setting to turn it off. Assuming there is one.
But of course, why wouldn’t companies that use LLMs do this? Using people’s work without their consent is literally the foundation upon which LLMs are built. So of course they’re going to see nothing wrong with silently opting-in their users, or ignoring decades-old web scraping conventions, or selling their users’ data for LLM training. It’s the most important thing since electricity! Why do you hate progress???
So What Am I Even Fucking Doing Here
“People’s work,” in the last paragraph, includes mine, of course:
And so it’s like, why keep posting on the internet if these assholes are just going to scrape it into their insatiable maw, along with a million other actual people’s actual hard work, to be shat out in an unrecognizable (though, come to think of it, recognizable would be even worse) homogeneous, anodyne slurry? I don’t think I’m a great writer but I think I’m OK at it, and I can’t put my feelings about this any better than maya, whose maya.land is the kind of intensely unique and flavorful and human old-school personal website that the web should be aspiring to—
This site has been too much of an experiment for me to claim now that I have a grand theory about it, about similar sites – but certainly by now I have instincts, feelings… …and making my writing available for automated summarization? So someone can sell ads by a depersonalized version of my stuff? The feeling is nausea.
—in her incredibly relatable “mr. openai i don’t feel so good.” Or, to TL;DR the very good How To Find Things Online, which you should also actually R: What Even Is The Point Anymore. (I know it had already been over a year since my last blog post when ChatGPT arrived on the scene. But it’s the principle of the thing.)
Maybe I am engaged in the valuable, noble act of Building A Stronger Web Without AI. Maybe by linking to real humans using their hard-earned brain cells to write thoughtful words on the internet, I can help make a web that feels like a web, interconnected and constructed by individuals, one not dependent on the whims of multi-billion-dollar advertising companies. Maybe I just want to add my voice to the others saying this stuff is bad, morally and aesthetically. Maybe since it contributed to taking away what little wind I had in my sails for blogging in the last couple years, it seemed only suitable that I use my first post back to slag it off. Anyway, here you go, ChatGPT. Go tell people how much I think you suck.