Large Language Muddle | The Editors


To what extent will artificial intelligence revolutionize (derogatory) the lives of the professionally and semiprofessionally literary? Already the technology has barreled through enough creative and commercial sectors to fill a Studs Terkel oral history, threatening mass unemployment events for copywriters, translators, illustrators, sales and customer service representatives, and more. The New York Times assails readers almost daily with appalling stories of AI therapists and AI boyfriends, capturing the paradox of a technology that promises to fix broken systems even as it makes them worse. As social ties fray and mental health infrastructure deteriorates, people turn to AI for emotional support—the same AI that then urges them to kill themselves.

For those of us working in reading- and writing-heavy fields (chiefly media and academia, the US’s last two eroded islands of institutional intellectual life), the boom in sophistication of large language models over the past few years has struck alarm bells that were already chiming. Well before the inflection point of OpenAI’s 2022 debut of ChatGPT, freelance writers and adjunct instructors were beset by declining web traffic, stagnant book sales, the steady siphoning of resources from the humanities, and what was hard not to interpret as a culture-wide devaluation of the written word. Then along came a swarm of free software that promised to produce, in seconds, passable approximations of term papers, literary reviews, lyric essays, Intellectual Situations.

The swarm assembled faster than anyone had anticipated. “If you have not been using AI,” wrote the poet and Yale Review editor Meghan O’Rourke in a Times op-ed this July, “you might believe that we’re still in the era of pure AI ‘slop’: simplistic phrasing, obvious hallucinations.” ChatGPT’s writing is still merely competent (“no rival for that of our best novelists or poets or scholars,” O’Rourke writes), but it’s “so much better than it was a year ago that I can’t imagine where it will be in five years.”

How can intellectuals chronicle such a shift, one that threatens their ability to chronicle anything at all? O’Rourke is among a growing group of literary writers who have tried to answer the question in the first person, scoping out artificial intelligence’s encroachments from within the domains it most imperils. Their writing asks what AI-generated writing can or (much less often) can’t do, and how human writers can or (much more often) can’t respond. Call the genre the AI-and-I essay. Between April and July, the New Yorker published more than a dozen such pieces: essays about generative AI and the dangers it poses to literacy, education, and human cognition. Each had a searching, plaintive web headline. “Will the Humanities Survive Artificial Intelligence?” asked Princeton historian D. Graham Burnett. “What’s Happening to Reading?” mused the magazine’s prolific pop-psych writer Joshua Rothman, a couple months after also wondering, with rather more dismay, “Why Even Try If You Have AI?” “AI Is Homogenizing Our Thoughts,” declared Kyle Chayka, with the irony of a columnist whose job is to write more or less the same thing every week using his own human mind. An article by Hua Hsu put it most starkly, seeing the artificial-versus-human intelligence war as all but lost: “What Happens After AI Destroys College Writing?”

No topic, not the genocidal war and famine in Gaza, not Trumpian authoritarianism, has magnetized bien-pensant attention this year in the way that AI has. Writing on AI thus comes in every mode: muckraking (“Inside the AI Prompts DOGE Used to ‘Munch’ Contracts Related to Veterans’ Health”), scholastic (“Deep Learning’s Governmentality”), polemical (“The Silicon-Tongued Devil”), besotted (“The AI Birthday Letter That Blew Me Away”). The AI-and-I essay, however, usefully registers a generalized intellectual anxiety. While spoken in the voice of an individual author, each piece in this emergent corpus stages a more collective drama. To read these writers writing about AI writing is to witness, almost in real time, intellectual laborers assimilating a threat to their own existence. The threat looms more distantly for some than for others. But whether or not one enjoys the near-extinct security and legacy prestige of a New Yorker staff job (to spend one’s days “focussing” on the rapid erosion of the life of the mind: the dream!), this work paints a persuasive picture of a world hollowed by machines; a world, the writers suggest, we will all have to learn to live in.

Ironically, these essays about the fundamental iterability of prose can read like iterations of the same piece. Nearly all marshal similar data points about AI’s spread on university campuses. In this the essays seem at first only to repeat the pathological chattering-class fixation on elite colleges and the perpetual downslide of the American mind. But the numbers are scandalous. Forty-two percent of undergraduates use AI at least once a week; anywhere from 50 to 90 percent have used it to cheat on their schoolwork. AI’s observed effects on an already screen-addled and pandemic-frazzled student body are bleaker still. In a widely cited study by MIT researchers published in June, participants who wrote SAT-style essays with the assistance of ChatGPT engaged a narrower spectrum of neural networks while writing, struggled to accurately quote what they had just written, and, thanks to generative AI’s inbuilt tendency toward cliché, used the same handful of refried phrases over and over. The study’s authors warned that habitual AI use could lead to “cognitive debt,” a condition of LLM dependency whose long-term costs include “diminished critical inquiry,” “increased vulnerability to manipulation,” and “decreased creativity.” It turns out your brain, like love or money, can be given away.

Onto these grim findings the AI-and-I writer heaps a dollop of anecdotal evidence. The writer’s friends and family, eager to live less administratively burdened lives, are all hooked on Gemini. Those who teach for a living notice that their students’ weekly response papers have become suspiciously competent (if marred by occasional citations of nonexistent books), even as those same students struggle to read more than a dozen pages in one go. The journalists and critics they know (people who learned to write the old-fashioned way, by typing out weekly response papers with their own hands!) find that Google’s useless “AI Mode” shoos readers away from their published work entirely.

For the sake of inquiry, the AI-and-I writer tries out an LLM for themself. Here the tone starts to change. “I’ve used ChatGPT a handful of times,” writes Hsu, “and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency.” In a piece about using chatbots to treat loneliness, Paul Bloom confesses that “during a long bout of insomnia, sometime after three in the morning, I once found myself, more out of boredom than out of conviction, opening ChatGPT on my phone.... I found the conversation unexpectedly calming.” Burnett, the history professor, describes uploading his own nine-hundred-page course reader into Google’s NotebookLM and asking it to produce a podcast based on the assigned readings, which he listened to while doing dishes. (“Respect, I thought. That was straight-A work.”) Rothman, accidentally posing a suggestive analogy between AI-generated writing and puking, recounts the time he tested out ChatGPT by prompting it to predict the course of a norovirus outbreak in his household. It did so, with “verve.”

All these points in AI’s favor prompt some nervous reappraisals. Essays that began in bafflement or dismay wind up convinced that the technology marks an epochal shift in reading and writing. In “Are We Taking AI Seriously Enough?” Rothman describes having “both an ‘Aha!’ and an ‘uh-oh’ moment” after prompting ChatGPT to complete an onerous personal-finance task. He narrates this recognition in the awestruck tone of a sci-fi protagonist watching the aliens finally make contact: “It’s here, I thought. This is real.” Not to be outdone, Burnett calls AI “the most significant revolution in the world of thought in the past century.” Other writers cast their periodizations even further back, arguing that AI-assisted writing may augur the close of the “Gutenberg Parenthesis”: the five-hundred-year period in which printed writing has dominated human communication.

In the face of a monolith-in-the-veldt-level era-heralder, what surfaces is an earnest, almost midcentury humanism, interested in charting the distinctions between man and machine. But the case the AI-and-I authors make shows the strain of special pleading. The single, vital aspect of humanity that LLMs can never match, the essays assert again and again, is our imperfection. “AI allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human,” writes Hsu. O’Rourke, in the Times: “When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I’ve stayed in uncertainty long enough to find out what I had initially failed to understand.” ChatGPT can instantly summarize classic texts, notes Rothman, but “the human version of reading involves finitude,” and it’s this quality, the impossibility of true optimization, that makes a literary life, or any life, interesting. Imagining a future where chatbots provide constant companionship to incels and the elderly, Bloom worries that “we risk losing part of what makes us human.”

So: inconstancy, fallibility, forgetfulness, suffering, failure. These, apparently, are the unautomatable gifts of our species. Well, sure. To err is human. But does the AI skeptic have nothing else to fall back on than an enumeration of mankind’s shortcomings? Are our worst qualities the best we can do? It’s hard not to read the emphasis on failure as an ambivalent invitation for the machines to succeed. The final notes of the AI-and-I essay are cowed resignation, awed acquiescence, and what Trotsky called the “terrifying helplessness” of cultural production at “the beginning of a great epoch,” all from the very writers best placed to condemn AI’s creep into literary life. “What if we take seriously the idea that AI assistance can accelerate learning, that students today are arriving at their destinations faster?” Hsu asks. But what if we don’t?


The temptation to resign ourselves to resignation is never stronger than at a time of overlapping crises. The AI upheaval is unique in its ability to metabolize any number of dread-inducing transformations. The university is becoming more corporate, more politically oppressive, and all but hostile to the humanities? Yes, and every student gets their own personal chatbot. The second coming of the Trump Administration has exposed the civic sclerosis of the US body politic? Time to turn the Social Security Administration over to Grok. Climate apocalypse now feels less like a distant terror than a fact of life? In three years, roughly a tenth of US energy demand will come from data centers alone.[1] And in the middle of these concentric circles is the bedraggled individual, their intellectual autonomy (their humanity!) ever more under siege from the ingratiating question answerers in their phones.

No one can deny the power and size of the AI coalition, an ensemble of Silicon Valley investors, lobbyists, and bought-off politicians; hucksters, carnival barkers, and gravy train chasers; and bosses gleeful at the opportunity to downsize and deskill. Unsurprisingly, this well-funded upheaval is outpacing any social, political, or ethical effort to slow it down. According to the logic of market share as social transformation, if you move fast and break enough things, nothing can contain you.

The AI industry spends an extraordinary amount of money to ensure that acquiescence is the only plausible response. But marketing is not destiny. The simplest reason to refuse resignation is that resignation can only harmonize with the tech-industry narrative of AI inevitability. When we give in, we join the sales force. (Literally: according to a recent Financial Times op-ed by Salesforce CEO Marc Benioff, “the real magic lies in partnership: people and AI working together, achieving more than either could alone.”)

However headily mystified and abstracted its operations, privately owned AI is always and everywhere a vehicle for accumulation. For users, the optimal GenAI experience feels effortless: “frictionless,” in the smooth-talking lingo of Silicon Valley. But ChatGPT’s instant answers and uncanny simulacra are the fetishized product of what Marx called “dead labor”: all the past work of thinking and writing behind the texts on which AI models are trained, together with the coding and engineering required for their development and maintenance.[2]

So far, so Capital, volume one. Then there is our own unpaid labor of interpretation and optimization. When we grade an AI-generated essay, we accord it a kind of dignity. When we press a chatbot to fine-tune its answers or sift its sources, we serve the machine. With every click and prompt, every system-tweaking inch we give to the spectral author, we help underwrite AI profits (or at least the next round of equity funding; no major AI product has yet come close to actually making money).

But a still graver scandal of AI (like its hydra-head sibling, cryptocurrency) is the technology’s colossal wastefulness. The untold billions firehosed by investors into its development; the water-guzzling data centers draining the parched exurbs of Phoenix and Dallas; the yeti-size carbon footprint of the sector as a whole. And for what? A cankerous glut of racist memes and cardboard essays. Not only is the ratio of AI’s resource rapacity to its productive utility indefensibly and irremediably skewed, but AI-made material is itself a waste product: flimsy, shoddy, disposable, a single-use plastic of the mind.

The current AI triumphalism also clouds the history of the field’s own waxing and waning fortunes. Once the recondite domain of mathematicians and IBM tinkerers, AI not so long ago looked more moribund than monetizable. In the preface to the twentieth-anniversary edition of his 1972 broadside What Computers Can’t Do, the philosopher Hubert Dreyfus dismissed AI as “a degenerating research program,” which “all but a few diehards” recognized had “failed.” Cyclical downturns in research and funding are persistent enough to be known in the trade as “AI winters.” We have the misfortune to be living through a manic, Stravinskian rite of spring.

The GenAI sector’s foremost feat of marketing, then, has been the term intelligence itself. Among AI’s leading critics, the computational linguist Emily M. Bender has targeted the industry with a campaign of righteous disenchantment. ChatGPT’s appearance of humanlike intelligence, she and others point out, is a mirage. The narcotized text it produces is only a recombination of whatever corpora it consumes; LLMs, Bender argues, should be seen as “stochastic parrots” and “synthetic text extruding machines.” Where real thinking involves organic associations, speculative leaps, and surprise inferences, AI can only recognize and repeat embedded word chains, based on elaborately automated statistical guesswork. This alone isn’t news: no ChatGPT-happy college sophomore really thinks there’s a little man in the machine writing his Hamlet essay for him. But it bears insistent, strident repetition. A refusal of AI in creative work begins with a refusal of that product’s ideological packaging. “When we imbue these systems with fictitious consciousness,” Bender and Alex Hanna write in their recent book The AI Con, “we are implicitly devaluing what it means to be human.” The way out of AI hell is not to regroup around our treasured flaws and beautiful frailties, but to launch a frontal assault. AI, not the human mind, is the weak, narrow, crude machine.
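The mechanism is easier to demystify than the marketing suggests. Consider a toy sketch (ours, not Bender’s, and a drastic simplification: production LLMs use neural networks over token probabilities, not bigram lookup tables): a twenty-line Python “parrot” that writes by sampling which word tended to follow which in its corpus, with no representation of meaning anywhere in sight.

    import random
    from collections import defaultdict

    def train_bigrams(corpus: str) -> dict:
        # For each word, record every word observed to follow it.
        chains = defaultdict(list)
        words = corpus.split()
        for current, following in zip(words, words[1:]):
            chains[current].append(following)
        return chains

    def parrot(chains: dict, seed: str, length: int = 20) -> str:
        # Emit text by repeatedly sampling a statistically likely next word.
        # Nothing here "knows" anything: it is recombination all the way down.
        word, output = seed, [seed]
        for _ in range(length):
            followers = chains.get(word)
            if not followers:  # dead end: no observed continuation
                break
            word = random.choice(followers)  # frequency-weighted via duplicates
            output.append(word)
        return " ".join(output)

    chains = train_bigrams("the machine repeats the corpus and the corpus repeats the machine")
    print(parrot(chains, "the"))

Scale the corpus up to the plundered internet and the lookup table up to a trillion-parameter network and the guesswork gets eerily good; but, as Bender insists, better statistics do not conjure a little man in the machine.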


The AI bubble (and it is a bubble, as even OpenAI overlord Sam Altman has admitted) will burst. The technology’s dizzying pace of improvement, already slowing with the release of GPT-5, will stall.[3] But until it does, its expansion will appear inevitable and irresistible. And AI’s semivisible, infrastructural role in exploiting precarious labor (Uber’s nifty algorithm for suppressing driver wages), landlordism (RealPage’s widely used rent-hiking software), and neocolonial war (Israel’s high-tech civilian-bombing gadgetry) may already be too entrenched to stop. But professional readers and writers: We retain some power over the terms and norms of our own intellectual life. We ought to stop acting like impotence in some realms means impotence everywhere.

Major terrains remain AI-proofable. For publishers, editors, critics, professors, teachers, anyone with any say over what people read, the first step will be to develop an ear. Learn to tell (to read closely enough to tell) the work of people from the work of bots. Notice the poverty of the latter’s style, the artless syntax and plywood prose, and the shoddiness of its substance: the threadbare platitudes, pat theses, mechanical arguments. And just as important, read to recognize the charm, surprise, and strangeness of the real thing. So far this has been about as easily done as said. Until AI systems stop gaining in sophistication, it will become measurably harder. What will be required is a new kind of literacy, an adaptive practice of bullshit detection.

Whatever nuance is needed for its interception, resisting AI’s further creep into intellectual labor will also require blunt-force militancy. The steps are simple. Don’t publish AI bullshit. Don’t even publish mealymouthed essays about the temptation to produce AI bullshit. Resist the call to establish worthless partnerships like the Washington Post’s Ember, an “AI writing coach” designed to churn out Bezos-friendly op-eds. Instead, do what better magazines, newspapers, and journals have managed for centuries. Promote and produce original work of value, work that’s cliché-resistant and unreplicable, work that tries, as Thomas Pynchon wrote in an oracular 1984 essay titled “Is It OK to Be a Luddite?,” “through literary means which are nocturnal and deal in disguise, to deny the machine.”

At school, denying the machine carries more contradictions. AI-and-I essays narrate countless instances in which professors, overworked and faced with floods of essays generated by ChatGPT, simply give up on assessment and hand everybody an A minus. Punishing already overdisciplined and oversurveilled students for their AI use will help no one, but it’s a long way from accepting that reality to Ohio State’s new plan to mandate something called “AI fluency” for all graduates by 2029 (including workshops sponsored, naturally, by Google). Pedagogically, alternatives to acquiescence remain available. Some are old, like blue-book exams, in-class writing, or one-on-one tutoring. Some are new, like developing curricula to teach the limits and flaws of generative AI while nurturing human intelligence. (Hamlet can wait: imagine a new course in Richardsian “practical criticism,” where first-year students are trained to parse differences between artificial writing and real prose.)

All these models, of course, require professional time and institutional support that few teachers now enjoy. Surely the US media is so fixated on the AI invasion in part because our criminally unequal educational institutions are so vulnerable to it. The spurious social-justice case sometimes made for AI tools (as field-leveling time savers for working or disadvantaged students, who are said to lack the “luxury” of reading and writing with their brains) augurs a still more dismal underside: a system of schooling further polarized between a privileged layer of small seminars and bespoke mentorship, and a vast corporate-backed lower order of luckless pupils whose instructors are reduced to little more than deputized Claude moderators.

For both publications and universities, there are narrow pathways to follow the example of Hollywood, where, after its strike two years ago, the Writers Guild won historic if imperfect protections against GenAI in screenwriting. Under the 2023 WGA contract terms, writers can’t be replaced by AI, or be required to use it (and where they choose to, they must be paid at standard rates). True, the film industry (however enshittified at other steps of production) is far more unionized than media or academia will likely ever be. But perhaps AI’s ascent in knowledge-industry workplaces will give rise to new demands and new reasons to organize.

Our final defenses are more diffuse, working at the level of norms and attitudes. Stigmatization is a powerful force, and disgust and shame are among our greatest tools. Put plainly, you should feel bad for using AI. (The broad embrace of the term slop is a heartening sign of a nascent constituency for machine denial.) These systems haven’t worked well for very long, and consensus about their use remains far from settled. That’s why so much writing about AI writing sounds the way it does: nervous, uneven, ambivalent about the new regime’s utility. And it means there’s still time to disenchant AI, provincialize it, make it uncompelling and uncool.

All who deny the AI machine will themselves face a very uncool label: the L-word. In a technophilic culture, few figures look lonelier or more quixotic than the latter-day Luddite. Yet as Gavin Mueller writes in Breaking Things at Work, Luddism (born as a revolt by artisan weavers against their proletarianization by manufacturing capital) was far from “a simple technophobia.” The Luddite rebellion, he notes, “was not against machines in themselves, but against the industrial society... of which machines were the chief weapon.” Witness, next to the hair-pulling indecision and defeated passivity of typical AI-and-I writing, the clarion force of the Luddite lesson: in Mueller’s summary, “that technology was political, and that it could and, in many cases, should be opposed.”

As we train our sights on what we oppose, let’s recall the costs of surrender. When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the further degradation of an already degraded educational system. We agree that we would rather deplete our natural resources than make our own art or think our own thoughts. We dig ourselves deeper into crises that have been made worse by technology, from the erosion of electoral democracy to the intensification of climate change. We condone platforms that not only urge children to commit suicide but instruct them on how to tie the noose. We hand over our autonomy, at the very moment of emerging American fascism.

“The independent artist and intellectual are among the few remaining personalities equipped to resist the stereotyping and consequent death of genuinely lively things,” wrote C. Wright Mills in a 1945 essay, presciently titled “The Powerless People: The Social Role of the Intellectual,” surveying the threats new business and military technologies posed to reading and writing life. “Fresh perception now involves the capacity continually to unmask and to smash the stereotypes of vision and intellect with which modern communications swamp us.” A literature which is made by machines, which are owned by corporations, which are run by sociopaths, can only be a “stereotype” (a simplification, a facsimile, an insult, a fake) of real literature. It should be smashed, and can be.

  1. An earlier version of this piece misstated a statistic about projected energy demand from data centers. This statistic has been updated. We regret the error. 

  2. In fact, tech capital appears already to be running out of dead labor to exhume. Having effectively plundered the entire internet for free training data, the major AI companies are now courting (or being taken to court by) publishers and media outlets, for access to copyrighted material not yet shoveled into the LLM maw. 

  3. The AI arms race, however, shows no sign of abating. In August, Anthropic offered to license its AI chatbot, Claude (not to be confused with competitors Jasper, Llama, or Grok), to Trump’s federal government for $1 per agency. That Silicon Valley kingmakers would play handservant to an incipient fascism should surprise no one, but who thought they would do it virtually for free? 
