The internet was drowning in low-quality content long before generative AI. Our forensic hunt for machine tells reveals more about anxiety than aesthetics.
On Hacker News, a user named nospice was performing forensic analysis on a Christmas promotional image. The subject: a painting of a milk carton, commissioned by Apple to advertise the television series Pluribus. The evidence: four damning observations. The maze printed on the carton cannot be solved. Two maze walls appear smudged while nothing else shows similar artefacts. The floral pattern on a nearby plate repeats, but berries randomly shift position and leaves change shape between iterations. The carton displays a "splotchy texture that's really common in ChatGPT-generated illustrations."
The verdict, delivered with prosecutorial confidence: probable AI contamination.
This detective work—multiplied across dozens of comments scrutinising shadow angles, label typography, and carton geometry—deserves examination in its own right. Not for what it reveals about the image, but for what it reveals about us. We have developed elaborate rituals of exposure for one particular category of creative tool. We hunt for tells. We demand confessions. We treat the use of generative AI as a transgression requiring investigation, even when the stakes approach zero.
The new authenticity police
The credited artist, Keith Thomson, offered what commenters correctly identified as a "non-denial denial." He stated that he "always draws and paints by hand and sometimes incorporates standard digital tools." The phrase "standard digital tools" could theoretically include generative AI. Suspicion intensified. Technology blogger John Gruber concluded that the image "either is AI-generated slop or it looks like AI-generated slop for no artistic or thematic reason whatsoever."
What makes this remarkable is not the forensic scrutiny but the stakes involved. A promotional image for a streaming show, posted on social media. Whether hand-painted, AI-assisted, or entirely machine-generated, it affects nothing—not the show, not the viewer's experience, not anything that might plausibly matter to anyone outside the illustration industry. Yet the investigation proceeded with an intensity typically reserved for electoral fraud.
One commenter captured the dynamic: "There will be a tiny, loud community of AI haters who make a stink on social media, but the vast majority of people will be oblivious." Probably true. But the existence of volunteer authenticity police—citizens committed to exposing AI usage wherever it lurks—suggests we have created a new category of transgression: one serious enough to warrant investigation, yet whose actual harms remain curiously difficult to name.
When humans manufactured the slop
The term "slop" has become the preferred epithet for AI-generated content of dubious quality. Gruber's headline—"Slop Is Slop"—crystallises the argument: bad work is bad work, regardless of provenance. Reasonable enough. But this framing obscures a history worth remembering.
In 2009, Demand Media was publishing one million items monthly through its website eHow. One million. With English Wikipedia holding roughly three million articles at the time, that volume amounted to four complete Wikipedias a year. Human freelancers produced this content, paid roughly fifteen dollars per article, writing about topics they often knew nothing about, optimised for search rankings rather than usefulness. Wired called it "the fast, disposable, and profitable as hell media model."
The results matched the investment. New York Times critic David Carr spotlighted Demand Media's Super Bowl party guide: "Buy several six-packs of beer. Keep the beer in a cooler close by so you don't have to run to the fridge when it's third and inches." eHow explained how to open a refrigerator door. How to pick blueberries: "Tie a bucket to your waist." Thousands of articles answering questions nobody asked, optimised for algorithms rather than humans.
By 2010, Demand Media ranked as the seventeenth-largest web property in America. One hundred and five million monthly visitors. It went public in January 2011 at a valuation exceeding $1.5 billion—briefly worth more than the New York Times. Search engine DuckDuckGo eventually blacklisted eHow entirely, its CEO calling Demand Media's output "low-quality content designed specifically to rank highly in Google Searches for the purposes of promoting advertising."
The internet was drowning in slop before generative AI existed. Humans proved perfectly capable of producing vast quantities of worthless content at industrial scale, driven by the same incentives—advertising revenue, search optimisation, minimal production costs—that now animate AI content farms. The machines merely automated what humans were already doing.
Consider an asymmetry. When you read a novel, you don't ask whether the author composed on a typewriter, word processor, or by dictation. When you use software, you don't inquire whether developers wrote it in assembly or Python, whether they used AI code completion, or how extensively they relied on libraries written by strangers. When you admire a building, you don't demand to know if the architect designed it with pencil, CAD software, or parametric algorithms.
We evaluate outputs. Process interests specialists but carries no moral weight for consumers. Nobody suggests that a novelist using Scrivener is "cheating" compared to one writing longhand. Nobody demands that programmers disclose GitHub Copilot usage the way artists are now expected to disclose AI assistance.
Music offers the closest parallel. When Auto-Tune emerged in the late 1990s, critics called it cheating—technology allowing artists to skip learning to sing. The debate raged. Some purists still object. But the broader verdict has been acceptance: Auto-Tune is a tool, and what matters is whether the music works. T-Pain, whose overt use became his signature, is now celebrated rather than dismissed.
Why should images differ? The standard answer invokes authenticity, the human hand, creativity as essentially human. These concepts prove slippery under pressure. Philosopher Denis Dutton argued that we evaluate artworks partly as "the end point of a performance"—knowledge of creative process affects aesthetic judgement. When a painting believed to be by Vermeer was revealed as a Han van Meegeren forgery, critics suddenly found it "sentimental and of inferior quality." Nothing visible had changed. Only the backstory.
This psychological response is real. Whether it should guide our judgements is another question. We might instead conclude that such responses reveal bias rather than insight. Philosopher Arthur Koestler argued that if a forgery "fits into an artist's body of work and produces the same kind of aesthetic pleasure," there's no reason to exclude it from museums. The experience remains constant. Only our knowledge shifts.
The aura already dissolved
Walter Benjamin, writing in 1936, argued that mechanical reproduction destroys the "aura" of artwork—its unique presence in time and space, its authenticity as singular original. Photography, he suggested, had already initiated this transformation. "That which withers in the age of mechanical reproduction is the aura of the work of art."
Benjamin viewed this withering not as tragedy but as liberation. Art freed from "parasitical dependence on ritual." New democratic possibilities opened. The masses, he wrote, "seek to bring things 'closer' spatially and humanly." Reproduction serves this desire.
If Benjamin was right—if photography dissolved aura nearly two centuries ago—then anxiety about AI represents another chapter in an old transformation, not a new crisis. We have lived in a post-aura world since the daguerreotype. The hand-wringing may concern status more than aesthetics: who claims the title "artist," who receives commissions, whose labour carries prestige.
Not trivial concerns. Creative workers face genuine displacement. But different concerns from those driving the forensic hunt for AI tells. The Hacker News commenters weren't worried about illustrators' livelihoods. They were worried about being fooled. About categorical violation. About something resembling aesthetic fraud—even if it harms no one.
The strongest objection
A serious argument holds that AI differs categorically from previous tools. Photography still required a human eye, compositional choices, timing. Auto-Tune corrects pitch but doesn't compose melodies. Even Demand Media's wretched articles required humans typing words in sequence.
Generative AI can produce complete works from a text prompt. Human contribution may shrink to a few words of instruction. Perhaps the line around AI reflects a genuine threshold: tools that augment human creativity versus tools that replace it.
This objection carries weight. Yet it conflates current applications with necessary nature. Thoughtful AI-assisted art involves dozens of generation attempts, careful prompt engineering that functions as its own skill, extensive post-processing, curatorial judgement about what to keep. The workflow resembles photography's selection process—shoot hundreds, print one—more than automated assembly.
Human creativity, meanwhile, has always involved borrowing, influence, recombination. Every artist builds on predecessors. Every style emerges from imitation before becoming original. The Romantic vision of the solitary genius creating from nothing was always mythology.
If we cannot distinguish AI work from human work by examining output—if the objection emerges only when we learn the process—then our concern isn't really about the work. We're policing a category boundary rather than evaluating quality. Perhaps a legitimate project. But we should be honest about what we're doing.
What actually deserves attention
Quality matters. The milk carton image may be ugly—Gruber thinks so, and he may be right. Its ugliness, if real, exists independent of provenance. Slop is slop whether produced by underpaid freelancers, rushed illustrators, or language models. We knew how to recognise mediocrity before AI. We still do.
Attribution matters. If Thomson was paid for hand-painted work and delivered AI output, that's dishonesty about what was sold. A legitimate commercial concern, like any other misrepresentation. But a commercial concern, not an aesthetic one.
Economic displacement matters. The fear that AI will eliminate illustration jobs may prove partly justified, as it once was for portrait painters facing photography. Technologies displace workers, though history suggests they also create new work and new forms of expression. This is a genuine question, and it deserves attention untangled from authenticity panics.
What probably matters less than we imagine: forensic detection of AI usage in promotional images, holiday cards, social media posts. The investigative energy seems wildly disproportionate to harm prevented. An unsolvable maze in a milk carton illustration affects nobody's life.
The internet has always contained vastly more garbage than treasure. Search engines have always surfaced questionable content. Advertising incentives have always rewarded volume over quality. AI accelerates these tendencies but didn't create them. We were already drowning. The water rises faster now.
The useful response may not be heightened vigilance against machine-generated content but a general scepticism about provenance, combined with evaluation based on utility and quality. We manage this with software. With architecture. With most of what we consume. We could manage it with images, if we chose to redirect our investigative energy toward concerns more proportionate to actual harms.
For those who find satisfaction in hand-crafted work—in the process rather than only the product—nothing prevents this. I, for example, took up oil painting recently. Craft has intrinsic rewards beyond the artefacts it produces. But extending a personal attachment to craft into a general policing regime for all creative output may demand more consistency than our other judgements can bear.
We don't ask how the code was written. We probably shouldn't ask, with such intensity, how the image was made.