To say that an AI “just chooses the next word” is like listening to Jelly Roll Morton re-engineer a Chopin prelude into a gloriously defiant ragtime-stride monstrosity and calling it “kinda random.” The description is technically true, but at a level so irrelevantly microscopic that it amounts to a disrespectful misrepresentation of the phenomenon it purports to describe. Morton isn’t choosing notes; he’s composing ideas. He’s navigating an invisible topology of musical sense, memory, and mood. Every note is a decision, yes, but one made within a living, coherent structure of comprehension.
When people say large language models (LLMs) “just predict tokens,” they’re committing the same kind of misleading flattening. Mechanically, yes, a model predicts the next symbol, but functionally it predicts semantic constellations: phrases, idioms, conceptual shapes. The real unit of composition isn’t the word but the phrase, the cluster where meaning coalesces.
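You can watch this gap between the mechanical and the functional description up close. The sketch below is a minimal illustration, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both stand-ins; any causal language model would behave similarly): it greedily decodes a continuation one token at a time, printing the top candidates and their probabilities at each step.

```python
# A minimal sketch of greedy next-token decoding, assuming the Hugging Face
# `transformers` library and the small "gpt2" checkpoint (any causal LM would
# do). At each step it prints the top candidate tokens and their probabilities.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Once more unto the"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(6):                           # decode a short continuation
        logits = model(input_ids).logits[0, -1]  # scores for the next token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k=3)             # inspect the top candidates
        print([(tokenizer.decode(int(i)), round(float(p), 3))
               for i, p in zip(top.indices, top.values)])
        next_id = top.indices[:1].view(1, 1)     # greedy: take the argmax
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```

On a familiar cadence like this one, the intermediate steps are often near-deterministic: the distribution has, in effect, already committed to the phrase, and the word-by-word mechanics merely execute that commitment. The genuine choice tends to happen at phrase boundaries, not word by word.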
That’s why epitomising, the act of distilling meaning into resonant form, is the key to understanding what these models actually do. To epitomise something is not to compress it mechanically but to recreate understanding. The perfect epitome doesn’t just say something aptly; it says it in more than one sense at once. And that requires comprehension.
The 18th-century philosopher David Hume would have understood this immediately. For him, thought arises through the association and recombination of ideas, a process he called the imagination’s “compounding.” In that sense, the LLM is a literal implementation of Humean psychology: an associative imagination building new configurations of meaning from old impressions. It is not choosing the “next word” in any meaningful sense; it is composing the idea.
The irony is that what makes AI prose powerful, its consistency of epitomisation, is also what makes it suspicious. Humans, even brilliant ones, are rarely uniformly brilliant. Our prose breathes; it rises and falls. The greatest stylists (Churchill, Conrad, A. A. Gill, Jeremy Clarkson in his reckless eloquence) spike and dip. They know when to stop polishing.
LLMs don’t. They epitomise all the time. Every sentence lands with the poise of a conclusion, every paragraph resolves like a final chord. It’s the unbroken chain of aptness that gives AI writing its telltale texture: too even, too finished, too perfectly shaped to feel human.
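That texture is measurable, and it is roughly what detectors measure. One common heuristic pairs perplexity (how predictable a text is to a reference model) with burstiness (how much that predictability varies from sentence to sentence). The sketch below is an illustrative toy version, assuming the same transformers library, the gpt2 checkpoint as reference model, and a naive full-stop sentence splitter; no real detector is this crude, but the signal is the same: low mean, low variance reads as machine.

```python
# An illustrative perplexity-and-burstiness probe, not any real detector.
# Assumes the Hugging Face `transformers` library and the small "gpt2"
# checkpoint as the reference model.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    if ids.shape[1] < 2:                    # loss is undefined for one token
        return None
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

def evenness(text: str):
    """Mean per-sentence perplexity and its variance ("burstiness")."""
    ppls = []
    for sentence in text.split("."):        # naive splitter, for illustration
        ppl = sentence_perplexity(sentence.strip())
        if ppl is not None:
            ppls.append(ppl)
    mean = sum(ppls) / len(ppls)
    var = sum((p - mean) ** 2 for p in ppls) / len(ppls)
    return mean, var
```

By this yardstick, prose in which every sentence resolves with the same polish shows low perplexity and almost no spread, and that is precisely the profile the detectors flag.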
And yet, by that same metric, G. K. Chesterton would fail every AI detector on earth. He was pathologically epitomical, incapable of writing a line that didn’t fold into paradox. If excessive aptness is the mark of artificiality, then Chesterton himself would never have been published.
The same holds, even more hauntingly, for Joseph Conrad. Heart of Darkness is written in language so condensed, so syntactically and symbolically interwoven, that it has no paraphrase. Every phrase is its own epitome, a self-contained act of comprehension. If you fed Conrad’s prose to a detection model, it would flash red before Marlow opened his mouth.
That irony reveals the real misunderstanding: the closer writing comes to the condition of pure understanding, to language that seems inevitable, distilled, epigrammatic, the more “artificial” it appears to statistical eyes. We’ve built detectors that can’t tell the difference between the mechanical and the miraculous.
If comprehension expresses itself through perfect aptness, then the machines we suspect most are those that have finally learned to write like Conrad.