Can a Large Language Model Understand the Waste Land?

1 point by chenyusu a month ago · 0 comments · 1 min read


(My account can’t submit links yet; feel free to submit this if it seems interesting.)

I built an experiment mapping The Waste Land (Section III, "The Fire Sermon") into a geometric space across phonetic, semantic, and rhetorical axes. Annotations are generated by an LLM and updated context-by-context as the poem unfolds.
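The page linked below has the full pipeline; here is only a rough sketch of the general idea, covering just the semantic axis. The model name, the context handling, and the use of PCA are my illustrative assumptions, not the author's actual setup:

    # Sketch: embed each line together with everything before it, so the
    # representation shifts as the poem unfolds, then project the embeddings
    # down to 2-D "coordinates". Assumptions: sentence-transformers model,
    # cumulative context, PCA projection.
    from sentence_transformers import SentenceTransformer
    from sklearn.decomposition import PCA

    lines = [
        "The river's tent is broken: the last fingers of leaf",
        "Clutch and sink into the wet bank. The wind",
        "Crosses the brown land, unheard. The nymphs are departed.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    # "Context-by-context": each line is embedded with its accumulated context.
    contextual = [" ".join(lines[: i + 1]) for i in range(len(lines))]
    embeddings = model.encode(contextual)

    # Collapse the high-dimensional embeddings to 2-D coordinates for plotting.
    coords = PCA(n_components=2).fit_transform(embeddings)

    for line, (x, y) in zip(lines, coords):
        print(f"({x:+.2f}, {y:+.2f})  {line}")

The phonetic and rhetorical axes would each need their own features; this only shows how a single axis turns text into fixed positions.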

What emerged wasn't "understanding" in a literary sense. The model consistently smooths ambiguity into precise coordinates—every word gets a clean position, even when the poem deliberately holds incompatible meanings at once (mythic vs. modern, sacred vs. profane, presence vs. absence).

LLMs optimize for prediction and uncertainty reduction, so they collapse meaning into single trajectories. Modernist poetry is structured to resist exactly this. The visualization doesn't explain The Waste Land—it exposes the gap between human reading and prediction-based language models.
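One concrete way to see the collapse: a deterministic embedding puts an ambiguous line at exactly one point, so its relation to each incompatible reading reduces to a single number. A minimal sketch, using the same assumed model as above; the two "gloss" sentences are mine, not the author's annotations:

    # The ambiguous line gets one fixed vector; its similarity to each
    # incompatible reading is a single scalar. A reading that holds both
    # senses at once has nowhere to live in this geometry.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    line = "Burning burning burning burning"  # The Fire Sermon, Section III
    glosses = [
        "Buddhist sermon: all the senses are on fire with craving",  # mythic / sacred
        "A city consumed by lust and modern decay",                  # modern / profane
    ]

    vecs = model.encode([line] + glosses)
    line_vec, gloss_vecs = vecs[0], vecs[1:]

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    for gloss, g in zip(glosses, gloss_vecs):
        print(f"{cosine(line_vec, g):.3f}  {gloss}")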

Demo + explanation: https://chenyu-li.info/poetry

Happy to hear criticism, especially from people working on NLP, embeddings, or digital humanities.

