Ask HN: Why do AI code editors suck at closing tags?
Same reason they can't count the 'r's in "strawberry": these models don't actually understand structure, they just autocomplete really convincingly.
Never encountered this issue until I tested the GLM and Kimi K2 models. Opus and Sonnet seem to be good at handling this; even when you give them code with open tags, they'll know where to put the closing tag quickly.
Context favors the newest tokens. Emitting a closing tag means finding a much older opening tag.
HTML is simply at odds with that: LLMs pick the next token based on what was just generated, while the closing tag depends on an opening tag far back in the context.
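As a rough illustration of how long-range that dependency gets (a throwaway Python sketch; tag_distances is a made-up helper, not anything the models use):

    import re

    def tag_distances(html: str):
        """For each closing tag, how many characters back is its matching opener?
        That span is what the model has to 'look back' across."""
        stack, out = [], []
        for m in re.finditer(r"<(/?)([a-zA-Z][\w-]*)[^>]*>", html):
            closing, name = m.group(1), m.group(2).lower()
            if not closing:
                stack.append((name, m.start()))
            elif stack and stack[-1][0] == name:
                _, start = stack.pop()
                out.append((name, m.start() - start))
        return out

    doc = "<div><section>" + "<p>x</p>" * 200 + "</section></div>"
    print(tag_distances(doc)[-2:])  # [('section', 1609), ('div', 1624)]

The inner <p> pairs close a few characters after they open, but the outer tags close over 1,600 characters later, and real pages nest far deeper than this toy example.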
There must be some interface for LLMs to deal directly with the tree structure of programming and other computer languages. Something similar to Emacs' paredit interface, but for arbitrary languages.
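For example, a minimal sketch of what such a structural step could do (plain Python; close_open_tags and the void-tag set are mine, not from any existing tool): keep a stack of open tags and append the missing closers innermost-first, instead of hoping the right token comes out.

    import re

    VOID = {"br", "img", "input", "meta", "hr", "link", "source", "wbr"}

    def close_open_tags(fragment: str) -> str:
        """Track open tags on a stack, then append whatever is still open, innermost first."""
        stack = []
        for m in re.finditer(r"<(/?)([a-zA-Z][\w-]*)[^>]*?(/?)>", fragment):
            closing, name, self_closed = m.group(1), m.group(2).lower(), m.group(3)
            if closing:
                if name in stack:
                    while stack and stack.pop() != name:  # pop until the matching opener
                        pass
            elif not self_closed and name not in VOID:
                stack.append(name)
        return fragment + "".join(f"</{t}>" for t in reversed(stack))

    print(close_open_tags("<article><section><p>hello"))
    # <article><section><p>hello</p></section></article>

A structure-aware editor can do this deterministically after every edit; the model never has to "remember" the opener at all.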
They don't. Use better tools.