Have LLMs solved natural language parsing?
With all the recent advancements in LLMs and transformers, has the goal of parsing natural languages and representing them as ASTs been achieved?
Or is this task still considered to be a hard one?
LLMs seem to understand text much better than any previous technology, so anaphora resolution, complex tenses, POS disambiguation, rare constructs, and cross-language boundaries all no longer seem to be hard problems for them.
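For comparison, here is the kind of AST I mean. Python can dump one for its own source with nothing but the standard library, because the language has a formally specified grammar:

```python
import ast

# Python's grammar is formally specified, so every valid program
# has a deterministic parse, recoverable from the stdlib:
tree = ast.parse("x = 1 + 2")
print(ast.dump(tree))
```

The dump shows an `Assign` node whose value is a `BinOp` over two `Constant`s. The question is whether anything comparable can be produced, at scale, for English sentences.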
There are so many research papers published on LLMs and transformers now, covering all kinds of applications, but they're still not quite there. It feels like it's sort of its own thing. LLMs are really good at morphing or fuzzy matching. An interesting example: I had a project where I needed to parse out addresses and dates in a document, but the address and date formats were not standardized across documents. Using LLMs was way easier than trying to regex or pattern-match across the text. But if you're trying to take a text document and break it down into some sort of structured output, the outcome using LLMs will be much more variable.

No. Word2Vec takes in words and converts them to high-dimensional vectors. The cosine distance between the vectors generally indicates similarity of meaning, and vector differences can indicate certain relationships; for example, [father]-[mother] is close in distance to [male]-[female]. There's nothing like an abstract syntax tree, nor anything programmatic in the traditional sense of programming, going on inside the math of an LLM. It's all just weights and wibbly-wobbly, timey-wimey stuff in there.

So with the current state of the technology we can answer all kinds of questions about a sentence, and an LLM can even invent questions about the different words in the sentence. We can translate between hundreds of languages, living and dead. We can even ask an LLM to produce a parse for a sentence. But we just cannot solve the problem of mass-translating sentences to their ASTs. It amazes me. I really hope someone will step up and say: sure, it's possible as a cheap byproduct of transformer technology, just do this and that.

ASTs are the result of parsing source code that fits a specified grammar. No such fixed grammar exists for English or any other human language. You're asking for the impossible.

Further pondering about this....
Every word has multiple meanings, and the choice of which is intended can only be resolved to a probability, as can the positions of the words within a sentence diagram and the structure of that diagram itself. The best you could get is not an AST: you could get a tree of possible sentence diagrams, each one with an overall probability of being the right one.

I think it's useful to draw a Chomsky-esque distinction here between understanding and usefulness. I think LLMs haven't advanced our understanding of how human language syntax/semantics work, but they've massively advanced our ability to work with it.

Not perfect, but using pretrained embeddings from an LLM will handle >80% of your NLP problems. So where is the parser? There is no parser. The text encoder learns the relevant NLP patterns unsupervised, by seeing terabytes of internet data. It's all a black box, but it works. I think they show that parsing is not needed; it's a limited idealization. Why is parsing a goal at all? It turns out that grammars and ASTs as representations of natural language are a dead end in NLP.
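The "tree of possible sentence diagrams" point is easy to make concrete with a toy chart parser. This is a minimal pure-Python sketch over a hand-written six-rule grammar (the grammar, lexicon, and sentence are all illustrative, not any real NLP system): the classic PP-attachment sentence already comes out with two complete parse trees, and a realistic grammar would produce far more.

```python
from itertools import product

# Toy CNF grammar for the classic PP-attachment ambiguity (illustrative only).
LEXICON = {
    "I": {"NP"}, "saw": {"V"}, "the": {"Det"},
    "man": {"N"}, "telescope": {"N"}, "with": {"P"},
}
RULES = {  # (left child, right child) -> parent
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
    ("VP", "PP"): "VP",
    ("NP", "PP"): "NP",
    ("Det", "N"): "NP",
    ("P", "NP"): "PP",
}

def parses(words):
    """CKY chart parse that keeps every tree, not just a yes/no answer."""
    n = len(words)
    # chart[i][j]: nonterminal -> list of trees spanning words[i:j]
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for cat in LEXICON[w]:
            chart[i][i + 1].setdefault(cat, []).append((cat, w))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lc, lts in chart[i][k].items():
                    for rc, rts in chart[k][j].items():
                        parent = RULES.get((lc, rc))
                        if parent:
                            for lt, rt in product(lts, rts):
                                chart[i][j].setdefault(parent, []).append(
                                    (parent, lt, rt))
    return chart[0][n].get("S", [])

trees = parses("I saw the man with the telescope".split())
print(len(trees))  # 2: the PP attaches either to the VP or to the NP
```

A PCFG would go one step further and attach a probability to each of the two trees, which is exactly the "overall probability of being the right one" described above; nothing in the machinery ever collapses them into a single AST.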