Remind me again why Large Language Models can't think
microprediction.medium.com
I have begun to lose my faith in reductionism as it applies to LLMs, and started to demand evidence.
OP, you've gotten so overcome by the fumes from your own butthole that you've lost touch with reality. Come up for air already.

Seriously: people thought a couple screens of ancient LISP was "thinking" when ELIZA was published. They were just as wrong then as your post is now.
OTOH, maybe this is an end-run around the Turing test: produce humans whose reactions are so predictable that it's easy for an LLM to reproduce one. (See also cryptobros, early innovators in the space.)
Note, though, that ELIZA was not written in LISP.