Sabine Hossenfelder: “I believe chatbots understand part of what they say” (youtube.com)
Humans might see that a ball dropped from a height will fall to the ground. They can have a mental model of such a thing. If enough humans write that on the internet, then an LLM might also say the ball will fall. The model the LLM has, however, is that those words go together. Humans will have both models in their minds: that the words go together and that the ball falls. I think people are confusing the two different types of models and assuming the LLM has both when it only has one. A human raised in an environment without language will still understand the model of a ball falling, but an LLM will not.
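To make the distinction concrete, here is a minimal sketch (the tiny corpus and both functions are illustrative, not from the video): a toy next-word model that only records which words follow which, next to a small physics model of the falling ball. The first can produce “the ball will fall” without containing anything like the second.

```python
from collections import Counter, defaultdict

# "Words go together" model: count which word follows which in a tiny corpus.
corpus = "the ball dropped from a height will fall to the ground".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    # Predict the most frequent continuation seen in the corpus.
    return bigrams[word].most_common(1)[0][0] if word in bigrams else None

print(next_word("will"))  # -> "fall" (pure word statistics)

# Physical model: actually compute where the ball is after t seconds.
def height_after(h0, t, g=9.81):
    return max(h0 - 0.5 * g * t * t, 0.0)

print(height_after(10.0, 1.0))  # -> 5.095 m (a model of the ball, not of words)
```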
Once you know roughly how ChatGPT works under the hood (see ChatGPT API), most of these claims about ChatGPT seem to emerge from us humans being easily tricked by our own anthropomorphism.
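For anyone who has not poked at it: the interface is just text in, text out. A minimal sketch, assuming the openai Python package (v1+) with an API key set in the environment; the model name is only an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The whole exchange is a list of text messages; the reply is the model's
# continuation of that text, generated one predicted token at a time.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "What happens to a ball dropped from a height?"}],
)
print(response.choices[0].message.content)
```

Everything the model “knows” about the ball comes back as more text; there is no separate physical simulation behind the reply.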
Carbon or silicon. What is the difference if it's a computer neural net or biological calcium ions and electrical potentials? Once we know how the brain works under the hood, we'll know that we have been tricking ourselves.