Sabine Hossenfelder: “I believe chatbots understand part of what they say”

youtube.com

4 points by eirikurh 3 years ago · 3 comments

richardjam73 3 years ago

Humans might see that a ball dropped from a height will fall to the ground, and they can form a mental model of that event. If enough humans write this on the internet, then an LLM might also say the ball will fall. But the model the LLM has is simply that those words go together. Humans hold both models in their mind: that the words go together, and that the ball falls. I think people are confusing these two different types of models and assuming the LLM has both when it only has one. A human raised in an environment without language will still understand the model of a ball falling, but an LLM will not.

hubadu 3 years ago

Once you know roughly how ChatGPT works under the hood (see the ChatGPT API), most of these claims about ChatGPT seem to stem from us humans being easily tricked by our own anthropomorphism.

  • FrustratedMonky 3 years ago

    Carbon or silicon. What is the difference whether it's a computer neural net or biological calcium ions and electrical potentials? Once we know how the brain works under the hood, we'll know that we have been tricking ourselves.
