What do humans have that LLMs don't

twitter.com

23 points by uka 2 years ago · 15 comments

teleforce 2 years ago

Stephen Wolfram, in his tutorial article on ChatGPT, concludes the following about the main differences between human and ChatGPT learning approaches [1]:

> When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability - even with respect to current computers, but definitely with respect to the brain.

[1] What Is ChatGPT Doing and Why Does It Work:

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
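
To make Wolfram's "no internal loops" point concrete, here is a toy sketch (mine, not from the article; every name in it is an illustrative stand-in, not a real model): a transformer-style generator runs the same fixed stack of layers for each emitted token, so the only loop that can grow is the one over output tokens.

    # Toy model of fixed computation per token. N_LAYERS, layer(),
    # forward() and the token arithmetic are all made up for illustration.
    N_LAYERS = 12  # architectural depth, fixed at training time

    def layer(state):
        # Stand-in for one transformer layer (really attention + MLP).
        return state + 1

    def forward(context):
        # One fixed-depth pass: cost is O(N_LAYERS) no matter how "hard"
        # the next token is. There is no data-dependent inner loop.
        state = len(context)
        for _ in range(N_LAYERS):
            state = layer(state)
        return state % 50  # stand-in for sampling a token id

    def generate(prompt_tokens, max_new_tokens):
        context = list(prompt_tokens)
        for _ in range(max_new_tokens):  # the only loop is over output tokens
            context.append(forward(context))
        return context

    print(generate([3, 1, 4], max_new_tokens=5))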

mitthrowaway2 2 years ago

> LLMs produce their answers with a fixed amount of computation per token

I'm not that confident that humans don't do this. Neurons are slow enough that we can't really have a very large number of sequential steps behind a given thought. Longer, more complex chains of reasoning are difficult (for me at least) without at least thinking out loud to cache my thoughts in auditory memory, or having a piece of paper to store and review my reasoning steps. I'm not sure this is very different from an LLM prompted to reason step by step.
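
As a purely illustrative sketch of that analogy (mine, with a toy "model" standing in for a real LLM): each step costs the same fixed amount, but writing intermediate results into the context acts as external working memory, so total computation grows with the number of steps, much as paper does for us.

    # Chain-of-thought as an external scratchpad: per-step cost is fixed,
    # but more steps means more total compute. model_step() is a stand-in.

    def model_step(scratchpad):
        # Fixed-cost "pass" that can re-read everything written so far.
        return sum(scratchpad)

    def solve_step_by_step(problem, n_steps):
        scratchpad = list(problem)   # context doubles as working memory
        for _ in range(n_steps):     # each extra step adds fixed compute
            scratchpad.append(model_step(scratchpad))
        return scratchpad[-1]

    print(solve_step_by_step([1, 2, 3], n_steps=4))  # -> 48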

The main difference I can think of is that humans can learn, while LLMs have fixed weights after training. For example, once I've thought carefully and convinced myself through step-by-step reasoning, I'll remember that conclusion and fit it into my knowledge framework, potentially re-evaluating other beliefs. That's something today's LLMs don't do, but mainly for practical reasons, rather than theoretical ones.

I believe the extent of world modelling done by LLMs remains an open question.

  • aeternum 2 years ago

    Yes, this is key. This idea that humans also require sequences to think was popularized by Jeff Hawkins even before all the LLM hype.

    He showed that equivalents of place cells (cells normally used to track one's physical location) fire sequentially when humans perform tasks like listening to music or imagining feeling along the rim of a coffee cup.

    The "think step by step" trick might just be scratching the surface of the various mechanisms we can use to give LLMs this kind of internal voice.

TillE 2 years ago

The "world model" is basically the old school idea of AI, which has been mostly abandoned because you can get incredibly good results from just ingesting gobs of text. But I agree that it's a necessity for AGI; you need to be able to model concepts beyond just words or pixels.

PH95VuimJjqBqy 2 years ago

The answer is that humans have genitalia.

And while that may seem trite, it's really not. You can't separate human thinking from the underlying hardware.

Until LLMs are able to experience real emotion, and emotion here really means a stick by which to lead the LLM, they will always be different from humans.

nittanymount 2 years ago

LeCun's voice in this post sounds like he knows the answers for sure, haha...

  • lucubratory 2 years ago

    That's his default tone. Occasionally he has something interesting to say, but the level of arrogance coming from the leader of the second-best AI group at Meta is grating.

  • resource0x 2 years ago

    What makes you so giggly? Fairly reasonable post IMO.

lagrange77 2 years ago

It's more of a scaling issue: humans do continuous* online learning, while LLMs get retrained once in a while.

* I'm no expert, 'continuous' might be oversimplified.
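
A toy contrast of the two regimes (my sketch, not the commenter's; the drifting 1-D target and single-weight "model" are made up): an online learner updates after every observation, while a periodically retrained one keeps frozen weights between batch refits and so goes stale on a drifting target.

    # Online updates vs. occasional batch retraining on a drifting target.
    # All numbers (lr, retrain_every, the drift) are arbitrary.

    def mean_error(stream, online, retrain_every=100, lr=0.2):
        w, buffer, total = 0.0, [], 0.0
        for i, t in enumerate(stream, 1):
            total += abs(t - w)        # error of prediction made before seeing t
            buffer.append(t)
            if online:
                w += lr * (t - w)      # continuous update, every step
            elif i % retrain_every == 0:
                w = sum(buffer) / len(buffer)  # batch refit; frozen otherwise
        return total / len(stream)

    drift = [i / 100 for i in range(300)]  # target drifts from 0.0 to 2.99
    print("online   error:", round(mean_error(drift, online=True), 3))
    print("periodic error:", round(mean_error(drift, online=False), 3))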

cc101 2 years ago

subjective experience

floppiplopp 2 years ago

The difference is, LLMs are way better than most humans at impressing gullible morons, even highly intelligent gullible morons. In truth it's only an incomprehensible statistical model that does what it's told to do, without agency, motivation or ideas. Smart people have built something they themselves cannot fully understand, and the results remind me a lot of what Weizenbaum said about ELIZA: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
