Yann LeCun does not possess an internal monologue
(twitter.com)

This just seems like some Wittgenstein mistake:
>"thinking and reasoning require language"
We are just going to argue about what thinking and reasoning are, because the sentence is meaningless.
Clearly animals think and reason and don't have language.
Or, clearly, thinking and reasoning are a process of symbol manipulation contingent on language itself, because nothing else provides the means to ground the symbols or the rules that encode acceptable manipulations.
I am bilingual, and I have noticed that in my head, the meaning of a word (the symbol) and the sound of a word (the language) are two different things: language is dependent on meaning, but meaning is not dependent on language.
I can think in my native tongue or in English, but I can also think in pure abstractions - e.g. when designing the architecture of a software tool or doing math. What I find frustrating is to have an abstract idea in my head and be limited by language when expressing it, so much so that sometimes it is easier to paint a picture of the idea than to express it in written form. Language is a projection of the abstract mind, with massive loss of contextual information. But you don't need language to have an abstract mind.
If you've lived with animals, of course they have language - it's just what we would call (from a human-centric perspective) "non-verbal". My two dogs absolutely understand and can reason from certain words they learn; one of them I've raised as a puppy from day 0.
I could easily argue they don't have language, only associative learning.
Otherwise, why would you be careful not to say the word "walk" around your dog when you don't intend to go walking?
If you pick up a leash and say, "we are not going on a walk," what will your dog do?
Call me crazy, but I think that an internal monologue is a "you" instance running beside your main instance.
The "you" is because "you" are a lot of smaller stuff instances of things running along in your brain.
Probably it's common to develop this multiple "yous" and kept them happily running (may be with therapy some get to "shutdown" or "suspend" some "evil" "you" they have running and messing everything).
But some people don't need several instances of themselves, and they just run one "you" thing. Hence, no internal monologue.
I wonder how many "leaders" in AI don't have one, and therefore don't understand reasoning the same way as others who do have it (FWIW I have aphantasia, although that doesn't mean I can't imagine - I just can't see it).
It's hard to tell if one does or doesn't have an inner monologue/dialog.
It's the standard "is the color I call 'yellow' the same color that you call 'yellow'?" question.
We have no frame of reference for what an inner monologue is really like for other people.
How many gradations are there? Is it 0 = none, 1000 = max, where max means I hear my own voice in my head as clearly and cogently as you hear my voice when I'm explaining something to you?
Or is it 0 = none, 3 = max, with almost all humans at a 0, 1, 2, or 3?
Until we get Neuralink's descendant implanted and can share each other's consciousness, we won't know for sure. And probably not even then.
I don't understand your comment. I literally have no words going through my head most of the time, unless I'm composing text or speech, as I am doing right now. The rest of the time, as I go about my business and solve problems throughout the day: zero words. Nada. Zilch. Furthermore, having to think in words is a strain on my mental energy that I don't feel when solving problems non-verbally. I can be doing very intense engineering work and find it less mentally taxing than forcing myself to speak for a few minutes or compose an email. It's a qualitative difference, not a spectrum.
For me, I can drive, for example, and think about complex topics with words running through my mind at the same time, without it distracting me from driving.
The converse is also true. I’m sure people with internal monologues (I’m not one) are much more optimistic about LLMs.
Oh, I usually have more than one internal monologue going on, but I also think LLMs are just Markov chains with more infrastructure, and way over-hyped (because I can reason this out).
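To make the "Markov chain" comparison concrete: here's a minimal sketch of a first-order Markov text generator (the function names and toy corpus are purely illustrative, not anything from the thread). An LLM is, very loosely, this idea scaled up - a learned distribution over the next token, conditioned on far more than one preceding word.

    import random
    from collections import defaultdict

    # Build a first-order Markov chain over tokens: each word maps to the
    # multiset of words observed to follow it in the training text.
    def train(corpus):
        chain = defaultdict(list)
        tokens = corpus.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            chain[cur].append(nxt)
        return chain

    # Generate by repeatedly sampling a successor of the current token only;
    # the single-token "state" is what makes this a first-order Markov chain.
    def generate(chain, start, length=10):
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    chain = train("the dog chased the cat and the cat chased the dog")
    print(generate(chain, "the"))  # output varies per run

Whether "a vastly larger conditioning context" is a difference of degree or of kind is exactly what the hype argument turns on.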
> I’m sure people with internal monologues (I’m not one) are much more optimistic about LLMs.
Can I ask why you're "sure" about this? I have an internal monologue and I'm quite skeptical of the LLM hype.
I'm not. I was making a rhetorical point regarding the fallacious thinking of the post I was replying to.
I feel like this might explain (and validate?) his conservative stance towards generative LLMs leading to AGI/ASI. What do you think?
I wonder if that characteristic correlates with greater math ability.