Sequence to sequence learning with neural networks: what a decade (youtube.com)

An important point from this talk:
The better the models are at reasoning, the less predictable they become.
This is analogous to top chess engines, which make surprising moves that even human super grandmasters can't always understand.
Thus, the future will be unpredictable (if superintelligence takes control).
Link to the full talk & the time he talks about this: https://youtu.be/1yvBqasHLZs?si=3M6eZCQtXnW2tSUd&t=866
I think there's an analogous challenge in the gradual but large-scale adoption of self-driving cars. Even in the absence of reasoning, they have different constraints than human drivers. This makes them react to situations in ways that surprise the human drivers around them. It's not hard to imagine that becoming a new source of accidents.
It won't just be a matter of learning how they react differently, because behavior will vary from one self-driving platform to another, sometimes even from one version of a platform to another. And is self-driving engaged, or is the human in control at the moment? Or is the system in the process of handing control back to the human, making its behavior different from what it was a moment ago?
> Thus, the future will be unpredictable (if superintelligence takes control)
As opposed to the predictable future we've had for the past few decades?
I think it's a matter of magnitude and semantics. It was quite predictable that the Internet would take off, become ubiquitous, and highly influential.
If superintelligence emerges soon, we may not even know which technologies will emerge and how many will be unleashed in the next 2 decades.
ADDED: Examples of some concrete predictions:
(1980) https://en.wikipedia.org/wiki/The_Third_Wave_(Toffler_book)
(1995) https://en.wikipedia.org/wiki/The_Road_Ahead_(Gates_book)
Obviously, specifics differ from predictions and plenty of people got it wrong. Many good forecasters got the broad strokes right though.
Which forecasters could even predict most of the technologies that would be invented after ASI emerges?
Wouldn't basic economics dictate that, not ASI? A state can have knowledge of how to do things, how things work, hypothetical implementations, etc. But if it lacks the resources, skill, or desire to actually confirm and implement those hypothetical technologies, wouldn't they just stagnate? There might be a bottleneck there.
Resources will surely impact near-term possibilities.
The space of possibilities is probably large enough that, given time, ASI can get around most limitations if it has the desire/goal to implement them.
Well, that's wishful thinking at best...
There are a lot of things that we know how to fix but cannot. It's worth thinking through why that is, and whether it's any different for a 'superintelligence.'
> There are a lot of things that we know how to fix but cannot.
This assumes that humanity would remain in control after ASI emerges. We might not.
Plenty of people predicted that the Internet was just a fad.
"What is behind a dynamic super-intelligent tiling manager implemented in auto-regressive LSTM pre-training models?"
Suggestive pathways, he replied.
Hindsight.
To be fair, Ilya seems to have called the previous one pretty well!
Is it just that smarter people look less predictable from the point of view of dumber people? (i.e., there may be a perfectly good reason for a smarter person to behave a certain way; it just isn't immediately apparent to the dumber one)
I don’t think that’s broadly true with chess. Did you mean Go?
Ilya says in the talk that chess AI is unpredictable even to grandmasters.
That's kind of a trivial observation -- if chess AI were predictable to grand masters, then those GMs could play like the AI and thus wouldn't lose to it.
It's not predictable of course, but the parent comment said:
> surprising moves that even human super grandmasters can't always understand
Of course they don't expect the move, but they do understand it. There's no move an engine makes where grandmasters go "I can't understand this"; it may take them a while, but they will always ultimately have a good idea of why it's the best move.
4-D chess?
If you don't want to dip your toe in the festering pile of crap that is X:
I appreciate the link, but the tone of this comment is very un-HN. I don’t even see people talk that way about 4chan, which one can argue would deserve it more.
these are very un-HN times
The content on 4chan is almost indistinguishable from X.
Username [does|does not] check out
Already being discussed here: https://news.ycombinator.com/item?id=42413677
Any recommendations for thinkers writing good analysis on the implications of superintelligence for society? Especially interested in positive takes that are well thought through. Are there any?
Ideally voices that don’t have a vested interest.
For example, give a superintelligence some money, tell it to start a company. Surely it’s going to quickly understand it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher has to “manipulate” the kids sometimes. Personally I can’t see how we’re not going to find ourselves in a power struggle with these things.
Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis. Just lots of very superficial “it will solve hard problems for us! Cure disease!”
It certainly could be that I haven’t looked hard enough. That’s why I’m asking.
Interesting that he left out the concept of safety (being the founder of a company called Safe Superintelligence). Would have been curious to hear his thoughts on that.
Is there a transcript? The slides are very clear and useful already but I guess there is more.
Link to full talk.
Nothing new is really said? Yet, things are going to happen!