Sequence to sequence learning with neural networks: what a decade

youtube.com

89 points by dspoka a year ago · 33 comments

nopinsight a year ago

An important point from this talk:

The better the models are at reasoning, the less predictable they become.

This is analogous to top chess engines which make surprising moves that even human super grandmasters can't always understand.

Thus, the future will be unpredictable (if superintelligence takes control).

Link to the full talk & the time he talks about this: https://youtu.be/1yvBqasHLZs?si=3M6eZCQtXnW2tSUd&t=866

  • rented_mule a year ago

    I think there is an analogous challenge to the gradual, but large scale adoption of self-driving cars. Even in the absence of reasoning, they have different constraints than human drivers. This makes them react to situations in ways that surprise human drivers around them. It's not hard to imagine that being a new source of accidents.

    It won't just be a matter of learning how they react differently, because it will be different from one self-driving platform to another. Sometimes even from one version of a platform to another. And is self-driving engaged, or is the human in control at the moment? Or is self-driving in the process of abdicating to the human, making behavior different from what it was a moment ago?

  • AceJohnny2 a year ago

    > Thus, the future will be unpredictable (if we let superintelligence take control)

    As opposed to the predictable future we've had for the past few decades?

    • nopinsight a year ago

      I think it's a matter of magnitude and semantics. It was quite predictable that the Internet would take off, become ubiquitous, and highly influential.

      If superintelligence emerges soon, we may not even know which technologies will emerge and how many will be unleashed in the next 2 decades.

      ADDED: Examples of some concrete predictions:

      (1980) https://en.wikipedia.org/wiki/The_Third_Wave_(Toffler_book)

      (1995) https://en.wikipedia.org/wiki/The_Road_Ahead_(Gates_book)

      Obviously, specifics differ from predictions and plenty of people got it wrong. Many good forecasters got the broad strokes right though.

      Which forecasters can even predict most technologies that would be invented after ASI emerges?

      • someothherguyy a year ago

        Wouldn't base economics dictate that, not ASI? A state can have knowledge of how to do things, how things work, hypothetical implementations, etc. However, if a state lacks the resources, skill, or desire to actually confirm and implement those hypothetical technologies, would they just stagnate? There might be a bottleneck there?

        • nopinsight a year ago

          Resources will surely impact near-term possibilities.

          The space of possibilities is probably large enough that, given time, ASI can get around most limitations if it has the desire/goal to implement them.

          • sdenton4 a year ago

            Well, that's wishful thinking at best...

            There's a lot of things that we know how to fix, but cannot. It's worth thinking through why that is, and whether it would be any different for a 'superintelligence.'

            • nopinsight a year ago

              > There's a lot of things that we know how to fix, but cannot.

              This assumes that humanity would remain in control after ASI emerges. We might not.

      • adrianN a year ago

        Plenty of people predicted that the Internet was just a fad.

      • unit149 a year ago

        "What is behind a dynamic super-intelligent tiling manager implemented in auto-regressive LSTM pre-training models?"

        Suggestive pathways, he replied.

      • jb1991 a year ago

        Hindsight.

    • riwsky a year ago

      To be fair, Ilya seems to have called the previous one pretty well!

  • d0mine a year ago

    Is it just: smarter people look less predictable from the point of view of dumber people? (i.e., there may be a perfectly good reason for a smarter person to behave a certain way, it's just not immediately apparent to the dumber one)

  • nwienert a year ago

    I don’t think that’s broadly true with chess. Did you mean go?

    • mike_hearn a year ago

      Ilya says in the talk that chess AI is unpredictable even to grandmasters.

      • codeflo a year ago

        That's kind of a trivial observation -- if chess AI were predictable to grand masters, then those GMs could play like the AI and thus wouldn't lose to it.

      • nwienert a year ago

        It's not predictable of course, but the parent comment said:

        > surprising, even human super grandmasters can't always understand

        Of course they don't expect the move, but they always come to understand it. There's no move an engine makes where grandmasters go "I can't understand this"; it may take them a while, but they will ultimately have a good idea of why it's the best move.

  • foogazi a year ago

    4-D chess ?

Caitlynmeeks a year ago

if you don't want to dip your toe in the festering pile of crap that is X:

https://www.youtube.com/watch?v=1yvBqasHLZs

  • TheSisb2 a year ago

    I appreciate the link, but the tone of this comment is very un-HN. I don’t even see people talk that way about 4chan, which one can argue would deserve it more.

defenestrated a year ago

Cleaner link: https://xcancel.com/vincentweisser/status/186771902044488911...

skissane a year ago

Already being discussed here: https://news.ycombinator.com/item?id=42413677

tablatom a year ago

Any recommendations for thinkers writing good analysis on the implications of superintelligence for society? Especially interested in positive takes that are well thought through. Are there any?

Ideally voices that don’t have a vested interest.

For example, give a superintelligence some money, tell it to start a company. Surely it’s going to quickly understand it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher has to “manipulate” the kids sometimes. Personally I can’t see how we’re not going to find ourselves in a power struggle with these things.

Does that make me an AI doomer party pooper? So far I haven’t found a coherent optimistic analysis. Just lots of very superficial “it will solve hard problems for us! Cure disease!”

It certainly could be that I haven’t looked hard enough. That’s why I’m asking.

melvinmelih a year ago

Interesting that he left out the concept of safety (being the founder of a company called Safe Superintelligence). Would have been curious to hear his thoughts on that.

thih9 a year ago

Is there a transcript? The slides are very clear and useful already but I guess there is more.

ChumpGPT a year ago

Link to the full talk:

https://x.com/vincentweisser/status/1867719020444889118
