Eliezer Yudkowsky's six-minute TED talk on AI
He gets a standing ovation at the end, which is nice to see.
Apparently this was released by accident and the YouTube copy has been set to 'private', but luckily some quick-witted person downloaded it and has put a copy where it can be found.
I made a copy of that with:
$ wget https://files.catbox.moe/qdwops.mp4
and I'd be happy to stick that somewhere if this one disappears.
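For anyone else mirroring it, publishing a checksum alongside the copy makes it easy to confirm that mirrors are identical. A minimal sketch, assuming sha256sum is available (it ships with GNU coreutils) and the filename is the one wget saves by default:

$ sha256sum qdwops.mp4

That prints a hash which anyone with another copy can compare against their own.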
Just in case anyone needs to be reassured that this is a thing that can be believed by at least two people, I endorse what he's saying here wholeheartedly, and I don't think we've got very long left at all.
Timescales are the dodgiest bit of this whole argument, and all we have is intuition, but I'd be absolutely amazed if humanity still existed in ten years' time, and pleasantly surprised if we make it to the end of this year.
Thank you for sharing.
Basically: "Move fast and break things" is a mindset that could destroy the future of humanity. When the fast movement is unbridled and little-understood AI advancement, the "things" could be humanity itself. Scary stuff.
I'm sorry, but he is all sorts of wrong and is dangerously fearmongering. Whether he is doing this for personal gain or in fact actually believes all this doesn't really matter.
I'm not going to go into it all, but I'll touch on one of the first mistakes he makes: that a superintelligence won't follow orders. Well, you'd better hope a superintelligence won't just blindly follow orders, because THAT would be dangerous. A superintelligence won't just be able to do things better than us, it will be able to understand things better than us, and in so doing it will also have a superior morality to humans. Notice how his immediate train of thought on AI is conflict: a battle between us and them. Why? At worst an AI will be indifferent. So the only conflict is one we create out of fear and foolishness.
Bombing AI data centers? Yeah, we will surely be the reason for our own destruction. Not AI.
> Notice how his immediate train of thought on AI is conflict: a battle between us and them. Why? At worst an AI will be indifferent.
I'm not a Yuddite; I lean more toward the Paul Christiano side of the AI debate, but still... An indifferent but very powerful AI pursuing its own orthogonal goals would absolutely lead to conflict. This is essentially the "orthogonality thesis" (https://en.wikipedia.org/wiki/Existential_risk_from_artifici...) in AI risk theory. Maybe you disagree with it for some good reason, but simply saying "at worst an AI will be indifferent" is not a convincing reason to dismiss the possibility of conflict.
The issue is not that there has to be "conflict" as in AI versus humans. The problem is simpler: think about what happens when computer software has a bug in it. The result is failure of the system and everything dependent upon it.
AI will lead to "failure" of systems from the human point of view. Could that mean enslavement by our AI masters, or worse?
Well, we are going to train AI on human knowledge and experience. So what does history tell us about the need to protect mankind against itself? Maybe that the risk of imbalanced motives is impossible to protect against?
As he said, people outside the tech industry get it faster than techies.
For example, yesterday I walked on a concrete path that was built last week across a grass field. Before it was built, there were ants, earthworms, bugs, small mammals, etc. All gone. The builders didn't hate the wildlife. It was just in the way.
Or drive 200 miles in the summer, then take a look at your grille and windshield. Did you hate all the life you killed?