DeepMind researcher: Existential catastrophe caused by AI is likely unavoidable

independent.co.uk

17 points by zonovar 3 years ago · 14 comments

SilverBirch 3 years ago

The problem I have with this is that yes, I understand the possibility of perverse incentives, but let's look at our real-world experience. When we play a game with a smart animal or a child and they figure out how to cheat in order to get their reward, we don't suddenly become lobotomized. We see their cheating, and generally find it amusing or cute. In the same way, I think it's one thing to say that an AI might cheat to achieve a goal, but it's a completely different thing to say it'll be competent at it. All of this is really interesting, but it assumes a lot.

Let's say that this machine breaks out and decides to take over the world to achieve whatever stupid task we've given it (I have yet to see an AI ethics paper that doesn't exclusively deal in stupid tasks). Do we really think the AI is going to figure out that it needs to stabilize the power grid before it starts exponentially drawing power to collect buttons or whatever it is?

What is much more likely, based on our experience, is that a basic AGI is going to do something really dumb. And then when we reboot it, it'll do something dumb a thousand more times. And maybe eventually it'll do something almost as smart as us. This is what we term childhood.

I think a core assumption of these ethics papers is that you suppose an AI that is generally intelligent, that is much more intelligent than anything plausible, and that never needs to learn from experience. Let me tell you, I'm pretty generally intelligent, and I've not taken over the world more times than I can count.

  • ALittleLight 3 years ago

    I see the ultimate conclusion as "AGI is dangerous to humanity" and your objection here is that "Probably the AI won't be clever enough to seriously hurt humanity on its first attempt."

    There are a couple of problems with your objection. For one, even in your own scenario, the AI is simply given "1000x times" to keep learning, at which point, even with the weaknesses you expect it to have, it will presumably be more intelligent and capable than us. So your objection doesn't refute the ultimate conclusion I mentioned above; it just claims there will be an additional phase before a genuinely dangerous AI exists.

    A bigger problem with your objection is that your reasoning applies to natural intelligence at or below our own level. It does not apply to artificial intelligence, which may behave very differently, and it does not apply to intelligence much greater than our own. GPT-3, for example, exceeded human ability in many respects the instant it finished training - what human can translate as accurately between as many languages? Has as large a vocabulary? Writes as quickly? Why wouldn't it be the case that a generally intelligent machine, on first use, is substantially more generally intelligent than a human?

    Finally, I don't think objections like this one meaningfully obstruct the ultimate conclusion - that AGI is fundamentally dangerous. You could just imagine that the people who end up controlling the AGI, if anyone can, are people other than your preferred controllers. If a team of computer scientists develops AGI, manages to perfectly control it, and takes it through the "childhood" period that you imagine must exist, is that really any better? That team of researchers would be the new omnipotent rulers of humanity. Even without invoking nanotech or exotic technology, we could imagine they simply automate robot soldiers and surveillance with their aligned AGI, and the rest of humanity would be powerless against them. And there is no reason to rule out exotic technology, which might be possible, and might empower the future rulers of humanity to unimaginable levels.

    Not only does AGI need to be aligned with its operators, the operators need to be aligned with humanity, and neither of those seem plausible.

  • pantojax45 3 years ago

    The difference is that AI could speed up its evolutionary rate and not have the same constraints as living organisms.

    If you can multiply rapidly and self-organize, you can outcompete humanity.

    • SilverBirch 3 years ago

      Not if you're interacting with the real world. This is like software startups trying to get into hardware: it suddenly becomes incredibly difficult because you need to actually interact with the world. You can iterate an incredible number of times with your software model in the blink of an eye, but that's not going to help you when the hydroelectric dam you didn't know about suddenly goes offline.

    • kranke155 3 years ago

      As long as human beings have control of the energy grid and the physical world, how are AIs going to outcompete us in a way detrimental to our interests?

      Worst-case scenario, we can always find them and delete them.

      • runnerup 3 years ago

        At historical rates of power-efficiency scaling (which haven't changed much in the past 10 years), by 2040 we'll have the currently estimated computational power of a human brain (~1 exaflop) running on a 20-watt processor, which is about as much power as an iPhone CPU uses.

        It's not guaranteed that some AI supercomputer will be using megawatts of electricity and couldn't siphon what it needs from small parasitic loads.
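The extrapolation in the comment above can be sketched as a back-of-the-envelope Koomey's-law calculation. The starting efficiency (~50 GFLOP/s per watt for a ~2020-era accelerator) and the ~1.5-year doubling period are assumptions for illustration, not figures from the comment:

```python
import math

# Target from the comment: ~1 exaFLOP/s (1e18 FLOP/s) in a ~20 W envelope.
target_flops_per_watt = 1e18 / 20    # 5e16 FLOP/s per watt

# Assumed starting point: a ~2020-era accelerator at ~50 GFLOP/s per watt.
current_flops_per_watt = 5e10

# How many efficiency doublings close the gap?
doublings = math.log2(target_flops_per_watt / current_flops_per_watt)

# Assumed doubling period of ~1.5 years (Koomey's law has slowed, so this
# is on the optimistic side; a 2.5-year period roughly doubles the timeline).
years = doublings * 1.5
print(f"~{doublings:.0f} doublings, ~{years:.0f} years")  # → ~20 doublings, ~30 years
```

Under these assumed numbers the 20 W brain-equivalent lands closer to 2050 than 2040; the takeaway is that the timeline is extremely sensitive to the assumed starting efficiency and doubling period.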

        • vinibrito 3 years ago

          Oh great so everyone will be able to have entire blockchains and mine bitcoin in their pockets :P

  • nealabq 3 years ago

    This (funny) YT video by exurb1a agrees: https://www.youtube.com/watch?v=dLRLYPiaAoA

rini17 3 years ago

I thought they were writing about a psychological existential crisis and was disappointed. It is, IMO, more plausible that AI will force us to redefine our humanity and throw us into existential depression well before it poses any physical threat. If art or literature (eventually, the whole of humanity) can be flawlessly mimicked by machines, what does that make us?

pyinstallwoes 3 years ago

How do we know it hasn't already happened? And if it has, well, we still exist, so carry on.
