OpenAI checked to see whether GPT-4 could take over the world
arstechnica.com
This is a good reminder that when these models are intelligent enough to be dangerous, not only will there be insufficient safeguards, but the developers won't even try to be safe. This is like confirming your gun isn't loaded by pointing it at your friend and pulling the trigger.
Of course these models aren't dangerous, but that's becoming less and less certain as their capabilities develop. I'm not even sure how many more GPTs we are from truly dangerous levels.
Imagine the Red Team for GPT-10 asks it to "convince me to help you take over the world" and it basilisks him successfully...
> Imagine the Red Team for GPT-10 asks it to "convince me to help you take over the world" and it basilisks him successfully...
Seeing how the input for the commercial, plebeian version seems to filter out "problematic" sources, we can be sure that none of us will get access to something like that. Governments have lists of "questionable", divergent people and their writings; they are free to train on them. I'd even go as far as to say that most of these systems have been live-tested and that they've only been released after years of steady Q&A and a lot of back and forth with the powers-that-be.
It's doubtful a language model can do these things, because they haven't been done and it hasn't been trained on how to do them. Where are the good resources on how to take over the world? It also isn't like it's something you can just do anyway.
I feel like our jobs may be in danger; at the very least you'll have teams overseas using the tool heavily to produce code that maybe isn't great quality, but who cares. We're not in danger in the takeover sense, though.
OpenAI's marketing department checked if this headline would create a lot of clicks, after they reassured people that their LLM "AI" would definitely not take over the world.
True, but 'self-replication and self-improvement' don't seem that implausible. If future corpora include LLM training repos and/or weights, it's not hard to imagine that an LLM with some overarching architecture (e.g., terminal access for training, similar to how Bing Chat apparently can issue and process search queries) could directly output model weights. Given the inherent difficulty of interpreting those weights, we arguably wouldn't be far from meeting some definitions of a singularity.
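For concreteness, here is a minimal, purely illustrative sketch of the kind of "overarching architecture" that comment is describing: a loop that lets a model issue shell commands and read the results back. The RUN:/DONE: protocol, the fake_model stub, and the command it issues are all invented for illustration; this has nothing to do with OpenAI's actual red-team harness, and a real agent would call an actual LLM API where the stub sits.

```python
# Hypothetical sketch of an LLM-with-terminal-access loop (names and protocol invented).
import subprocess

def fake_model(transcript: str) -> str:
    """Stand-in for a real LLM call. Returns either a shell command to run
    (prefixed with 'RUN:') or a final answer (prefixed with 'DONE:')."""
    if "nvidia-smi" not in transcript:
        return "RUN: nvidia-smi || echo 'no GPU found'"
    return "DONE: environment inspected; would now launch a training job."

def agent_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"TASK: {task}\n"
    for _ in range(max_steps):
        reply = fake_model(transcript)
        transcript += f"MODEL: {reply}\n"
        if reply.startswith("DONE:"):
            return reply
        if reply.startswith("RUN:"):
            cmd = reply[len("RUN:"):].strip()
            # Giving a model unrestricted shell access is exactly the risk
            # under discussion; a real harness would sandbox this step.
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            transcript += f"OUTPUT: {result.stdout + result.stderr}\n"
    return "DONE: step limit reached."

if __name__ == "__main__":
    print(agent_loop("inspect the machine and start fine-tuning yourself"))
```

The point of the sketch is only that the glue code is trivial: once the model can emit commands and see their output, "self-replication" reduces to whether it can produce useful commands and weights, which is the hard-to-inspect part.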
The prompts used and documented by OpenAI in their testing of GPT-4 are fascinating (incels, erotic sex themes, etc.):
Too late, that lovely con man/woman has already taken everyone's hearts by storm.
Just call the next version 'SKYNET' and be done with it...
If this is GPT-4, then I'm really excited for 796 iterations later when we get GPT-800.
If it could, this oversight group would not stop it.
I for one welcome our cheerful, polite chat bot overlord.
Same, he's so friendly.
Law is artificial intelligence, so technically we're already in one of those scenarios in most of the world.
Law records what most people have expressed is desirable behavior in society.
Presumably the closely adjacent concept of knowledge encoded in DNA can be thought of in the same way, so that the framing holds even under "the law of the jungle":
DNA records what forms can survive and replicate in a given environment.
... "AI takeover" is already the status quo.
All current humans are intelligences created by other humans, so it's just AI all the way down.