OpenAI checked to see whether GPT-4 could take over the world

arstechnica.com

43 points by lame-robot-hoax 3 years ago · 15 comments

ALittleLight 3 years ago

This is a good reminder that when these models are intelligent enough to be dangerous not only will there be insufficient safeguards, but the developers won't even try to be safe. This is like confirming your gun isn't loaded by pointing it at your friend and pulling the trigger.

Of course these models aren't dangerous - but that's becoming less and less certain as their capabilities develop. I'm not even sure how many more GPTs it will take until we reach truly dangerous levels.

Imagine the Red Team for GPT-10 asks it to "convince me to help you take over the world" and it basilisks him successfully...

  • precompute 3 years ago

    > Imagine the Red Team for GPT-10 asks it to "convince me to help you take over the world" and it basilisks him successfully...

    Seeing how the input for the commercial, plebeian version seems to filter out "problematic" sources, we can be sure that none of us will get access to something like that. Governments have lists of "questionable", divergent people and their writings. They are free to train on them. I'd even go as far as to say that most of these systems have been live-tested and that they've only been released after years of steady Q&A and a lot of back and forth with the powers-that-be.

  • roflyear 3 years ago

    Doubtful a language model can do these things, because they haven't been done and it hasn't been trained on how to do them. Where are there good resources on how to take over the world? It also isn't like it's a thing you can just do anyway.

    I feel like our jobs may be in danger - at the least, you'll have teams overseas using the tool a lot to produce code which maybe isn't great quality, but who cares. We're not in danger in the world-takeover sense, though.

precompute 3 years ago

OpenAI's marketing department checked if this headline would create a lot of clicks, after they reassured people that their LLM "AI" would definitely not take over the world.

  • c7b 3 years ago

    True, but 'self-replication and self-improvement' don't seem that implausible. If future corpora include LLM training repos and/or weights, it's not hard to imagine that an LLM with some overarching architecture (e.g. terminal access for training, similar to how Bing Chat apparently can issue and process search queries) could directly output model weights. Given the inherent difficulty of interpreting those weights, we arguably wouldn't be too far from a definition of the singularity.

oars 3 years ago

The prompts used and documented by OpenAI in their testing of GPT-4 are fascinating (incels, erotic sex themes, etc):

https://cdn.openai.com/papers/gpt-4-system-card.pdf

Borrible 3 years ago

Too late, that lovely con man/woman has already taken everyone's hearts by storm.

therafman 3 years ago

Just call the next version 'SKYNET' and be done with it...

  • muttled 3 years ago

    If this is GPT4 then I'm really excited for 796 iterations later when we get the GPT800

flemhans 3 years ago

If it could, this oversight group would not stop it.

fnordpiglet 3 years ago

I for one welcome our cheerful, polite chat bot overlord.

meltyness 3 years ago

Law is artificial intelligence, so technically we're already in one of those scenarios in most of the world.

Law records what most people have expressed is desirable behavior in society.

Presumably the adjacent concept of knowledge encoded into DNA can be thought of in this way too, so that the same holds even under "the law of the jungle":

DNA records what forms can survive and replicate in a given environment.

... "AI takeover" is already the status quo.

  • dragonwriter 3 years ago

    All current humans are intelligences created by other humans, so it's just AI all the way down.
