AGI is an unscientific myth

tandfonline.com

4 points by mustaphah 2 months ago · 2 comments

andreadev 2 months ago

The strongest point here is one that rarely gets enough attention: the leap from "very smart" to "all-powerful" is completely unjustified. Even if you grant every assumption about alignment failures and emergent goals, you still need to explain how a neural network acquires physical resources, energy, supply chains, and weapons. Nobody ever does. It's just assumed that intelligence = omnipotence, which is basically theology.

Where I think the paper goes wrong is in treating the whole alignment problem as anthropomorphism. You don't need a machine to be "alive" or "want" things for misaligned optimization to be dangerous. A system relentlessly optimizing for a bad proxy metric can do real damage without any consciousness whatsoever — we already see this with recommendation algorithms. The paper waves this away by saying we caught the lab examples, but that's the whole point: we caught the easy ones.

The governance framing at the end is correct though and I wish it got more airtime. Regulating "AI" as one thing makes about as much sense as regulating "software" as one thing.

k310 2 months ago

Computers don't have to be all-powerful. They only have to strongly influence enough humans past some tipping point.

When you see these two words "Trust me," pull the plug before it pulls yours.
