Ask HN: Are leaders genuinely afraid of AI or do they have an agenda?
A mix of:
- Agenda
- Straightforward fear of AI
- Fear that AI might trigger some really-not-so-bad social changes or upheavals...which *are* so bad for them & their friends
- Mirroring the fears of their peers
Well worth noting: the "leaders" talking about AI are not magically wise, nor especially foresighted, nor widely experienced. They've mostly gotten to be leaders by being utterly obsessed with getting ahead in the human social hierarchy, and by devoting their lives to doing that in some narrow social niche or other. There are human-nature reasons why the leaders in ~every historical major industry failed to be leaders in the industry which replaced it.
My reason for posting was that I was appalled by this interview, which I think was quite irresponsible. I've lost all respect for Lex Fridman for playing the hapless dupe "just asking questions" and not pushing back harder.
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
I'm over halfway through it, and it seems to me that Lex just can't wrap his mind around the warning that Eliezer is trying to give him. He's so in love with AI that he just can't fathom how things could go wrong.
I'm convinced the threat is real, but I have no idea what the timeline is. I hope that, as with most things, we'll skate by, just stop calling it AI once it happens, and treat it like any other tool. But I strongly doubt that will happen.
I suspect what will actually happen is that peak oil will catch us off guard, and we won't have the spare power available to train GPT7, and that will avert the singularity.
Having finished the episode, I think it's quite clear that Lex just doesn't understand the argument, or doesn't want to. He's so used to the idea of falling in love with an AI that he can't see the danger.
I see the danger; let me give an analogy.
What if, according to the laws of physics, it were possible to make a thermonuclear weapon out of beach sand using a microwave oven?
That's something so absurd that we'd never figure it out, but an AGI could. Dangerously destabilizing knowledge on that scale could show up at any time from a superintelligent AGI.
It's bad enough that nation-states have the resources to make civilization-ending weapons. I think AGI could super-empower those with access to it.
---
On the other hand, what if it were possible to make unlimited clean energy using beach sand, a microwave oven, and some whiskey as a catalyst? AGI could make that future possible as well.
I think there's reason for concern. I own, but haven't read, Nick Bostrom's "Superintelligence", which lays out the risk scenario in depth. I have read Bostrom's "Global Catastrophic Risks", which treats AI in a chapter rather than a whole book, and I found its argument that AI is a genuine threat convincing.
Thanks for the recommendations. I have concerns too, but it seems to me some people are stoking fears for political and religious purposes. For example, bringing up Golems and Sam Altman being Jewish as a way to justify their antisemitism.
Why not both?
Good point. It's a crazy, mixed-up world we live in.