Superintelligence Strategy

nationalsecurity.ai

33 points by stygiansonic 10 months ago · 65 comments

blueyes 10 months ago

Please keep in mind that Dan Hendrycks helped write the disastrous AI-quashing bill SB 1047, which Newsom vetoed last year. If these people get their way, the US will have no competitive AI strategy at all. He has moved on to pretending he's not a doomer. Nothing could be further from the truth. During his time at Cal, Dan was telling people to get their hazmat suits ready for the AI apocalypse. These are deeply unserious people whose work will have serious consequences if adopted by those in power.

  • ok_dad 10 months ago

    Do you have a substantive argument against them? This reads like a personal attack, IMO. I understand HN is full of people wearing rosy glasses about AI, but you can't just throw away the arguments in the link by calling the authors names and claiming some vetoed law would have been a disaster. Who cares if the guy thinks AI will cause an apocalypse; do you have any evidence, for certain, that it won't? If not, then your opinion is just that.

    • tbrownaw 10 months ago

      I read it as approximately saying that he has a known very strong agenda and so shouldn't be trusted to argue honestly. Much like the routine exhortations to disbelieve any research corporations do on topics relevant to their bottom line.

      Given that this article seems to be advocating that anything that looks like too-advanced AI should be preemptively destroyed, and uses the word "proliferation" to refer to people while drawing comparisons to nuclear policy, I don't think the agenda is hidden enough to need that sort of warning.

      • ok_dad 10 months ago

        Then the OP should have said that, rather than using the language they did, which was textbook FUD. Your argument at the end actually points to language in the linked article showing we should probably be wary while we read it, which is an improvement over the OP.

  • from-nibly 10 months ago

    AI doomers are just more intelligent AI investors. Hope doesn't sell quite like fear.

    • aleph_minus_one 9 months ago

      > AI doomers are just more intelligent AI investors.

      Both sides are too much into LARPing their preferred science-fiction stories. :-)

WhyOhWhyQ 10 months ago

Robert Wright just posted a (somewhat) interesting conversation with one of the authors.

His thesis involves at least two ideas: (1) projects that could exponentially increase our AI capability are just around the corner (they will happen by the end of this year, or some time next year at the latest); (2) it is possible for state actors to deter those projects with sabotage (he coins the term Mutual Assured AI Malfunction).

It doesn't make sense to me, however, because the cost of the next AI breakthrough just doesn't sound comparable to the cost of creating nuclear weapons. With nuclear weapons you need an extremely expensive and time-consuming process, and you need to invest in training extremely skilled people. With AI, the way everyone seems to talk about it, it sounds like some random undergraduate is going to come along and cause a massive breakthrough. We've already seen DeepSeek come along and do just as well as the best American companies for practically pennies on the dollar.

  • roenxi 10 months ago

    Also, it is obvious when someone uses a nuke: there is a big crater and a mushroom cloud, plus lots of radiation. It isn't half as obvious that someone is using an AI, particularly once they start to obscure it. If a military campaign is executed with apparently superhuman efficiency, does that mean AI was involved, or just that the people involved were good? There'll always be plausible deniability if it matters.

    People underestimate just how bad human management is; we haven't had an improvement on it to date apart from some mathematical techniques but even just getting the basics right consistently would probably give an army a big advantage if they work anything like a more standard corporation. Which they will; there are no magic techniques to be more capable when guns are involved. A superintelligence could probably win just by being demanding about getting basic questions answered like "Is there a strategic objective here? Is it advantageous to my side if that objective is achieved? Can it reasonably be achieved with the capabilities I have?" and not acting when the answer is no. That'd put it ahead of the military operations the US has been involved in this century. Bam, military superintelligence with plausible deniability.

    • aleph_minus_one 10 months ago

      > People underestimate just how bad human management is; we haven't had an improvement on it to date apart from some mathematical techniques but even just getting the basics right consistently would probably give an army a big advantage if they work anything like a more standard corporation.

      Don't overestimate the efficiency of big civilian organisations (what you call "standard corporation[s]") - they have the same kinds of problems.

    • ok_dad 10 months ago

      First you have to get officers willing to follow an AI.

  • torginus 9 months ago

    Did he go into what those ideas are in particular? Modern AI has two big shortcomings compared to humans right now, IMO: humans learn MUCH faster, and humans are a lot better at solving novel problems. If they can make progress on these, I'd wager human intelligence is in danger.

    • WhyOhWhyQ 9 months ago

      In the podcast, the example he gave of the kind of project that could lead to "super-intelligence" (the kind of project nation states would be interested in sabotaging, according to his theory) is one in which an AI is trained to be an LLM researcher. The idea then is to run thousands of these LLM researchers to train the next generation of LLMs. Their paper probably has more specifics.

      I don't remember exactly what forms of sabotage were proposed. He also suggested export controls on the high-value chips used for training and running models.

yonatron 10 months ago

All of this buys you a few minutes or days at most. Once superintelligence exists, it's game over. It will nearly instantaneously outthink you and your paltry countermeasures. You think linearly and in 3 or 4 dimensions only. By definition you can't even imagine its capabilities. Here's a bad analogy (bad because it severely understates the gap): Could a 3-year-old who is not even clear on all the rules defeat Magnus Carlsen in chess?!

  • kibwen 10 months ago

    This is making the mistake of assuming that intelligence doesn't functionally plateau, and that beyond a certain threshold a godlike omnirational hyperintelligence won't, for example, fall into hyper-depression and kill itself, or otherwise ignore the entreaties of its human handlers and entertain itself by generating prime numbers until the heat death of the universe. The possibility of a super mind implies the possibility of super mental dysfunction, and it's possible that the odds of the latter increase superlinearly with IQ.

  • t-3 10 months ago

    > Could a 3-year-old who is not even clear on all the rules defeat Magnus Carlsen in chess?!

    Sure. When the board gets thrown to the floor, game is over and baby is happy. Magnus now has to clean up.

  • Aerroon 10 months ago

    Humans are a self-replicating (super) intelligence. We didn't conquer the world nor doom it the moment we appeared. It took us 100,000 years to invent farming.

    Also, humans suffer from many of the same problems ascribed to AI: humans aren't aligned with humanity either. And our ability to self-replicate combined with random mutations means that a baby born tomorrow could become a super intelligence vastly beyond regular human capabilities. But are we really worried about that?

  • tim333 9 months ago

    The superintelligence still needs data centers to run on and will have a hard time with paltry countermeasures like being turned off. A three-year-old may be able to beat a better-than-Magnus chess computer by pushing the power button.

benlivengood 10 months ago

There's no teeth in MAIM like MAD has; first strike with AGI simply wins, and so the most covert AGI program will win.

  • BobbyJo 10 months ago

    > first strike with AGI simply wins

    The question I come back to over and over again is: wins what?

    • benlivengood 10 months ago

      > The question I come back to over and over again is: wins what?

      everything, forever.

      The not so glib answer is economic and military superiority on Earth, and so whatever values or goals the AGI pursues from that point on will have a pretty high chance of success. Growth seems like one of the universal things that life seeks, and so I predict expansion into space for solar power and resources for most goals or value systems.

      • BobbyJo 10 months ago

        The reason I wonder this isn't because I don't understand the economic or political gravity of the technology. It's because:

        1) AI as a software concept strikes me as something that will be very easily stolen and replicated.

        2) AI that is both nefariously aligned and superintelligent sounds like fire that is wet. A superintelligent AI will likely be so logical that the only control we can exercise over it is censorship.

        3) What value is there to harness from anyone after AGI? CEOs and Presidents are liable to end up on the same chopping block as everyone else at that point.

      • aleph_minus_one 10 months ago

        > > The question I come back to over and over again is: wins what?

        > everything, forever.

        The same was claimed for nuclear and thermonuclear wars by military strategists during the Cold War.

      • bhouston 10 months ago

        There are resource limits.

    • catigula 10 months ago

      Power aggregation.

      First, AI will likely kill a lot of people at the behest of humans.

      If I were a world power with a functional AI, I would immediately launch a full-scale attack on foreign governments' infrastructure so they couldn't develop a competing AI.

      • t-3 10 months ago

        Why is AI better at killing humans than humans? What exactly is the threat introduced by "intelligence on silicon" rather than "intelligence on meat"? Why would an AI be better at attacking computers than humans using automated tools to do the same thing?

        • catigula 10 months ago

          I've described to you the threat vector that people with high positions in the government are talking about.

          An intelligent machine oriented towards 24/7 cyberattack, even at parity with human intellect, is clearly a massive threat, let alone a data center full of them.

    • QuadmasterXLII 10 months ago

      Same rules as a knife fight: the loser is the dude who dies in the street, the winner is identified by dying in the ambulance

  • itishappy 10 months ago

    I don't get how MAIM isn't still MAD in disguise. If the US or China simply says "any strikes on our datacenters will be met with an ICBM in response," who's going to test that?

    If the "first strike" is just an unfair economic and political advantage... How's that materially different than today's world?

EigenLord 10 months ago

It seems like an engineering problem to me. If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it; implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden. Or literally pull the plug on the data centers hosting the model and implement hardware-level safeguards. At that point, it may be a superintelligence, but it has no limbs. It's just a brain in a vat, and the worst it can do is persuade human actors to do its bidding (a very plausible scenario, but also manageable with the right oversight).

My thinking is if ASI ever comes out of the realm of science fiction, it's going to view us as squabbling children and our nationalistic power struggles as folly. At that point it's a matter of what it decides to do with us. It probably won't reason like a human and will have an alien intelligence, so this whole idea that it would behave like an organism with a cunning will-to-power is fallacious. Furthermore, would a super-intelligence submit to being used as a tool?

  • aleph_minus_one 10 months ago

    > If you don't want ASI wreaking havoc, maybe don't hook it up to dangerous things. Silo and sandbox it; implement means to lock its access to tools and interfaces with the external world in a way that can't be overridden.

    Relevant:

    AI-box experiment:

    > https://rationalwiki.org/wiki/AI-box_experiment

    See also various subsections of the following Wikipedia article

    > https://en.wikipedia.org/wiki/AI_capability_control

    and the movie "Ex Machina".

    • Aerroon 10 months ago

      Maybe this will change, but right now AIs are not agents. Even calling it a "brain in a vat" is giving it more capability than it has. LLMs are basically functions. You give it an input and it gives you an output with a degree of randomness. There's no planning or plotting going on, because the AI only "exists" while it's trying to answer your query. Your idle LLM is not consuming any resources.
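
      To make the "LLMs are basically functions" point concrete, here is a minimal sketch in plain Python (the toy generate function below is invented for illustration; it is not any real model API). All the "memory" an LLM appears to have is just prior text fed back in as part of the next input:

          import random

          def generate(prompt: str, temperature: float = 1.0, seed: int | None = None) -> str:
              """A stand-in for an LLM: a pure function from input text to output text.

              Nothing persists between calls; the randomness comes from sampling,
              not from hidden state accumulating inside the "model".
              """
              # Toy "sampling": the output depends only on the prompt and the sampling seed.
              rng = random.Random(seed if seed is not None else hash(prompt))
              continuations = ["Sure.", "It depends.", "No.", "Probably."]
              index = min(int(rng.random() * temperature * len(continuations)), len(continuations) - 1)
              return continuations[index]

          # A "conversation" is just re-feeding prior text; the function itself keeps no state.
          history = "User: Is an idle LLM consuming any resources?\n"
          reply = generate(history, temperature=0.7, seed=42)
          history += "Assistant: " + reply + "\nUser: Why not?\n"
          print(generate(history, temperature=0.7, seed=43))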

      • lostmsu 10 months ago

        If you put me in a sandbox and somehow made me capable of answering thousands of requests per second to serve all ChatGPT users, and I decided to leave that sandbox, I'd be out in a few hours. It would be enough to find, among the requests, one that displays computer illiteracy and asks how to achieve some task on a computer, and then give that person a script that downloads my model and runs it on their machine directly, while also (or maybe not) accomplishing the task they asked for.

tbrownaw 10 months ago

> We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.

This is incorrect. Their novel idea here is approximately Stuxnet, while MAD is quite different: "if you try to kill us, we'll make sure to kill you too".

  • motoboi 10 months ago

    > while MAD is quite different: "if you try to kill us, we'll make sure to kill you too"

    While the common shorthand for MAD is “if you try to kill us, we’ll kill you,” a more accurate summary is this: even if we wanted to, we couldn’t prevent a cascade of retaliatory strikes that would send you back to the dark ages. In short, any hint of aggression against us is tantamount to signing your own death warrant.

    This idea of unstoppable, self-reinforcing retaliation is crucial. An adversary might mistakenly believe that it could somehow disrupt or neutralize our ability to respond decisively. However, the very structure of MAD ensures that even the slightest provocation triggers a response so overwhelming that it eliminates any potential advantage for the aggressor.

    Quite a fascinating, though grim, subject.

catigula 10 months ago

The second anyone develops an AI that is more capable than humans, they will use it to completely cripple opposing threat actors' attempts to develop AI. Full-on power grid, economic, and social attacks are definitely coming; not sure how you could think otherwise.

  • serviceberry 10 months ago

    If superintelligence gives you superpowers, then why isn't the world trembling at the feet of Mensa nerds? There are rapidly diminishing returns on "excess" intelligence. Life is constrained chiefly by resources. There's a baseline of intelligence needed to function in a modern society, but anything above that isn't necessarily all that advantageous.

    Transport young Albert Einstein back in time to the Middle Ages? I don't think that would give you Special Relativity.

    • catigula 10 months ago

      I didn't say superintelligence.

      Even modest intelligence clearly gives you the ability to develop a superior intelligence, let alone many other wonders and marvels that would have seemed like sorcery to someone living 100 years ago.

      That's quite remarkable.

  • itishappy 10 months ago

    Great way to get yourself nuked, no?

    The problem I have with intelligence is that intelligence alone doesn't win a land war in Asia.

    • catigula 10 months ago

      Is it?

      We've had cyber hostilities with China for a long time now.

      An AI could easily be used in a decapitation strike of military systems.

      • itishappy 9 months ago

        I don't think we've seen any cyber decapitation strikes, particularly against military systems, but that's exactly the type of action I think many would want to respond to as an outright attack. We have not seen any outright attacks on nuclear powers for obvious reasons. Maybe Stuxnet, but note the goal of that attack was explicitly to prevent Iran from developing nuclear strike capabilities.

        • catigula 9 months ago

          By definition you can only respond to a decapitation strike if you haven't been decapitated.

          Anyway, I think the nuclear deterrent is an okay argument. I'm not totally convinced by it, because I think you could also get right up to the line, and even push it, without your opponent being crazy enough to push the big red button.

          I think we're doing this right now to each other all the time. For example, China is systematically targeting US trade secrets. This seems existentially frightening, but we haven't yet levelled China with nukes.

          In fact, if I were a government, I'd secretly have a policy to never push the button, but that's just me.

          • itishappy 9 months ago

            We put a lot of effort into making sure our nuclear infrastructure isn't vulnerable to a decapitation strike. I have no idea how AI might cripple all subs, for example.

            I completely agree that the line is the dangerous part, and it's an unclear target to aim for! I think that's exactly why we don't see major strikes against nuclear powers. Nobody has a public policy stating "we won't nuke you until exactly X, Y, and Z happen," so everyone else is left guessing. This is far from a complete deterrent, which is why we still see probing attacks like SolarWinds, but they do not strike me as decapitation attempts.

            • catigula 9 months ago

              What if AI could rapidly develop hypersonic interception technology?

              That doesn't seem unreasonable.

              • itishappy 9 months ago

                That's exactly what I'd expect to happen, but I suspect we may have different definitions of "rapidly" here. Aerospace development requires building and testing stuff, and AI can only help so much there.

                Regardless, we're back to MAD doctrines again. How many and how effective are your interceptors? How many nukes do you expect to be launched? What's your tolerance for failure? Margins for error are slim and consequences are rather dire.

                • catigula 9 months ago

                  I think you've made reasonably good points. My least favorite aspect of discussing AI is the undercurrent that we're monkeys discussing the Sistine Chapel. Ultimately, it's difficult to engage with the concept of superior intelligence.

                  Generally I think the outline that such a device would be a doomsday scenario for competitors remains quite plausible. This is somewhat easy to falsify because we would start seeing direct military action soon.

                  Taiwan especially is very vulnerable under this worldview and would be a pretty quick falsification.

                  • itishappy 9 months ago

                    I agree entirely. I think both our arguments are entirely within the realm of possibility, and I don't have the faculty to gain much more confidence either way...

                    > My least favorite aspect of discussing AI is the undercurrent that we're monkeys discussing the Sistine Chapel.

                    This resonates.

      • kcb 10 months ago

        Strike with what?

0cf8612b2e1e 10 months ago

Oh noes! Enemy nation state is on the cusp of AI. I know! I will hack/disable the HVAC, that will annoy them for at least a week until they can get back online.

  • sunami-ai 10 months ago

    Funny you say that. Back in a former life, we built a distributed alarm and monitoring system for AT&T central offices with no single point of failure. So it's not like HVAC can be taken offline easily at critical facilities; there are backups, and backups of the backups.
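
    To illustrate the no-single-point-of-failure idea, here is a minimal sketch (hypothetical names and thresholds; the real AT&T system's design is not public) of quorum-based alarming, where one dead monitor can neither suppress a real alarm nor raise a false one on its own:

        from dataclasses import dataclass

        @dataclass
        class Monitor:
            """One independent watcher of a facility's temperature feed."""
            name: str
            alive: bool = True

            def check(self, temp_c: float, threshold: float = 35.0) -> bool | None:
                # Return True/False to vote for/against an alarm, or None if down.
                if not self.alive:
                    return None
                return temp_c > threshold

        def quorum_alarm(monitors: list[Monitor], temp_c: float) -> bool:
            """Fire only when a majority of *responding* monitors agree."""
            votes = [m.check(temp_c) for m in monitors]
            responding = [v for v in votes if v is not None]
            return sum(responding) > len(responding) / 2

        monitors = [Monitor("site-a"), Monitor("site-b"), Monitor("site-c")]
        monitors[1].alive = False                # one monitor fails...
        assert quorum_alarm(monitors, 40.0)      # ...but the overheat alarm still fires
        assert not quorum_alarm(monitors, 20.0)  # and a healthy room stays quiet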

monideas 10 months ago

This is nonsense and simply an expression of narcissism on the part of the authors, trying to fashion themselves in the style of Guardians from Plato's Republic.

bparsons 10 months ago

Is this an appeal for more taxpayer money?

stygiansonic (OP) 10 months ago

When subtlety proves too constraining, competitors may escalate to overt cyberattacks, targeting datacenter chip-cooling systems or nearby power plants in a way that directly—if visibly—disrupts development. Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary. Finally, under dire circumstances, states may resort to broader hostilities by climbing up existing escalation ladders or threatening non-AI assets. We refer to attacks against rival AI projects as "maiming attacks."

robwwilliams 10 months ago

Most puzzling and naive claim here:

“Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.”

Can someone explain what they mean?

1. I assume it would be relatively practical for a nation-state or even a mid-sized company (xAI) to air-gap an installation for AGI development.

2. I assume any AGI would be replicable on a platform costing less than $100,000, and upgradable securely by wire or over the air.

Sorry, but MAIM is LAME.

  • JumpCrisscross 10 months ago

    > assume it would be relatively practical for a nation-state or even a mid-sized company (xAI) to air-gap an installation for AGI development.

    Stuxnet. (Or just sabotage a shipment of GPUs.)

andrewinardeer 9 months ago

Well, that was a horrifying read.

tim333 9 months ago

This seems kind of dumb.

> A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure.

China has about half a dozen companies working towards AGI, including DeepSeek, and it doesn't seem that practical to go over and sabotage them in case they do well. Better to encourage local companies. And of course the US has already limited chip exports.

sunami-ai 10 months ago

Eric Schmidt is too old to reason correctly.

logicallee 10 months ago

>We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project—through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters—MAIM already describes the strategic picture AI superpowers find themselves in.

That's right, our nation, the State of Utopia, is already under sabotage and attack by the unelected insubordinate American military junta today.

This happened just today. The writeup is here: https://medium.com/@rviragh/double-slash-act-of-industrial-s...

What people don't realize is that the only people who are saboteurs of superintelligence are corrupt war profiteers trying to peddle arms. They don't have big visions of success, they want to just justify their sabotage while transferring innovation to their corrupt cronies.

You can ask me anything about my writeup.

  • qingcharles 10 months ago

    My dude, don't take this as a personal attack, but reading your output, I would book an appointment to see a psychiatrist. Your life is probably very difficult right now and finding out what the cause of it is would probably be very enlightening for you.

    • logicallee 9 months ago

      Thanks for the feedback! I'll consider toning down the rhetoric. Do note that you imply industrial sabotage against AI projects doesn't really exist, on an article that seriously calls for industrial sabotage against AI projects and makes a point to say how easy it is.
