AI Labs Urged to Pump the Brakes in Open Letter

time.com

32 points by ktamura 3 years ago · 45 comments

midland_trucker 3 years ago

I find it really hard to see how productive a collective pause to 'think' about something so inherently unpredictable will be.

> "implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts"

Who deserves to be called an expert on this? Feels like Economics or something, where you have camps of thought advocating for themselves but little way of knowing who's right. Best to break things and develop antibodies whilst the stakes are still low.

vlaaad 3 years ago

So Musk was ousted from OpenAI and is now salty? Or are they trying to build a competitor, falling behind, and want OpenAI to take a break so they can catch up? Either way, the ethics talk is total bullshit.

  • wseqyrku 3 years ago

    Was thinking the same thing. Just a few days ago Elon was criticizing ChatGPT for being too "woke", which basically means it's careful not to push any hot buttons, and saying that it needs to be less politically correct all the time.

    It's funny because the other day I told ChatGPT that the only thing holding back AI tech is regulatory issues. Of course it denied it, since it didn't know any better, but when the GPT-4 paper came out this was explicitly mentioned: they were working solely on safety issues for months before release.

  • wsgeorge 3 years ago

    > So Musk was ousted from OpenAI and is now salty?

    Ousted? AFAIK he quit.

    • wseqyrku 3 years ago

      Back then he said it was because of a "conflict of interest" with Tesla. But that wasn't entirely true: he wanted to be the boss and run the company, and Altman said no. [citation needed]

    • vlaaad 3 years ago

      That's my speculation. If I were Musk, I wouldn't voluntarily quit OpenAI.

      • daydream 3 years ago

        > That's my speculation. If I were Musk, I wouldn't voluntarily quit OpenAI.

        In hindsight. Back in 2018 it wasn't at all obvious they would be in the position they are in today. Google had released the Transformer paper just a year before and was commonly seen as being in the lead.

reset-password 3 years ago

I felt the same way when I really wanted a Super Soaker 2000 and then the neighbor kid got one before me. "MOOOOM!!!"

  • TheLoafOfBread 3 years ago

    This is exactly my feeling about the whole letter. I don't understand why people are scared of ChatGPT and similar tools when it's just better IntelliSense.

andrewstuart 3 years ago

People are really freaking out about AI, aren’t they?

Why bother? It’s moving super fast, just wait and see what happens.

And even if you could control or regulate it, exactly how would you do that? What would you be regulating/controlling? How would you define it?

And why would you want to anyway? The party has just started. If you think the revolution has arrived, you’re completely wrong: this is just the beginning, and the most amazing stuff is yet to come.

These people begging for the pace to slow are like the newspapers and music companies that wanted the internet to slow down as it was rapidly and involuntarily making them redundant.

  • glenneroo 3 years ago

    Did you read the article or letter? They list a lot of very valid reasons. I won't bother quoting because I would just be copy/pasting the article (which is very short).

    For a more detailed list of reasons, go read what AI alignment researchers think: they have been working since the 1970s on how to align an AI so it doesn't "turn everything into paperclips", and it seems as though many are rather skeptical about us having any future if we keep up at this rate (spoiler: many believe the end of humanity will occur soon after the creation of any superintelligence): https://www.alignmentforum.org

    My personal takeaway: continued development might mean the end of humans and maybe even of our planet (if an ASI can deploy nanotech to convert everything into substrate). As it already stands, nobody knows why LLMs work as well as they do; they are already a black box. Sure, plenty of people can explain the math behind each of the steps involved (training, matrices, transformers, inference, and so on), but it's still a big black box that spits out "magic" answers. You can't just drop a breakpoint inside a model during inference to see what's going on; you'll just get a long list of unintelligible floating-point numbers at every step of the process.
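
    For example, about the most you can do is hook an intermediate layer and peek at what comes out, and all it hands back is a tensor of raw floats. (Rough PyTorch sketch; the toy layer and sizes here are made up for illustration, not any particular LLM.)

      import torch

      # Toy stand-in for one layer of a large model; dimensions are arbitrary.
      layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

      def peek(module, inputs, output):
          # This is the "breakpoint": all you get back is an opaque tensor of floats.
          print(output.shape)        # e.g. torch.Size([1, 8, 64])
          print(output[0, 0, :5])    # a handful of unintelligible numbers

      layer.register_forward_hook(peek)
      layer(torch.randn(1, 8, 64))   # one forward pass over random "tokens"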

    Your question about regulation is valid... but something needs to be done. I feel like we're standing very close to one of the Great Filters.

    Comparing AI/AGI/ASI to anything we have seen so far is probably pointless; they are worlds apart. Would you bother comparing a smartphone to a book? The rate of AI/ML progress is fast enough to measure in hours rather than months or years.

    • kromem 3 years ago

      It could also present the only realistic way we avoid the many other human-ending threats we've introduced ourselves.

      This isn't a one-sided coin flip.

      You have nuclear war, emerging bioweapon capabilities, global warming, pollution, and a number of other existential concerns. And yes, AI is one on that list.

      But how many solutions to those other items do we realistically have right now? Because it keeps seeming like nothing is getting done beyond people pointing out that the sky is falling.

      Quite possibly the ONLY real, well-funded, viable solution to these issues would be a straight-up deus ex machina pulled out of our rear ends.

      So yeah, it's a scary coin flip that could add to the massive list of other things inevitably going to kill us all, or maybe end up being the coin flip that negates the rest of that list.

      So a letter that only points out a possible threat model isn't genuinely pursuing discussion of the ethical considerations - it's just fear-mongering. And looking at the signatures much further down the list, from "concerned parent" or "fearful citizen", it's having the intended (and very sketchy) effect.

  • kromem 3 years ago

    Also, the letter only presents threats.

    Part of the ethical consideration needs to be the opportunity cost a six month delay could cause.

    As an example, let's say that GPT-N will cure cancer.

    Over a six-month period that's at least 5 million people dead (roughly 10 million people die of cancer worldwide each year) if the date it arrives and broadly cures cancer were pushed back by that long.

    What about negotiating foreign policy treaties to prevent war, or identifying a way to reverse climate change, or any number of other positive effects?

    The fact that the letter even positions whether such an advanced AI should exist as a legitimate question I find pretty gross.

    Should we hold back the progress of intelligent life in the universe out of the ego of humanity?

    I get superintelligence is an unsettling idea.

    But inherent in that name is an indication that there's a currently vacant seat at the debate table which might have important and interesting things to say on the subject, and anyone who suggests that aborting it is a valid course of action (particularly given the myriad existential threats we already face from fellow humans) I can only regard as being quite far from super.

  • Peritract 3 years ago

    > It’s moving super fast, just wait and see what happens.

    This seems like bad advice in almost any context.

    • andrewstuart 3 years ago

      Why?

      We didn’t have to slow down and wait when the web arrived... and tons of people lost their jobs because of the web.

  • csomar 3 years ago

    > People are really freaking out about AI, aren’t they?

    AI touches the domain of software development the most (since we have put a lot of data about it on the Internet). It touches other things too, like writing and design. It doesn't touch things like food delivery, construction, or farming.

    Currently, those happen to be at the lower tier of society, for some reason, despite the fact that you can't go two days in a row without food. AI can flip this. There is no need for this army of developers, designers, marketers, and bureaucrats. Some people are afraid.

    Tl;dr: The people who are freaked out about AI are the people who are bound to lose the most by it.

    • Kinrany 3 years ago

      Logistics and farming seem like poor examples: both can be solved with programming and a little hardware.

    • DoctorOetker 3 years ago

      > Tl;dr: The people who are freaked out about AI are the people who are bound to lose the most by it.

      And some probably used ML assistance to contribute to the draft:

      > "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

      This would lead to an emphasis on algorithmic compute efficiency, quickly decreasing reliance on mega-actor cooperation! Hah!

fooker 3 years ago

If this sort of prevention didn't work for nuclear weapons, it won't work for anything, ever.

  • mechagodzilla 3 years ago

    This basically worked perfectly with nuclear weapons. Everyone took their development extremely seriously, and we’ve managed to avoid a nuclear apocalypse. If anyone could get on their computer and buy a nuke from Amazon, we would all be dead within the week.

    • thfuran 3 years ago

      Of course, software production and testing is a hell of a lot harder to detect via satellite imaging and seismographs and can be performed using readily available materials, so it's a good deal harder to control.

    • fooker 3 years ago

      It worked — after destroying two whole cities.

  • RugnirViking 3 years ago

    We did at least relatively okay with nuclear weapons? I notice that there are a whole lot fewer of them now than there were, nobody has used any, and several countries under extreme pressure still haven't developed them (Taiwan, Japan, South Korea, Armenia, Venezuela, Syria).

Kinrany 3 years ago

The letter: https://futureoflife.org/open-letter/pause-giant-ai-experime...

I don't see any way to verify the signatures. Though the mention of Sam Harris' signature disappearing suggests they're being moderated at least?

  • dusted 3 years ago

    This, indeed. I'm also wondering why we'd only learn about it through that site; wouldn't the people signing be posting about it too?

sj8822 3 years ago

The posters here seem to be highly skeptical of the need to regulate emerging AI.

I find that pretty disappointing and surprising.

Recently, they gave GPT-4 access to a terminal, the internet, and money. And GPT-4 itself is software (and software in general has bugs, vulnerabilities, etc.): a black box that is incredibly, unprecedentedly powerful and not fully understood. Part of its training data is almost every known security vulnerability.

You guys really don’t see any potential problems with this? I mean really? Get a little creative here.

  • windex 3 years ago

    Reminds me that fire is dangerous, and making it would have been patentable today. There is no guarantee that once taken away from the masses, AI research won't continue in walled gardens with only the rich, connected, and powerful having access. Had people been egalitarian or safety-oriented, a lot of other things wouldn't have been developed: nukes, robotics & automation, or self-driving cars, for example. There wouldn't have been billionaires setting up rules for everyone else.

    The only reason this is an issue for "them" is that this tech isn't under their complete control and it seems to threaten the rent-extraction model in a bunch of other industries.

mindcrime 3 years ago

See also:

https://news.ycombinator.com/item?id=35356980

https://news.ycombinator.com/item?id=35356750

https://news.ycombinator.com/item?id=35356771

https://news.ycombinator.com/item?id=35356499

https://news.ycombinator.com/item?id=35356490

https://news.ycombinator.com/item?id=35356390

https://news.ycombinator.com/item?id=35355803

https://news.ycombinator.com/item?id=35354565

https://news.ycombinator.com/item?id=35353458

https://news.ycombinator.com/item?id=35353484

https://news.ycombinator.com/item?id=35353394

https://news.ycombinator.com/item?id=35354704

https://news.ycombinator.com/item?id=35355541

(not about "the letter" but in the same spirit)

https://news.ycombinator.com/item?id=35356691

windex 3 years ago

My bets are that they want to catch up.

  • kromem 3 years ago

    Exactly.

    And while I don't love OpenAI becoming increasingly closed, they really have done an excellent job with alignment, one that I very much doubt some of the signers of the letter would have done.

    So I'd much rather have the progress in AI continue to be led by a company pursuing caution than by another company that used those 6 months to catch up and take the lead with even less caution.

    This is very much "out of the frying pan and into the fire" territory, targeted at stalling OpenAI for competitive reasons and at getting everyone in academia who has been outspoken against AI for years to jump on and sign it.

    • windex 3 years ago

      And bottle everything up in a company that will then want to charge massive rents for it. This is a transformative tool in a lot of industries. Given half a chance, Musk and his types would have patented fire.

    • Kinrany 3 years ago

      > they really have done an excellent job with alignment

      In what way?

weekendflavour 3 years ago

The accelerationist dream is finally becoming a reality and these nerds wanna stop it. Deal with it.

RcouF1uZ4gsC 3 years ago

Speaking of AI safety:

With Tesla “Autopilot”, Elon Musk is responsible for releasing AI that has actually killed multiple people.

I don’t think ChatGPT has killed anyone yet.

DoctorOetker 3 years ago

Ultimately it's a millennia-old fight between manipulators and rationalists.

The bead-maze toy versus the abacus.

Twisted contorted reasoning versus formal verification.

Choose your abacus.

The black-box nature of machine learning models is not the issue. Instead of training them to imitate the vacuous conjectures and claims that humans on average typically make, they could be trained to do automated theorem proving, AlphaZero-style.

A minimalistic verifier like Metamath is freely available for download, along with set.mm and a freely available book. It would be hard to purge from civilization.

Currently its math database is collaboratively developed on GitHub.

In theory a blockchain could host it.

Fermat-style challenges could be used to objectively assess the value of theorems: the longer a challenge matures unproven on the chain, the higher the reward when someone finally proves it.
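
A toy sketch of that reward rule (the names, rates, and linear growth are made up for illustration; nothing here is part of Metamath or any existing chain):

    from dataclasses import dataclass

    # Fermat-style bounty: the longer a challenge sits unproven, the more it pays
    # out to whoever finally supplies a verifiable proof.
    @dataclass
    class Challenge:
        statement: str            # the conjectured theorem, machine-readable
        posted_at: int            # block height (or timestamp) when posted
        base_reward: float        # initial bounty
        growth_per_block: float = 0.01

        def reward(self, now: int) -> float:
            # Matures linearly with age; a real chain might compound instead.
            return self.base_reward + self.growth_per_block * (now - self.posted_at)

    c = Challenge("( ph -> ph )", posted_at=100, base_reward=1.0)
    print(c.reward(now=100))    # 1.0   -- freshly posted
    print(c.reward(now=10100))  # 101.0 -- unproven for 10,000 blocks, worth far more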

This inevitably creates an incentive to enter and digitize known mathematics into machine-readable form, which will be easy for machine learning to accomplish.

Machine learning empowered automated theorem proving will become a profitable business, with the fruits available for all to benefit from.

Well, cryptography and protocols will also appear.

So during training the machine learning models will get endless bedtime stories about Alice, Bob and Eve.

Using conventional forward or backward chaining combined with adversarial models, one can construct arbitrary provable theorems, negate them, and then hide or propagate the negation so it's not simply the first symbol in the theorem.

So we can train models to challenge each other, Fermat-style, about the truth or falsehood of a statement, and demand proof.

We can thus construct artificial mathematical systems with known inconsistencies and train models to seek a proof that the system contains an inconsistency. Such a proof will depend on the conflicting axioms.
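
As a minimal illustration of such an artificial system (propositional logic with a deliberately conflicting axiom set, checked by brute force; the axioms are made up):

    from itertools import product

    # Three axioms over P and Q; axioms 2 and 3 together contradict axiom 1.
    axioms = [
        lambda P, Q: P or Q,
        lambda P, Q: not P,
        lambda P, Q: not Q,
    ]

    # If no truth assignment satisfies every axiom, the system is inconsistent,
    # and any proof of the contradiction has to lean on the conflicting axioms.
    consistent = any(all(ax(P, Q) for ax in axioms)
                     for P, Q in product([True, False], repeat=2))
    print("consistent" if consistent else "inconsistent")   # -> inconsistent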

Hence the models will be our best tool to detect and resolve hypocrisy.

The literal meaning of "apocalypse" is "revelation" or "uncovering", not "big tragedy"...

The verification algorithm, for example the ~300 LoC Python implementation by Raph Levien, owes most of its length to parsing the Metamath file format.

The actual Kolmogorov complexity of the verification algorithm itself is much smaller. There won't be any bits to "align".

All these hopeless attempts try to align the intuition component of the machine learning model, instead of training it to gain intuition in producing logical derivations.

The real horror for the control freaks is not that their alignment mechanisms might fail, but that it's impossible to bias the verification algorithm itself, that it's impossible to perpetuate the conflicts of interest, and that any additional code in the verifier is immediately suspect, especially if it obviously skips all checks and dogmatically accepts a statement if it's signed by a hardcoded "right" key.

The objective judge will be mechanized.

"abacus akbar!"

hermannj314 3 years ago

Elon already made his money selling a promise of FSD but now wants to pump the brakes on AGI because it is almost here and he doesn't own it?

Where did all the hyper-competitive SV libertarians disappear to in the last year?

  • maxdoop 3 years ago

    Elon has been pretty vocal about the potential dangers of AI for a while now. And this open letter isn’t just him; it’s from other folks like Stuart Russell as well.

rvz 3 years ago

> As of Tuesday, no O̶p̶e̶n̶AI.com employees had signed the letter, although CEO Sam Altman’s name briefly appeared then disappeared from the list of signatories.

You already know the intention(s) of Sam Altman and O̶p̶e̶n̶AI.com. It was only to run with VC money and close up all their research.

They are no better than DeepMind.

  • nullsense 3 years ago

    >You already know the intention(s) of Sam Altman and O̶p̶e̶n̶AI.com. It was only to run with VC money and close up all their research.

    Citation needed.

    • rvz 3 years ago

      So O̶p̶e̶n̶AI.com did not take VC money and close up the technical and architectural details of GPT-4, then?

      They might as well rename themselves to AI.com since they are not 'Open'.
