"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded

vox.com

59 points by longdefeat 2 years ago · 13 comments

r0ckarong 2 years ago

My bet would be they saw the kind of ideas people brought in return for funding and how they would do almost everything to become the next Google.

Imagine social score and surveillance based on multi-modal LLM combined with military target selection and running politics via "AI". I'd quit too if I had to be complicit in that stuff. There's only so much ethical sabotage you can do.

JohnMakin 2 years ago

Hard to find the answer to the headline's question anywhere in this article, or anything of substance at all here really.

  • Atotalnoob 2 years ago
    • JohnMakin 2 years ago

      Thanks, have seen this - however, it seems like this article is jumping to conclusions about what this means. Could it be that executives at OpenAI are scared of what they built, or could another interpretation be that the senior leadership doesn't see what they are building as an actual threat because it isn't? (I personally do not believe that we will accomplish anything close to AGI with our current strategies, and I think the leadership at OpenAI is somewhat aware of this.)

      Seems like wild speculation.

      • tivert 2 years ago

        > Thanks, have seen this - however, it seems like this article is jumping to conclusions about what this means. Could it be that executives at OpenAI are scared of what they built, or could another interpretation be that the senior leadership doesn't see what they are building as an actual threat because it isn't?

        The article left the pretty strong impression that OpenAI's leaders, especially Altman, are too busy trying to make money and amass power now to care about any long-term risk, and that's reflected in their actions.

ChrisArchitect 2 years ago

[dupe]

More discussion: https://news.ycombinator.com/item?id=40390831

https://news.ycombinator.com/item?id=40391382

candiddevmike 2 years ago

Are they sure quitting is the best thing they can do? Is the team that's supposed to be in charge of AI safety really unable to do anything about AI safety?

Why give up your seat at the table?

Could be there's a lot these ex employees are not sharing...

  • throwaway4233 2 years ago

    The screenshot of the tweet thread at the end seemed to indicate that the alignment team wanted to explore other avenues (or models) to get to AGI, while the company leadership wanted to keep improving on the transformer models.

    From the business point of view, continuing to improve the goose that is laying the golden eggs makes sense, but as researchers they might have already seen the limits of adding compute and data, and wanted to pivot. As someone who worked as a mid-level manager/engineer, I can understand the frustration involved in making the management level understand what's really happening at the ground level.

  • r0ckarong 2 years ago

    Because look at how whistleblowers are treated. They're not heroes; they're traitors who get punished.

cdme 2 years ago

Just the people we want in charge of, well, anything.
