OpenAI to remove non-profit control and give Sam Altman equity

reuters.com

410 points by award_ a year ago · 69 comments

jarbus a year ago

This has been an insane, slow, public descent into everything they promised not to do; it's so crazy to have seen people inside try to stop it

  • 7e a year ago

    It's all part of the long con.

  • Spacecosmonaut a year ago

    It's a bit much to ask of people, in the face of such exceptional wealth prospects, to stay steadfast in their initial commitment.

    • limit499karma a year ago

      Excuse me but what sort of bullshit is this?

      Anyone with sufficient intellectual power to grok building AI must be fully aware of its monetization value. If you are navel-gazing over AGI taking over humanity, you must first step through the stage where capital and AI couple up.

      So it is not too much to ask, since others who were also aware of the inherent unwanted social distortions, which were entirely predictable, were relying on these individuals and "non-profit" organizations to actually live up to their claims.

      As it is, it seems like thinly disguised propaganda: recruit and benefit from altruistic and capable workers in the field, then have Sam Altman (and whatever $$$ is behind him) parachute in, take over, and say "oh well, you can't expect people to be truthful and have principles! What are ya, a chump?"

    • hackernewds a year ago

      It is not much to expect people to be good people.

benreesman a year ago

If Plucky Nonprofit was OAI-1, and Abruptly Serious AI Lab was OAI-2, And Viable Commercial Product was OAI-3, and Scary Brittle Governance With Creepy NSA Ties was OAI-4, then God Emperor of Arrakis is presumably OAI-5.

I figured they’d ship GPT-5 to justify OAI-5, but I guess they’ve realized that they now answer to no one on anything in practical terms.

That’s terrifying.

  • e12e a year ago

    Nit: the God Emperor arises long after the Butlerian Jihad banned thinking machines, so it's not a great image for the rise of totalitarian artificial intelligence...

    • Raed667 a year ago

      Yeah but he didn't mind using Tleilaxu technology when it suited him

elAhmo a year ago

> Since then, OpenAI's board has been refreshed with more tech executives, chaired by Bret Taylor, former Salesforce co-CEO who now runs his own AI startup. Any corporate changes need approval from its nine-person non-profit board.

Why would the non-profit board approve a change to a for-profit company? Wouldn't this be against the nature of the non-profit entity that was founded and which they are supposed to govern?

  • neom a year ago
    • pyinstallwoes a year ago

      > In May 2018, he became head of the National Security Agency, the Central Security Service and the United States Cyber Command.

  • dirkc a year ago

    I don't think you can vote to change a non-profit to a for-profit company. Once a non-profit, always a non-profit.

    There are sort-of loopholes, like changing the name of "Open AI" to something else and selling the name along with the IP to a commercial entity - EdX did this. But it's not really a loophole, since the board would need to justify the price the commercial company paid, and the money from the sale is still within a non-profit and bound by its rules.

    • gwern a year ago

      Exactly. The problem here is that OA is too valuable, and the non-profit owns too much of it (i.e. all of it). The non-profit owns exactly 100% of the OA for-profit right now, and can cancel the "PPUs" any time it pleases, and if you believe Altman about the agent roadmap, would be required to do so within a few years. Now, given that the PPUs are being raised at a pseudo-marketcap of $150b, we're hearing, and that non-profits are required to sell assets for fair market value, how can the board sell its 100% ownership of the for-profit for anything less than the $150b that the market values that ownership at...? And where does one get $150b, exactly? Even Sam Altman can't pull that off.

      So, the whole question has been, how does he figure out how to leave the board with <50% ownership (preferably much less than, even 0%), in a way which passes the legal sniff test so the deal goes through, but which doesn't sacrifice >$75b that neither he nor anyone else has?

      This is where the rhetoric and preparing the grounds comes in. You can argue that OA is actually worth <<$150b, maybe even as low as $0, by saying that it has value only because it is going to keep raising enough cash to reach AGI... but the board remaining in charge + the current cap on raising capital + Altman quitting will destroy that raising ability, rendering OA worthless. (Similar to his threat last year to destroy OA by creating a competitor in MS, and MS then immediately cutting off hardware & capital so the rump OA would starve to death.)

      Hence, the board should - nay, is required by its legal & moral duties to the nonprofit - accept much less than $150b, because whatever that deal is, it's more than $0.

      This is of course ridiculous and wrong, but it's not so ridiculous that a board of loyalists can't sign off on it, nor that lawyers can't defend it in court in front of a judge who wants to rule in its favor (because judges will tolerate anything in nonprofits short of gross criminality).
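
      The arithmetic of the squeeze described above can be sketched in a few lines, using only the comment's own figures (~$150b implied market cap, 100% nonprofit ownership). These numbers are illustrative, not audited financials:

      ```python
      # Minimal sketch of the valuation argument, assuming the figures
      # quoted in the comment above (illustrative, not audited).

      market_cap = 150e9        # pseudo-marketcap the PPUs are being raised at
      nonprofit_stake = 1.00    # the nonprofit owns all of the for-profit

      # Fair-market-value rule: the nonprofit can't legally sell its
      # stake for less than what the market says it is worth.
      fair_price = market_cap * nonprofit_stake

      # The rhetorical counter-move: claim the stake is nearly worthless
      # because board control itself destroys the ability to raise capital.
      claimed_value = 0.0

      # Any deal above the claimed value then looks "defensible" to a
      # friendly board, even at a fraction of fair market value.
      discount = fair_price - claimed_value
      print(f"gap a friendly board must wave through: ${discount / 1e9:.0f}B")
      ```

      The whole dispute lives in the gap between `fair_price` and `claimed_value`; everything else is rhetoric about which number a court will accept.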

  • d--b a year ago

    This reeks of conflicts of interest

freetime2 a year ago

Seems like a good time for a rebrand to remove "Open" from the name. Maybe "InevitableAI"?

  • teleforce a year ago

    How this is not on the front page of HN is beyond me.

    I asked ChatGPT this simple question and the responses are really commendable and a bit hilarious:

    Prompt: Since OpenAI the owner of the ChatGPT has removed non-profit control of OpenAI and gave Sam Altman equity what is the more appropriate alternative name to OpenAI now?

    Answers: Given the shift in OpenAI's structure from a non-profit to a more profit-oriented model, a name like "AltmanAI" could be seen as more reflective of its current leadership and business model. Alternatively, something like "ProfitAI" or "EquityAI" could highlight the focus on profitability and equity distribution, though these might come across as blunt or satirical.

    However, if the goal is to preserve a sense of innovation and artificial intelligence leadership while reflecting the new profit-driven focus, names like "InnovativeAI" or "EnterpriseAI" could also work. These names could better align with the organization's current direction while maintaining a forward-thinking brand identity.

    • dang a year ago

      I'm not entirely sure what happened but in any case it made the frontpage in a different incarnation, as usually happens with major stories:

      OpenAI to Become For-Profit Company - https://news.ycombinator.com/item?id=41655954

    • krick a year ago

      > a name like "AltmanAI" could be seen as more reflective of its current leadership and business model

      Damn, that's harsh. And Musk said it cannot joke.

      • tmikaeld a year ago

        It can’t, it’s the most likely answer to the question based on what it knows. Which kind of makes it funnier

    • benreesman a year ago

      I also find it curious if not distressing that this wasn’t pinned at #1 all day. To be clear: I’m quite sure that @dang is deeply committed to running a clean shop on moderation.

      But whether it's something highly unlikely and slightly sensational like a finger on the scales, or the far more likely and utterly banal explanation that people in vast numbers see their fates as intertwined with the status quo, the result is the same: on some deep institutional level HN is never going to hold Altman to account.

      So the question becomes: what authority handles the cases the community can't? On paper, that's regulators and legislators. Those folks' ostensible and actual missions aren't identical, and differ more with time, but they intersect at "prevent would-be autocrats being so brazen as to provoke de facto revolt".

      The public doesn’t hate Big Tech generally and its sociopath fringe specifically enough to make it a true wedge issue yet, but it’s trending that way.

      I'd go so far as to say that almost anyone breathing the Bay air isn't capable of truly internalizing how deeply the general public loathes the modern Valley machine: it's dramatically more than Wall St at any time.

      It’s getting even trickier than usual to predict which historical social norms are still bright lines, but “profiting personally via using a charity as a vehicle for fraud” is still putting popular people in prison with bipartisan support.

      And Altman isn’t popular even here. He’s feared here, but loved almost nowhere.

      • rightbyte a year ago

        If you remember Altman's coup, which was branded as a board coup against him, there were a lot of supporters here on HN trying to manufacture consent.

        • benreesman a year ago

          I know. When I said he’s feared here I didn’t mean by everyone, and by me never.

          It sounds like you also refuse to live in fear of the goblin child of pg’s overconfidence in the joint character estimation of himself and Jessica and pg’s sloppy personal entanglements writ large as de facto public policy.

          Almost no one thinks this is ok. A plurality if not a majority is still willing to co-sign because of various pressures.

        • tim333 a year ago

          I was mildly supportive of Altman. I guess I was wrong.

          • benreesman a year ago

            A lot of very smart people acting in good faith had that investment in the benefit of the doubt betrayed, and for an eminently relatable reason: historically trustworthy figures stopped declaring their conflicts of interest at some point and thereby became exploiters of those who respected their judgement.

            The real test is who carries on now that the lupine avarice has been utterly unmasked.

        • dang a year ago

          That's just a function of the HN community having a range of opinions on every topic. "Manufacture consent" is too sledgehammer a phrase for the normal process of people discussing things.

  • mirekrusin a year ago

    NowClosedAI

Stem0037 a year ago

I'm a bit concerned about how this might impact their commitment to AI safety though. The non-profit structure was supposed to be a safeguard against profit-driven decision making. Will they still prioritize responsible AI development as a regular for-profit company?

  • tivert a year ago

    > I'm a bit concerned about how this might impact their commitment to AI safety though.

    Their commitment will remain unparalleled, because AI safety actually means doing whatever it takes to provide maximum return to the shareholders, no matter the social cost.

  • mewpmewp2 a year ago

    Depends how they predict it to affect their bottom line.

  • ericzawo a year ago

    lmao what do you think?

tim333 a year ago

That Sam is a shifty one. Here's him 4 months ago:

"It's so deeply unimaginable to people to say I don't really need more money... If I were to say I'm going to try and make a trillion dollars with OpenAI it would save a lot of conspiracy theories"

And now having turned OpenAI into closed AI he's trying to give himself $10bn in equity.

  • pnt12 a year ago

    Of course he doesn't want the money, he's only getting money to shut down those conspiracy theories.

    /s

    • smsm42 a year ago

      Well if it turns out to be true, it's not a "conspiracy theory" anymore, so mission accomplished.

superultra a year ago

This felt inevitable which is why it’s not front page everywhere. It also doesn’t help that we’re in a bit of a lull with AI. I was with friends who don’t work in tech and AI came up in conversation at dinner. The general consensus is that AI is kinda dumb but it does a great job helping everyone write nicer emails.

But I don't think I'm being alarmist when I say that this moment, when the altruistic ideals get suddenly pushed to the side, may be the moment noted in history books before whatever it is that this leads us to happens. I don't mean evil machines are next, but I do think it's a cotton gin, telegram over the ocean, light bulb, ARPANET moment. Maybe even more impactful than those. Manhattan Project? TBD I guess.

Which is why I believe we'll regret that we didn't move slower or enforce more collective stopgaps against the unbridled force of capitalism and the public goodwill. I'm not a doomsayer, but you can't tell me something isn't up when this much money is involved.

  • auggierose a year ago

    Would you have recommended moving slower to build the first atom bomb?

    • consteval a year ago

      Yes. We used those bombs for evil beyond our comprehension.

      I encourage all Americans to further research Hiroshima and Nagasaki. Our propaganda has told us our war crimes were completely justified, but a more neutral historical analysis reveals this isn't the case.

minimaxir a year ago

This is, uh, interesting timing to put it mildly given Mira Murati's departure.

ac130kz a year ago

ClosedAI

blackeyeblitzar a year ago

It's so weird that the person behind Loopt is now fully in charge of this company and has perverted its initial goals so completely. I'm still not clear on what Sam has accomplished or why he was put in charge of YC or OpenAI. Also apparently chairman of Helion Energy (fusion startup). A masterclass in failing up.

  • rightbyte a year ago

    I think it is the same reason rich people throw money at Adam Neumann post-WeWork. They know it is a con and want to be part of it. They see themselves as alpha wolves too badass to be losers. Compare to people who shill as true believers in various strange crypto Ponzi schemes. They think they are in on it.

    Being so good that you get put in charge of YC, and not fired by PG at all, makes you perfect to metamorphose 'OpenAI' into some dystopian big corp, as seen.

  • limit499karma a year ago

    He is an effective bullshit artist with no moral restraint, and the capital behind the scenes promotes and pushes him and his type, as they are the perfect match for their role as "captains" of "industry". The 'building' of these fronts is a collaborative effort, with a definite media element in which even HN is likely included.

  • d--b a year ago

    At this stage though, he has no more place to fail up to!

ren_engineer a year ago

and apparently removing the cap on returns for investors

habosa a year ago

Once Sam Altman has consolidated all the power (think: top-10 richest person in the world and control over AGI v0) we’ll find out what he actually wants. And we will hate it.

dalant979 a year ago

https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

a historic reminder

lysace a year ago

It is weird that I get this news item from my local general interest newspaper in Europe but not from the HN front page, isn't it?

Posted another source (https://news.ycombinator.com/item?id=41653028) since I feel this needs a discussion. This one has a more descriptive headline though.

  • booleanbetrayal a year ago

    I also am a bit mystified as to why this isn't trending to the front.

    • tazu a year ago

      It's probably because of the poorly-designed "flamewar detector" that censors posts if they are upvoted/downvoted too quickly. @dang explained it to me a few weeks ago[1] on another YC-related post that conveniently got scrubbed from the front page.

      [1]: https://news.ycombinator.com/item?id=41510285

    • NewJazz a year ago

      Not sure, but HN might down weight reuters due to paywall, and this story seems to be a Reuters-exclusive.

  • freetime2 a year ago

    Currently there is another OpenAI story on the front page about Mira Murati leaving [1], and a couple of comments there are also talking about this news of OpenAI removing non-profit control [2] [3].

    I tend to agree that this is the bigger story and more worthy of being on the front page, but HN tends to enjoy a bit of celebrity gossip so not surprising to me that the news of the CTO leaving would get more traction.

    I don't think it's any sort of conspiracy if that's what you're implying.

    [1] https://news.ycombinator.com/item?id=41651038

    [2] https://news.ycombinator.com/item?id=41653013

    [3] https://news.ycombinator.com/item?id=41651601

    • lysace a year ago

      I see what you mean, but at the same time there is value in this being discussed on its own. The celebrity gossip story is currently covering up the actual important news. (Yeah, I'll go with that conspiracy theory, also known as a classic PR move.)

  • blackeyeblitzar a year ago

    Yes very weird this isn’t on the front page

rvz a year ago

Probably now everyone is starting to realize that Sam Altman really is far worse than Elon, and everyone was very late to understand this after the coup that happened nearly a year ago.

Perhaps this is what Mira, Greg and Ilya saw in Sam: his true intentions, after that coup.

This 'non-profit' / 'for-profit' complicated structure + taking capped investment won't be tried again for a very long time after these events.

  • steeeeeve a year ago

    This isn't the first time it happened. It won't be the last. Heck, it's not even the first time Altman pulled it off.

  • peanuty1 a year ago

    Mira was appointed CEO after Sam was fired and she fought to bring back Sam, no? Why are people now saying she was part of the coup that fired Sam?

    • Atotalnoob a year ago

      There is a New York Times article that states Mira wrote a memo to the board
