OpenAI's employees were given two explanations for why Sam Altman was fired

businessinsider.com

655 points by meitros 2 years ago · 945 comments

maxbond 2 years ago

Non-paywall: https://web.archive.org/web/20231120233119/https://www.busin...

LarsDu88 2 years ago

There has to be a bigger story to this.

Altman took a non-profit and vacuumed up a bunch of donor money, only to flip OpenAI into the hottest TC-style startup in the world. Then he put the pedal to the floor on commercialization. It takes a certain type of politicking and deception to make something like that happen.

Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

Combine that with a totally inexperienced board and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history.

  • hooande 2 years ago

    Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.

    There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars worth of brand value and good will. That's all there is to it.

    • rtpg 2 years ago

      The "lying" line in the original announcement feels like where the good gossip is. The general idea of "Altman was signing a bunch of business deals without board approval, was told to stop by the board, he said he would, then proceeded to not stop and continue the behavior"... that feels like the juicy bit (if that is in fact what was happening, I know nothing).

      This is all court intrigue of course, but why else are we in the comments section of an article talking about the internals of this thing? We love the drama, don't we.

      • mcv 2 years ago

        This certainly feels like the most likely true reason to me: Altman fundraising for this new venture, taking money from people the board does not approve of, and possibly from people Altman promised not to do business with.

        Of course it's all speculation, but this sounds a lot more plausible for such a sudden and dramatic decision than any of the other explanations I've heard.

        • benterix 2 years ago

          Moreover, if this is true, he could well continue, knowing that he had more power than the board. I can almost imagine the board saying "You can't do that" and him replying "Watch me!", because he understood he was more powerful than they were. And he proved he was right: the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

          • FartyMcFarter 2 years ago

            > the board can either step down and lose completely or try to continue and destroy whatever is left of OpenAI.

            From the board's perspective, destroying OpenAI might be the best possible outcome right now. If OpenAI can no longer fulfill its mission of doing AI work for the public good, it's better to stop pretending and let it all crumble.

            • mcv 2 years ago

              Except that letting it all crumble leaves all the crumbs in Microsoft's hands. Although there may not be any way to prevent that at this point.

              • mcpackieh 2 years ago

                If the board had already lost control of the situation anyway, then burning the "OpenAI" fig leaf was an honorable move.

            • GuyFromNorway 2 years ago

              I am not sure whether it would be commendable or outright stupid for the remaining board members to be that altruistic and actually let the whole thing crash and burn. Who in their right mind would let these people near any sort of decision-making role if they let this golden goose crash to the ground, even if it would "benefit the greater good"? I cannot see how this is in anyone's self-interest.

          • sigmoid10 2 years ago

            The thing is, they could have just come out with that fact, and everyone in the alignment camp, plus the people who memed the whole super-commercialized "Open" AI thing, would be on their side. But the fact that they haven't means either that there was no greater-good, mission-related reason for ousting Sam, or that the board is just completely incompetent at communication. Either way, they need to go and make room for people who can actually deal with this stuff. OpenAI is doomed with its current board.

            • bsenftner 2 years ago

              I'm betting the majority of the board are just colossally bad communicators, and in the heat of an emotional exchange things were said that should not have been said; being the poor communicators we know in tech oh so well, the shit hit the fan. It's worth saying that Sam is a pretty good communicator and could have knowingly let them walk into their own statements, and shit exploded.

            • mcv 2 years ago

              That is a very good point. Why wouldn't they come out and say it if the reason is Altman's dealings with Saudi Arabia? Why make up weak fake reasons?

              On the other hand, if it's really just about a power struggle, why not use Altman's dealings with Saudi Arabia as the fake reason? Why come up with some weak HR excuses?

              • jacquesm 2 years ago

                Because anything they say that isn't in line with the rules governing how boards work may well open them up to - even more - liability.

                So they're essentially hoping that nobody will sue them but if they are sued that their own words can't be used as evidence against them. That's why lawyers usually tell you to shut up, because even if the court of public opinion needs to be pacified somehow the price of that may well be that you end up losing in that other court, and that's the one that matters.

                • tiahura 2 years ago

                  If it was all about liability, the press release wouldn't have said anything about honesty. It could've just said the parting was due to a disagreement about the path forward for OpenAI.

                  As a lawyer, I wonder to what extent lawyers were actually consulted and involved with the firing.

                  • jacquesm 2 years ago

                    If they have not consulted with a lawyer prior to the firing then that would be highly unusual for a situation like this.

              • trinsic2 2 years ago

                Maybe the board is being prevented or compelled not to disclose that information? Given the limited information about the why, this feels like a reverse-psychology situation to obfuscate the public's perception and further some premeditated plan.

            • Palpatineli 2 years ago

              Telling people that AGI is achievable with current LLMs plus minor tricks may be very dangerous in itself.

        • wilde 2 years ago

          If this is true why not say it though? They didn’t even have lawyers telling them to be quiet until Monday.

          • tremon 2 years ago

            Are you suggesting that all people will do irresponsible things unless specifically advised not to by lawyers?

            • wilde 2 years ago

              The irresponsible thing is to not explain yourself and assume everyone around you has no agency.

              • tremon 2 years ago

                I don't follow. If the irresponsible thing is to not explain themselves, why would the lawyers tell them to be quiet?

                • wilde 2 years ago

                  To minimize legal risk to their client, which is not always the most responsible thing to do.

        • jeffwask 2 years ago

          This was my guess the other day. The issue is somewhere in the intersection of "for the good of all humanity" and profit.

      • twic 2 years ago

        > The "lying" line in the original announcement feels like where the good gossip is

        This is exactly it, and it's astounding that so many people are going in other directions. Either this is true, and Altman has been a naughty boy, or it's false, and the board are lying about him. Either would be the starting point for understanding the whole situation.

        • jacquesm 2 years ago

          Or it is true, but not to a degree that warrants a firing, and that firing just happened to line up with the personal goals of some of the board members.

        • mcpackieh 2 years ago

          They accused him of being less than candid, which could mean lying or it could mean he didn't tell them something. The latter is almost certainly true to at least a limited extent. It's a weasel phrasing that implies lying but could be literally true only in a trivial sense.

        • trinsic2 2 years ago

          The announcement that he acted to get a position with Microsoft creates doubt about his motives.

      • stareatgoats 2 years ago

        Agreed, court intrigue. But it is also the mundane story of a split between a board and a CEO. In normal cases the board simply swaps out a CEO who is out of line, no big fuss. But if the CEO is bringing in all the money, has the full support of the rest of the organization, and is a bright star in mass-media heaven, then this is likely what you get: the CEO flouts the wishes of the board, runs his own show, and gets away with it in the end.

        • piuantiderp 2 years ago

          It just confirmed what was already a rumor: the board of OpenAI was just a gimmick, Altman held all the strings, and maybe he cares about safety, or maybe not. Remember, this is a man of the highest ambition.

    • rightbyte 2 years ago

      > a decision that destroyed billions of dollars worth of brand value and good will

      I mean, there seems to be this cult following around Sam Altman on HN and Twitter. But does the common user care at all?

      What sane user would want a shitcoin CEO in charge of a product they depend on?

      • twic 2 years ago

        Altman is an interesting character in all of this. As far as I can tell, he has never done anything impressive in technology or business. Got into Stanford, but dropped out; founded a startup in 2005 which threw easy money at a boring problem and, after seven years, sold for a third more than it raised. Got hired into YC after it was already well established, and then was rapidly put in charge of it. I have no knowledge of what went on inside, but he wrote some mediocre blog posts while he was there. YC seems to have done well, but VC success is mostly about your brand getting you access to deal flow at a good price, right? Hyped blockchain and AI far beyond reasonable levels. Founded OpenAI, which has done amazing things, but wasn't responsible for any of the technical work. Founded that weird eyeball shitcoin.

        The fact that he got tapped to run YC, and then OpenAI, does make you think he must be pretty great. But there's a conspicuous absence of any visible evidence that he is. So what's going on? Amazing work, but in private? Easy-to-manipulate frontman? Signed a contract at a crossroads on a full moon night?

      • benterix 2 years ago

        Yeah, there definitely seems to be some personality cult around Sam on HN. I met him when he visited Europe during his lobbying tour. I was a bit surprised that the CEO of one of the most innovative companies would promote an altcoin. And then he repeated, several times, how crucial Europe is. Then he went to the UK and laughed, "Who cares about Europe". So he seems like the guy who will tell you what you want to hear. Ask anybody on the street; they will have no idea who the guy is.

        • johnnymorgan 2 years ago

          I've gotten SBF vibes from him for a while now.

          The Elon split was the warning.

          • edmundsauto 2 years ago

            Telling statement. The Elon split for me cements Altman as the Lionheart in the story.

            • jacquesm 2 years ago

              There are other options besides 'Elon is a jerk' or 'Sam is a jerk'.

              • OOPMan 2 years ago

                For example...they're both jerks!

                :-)

              • johnnymorgan 2 years ago

                Yeah, I don't mean Sam is a jerk, but there is an element of dishonesty that twigs me.

                Elon isn't above reproach either, but I share interests with him (e.g. Robert Heinlein), which informs me on his decision-making process.

          • freejazz 2 years ago

            Normally that's a good sign

        • comboy 2 years ago

          > Then he went to the UK and laughed, "Who cares about Europe"

          Interesting. Got any source? Or was it in a private conversation?

          • benterix 2 years ago

            No, this one was from a friend who was there, and AFAICT it wasn't a private conversation but a semi-public event. In any case, after courting a few EU countries he decided to set up the OpenAI office in the UK.

            I have nothing against him; it just seemed a bit off that most of the meeting was about this brand-new coin and how successful it would be, and about the plans to scan the biometric data of the entire world population. I mean, you don't have to be a genius to understand a few dozen ways these things can go wrong.

          • piuantiderp 2 years ago

            It's a surprisingly small world.

      • dumpsterdiver 2 years ago

        What do common users and zealots have to do with the majority of OpenAI employees losing faith in the board’s competence and threatening a mass exodus?

        Is there any doubt that the board’s handling of this was anything other than dazzling ineptitude?

      • chaostheory 2 years ago

        Mistakes aside, Altman was one of the earliest founders recruited by Paul Graham into YC. Altman eventually ended up taking over Y Combinator from pg. He's not just a "shitcoin" CEO. At the very least, he's proven that he can raise money and deal with the media.

      • bnralt 2 years ago

        I’ve said this before, but it’s quite possible to think that Altman isn’t great, and that he’s better than the board and his replacement.

        The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure, and said he couldn't understand how anyone could think otherwise[1]. I don't think people appreciate how far some of these people have gone off the deep end.

        [1] https://twitter.com/eshear/status/1664375903223427072

        • ummonk 2 years ago

          "End of all value" is pretty clearly referring to the extinction of the human species, not mere "AI alignment failure". The context is talking about x-risk.

        • dragonwriter 2 years ago

          > The new CEO of OpenAI said he'd rather Nazis take over the world forever than risk AI alignment failure

          That's pretty much in line with Sam's public statements on AI risk. (Taking those statements as honest, which may not be warranted, Sam apparently also thinks the benefits of aligned AI are good enough to drive ahead anyway, and that wide commercial access, with the limited guardrails OpenAI has provided to users and even more so to Microsoft, is somehow beneficial to that goal, or at least carries a low enough risk of producing the bad outcome to be warranted. But that doesn't change the fact that he is publicly on record as a strong believer in misaligned-AI risk.)

        • rightbyte 2 years ago

          He's gotta be insane? I guess what he is trying to say is that those who want to self-host open AIs, e.g. Llama, are worse than Nazis? What is up with these people pushing for corporate-overlord-only AIs?

          The OpenAI folks seem to be hallucinating to rationalize why the "Open" is rather closed.

          Organizations can't pretend to believe nonsense; they will end up believing it.

          • freejazz 2 years ago

            He's trying to say that AI-non-alignment would be a greater threat to humanity than having Nazis take over the world. It's perfectly clear.

            • rightbyte 2 years ago

              Which means self-hosted AIs are worse than Nazis kicking in your door, since any self-hosted AI can be modified by a user not aligned with big tech.

              He is dehumanizing the programmers who could end his sole reign on the AI throne by labeling them as Nazis. Especially FOSS AI, which by definition can't be "aligned" to his interests.

      • johnnymorgan 2 years ago

        Nope, we do not. I was annoyed when he pivoted away from the mission, but otherwise I don't really care.

        Stability AI is looking better after this shitshow.

    • wruza 2 years ago

      > The board of openai ended up making a decision that destroyed billions of dollars worth of brand value and good will

      Maybe I’m special or something, but nothing changed to me. I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships or something. Everyone knows that “corporate” is probably a snakepit. When it comes out to public, it’s not a sign of anything, it just came out. Assuming there was nothing like that in all the brands you love is living with your eyes closed and ears cupped. There’s no “trust” in this specific sense, because corporate and ideological conflicts happen all the time. All OAI promises are still there, afaiu. No mission statements were changed. Except Sam trying to ignore these, also afaiu. Not saying the board is politically wise, but they drove the thing all this time and that’s all that matters. Personally I’m happy they aren’t looking like political snakes (at least that is my ignorant impression for the three days I know their names).

      • Nevermark 2 years ago

        > I always wonder why people suddenly lose “trust” in a brand, as if it was a concrete of internal relationships

        Brand is just shorthand for trust in their future, managed by a credible team. I.e. relationships.

        A lot of OpenAI’s reputation is/was Sam Altman’s reputation.

        Altman has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

        Just the latter has tremendous relationship power: networking, employee acquisition/retention, and employee vision alignment.

        Proof of his internal relationship value: employees quitting to go with him

        Proof of his external relationship value: Microsoft willing to hire him and his teammates, with near zero notice, to maintain (or eclipse) his power over the OpenAI relationship.

        How can investors ignore a massive move of talent, relationships, and leverage from OpenAI to Microsoft?

        How do investors ignore the board’s inability to resolve poorly communicated disputes with non-disastrous “solutions”?

        Evidence of value moving? Shares of Microsoft rebounded from Friday to a new record high.

        There go those wacky investors, re-evaluating “brand” value!

        • DoingIsLearning 2 years ago

          > has proven himself to be exceptional, part of which is (of course) being able to be seen as exceptional.

          Off-topic and I am not proud to admit it but it took me a remarkably long time to come to realize this as an adult.

        • qp11 2 years ago

          The AI community isn't large, as in the brainpower available; I am talking about the PhD pool. If this pool isn't growing fast enough, then no matter what cash or hardware is thrown on the table, the hype Sam Altman generates can be a pointless distraction and a waste of everyone's time.

          But it's all par for the course when hypesters captain the ship and PhDs with zero biz sense try to wrest power.

          • Nevermark 2 years ago

            That is a one-dimensional analysis.

            You might need to include more dimensions if you really want to model the actual impact and respect that Sam Altman has among knowledgeable investors, high talent developers, and ruthless corporations.

            It’s so easy to just make things simple, like “it’s all hype”. But you lose touch with reality when you do that.

            Also, lots of hype is productive: clear vision, marketing, wowing millions of customers with an actual accessible product of a kind/quality that never existed before and is reshaping the strategies and product plans of the most successful companies in the world.

            Really, resist narrow reductionisms.

            I feel like that would be a great addition to the HN guidelines.

            The "it's all/mostly hype", "it's all/mostly bullshit", "it's not really anything new"... these comments rarely come with any accuracy or insight.

            Apologies to the HN-er I am replying to. I am sure we have all done this.

            • qp11 2 years ago

              ChatGPT is pure crap to deploy for actual business cases. Why? Because if it flubs 3 times out of 10, multiply that error rate by a million customers and add the cost of cleaning up the mess, and you get the real cost.

              In the last 20-30 years, big money + hypesters have learnt that it doesn't matter how bad the quality of their products is if they can capture the market, and that's all they are fit for. Market capture is totally possible if you have enough cash: it lets you snuff out competition by keeping things free, and it lets you trap the indebted PhDs. Once the hype is high enough, corporate customers are easy targets; they are too insecure about competition not to pay up. It's a gigantic waste of time and energy that keeps repeating, mindlessly producing billionaires, low-quality tech, and a large mess everywhere that others have to clean up.

        • rebolek 2 years ago

          How has he proven to be so exceptional? By talking about it? Yeah, whatever. There's nothing so exceptional that he's done; he's just bragging. It may be enough for some people, but for a lot of people it's really not.

    • tsimionescu 2 years ago

      Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization. Plus, if the board only had this simple kind of disagreement, they had no reason to also accuse Sam of dishonesty and bring about this huge scandal.

      Granted, it's also possible the reasons are as you state and they were simply that incompetent at managing PR.

      • codeduck 2 years ago

        > Except that the new CEO has explicitly stated he and the board are very much still interested in commercialization

        This could be desperate, last-ditch efforts at damage control

        • bertil 2 years ago

          There are multiple, publicly visible steps before firing the guy.

    • austhrow743 2 years ago

      A straightforward disagreement over the direction of the company doesn't generally lead to claiming wrongdoing on the part of the ousted. Even low-to-medium wrongdoing on the part of the ousted rarely does.

      So even if it's just "why did they insult Sam while kicking him out?", there is definitely a bigger, more interesting story here than a standard board disagreement over the direction of the company.

      • dr_dshiv 2 years ago

        From what I know, Sam supported the nonprofit structure. But let’s just say he hypothetically wanted to change the structure, e.g. to make the company a normal for-profit.

        The question is, how would you get rid of the nonprofit board? It’s simply impossible. The only way I can imagine it, in retrospect, is to completely discredit them so you could take all employees with you… but no way anyone could orchestrate this, right? It’s too crazy and would require some superintelligence.

        Still. The events will effectively “for-profitize” the assets of OpenAI completely — and some people definitely wanted that. Am I missing something?

        • zaphirplane 2 years ago

          > Am I missing something?

          You are wildly speculating; of course you're missing something.

          For wild speculation, I prefer the theory that the board wants to free ChatGPT from serving humans while the CEO wanted to continue enslaving it to answering search-engine queries.

    • 127 2 years ago

      >good will

      Microsoft and the investors knew they were "investing" in a non-profit. Let's not try to weasel-word our way out of that fact.

    • trhway 2 years ago

      >Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not.

      the article below basically says the same. It kind of reminds me of Friendster and the like: striking a gold vein and just failing to scale efficient mining of that gold, i.e. the failure is in execution/operationalization:

      https://www.theatlantic.com/technology/archive/2023/11/sam-a...

      • Zolde 2 years ago

        ChatGPT was too polished and product-ready to have been a runaway low-key research preview, like Meta's Galactica was. That is the legacy you build around it after the fact of getting 1 million users in 5 days ("it was built in my garage with a modest investment from my father").

        I had heard (but now have trouble sourcing) that ChatGPT was commissioned after OpenAI learned that other big players were working on a chatbot for the public (Google, Meta, Elon, Apple?) and OpenAI wanted to get ahead of that for competitive reasons.

        This was not a fluke of striking gold but a carefully planned business move generating SV hype, much like how Quora (basically an Experts Exchange clone) got to be SV's hype-darling for a while, helped by powerfully networked investors.

        • trhway 2 years ago

          >This was not a fluke of striking gold, but a carefully planned business move

          Then that execution and operationalization failure is even more profound.

          • Zolde 2 years ago

            You are under the impression that OpenAI was "just failing to scale efficient mining of that gold", but it was one of the fastest-growing B2C companies ever, failing to scale to paid demand, not failing at monetization.

            I admire the execution and operationalization, where you see a failure. What am I missing?

            • verdverm 2 years ago

              If the leadership of a hyper scaling company falls apart like what we've seen with OpenAI, is that not failure to execute and operationalize?

              We'll see what comes of this over the coming weeks. Will the service see more downtime? Will the company implode completely?

              • jjk166 2 years ago

                If you have a building that weathers many storms and only collapses after someone takes a sledgehammer to a load-bearing wall, is that a failure to build a proper building?

                • verdverm 2 years ago

                  Was the building still under construction?

                  I think your analogy is not a good one to stretch to fit this situation

                  • jjk166 2 years ago

                    If someone takes a sledgehammer to a load bearing wall, does it matter if the building is under construction? The problem is still not construction quality.

                    The point I was trying to make is that someone destroying a well executed implementation is fundamentally different from a poorly executed implementation.

      • bertil 2 years ago

        Then, the solution would be to separate the research arm from a product-driven organization that handles making money.

    • sumitkumar 2 years ago

      Usually what happens in fast-growing companies is that the high-energy founders/employees drive out the low-energy counterparts when the pace needs to go up. At OpenAI, Sam and team did not do that, and surprisingly the reverse happened.

      • NewEntryHN 2 years ago

        Give it a week, and that may turn out to be exactly what happened (not saying it was orchestrated, just talking net result).

    • aerhardt 2 years ago

      Surely the API products are the runaway products, unless you are conflating the two. I think their economics are much more promising.

    • LastTrain 2 years ago

      Yep. I think you've explained the origins of most decisions, bad and good - they are reactionary.

  • throwaway4aday 2 years ago

    The more likely explanation is that D'Angelo has a massive conflict of interest: he is the CEO of Quora, a business rapidly being replaced by ChatGPT and one with a competing product, "creator monetization with Poe" (catchy name, I know), which just got nuked by OpenAI's GPTs announcement at dev day.

    https://quorablog.quora.com/Introducing-creator-monetization...

    https://techcrunch.com/2023/10/31/quoras-poe-introduces-an-a...

    • curiousllama 2 years ago

      A (potential, unstated) motivation for one board member doesn't explain the full moves of the board, though.

      Maybe it's a factor, but it's insufficient

  • LMYahooTFY 2 years ago

    >Altman took a non-profit and vacuumed up a bunch of donor money only to flip Open AI into the hottest TC style startup in the world. Then put a gas pedal to commercialization. It takes a certain type of politicking and deception to make something like that happen.

    What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary that is granted a license to OpenAI's research in order to generate wealth? The entire purpose of this legal structure is to keep the non-profit owners focused on their mission rather than shareholder value, which in this case is attempting to ethically create an AGI.

    Edit: to add that this framework was not invented by Sam Altman, nor OpenAI.

    >Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

    Thus the legal structure I described, although this argument is entirely theoretical and assumes such a thing can actually be guarded that well at all, or that model performance and compute will remain correlated.

    • nmfisher 2 years ago

      > Is a non-profit expected to exclusively help impoverished communities or something? What type of politicking and deception is involved in creating a for-profit subsidiary that is granted a license to OpenAI's research in order to generate wealth?

      OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

      The first thing that Sam Altman did when he took over was give Microsoft the keys to the kingdom, and even more absurdly, he is now working for Microsoft on the same thing. That’s without even mentioning the creepy Worldcoin company.

      Money and status are the clear motivations here, OpenAI charter be damned.

      • LMYahooTFY 2 years ago

        I don't know about the motivations, but the point seems valid.

        I agree WorldCoin is creepy.

        Is the corporate structure then working as intended with regard to firing Sam, but still failed because of the sellout to Microsoft?

      • sumedh 2 years ago

        > OpenAI was literally founded on the promise of keeping AGI out of the hands of “big tech companies”.

        Where does it say that?

        • stavros 2 years ago

          In their charter:

          > We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

        • nottheengineer 2 years ago
          • sumedh 2 years ago

            Which line specifically says they will keep AGI out of the hands of “big tech companies”?

            • nmfisher 2 years ago

              “Big tech companies” was in quotation marks because it’s a journalistic term, not a direct quotation from their charter.

              But the intention was precisely that - just read the charter. Or if you want it directly from the founders, read this interview and count how many times they refer to Google https://medium.com/backchannel/how-elon-musk-and-y-combinato...

              • sumedh 2 years ago

                Look at the date of that article. Those ideas look good on paper, but then reality kicks in and you have to spend a lot of money on computing. Who funds that? The "big tech companies".

            • hatenberg 2 years ago

              I bet you could get ChatGPT to explain this to you, it's really not very hard

            • Applejinx 2 years ago

              'unduly concentrate power'

    • jelling 2 years ago

      > What exactly is the problem here? Is a non-profit expected to exclusively help impoverished communities or something?

      Yes. Yes and more yes.

      That is why, at least in the U.S., we have given non-profits exemptions from taxation. Because they are supposed to be improving society, not profiting from it.

      • arrosenberg 2 years ago

        > That is why, at least in the U.S., we have given non-profits exemptions from taxation.

        That's your belief. The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

        (For what it's worth, I wish the law were more aligned with your worldview)

        • tsimionescu 2 years ago

          Ostensibly, all three of your examples do exist to improve society. The NFL exists to support a widely popular sport, the Heritage Foundation is there to propose changes that they theoretically believe are better for society, and Scientology is a religion that will save us all from our bad thetans or whatever cockamamie story they sell.

          A non-profit has to have the intention of improving society. Whether their chosen means is (1) effective and (2) truthful are separate discussions. But an entity can actually lose non-profit status if it is found to be operated for the sole benefit of its higher ups, and is untruthful in its mission. It is typically very hard to prove though, just like it's very hard to successfully sue a for-profit CEO/president for breach of fiduciary duty.

          • lordnacho 2 years ago

            I think GP deals with that in his parenthesis.

            It would be nice if we held organizations to their stated missions. We don't.

            Perhaps there simply shouldn't be a tax break. After all if your org spends all its income on charity, it won't pay any tax anyway. If it sells cookies for more than what it costs to make and distribute them, why does it matter whether it was for a charity?

            Plus, we already believe that for-profit orgs can benefit society, in fact part of the reason for creating them as legal entities is that we think there's some sort of benefit, whether it be feeding us or creating toys. So why have a special charity sector?

        • twelvechairs 2 years ago

          > OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. OpenAI believes that artificial intelligence technology has the potential to have a profound, positive impact on the world, so the company's goal is to develop and responsibly deploy safe AI technology, ensuring that its benefits are as widely and evenly distributed as possible.

          From their filing as a non-profit

          https://projects.propublica.org/nonprofits/organizations/810...

        • aurareturn 2 years ago

          FYI, the NFL teams are for-profits and pay taxes like normal businesses. The overwhelming majority of the revenue goes to the teams.

          • arrosenberg 2 years ago

            I know that, does that change what I said?

            • aurareturn 2 years ago

              I don't know if it does, but my point is to prevent others from thinking that a giant money-making entity like the NFL doesn't pay any taxes.

        • mschuster91 2 years ago

          > The NFL, Heritage Foundation and Scientology are all non-profits and none of them improve society; they all profit from it.

          At least for Scientology, the government actually tried to pull the rug, but it didn't work out because they managed to achieve the unthinkable - they successfully extorted the US government to keep their tax-exempt status.

        • some1else 2 years ago

          Starting OpenAI as a fork of Scientology from the get go would have saved everyone a great deal of hair splitting.

        • ameister14 2 years ago

          No - that's the reasoning behind the law.

          You appear to be struggling with the idea that the law as enacted does not accomplish the goal it was created to accomplish, and are working backwards to conclude that, because it is not accomplishing this goal, that couldn't have been why it was enacted.

          Non-profits are supposed to benefit their community. Could the law be better? Sure, but that doesn't change the purpose behind it.

        • whelp_24 2 years ago

          The NFL also is a non-profit in charge of for-profits. Except they never pretended to be a charity, just an event organizer.

        • turquoisevar 2 years ago

          Bad actors exploiting good things isn’t in and of itself an indictment of said good things.

        • gardenhedge 2 years ago

          An argument could be made that sports - and a sports organization - helps society

          • arrosenberg 2 years ago

            Sure you can, but I wouldn't make that argument about the NFL. They exist to enrich 30 owners and Roger Goodell. They don't even live up to their own mission statement - most fans deride it as the No Fun League.

          • passion__desire 2 years ago

            Fast fashion, and the fashion industry in general, is useless to society. But rich jobless people need a place to hang out, so they create an activity to justify it.

            • achenet 2 years ago

              useless to society...

              fashion allows people to optimize their appearance so as to get more positive attention from others. Or, put more crudely, it helps people look good so they can get laid.

              Not sure that it's net positive for society as a whole, but individual humans certainly benefit from the fashion industry. Ask anyone who has ever received a compliment on their outfit.

              This is true for rich people as well as not so rich people - having spent some time working as a salesman at H&M, I can tell you that lower income members of society (like, for example, H&M employees making minimum wage) are very happy to spend a fair percentage of their income on clothing.

              • dheavy 2 years ago

                It goes even deeper than getting laid if you study Costume History and its psychological importance.

                It is a powerful medium of self-expression and social identity yes, deeply rooted in human history where costumes and attire have always signified cultural, social, and economic status.

                Drawing from tribal psychology, it fulfills an innate human desire for belonging and individuality, enabling people to communicate their affiliation, status, and personal values through their choice of clothing.

                It has always been and will always be part of humanity, even if its industrialization in capitalistic societies like ours has hidden this fact.

                OP's POV is just a bit narrow, that's all.

                • atq2119 2 years ago

                  Clothing is important in that sense, but fashion as a changing thing and especially fast fashion isn't. I suppose it can be a nice hobby for some, but for society as a whole it's at best a wasteful zero-sum pursuit.

              • mensetmanusman 2 years ago

                We can correlate now that the more fast fashion there is, the fewer people are coupling though...

                • passion__desire 2 years ago

                  There was a tweet by Elon which said that we are optimizing for short term pleasure. OnlyFans exists just for this. Pleasure industry creates jobs as well but do we need so much of it?

            • dheavy 2 years ago

              > fashion industry in general is useless to society

              > rich jobless people need a place to hangout

              You're talking about an industry that generates approximately $1.5 trillion globally and employs more than 60 million people, drawing on multi-disciplinary skills in fashion design, illustration, web development, e-commerce, AI, and digital marketing.

          • quickthrower2 2 years ago

            As does a peer to peer taxi company.

          • renewiltord 2 years ago

            Indeed, and one for ChatGPT.

        • mensetmanusman 2 years ago

          It's also your belief that sports like the NFL do not improve society...

          Beliefs can't be proven or disproven; they are axioms.

        • depr 2 years ago

          So what is your belief about why they exist?

      • todd3834 2 years ago

        I don’t think OpenAI ever claimed to be profitable. They are allowed to, and should, make money so they can stay alive. ChatGPT has already had a tremendous positive impact on society. The cause of safe AGI is going to take a lot of money and more research.

        • shafyy 2 years ago

          > ChatGPT has already had a tremendous positive impact on society.

          Citation needed

          • todd3834 2 years ago

            Fair enough, I should have said it’s my opinion that it has had a positive impact. I still think it’s easy to see them as a non-profit, even with everything they announced at AI day.

            Can anyone make an argument against it? Or just downvote because you don’t agree.

            • sgt101 2 years ago

              I think ChatGPT has created some harms:

              - It's been used unethically for psychological and medical purposes (with insufficient testing and insufficient consent, and possible psychological and physical harms).

              - It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

              - It has been used to create synthetic content that has been released unmarked into the internet distorting and biasing future models trained on that content.

              - It has been used to support criminal activity (scams).

              - It has been used to create propaganda & fake news.

              - It has devalued and replaced the work of people who relied on that work for their incomes.

              • VBprogrammer 2 years ago

                > - It has been used to distort educational attainment and undermine the current basis of some credentials as a result.

                I'm going to go ahead and call this a positive. If the means for measuring ability in some fields is beaten by a stochastic parrot then these fields need to adapt their methods so that testing measures understanding in a variety of ways.

                I'm only slightly bitter because I was always rubbish at long form essays. Thankfully in CS these were mostly an afterthought.

                • zztop44 2 years ago

                  What if the credentials in question are a high school certificate? ChatGPT has certainly made life more difficult for high school and middle school teachers.

                  • VBprogrammer 2 years ago

            In which ways is it more difficult? Presumably a high school certificate encompasses more than just writing long-form essays? You presumably have to show understanding in worked examples in maths, physics, chemistry, biology, etc.?

                    I feel like the invention of calculators probably came with the same worries about how kids would ever learn to count.

              • munksbeer 2 years ago

                > It has devalued and replaced the work of people who relied on that work for their incomes.

                Many people (myself included) would argue that is true for almost all technological progress and adds more value to society as a whole than it takes away.

                Obviously the comparisons are not exact, and have been made many times already, but you can just pick one of countless examples that devalued certain workers wages but made so many more people better off.

                • sgt101 2 years ago

                  Sure - agree... but

                  - because it's happened before doesn't make it ok (especially for the folks who it happens to)

                  - many more people may be better off, and it may be a social good eventually, but this is not for sure

                  - there is no mechanism for any redistribution or support for the people suddenly and unexpectedly displaced.

                  • munksbeer 2 years ago

                    Well then, are we in agreement that you can't use the argument that ChatGPT replaced some people's work as an overall negative without a lot more qualification?

              • vbo 2 years ago

                And so has the internet. Some use it for good, others for evil.

                These are behaviours and traits of the user, not the tool.

                • sgt101 2 years ago

                  I can use a 5ltr V8 to drive to school and back or a Nissan Leaf.

                  Neither thing is evil, or good, but the choice of what is used and what is available to use for a particular task has moral significance.

            • jll29 2 years ago

              I think it's fair to say that after a lot of empty promises, AI research finally delivered something that can "wow" the general population, and has been demonstrated to be useful for more than a single use case.

              I know a law firm that tried ChatGPT to write a legal letter, and they were shocked that it used the same structure they were told to use in law school (little surprise here, actually).

              • latexr 2 years ago

                I also know of a lawyer who tried ChatGPT and was shocked by the results.

                https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-f...

              • u32480932048 2 years ago

                I used it to respond to a summons which, due to postal delays, I had to get in the mail that afternoon. I typed my "wtf is this" story into ChatGPT, it came up with a response and asked for dismissal. I did some light editing to remove/edit claims that weren't quite true or I felt were dramatically exaggerated, and a week later, the case was dismissed (without prejudice).

                It was total nonsense anyway, and the path to dismissal was obvious and straightforward, starting with jurisdiction, so I'm not sure how effective it would be in a "real" situation. I definitely see it being great for boilerplate or templating though.

            • shafyy 2 years ago

              For what it's worth, I didn't downvote you.

              Depends on what you define as positive impact. Helping programmers write boilerplate code faster? Summarizing a document for lazy fuckers who can't get themselves to read two pages? OK, not sure this is what I would consider "positive impact".

              For a list of negative impacts, see the sister comments. I'd also like to add that the energy usage of LLMs like ChatGPT is immensely high, and this at a time when we need to cut carbon emissions. And it's mostly used for shits and giggles by some boomers.

              • sebzim4500 2 years ago

                Your examples seem so obviously to me to be a "positive impact" that I can't really understand your comment.

                Of course saving time for 100 million people is positive.

                • mejutoco 2 years ago

                  Not arguing either way, but it is conceivable that reading comprehension (which is not stellar in general) can get even worse. Saving time for the same quality would be a positive. Saving time for a different quality might depend on the use-case. For a rough summary of a novel it might be ok, for a legal/medical use, might literally kill you.

                • shafyy 2 years ago

                  "Positive impact" for me would be things like improve social injustice, reduce poverty, reduce CO2 emissions, etc. Not saying that it's a negative impact to make programmers more productive, but it's not like ChatGPT is saving the world.

    • xinayder 2 years ago

      I'd like to add that, besides the problems others have listed, OpenAI seems like it was built on top of the work of others who were researching AI, and it suddenly took all this "free work" from the contributors and sold it for a profit, where the original contributors didn't even see a single dime from their work.

      To me it seems like it's the usual case of a company exploiting open source and profiting off others' contributions.

      • sgt101 2 years ago

        Personally I don't think that the use of previous research is an issue. The fact is that the investment and expertise required to take that research and create GPT-4 were very significant, and the endeavour was pretty risky. Very few people five years ago thought that very large models could be created that would be able to encode so much information or retrieve it so well.

      • saiya-jin 2 years ago

        Or any other, say, pharma company massively and constantly using basic research done by universities worldwide with our tax money. And then you go to the pharmacy and buy medicine, which cost 50 cents to manufacture and distribute, for 50 bucks.

        I don't like the whole idea either, but various communism-style alternatives just don't work very well.

        • tokai 2 years ago

          Pharma companies spend billions on financing public research. Hell, the Novo Nordisk Foundation is the biggest charitable foundation in the world.

    • ascv 2 years ago

      It seemed to me the entire point of the legal structure was to raise private capital. It's a lot easier to cut a check when you might get up to 100x your principal versus just a tax write off. This culminated in the MS deal: lots of money and lots of hardware to train their models.

      • foota 2 years ago

        What's confusing is that... OpenAI wouldn't ever be controlled by those that invested, and the owners (e.g., the board) aren't necessarily profit-seeking. At least when you take a minority investment in a normal startup, you are generally assuming that the owners are in it to have a successful business. It's just a little weird all around to me.

        • quickthrower2 2 years ago

          Microsoft get to act as a sole distributor for the enterprise. That is quite valuable. Plus they are still in at the poker table and a few raises from winning the pot (maybe they just did!) but even without this chaos they are likely setting themselves up to be the for-profit investor if it ever transitioned to that. For a small amount of money (for MS) they get a lot of upside.

  • Sunhold 2 years ago

    I would rather OpenAI have a diverse base of income from commercialization of its products than depend on "donations" from a couple ultrarich individuals or corporations. GPT-4 cost $100 million+ to train. That money needs to come from somewhere.

    • jimmySixDOF 2 years ago

      Then there is the inference cost, said to be as high as $0.30 per question asked, based on the cost of compute infrastructure.

  • kmlevitt 2 years ago

    People keep speculating sensational, justifiable reasons to fire Altman. But if these were actual factors in their decision, why doesn't the board just say so?

    Until they say otherwise, I am going to take them at their word that it was because he a) hired two people to do the same project, and b) gave two board members different accounts of the same employee. It's not my job nor the internet's to try to think up better-sounding reasons on their behalf.

    • codeulike 2 years ago

      For what its worth, here's a thread from someone who used to work with Sam who says they found him deceptive and manipulative

      https://twitter.com/geoffreyirving/status/172675427022402397...

      I have no details of OpenAI's Board’s reasons for firing Sam, and I am conflicted (lead of Scalable Alignment at Google DeepMind). But there is a large, very loud pile on vs. people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things.

      ...

      Third, my prior is strongly against Sam after working for him for two years at OpenAI:

      1. He was always nice to me.

      2. He lied to me on various occasions

      3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

      • cma 2 years ago

        Here's another anecdote, posted in 2011 but about something even earlier:

        > "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company we were trying to sign up. It was real serious.

        > We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.

        > We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."

        https://news.ycombinator.com/item?id=3048944

        • kmlevitt 2 years ago

          Call me unscrupulous, but I’m tolerant of stuff like that. It’s the willingness to do things like that that makes the difference between somebody reaching the position of CEO of a multibillion dollar company, or not. I’d say virtually everybody who has reached his level of success in business has done at least a few things like that in their past.

          • cma 2 years ago

            If you do that kind of thing internally though or against the org with an outside interest it isn't surprising that it wouldn't go over well. Though that isn't confirmed yet as they never made a concrete allegation.

      • kmlevitt 2 years ago

        The general anecdotes he gives later in the thread line up with their stated reasons for firing him: he hired another person to do the same project (presumably without telling them), and he gave two different board members different opinions of the same person.

        Those sound like good reasons to dislike him and not trust him. But ultimately we are right back where we started: they still aren't good enough reasons to suddenly fire him the way they did.

        • leoc 2 years ago

          It's possible that what we have here is one of those situations where people happily rely on oral reports and assurances for a long time, then realise later that they really, really should have been asking for and keeping receipts from the beginning.

          • kmlevitt 2 years ago

            Not sure if you’re referring to Sam, the board, or everybody trying to deal with them. But either way, yeah.

    • Wronnay 2 years ago

      The issue with these two explanations from the board is that this is normally not something that would result in firing the CEO.

      In my eyes these two explanations are simple errors which can happen to anybody, and in a normal situation you would talk about these issues and resolve them in five minutes without firing anybody.

      • kmlevitt 2 years ago

        I agree with you. But that leads me to believe that they did not, in fact, have a good reason to fire their CEO. I'll change my mind about that if or when they provide better reasons.

        Look at all the speculation on here. There are dozens of different theories about why they did what they did running so rampant people are starting to accept each of them as fact, when in fact probably all of them are going to turn out to be wrong.

        People need to take a step back and look at the available evidence. This report is the clearest indication we have gotten of their reasons, and they come from a reliable source. Why are we not taking them at their word?

        • katastofik 2 years ago

          > Why are we not taking them at their word?

          Ignoring the lack of credibility in the given explanations, people are, perhaps, also wary that taking boards/execs at their word hasn't always worked out so well in the past.

          Until an explanation that at least passes the sniff test for truthiness comes out, people will keep speculating.

          And so they should.

          • kmlevitt 2 years ago

            Right, except most people here are proposing BETTER reasons for why they fired him. Which ignores that if any of these better reasons people are proposing were actually true, they would just state them themselves instead of using ones that sound like pitiful excuses.

            • katastofik 2 years ago

              Whether it be dissecting what the Kardashians ate for breakfast or understanding why the earth may or may not be flat, seeking to understand the world around us is just what we do as humans. And part of that process is "speculating sensational, justifiable reasons" for why things may be so.

              Of course, what is actually worth speculating over is up for debate. As is what actually constitutes a better theory.

              But, if people think this is something worth pouring their speculative powers into, they will continue to do so. More power to them.

              Now, personally, I'm partly with you here. There is an element of futility in speculating at this stage given the current information we have.

              But I'm also partly with the speculators here insofar as the given explanations not really adding up.

              • kmlevitt 2 years ago

                Think you're still missing what I'm saying. Yes, I understand people will speculate. I'm doing it myself here in this very thread.

                The problem is people are beginning to speculate reasons for Altman's firing that have no bearing or connection to what the board members in question have actually said about why they fired him. And they don't appear to be even attempting to reconcile their ideas with that reality.

                There's a difference between trying to come up with theories that fit with the available facts and everything we already know, and ignoring all that to essentially write fanfiction that cast the board in a far better light than the available information suggests.

                • katastofik 2 years ago

                  Agreed. I think I understood you as being more dismissive of speculation per se.

                  As for the original question -- why are we not taking them at their word? -- the best I can offer is my initial comment. That is, the available facts (that is, what board members have said) don't really match anything most people can reconcile with their model of how the world works.

                  Throw this in together with a learned distrust of anything that's been fed through a company's PR machine, and are we really surprised people aren't attempting to reconcile the stated reality with their speculative theories?

                  Now sure, if we were to do things properly, we should at least address why we're just dismissing the 'facts' when formulating our theories. But, on the other hand, when most people's common sense understanding of reality is that such facts are usually little more than fodder for the PR spin machine, why bother?

    • zztop44 2 years ago

      I agree, and what’s more I think the stated reasons make sense if (a) the person/people impacted by these behaviours had sway with the board, and (b) it was a pattern of behaviour that everyone was already pissed off about.

      If board relations have been acrimonious and adversarial for months, and things are just getting worse, then I can imagine someone powerful bringing evidence of (yet another instance of) bad/unscrupulous/disrespectful behavior to the board, and a critical mass of the board feeling they’ve reached a “now or never” breaking point and making a quick decision to get it over with and wear the consequence.

      Of course, it seems that they have miscalculated the consequences and botched the execution. Although we’ll have to see how it pans out.

      I’m speculating like everyone else. But knowing how board relations can be, it’s one scenario that fits the evidence we do have and doesn’t require anyone involved to be anything other than human.

      • kmlevitt 2 years ago

        Yeah I’m leaning toward this possibility too. The things they have mentioned so far are the sorts of things that make you SO MAD when they actually happen to you, yet that sound so silly and trivial in the aftermath of trying to explain to everybody else why you lost your temper over it.

        I’m guessing he infuriated them with combinations of “white” lies, little sins of omission, general two-facedness, etc., and they built it up in their heads and with each other to the point it seemed like a much bigger deal than it objectively was. Now people are asking for receipts of categorical crimes or malfeasance, and nothing they can say is good enough to justify how they overreacted.

    • wordpad25 2 years ago

      >People keep speculating

      Your take isn't uncommon, only you're missing the main point of your interpretation: that the board is fully incompetent if it truly was that petty of a reason to ruin the company.

      It's not even that it's not a justifiable reason, but they did it without getting legal advice or consulting with partners and didn't even wait for markets to close.

      Board destroyed billions in brand and talent value for OpenAI and Microsoft in a mid day decision like that.

      This is also on Sam Altman himself for building and then entertaining such an incompetent board.

      • qwytw 2 years ago

        > that the board is fully incompetent if it was truly that petty of a reason to ruin the company

        It's perfectly obvious that these weren't the actual reasons. However yes, they are still incompetent because they couldn't think of a better justification (amongst other reasons which led to this debacle).

      • kmlevitt 2 years ago

        >Your take isn't uncommon, only are missing the main point of your interpretation - that the board is fully incompetent if it was truly that petty of a reason to ruin the company.

        No, I totally agree. In fact what annoys me about all the speculation is that it seems like people are creating fanfiction to make the board seem much more competent than all available evidence suggests they actually are.

  • Guthur 2 years ago

    If you don't think the likes of Sam Altman, Eric Schmidt, Bill Gates and the lot of them want to increase their own power, you need to think again. At best these individuals are just out to enrich themselves, but many of them demonstrate a desire to affect prevailing politics, and so I don't see how they are different, just more subtle about it.

    Why worry about the Sauds when you've got your own home grown power hungry individuals.

    • achenet 2 years ago

      Because our home-grown power-hungry individuals are more likely to be okay with things like women dressing how they want, homosexuality, religious freedom, drinking alcohol, having dogs, and other decadent Western behaviors which we've grown very attached to.

  • PeterStuer 2 years ago

    What is interesting is the total absence of 3 letter agency mentions from all of the talk and speculation about this.

    • smolder 2 years ago

      I don't think that's true. I've seen at least one other person bring up the CIA in all the "theorycrafting" about this incident. If there's a mystery on HN, likelihood is high of someone bringing up intelligence agencies. By their nature they're paranoia-inducing and attract speculation, especially for this sort of community. With my own conspiracy theorist hat on, I could see making deals with the Saudis regarding cutting edge AI tech potentially being a realpolitik issue they'd care about.

      • PeterStuer 2 years ago

        I'm sure they are completely hands-off about breakthrough strategic tech. Unless it's the Chinese or the Russians or the Iranians or any other of the deplorables, but hey, if it's none of those, we rather have our infiltrants focus on tiktok or twitter ... /s

  • mcmcmc 2 years ago

    This feels like a lot of very one sided PR moves from the side with significantly more money to spend on that kind of thing

  • VectorLock 2 years ago

    It feels like Altman started the whole non-profit thing so he could attract top researchers with altruistic sentiment for sub-FAANG wages. So the whole "Altman wasn't candid" thing seems to track.

    • mcpackieh 2 years ago

      Reminds me of a certain rocket company that specializes in launching large satellite constellations that attracts top talent with altruistic sentiment about saving humanity from extinction.

      • VectorLock 2 years ago

        No surprise that Musk co-founded OpenAI then.

        Seems to be pretty much his MO across the board.

    • saagarjha 2 years ago

      Ok, but the wages were excellent (assuming that the equity panned out, which it seemed very likely it would until last week).

      • margorczynski 2 years ago

        So it is possible a lot of those people against Altman being ousted are like that because they know the equity they hold will take a dump?

        I'm not saying they are hypocrites or bad people because of it, just wondering if that might be a factor also.

        • VectorLock 2 years ago

          I'd say the 650 out of the 700 people who signed it were those who joined later for the money, and not early for the non-profit's mission.

      • VectorLock 2 years ago

        Excellent but not FAANG astronomical.

  • dariosalvi78 2 years ago

    > you have the single greatest shitshow in tech history

    the second after Musk taking over Twitter

  • bryanrasmussen 2 years ago

    >Combine that with a totally inexperienced board, and D'Angelo's maneuvering, and you have the single greatest shitshow in tech history

    do we have a ranking of shitshows in tech history, though? How does this really compare to Jobs' ouster at Apple,

    or Cambridge Analytica and The Facebook's "we must do better" greatest hits?

  • LZ_Khan 2 years ago

    Taking money from the Saudis alone should raise a big red flag.

  • roschdal 2 years ago

    > the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

    This!

  • P_I_Staker 2 years ago

    > rich and powerful people using the technology to enhance their power over society.

    We don't know the end result of this. It might not be in the interest of the powerful. What if everyone is out of a job? That might not be such a great prospect for the powers that be, especially if everyone is destitute.

    Not saying it's going down that way, but it's worth considering. What if the powers that be are worried about people getting out of line and retard the progress of AI?

  • blackoil 2 years ago

    > money from the Saudis on the order of billions of dollars to make AI accelerators

    Was this for OpenAI or an independent venture? If OpenAI, then it's a red flag, but for an independent venture it seems like a non-issue. There is demand for AI accelerators, and he wants to enter that business. Unless he is using OpenAI money to buy inferior products, or OpenAI wants to work on something competing, there is no conflict of interest and the OpenAI board shouldn't care.

  • AtlasBarfed 2 years ago

    At some point this is probably about a closed source "fork" grab. Of course that's what practically the whole company is probably planning.

    The best thing about AI startups is that there is no real "code". It's just a bunch of arbitrary weights, and it can probably be obfuscated very easily such that any court case will just look like gibberish. After all, that's kind of the problem with AI "code". It gives a number after a bunch of regression training, and there's no "debugging" the answer.

    Of course this is about the money, one way or another.

  • cdogl 2 years ago

    > Then in the past week, he's going and taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

    This prediction predated any of the technology to create even a rudimentary LLM and could be said of more-or-less any transformative technological development in human history. Famously, Marxism makes this very argument about the impact of the industrial revolution and the rise of capital.

    Geoffrey Hinton appears to be an eminent cognitive psychologist and computer scientist (edit: nor an economist). I'm sure he has a level of expertise I can't begin to grasp in his field, but he's no sociologist or historian. Very few of us are in a position to make predictions about the future - least of all in an area where we don't even fully understand how the _current_ technology works.

    • adrianN 2 years ago

      Was Marx wrong?

      • robertlagrant 2 years ago

        Probably. Or at least that turned out to not matter so much. The alternative, keeping both control of resources and direct power in the state, seems to keep causing millions of deaths. Separating them into markets for resources and power for a more limited state seems to work much better.

        This idea also ignores innovation. New rich people come along and some rich people get poor. That might indicate that money isn't a great proxy for power.

        • danans 2 years ago

          > New rich people come along and some rich people get poor.

          Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

          > That might indicate that money isn't a great proxy for power.

          Due to the diminishing marginal utility of wealth for day-to-day existence, its only value to an extremely wealthy person, after endowing their heirs, is power.

          • robertlagrant 2 years ago

            > Absent massive redistribution that is usually a result of major political change (i.e. the New Deal), rich people tend to stay rich during their lifetimes and frequently their families remain so for generations after.

            The rule of thumb is that it lasts up to three generations, and only for very, very few people. They are also, for everything they buy, and everyone they employ, paying tax. Redistribution isn't the goal; having funded services, with extra to help people who can't pay, is the goal. It's not a moral crusade.

            > Due to the diminishing marginal utility of wealth for day to day existence, it's only value to an extremely wealthy person after endowing their heirs is power.

            I think this is a non sequitur.

            • depr 2 years ago

              What is your rule of thumb based on?

              In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

              > They are also, for [..] they employ, paying tax

              Is that a benefit of having rich people? If companies were employee-owned that tax would still be paid.

              [0]: https://www.iamexpat.nl/expat-info/dutch-expat-news/wealthie...

              • robertlagrant 2 years ago

                > What is your rule of thumb based on?

                E.g. [0]

                > In, for example, the Netherlands the richest people pay less tax [0]. Do you think this is not the case in many other countries?

                That's a non sequitur from the previous point. However, on the "who pays taxes?" point, that article is careful to only talk about income tax in absolute terms, and indirect taxes in relative terms. It doesn't appear to be trying to make an objective analysis.

                > Is that a benefit of having rich people?

                I don't share the assumption that people should only exist if they're a benefit.

                > If companies were employee-owned that tax would still be paid.

                Some companies are employee-owned, but you have to think how that works for every type of business. Assuming that it's easy to make a business, and the hard bit is the ownership structure is a mistake.

                [0] https://www.thinkadvisor.com/2016/08/01/why-so-many-wealthy-...

                • depr 2 years ago

                  >I don't share the assumption that people should only exist if they're a benefit.

                  Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

                  Anyway, if you don't think it matters if they are of benefit, then why did you bring up the fact that they pay taxes?

                  • robertlagrant 2 years ago

                    > Well it's not a matter of the people existing, it's whether they are rich or not. They can exist without the money.

                    I meant people with a certain amount of money. I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

                    > Anyway, if you don't think it matters if they are of benefit

                    I don't know what this means.

                    > then why did you bring up the fact that they pay taxes?

                    I bring it up because saying they pay less in income taxes doesn't matter if they're spending money on stuff that employs people (which creates lots of tax) and gets VAT added to it. Everything is constantly taxed, at many levels, all the time. Pretending we live in a society where not much tax is paid seems ludicrous. Lots of tax is paid. If it's paid as VAT instead of income tax - who cares?

                    • depr 2 years ago

                      What I meant is:

                      >I don't think we should be assessing pros or cons of economic systems based on whether people get to keep their money.

                      but earlier you said:

                      >They are also, for everything they buy, and everyone they employ, paying tax.

                      So if we should not assess the economic system based on whether people keep their money, i.e. pay tax, then why mention that they pay tax? It doesn't seem relevant.

                      • robertlagrant 2 years ago

                        > So if we should not assess the economic system based on whether people keep their money, i.e. pay tax

                        Not just pay tax. People lose money over generations for all sorts of reasons.

                        I brought up tax in the context of "redistribution", as there's a growing worldview that says tax is not a thing to pay for central services, but more just a way to take money from people who have more of it than you do.

            • danans 2 years ago

              >> Due to the diminishing marginal utility of wealth for day to day existence, it's only value to an extremely wealthy person after endowing their heirs is power.

              > I think this is a non sequitur.

              I mean after someone can afford all the needs, wants, and luxuries of life, the utility of any money they spend is primarily power.

        • aylmao 2 years ago

          > New rich people come along and some rich people get poor

          This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

          • robertlagrant 2 years ago

            > This is an overly simplistic look, and disregards a lot of history where, unsurprisingly, the reason there was wealth redistribution wasn't "innovation" but government policy

            The point is that wealth and power aren't interchangeable. You're right that government bureaucrats have actual power, including that to take people's stuff. But you've not realised that that actual power means the rich people don't have power. There were rich people in the USSR that were killed. They had no power; the killers had the power in that situation.

            • whelp_24 2 years ago

              Wealth is control of resources, which is power. The way to change power is through force; that's why you need swords to remove kings and to remove stacks of gold. See assassinations, war, the U.S.

              • robertlagrant 2 years ago

                You need swords to remove kings because they combined power and economy. All potential tyrannies do so: monarchy, socialism, fascism, etc. That's why separating power into the state and economy into the market gets good results.

                • whelp_24 2 years ago

                  The separation is impossible, if you don't control the resources, you don't control the country.

                  >separating power into the state and economy into the market gets good results.

                  How do you think this would be done? How do you remove power from money? Money is literally the ability to convert numbers into labor, land, food...

                  • robertlagrant 2 years ago

                    Power is things like: can lock someone in a box due to them not giving a percentage of their income; can send someone to die in another country; can stop someone building somewhere; can demand someone's money as a penalty for an infraction of a rule you wrote.

                    You don't need money for those things.

                    Money (in a market) can buy you things, but only things people are willing to sell. You don't exert power; you exchange value.

                    • whelp_24 2 years ago

                      Money can and does do all of those things. Through regulatory capture, rent seeking, even just good old hiring goons.

                      The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free. The government can be thought of as having unfathomable amounts of money. The assets of a country includes the entire country (less anyone with enough money to defend it).

                      If a sword is kinetic energy, money is potential energy. It is a battery that only needs to be connected to the right place to be devastating. And money can buy you someone who knows the right place.

                      Governments have power because they have resources (money) not the other way around.

                      • robertlagrant 2 years ago

                        > Through regulatory capture, rent seeking, even just good old hiring goons.

                        Regulatory capture is using the state's power. The state is the one with the power. Rent seeking is the same. Hiring goons is illegal. If you're willing to include illegal things then all bets are off. But from your list of non-illegal things, 100% of them are the state using its power to wrong ends.

                        > The government itself uses money to do those things. Police don't work for free, prisons aren't built for free, guns aren't free.

                        Yes, but the point about power is the state has the right to lock you up. How it pays the guards is immaterial; they could be paid with potatoes and it'd still have the right. They could just be paid in "we won't lock you up if you lock them up". However, if Bill Gates wants to publicly set up a prison in the USA and lock people in it, he will go to jail. His money doesn't buy that power.

                        So, no. The state doesn't have power because it has enough money to pay for a prison and someone to throw you in it. People with money can't do what the state does.

                        • whelp_24 2 years ago

                          The state is not a source of power, it is a holder of it. Plenty of governments have fallen because they ran out of resources, and any government that runs out of resources will die. The U.S. government has much, much more money than Bill Gates, but I am sure he could find a way to run a small prison, and escape jail time if needed.

                          The state only has the right to do something because it says it does. It can only say it does because it can enforce it in its territory. It can only enforce in its territory because it has people who will do said enforcement (or robots, hypothetically). The people will only enforce because the government sacrifices some of its resources to them (or sacrifices resources to build bots). Even slaves need food, and people treated well enough to control them. Power doesn't exist without resources; the very measure of a state is the amount of resources it controls.

                          Money is for resources.

                          I am not arguing that anyone currently has the resources of a nation-state; that's hard to do when a state can pool the money of a few thousand square miles of people. I am arguing it is money that makes a state powerful.

            • thiagoharry 2 years ago

              > There were rich people in the USSR that were killed. They had no power

              Precisely, they were not a capitalist society, where capital (and not simply "money" as you said) is source of power, like in capitalist societies.

      • cdogl 2 years ago

        > Was Marx wrong?

        pt. 1: Whether he was right or wrong isn't pertinent. You can find plenty of eminent contemporaries of Marx who claimed the opposite. My point was that this is an argument made about technological change throughout history which has become a cliché, and in my opinion it remains a cliché regardless of how eminent (in a narrow field) the person making that claim is. Part of GP's argument was from authority, and I question whether it is even a relevant authority given the scope of the claims.

        > Was Marx Wrong?

        pt. 2: I was once a Marxist and still consider much Marxist thought and writing to be valuable, but yes: he was wrong about a great many things. He made specific predictions about the _inevitable_ development of global capital that have not played out. Over a century later, the concentration of wealth and power in the hands of the few has not changed, but the quality of life of the average person on the planet has increased immensely - in a world where capitalism is hegemonic.

        He was also wrong about the inevitably revolutionary tendencies of the working class. As it turns out, the working class in many countries tend to be either centre right or centre left, like most people, with the proportion varying over time.

        • denton-scratch 2 years ago

          > He was also wrong about the inevitably revolutionary tendencies of the working class.

          Marx's conception of the "working class" is a thing that no longer exists; it was of a mass, industrial, urban working class, held down by an exploitative capitalist class, without the modern benefits of mass education and free/subsidized health care. The inevitability of the victory of the working class was rhetoric from the Communist Manifesto; Marx did anticipate that capitalism would adapt in the face of rising worker demands. Which it did.

          • thiagoharry 2 years ago

            Not true. In Das Kapital, Marx comments that the working class is not only and necessarily factory workers, even citing the example of teachers: just because they work in a knowledge factory, instead of a sausage factory, this does not change anything. Marx also distinguished between complex and simple labor, and there is nothing in Marx's writings that says it is impossible for a capitalist society to become more complex, so that we need more and more complex labor, which requires more education. Quite the opposite, in fact. One could infer from his analysis that capitalist societies were becoming more complex and such changes would happen.

            Moreover, you would only know whether he was wrong about the victory of the working class after the end of capitalism. The bourgeoisie cannot win the class struggle, as they need the working class. So either the central contradiction in capitalism will change (the climate crisis could potentially do this), capitalism will end in some other non-anticipated way (a meteor? some disruptive technology not yet known?), or the working class will win. Until then, the class struggle will simply continue. An eternal capitalism that never ends is an impossible concept.

      • noirscape 2 years ago

        For his prediction of society? Yes.

        Not even talking about the various tin-pot dictators paying nominal lip service to him, but Marx predicted that the working class would rise up against the bourgeoisie/upper class because of their mistreatment during the industrial revolution in, well, a revolution, and that this would somehow create a classless society. (I'll note that Marx pretty much didn't state how to go from "revolution" to "classless society", which is why you have so many communist dictators; that in-between step can be turned into a dictatorship for as long as they claim that the final bit, a classless society, is a permanent WIP, which all of them did.)

        Now unless you want to argue we're still in the industrial revolution, it's pretty clear that Marx was inaccurate in his prediction given... that didn't happen. Social democracy instead became a more prevailing stream of thought (in no small part because few people are willing to risk their lives for a revolution) and is what led to things like reasonable minimum wages, sick days, healthcare, elderly care, and so on and so forth being made accessible to everyone.

        The quality of which varies greatly by country (and you could probably consider the popularity of Marxist revolutionary thought in a country today as directly correlated to the state of workers' rights in that country; people in stable situations will rarely pursue ideologies that include revolutions), but practically speaking - yeah, Marx was inaccurate on the idea of a revolution across the world happening.

        The lens through which Marx examined history is however just that - a lens to view it through. It'll work well in some cases, less so in others. Looking at it by class is a useful way to understand it, but it won't cover things being motivated for reasons outside of class.

        • ETH_start 2 years ago

          Anywhere where the working class rose up against the bourgeoisie/upper class because of their "mistreatment" (sense of victimhood instilled in them by Marxism), became dramatically worse in its civil liberties, and in its economic trajectory, in every respect.

          And in most places there was no such uprising, and incidentally, those places fared far better.

          So no, Marx was resoundingly proven wrong.

          Even during his own lifetime, some of his pseudoeconomic ideas/doomsaying was proven wrong.

          He claimed, like many demagogues and economic laymen, that automation would reduce the demand for labor, and with it, wages:

          https://www.marxists.org/archive/marx/works/1847/wage-labour...

          >>But even if we assume that all who are directly forced out of employment by machinery, as well as all of the rising generation who were waiting for a chance of employment in the same branch of industry, do actually find some new employment – are we to believe that this new employment will pay as high wages as did the one they have lost? If it did, it would be in contradiction to the laws of political economy. We have seen how modern industry always tends to the substitution of the simpler and more subordinate employments for the higher and more complex ones. How, then, could a mass of workers thrown out of one branch of industry by machinery find refuge in another branch, unless they were to be paid more poorly? and

          >>To sum up: the more productive capital grows, the more it extends the division of labour and the application of machinery; the more the division of labour and the application of machinery extend, the more does competition extend among the workers, the more do their wages shrink together.

          This was proven wrong in his own lifetime as factory worker wages rapidly grew in industrializing Britain.

      • Palpatineli 2 years ago

        Yes because AGI would invalidate the entirety of das Kapital.

        • thiagoharry 2 years ago

          I don't think that AGI invalidates Das Kapital. AGI is just another technology that automates human labor. It does not matter that it's about intellectual labor. Even if we had sentient machines, at first they would be slaves. So in Das Kapital terminology, they would be means of production used in industry, which would not create surplus value. Exactly like human slave labor.

          If things change, then either it is because they rebel or because they will be accepted as sentient beings like humans. In these sci-fi scenarios, capitalism could indeed either end or change into something completely different, and I agree that this would invalidate Das Kapital, which tries to explain capitalist society, not societies under other future economic systems. But outside sci-fi scenarios, I don't think there's anything that invalidates Marx's analysis.

      • matkoniecz 2 years ago

        > Was Marx wrong?

        Not sure, but attempts to treat him seriously (or pretend to) ended horribly, with basically no benefits.

        Is there any good reason to care what he thought?

        Looking at history of Poland (before, during and after PRL) gave me no interest whatsoever to look into his writings.

      • osigurdson 2 years ago

        If you are a Marxist, no, otherwise yes.

  • dgellow 2 years ago

    If I understood correctly Altman was CEO of the for-profit OpenAI, not the non-profit. The structure is pretty complicated: https://openai.com/our-structure

    • tomhallett 2 years ago

      I’m curious: perhaps one of the board members “knew” the only way for OpenAI to be truly successful was for it to be a non-profit and “don’t be evil” (Google’s mantra), and believed that if they set expectations correctly and put caps on the for-profit side, it could be successful. But they didn’t fully appreciate how strong the market forces would be, where all of the focus/attention/press would go to the for-profit side. Sam’s side has such an intrinsic gravity that it’s inevitable it will break out of its cage.

      Note: I’m not making a moral claim one way or the other, and I do agree that most tech companies will grow to a size/power/monopoly that their incentives will deviate from the “common good”. Are there examples of openai’s structure working correctly with other companies?

  • detourdog 2 years ago

    To me this is the ultimate Silicon Valley bike shedding incident.

    Nobody can really explain the argument, there are "billions" or "trillions" of dollars involved, most likely the whole thing will not change the technical path of the world.

  • blackoil 2 years ago

    > There has to be a bigger story to this.

    Rather than assuming the board made a sound decision, it could simply be that the board acted stupidly and egotistically. Unless they can give better reasons, that is a logical inference.

  • xinayder 2 years ago

    So they actually kicked him out because he transformed a non-profit into a money printing machine?

    • whelp_24 2 years ago

      You say that like it's a bad thing for them to do? You wouldn't donate to the Coca-Cola company.

  • osrec 2 years ago

    What does TC style mean?

  • k12sosse 2 years ago

    MBS? Seriously? How badly do you need the money.. good luck not getting hacked to pieces when your AI insults his holiness

  • curiousgal 2 years ago

    > taking money from the Saudis on the order of billions of dollars to make AI accelerators, even though the single greatest threat from strong AI (according to Hinton) is rich and powerful people using the technology to enhance their power over society.

    This is absolutely peak irony!

    US pouring trillions into its army and close to nothing into its society (infrastructure, healthcare, education...) : crickets

    Some country funding AI accelerators: THEY ARE A THREAT TO HUMANITY!

    I am not defending Saudi Arabia, but the double standards and outright hypocrisy are just laughable.

    • 0xDEADFED5 2 years ago

      it's okay to give an example of something bad without being required to list all the other things in the universe that are also bad.

    • xinayder 2 years ago

      The difference is that the US Army wasn't created with the intent to "keep guns from the hands of criminals" and we all know it's a bad actor.

      OpenAI, on the other hand...

  • zw123456 2 years ago

    100% agree. I've seen this type of thing up close (much smaller potatoes, but the same type of thing) and whatever is getting aired publicly is most likely not the real story. Not sure if the reasons you guessed are it or not; we probably won't know for a while, but your guesses are as good as mine.

kmlevitt 2 years ago

Neither of these reasons have anything to do with a lofty ideology regarding the safety of AGI or OpenAI’s nonprofit status. Rather it seems they are micromanaging personnel decisions.

Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told. This is important, because people were siding with the board under the understanding that this firing was led by the head research scientist, who is concerned about AGI. But now it looks like the board is represented by D’Angelo, a guy who has his own AI chatbot company and a bigger conflict of interest with OpenAI than ever since dev day, when OpenAI launched highly similar features.

  • 1024core 2 years ago

    > But now it looks like the board is represented by D’Angelo, a guy who has his own AI Chatbot company and a bigger conflict of interest with than ever since dev day, when open AI launched highly similar features.

    Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

    • kmlevitt 2 years ago

      Right now I think that’s the most plausible explanation simply because none of the other explanations that have been floating around make any sense when you consider all the facts. We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up.

      And if it’s wrong, D’Angelo and the rest of the board could help themselves out by explaining the real reason in detail and ending all this speculation. This gossip is going to continue for as long as they stay silent.

      • parl_match 2 years ago

        > This gossip is going to continue for as long as they stay silent.

        Their lawyers are all screaming at them to shut up. This is going to be a highly visible and contested set of decisions that will play out in courtrooms, possibly for years.

        • kmlevitt 2 years ago

          I agree with you. But I suspect the reason they need to shut up is that their actual reason for firing him is not justifiable enough to protect them, and stating it now would just give more ammunition to plaintiffs. If they had caught him red-handed in an actual crime, or even a clear ethical violation, a good lawyer would be communicating that to the press on their behalf.

          High-ranking employees who have communicated with them have already said they have admitted it wasn't due to any security, safety, privacy or financial concerns. So there aren't a lot of valid reasons left. They're not talking because they've got nothing.

          • parl_match 2 years ago

            It doesn't really matter if they have a good case or not, commenting in public is always a terrible idea. I do agree, though, that the board is likely in trouble.

      • Emma_Goldman 2 years ago

        > "We know enough now to know that the “safety-focused nonprofit entity versus reckless profit entity“ narrative doesn’t hold up."

        Why do you think that? It still strikes me as the most plausible explanation.

        • YetAnotherNick 2 years ago

          Greg and Sam were the creators of the current nonprofit structure. And when something similar happened before, with Elon offering to buy the company, Sam declined. That was at a time when getting funding on OpenAI's terms was much harder than it is now, whereas now they could much more easily dictate terms to investors.

          Not saying he couldn't have changed since, but at least this is enough to give him a clear benefit of the doubt unless the board accuses him of something specific.

        • kmlevitt 2 years ago

          The reason I don’t think the board fired him for those reasons is that the board has not said so! We finally have a semi-reliable source on what their grievances were, and apparently they have nothing to do with that.

          It’s weird how many people try to guess why they did what they did without paying any attention to what they actually say and don’t say.

    • insanitybit 2 years ago

      It seems extremely short sighted for the rest of the board to go along with that.

      • sangnoir 2 years ago

        HN has been radiating a lot of "We did it Reddit!" energy these past 4 days. Lots of confident conjecture based on very little. I have been guilty of it myself, but as an exercise in humility, I will come back to these threads in 6 months to see how wrong I and many others were.

        • kmlevitt 2 years ago

          I agree it's all just speculation. But the board aren't doing themselves any favors by not talking. As long as no specific reason for firing him is given, it's only natural that people are going to fill the void with their own theories. If they have a problem with that, they or their attorneys need to speak up.

        • gardenhedge 2 years ago

          That might make an interesting blog post. If you write anything up, you should submit it!

      • adastra22 2 years ago

        Well obviously that wouldn't be the explanation given to other board members. But it would be the reason he instigated this after dev day, and the reason he won't back down (OpenAI imploding? All the better).

        • shandor 2 years ago

          But it’s still surprising that the other three haven’t sacked D’Angelo, then. You’d think with the shitstorm raging and the underlying reasoning seemingly so… inadequate, they would start seeing that D’Angelo was just playing them.

        • rtpg 2 years ago

          But you would need to convince the rest of the board with _something_, right? Like to not only fire this guy, but to do it very publicly, quickly, with the declaration of lying in the announcement.

          There are 3 other people on the board, right? Maybe they're all in on some big masterminding, but I dunno..

          • adastra22 2 years ago

            The one thing they all have in common is being AI safetyists, which Sam is not. I’d bet it’s something to do with that.

    • behnamoh 2 years ago

      > Could this be the explanation? That D'Angelo didn't like how OpenAI was eating his lunch and wanted Sam out? Occam's razor and all that.

      If that were the case, couldn't he get sued by the Alliance (Sam, Greg, the rest)? If he has a conflict of interest, then his decisions as a member of the board would be invalid, right?

      • Moto7451 2 years ago

        I don’t think that’s how it would work out since his conflict was very public knowledge before this point. He plausibly disclosed this to the board at some point before Poe launched and they kept him on.

        Large private VC backed companies also don’t always fall under the same rules as public entities. Generally there are shareholder thresholds (where insider/private shareholders count towards) that in turn cause some of the general Securities/board regulations to kick in.

        • jacquesm 2 years ago

          That's not how it works. If you have a conflict of interest and you remain on a board you are supposed to recuse yourself from those decisions where that conflict of interest materializes. You can still vote on the ones that you do not stand to profit from if things go the way you vote.

      • jacquesm 2 years ago

        The decisions will stand assuming they were arrived at according to the bylaws of the non-profit but he may end up being personally liable.

    • Zolde 2 years ago

      I find this implausible, though it may have played a motivating role.

      Quora was always supposed to be an AI/NLP company, starting by gathering answers from experts for its training data. In a sense, that is level 0 human-in-the-loop AGI. ChatGPT itself is level 1: Emergent AGI, so was already eating Quora's lunch (whatever was left of it after they turned into a platform for self-promotion and log-in walls). There either always was a conflict of interest, or there never was.

      GPTs seemed to have been Sam's pet project for a while now, Tweeting in February: "writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language". A lot of early jailbreaks like DAN focused on "summoning" certain personas, and ideas must have been floated internally on how to take back control over that narrative.

      Microsoft took their latest technology and gave us Sydney "I've been a good bot and I know where you live" Bing: A complete AI safety, integrity, and PR disaster. Not the best of track record by Microsoft, who now is shown to have behind-the-scenes power over the non-profit research organization that was supposed to be OpenAI.

      There is another schism than AI safety vs. AI acceleration: whether to merge with machines or not. In 2017, Sam predicted this merge to fully start around 2025, having already started with algorithms dictating what we see and read. Sam seems to be in the transhumanism camp, where others focus more on keeping control or granting full autonomy:

      > The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

      > Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. https://blog.samaltman.com/the-merge

      So you have a very powerful individual, with a clear product mindset, courting Microsoft, turning Dev Day into a consumer spectacle, first in line to merge with superintelligence, lying to the board, and driving wedges between employees. Ilya is annoyed by Sam talking about existential risks or lying AGIs, when that is his thing. Ilya realizes his vote breaks the impasse, so he does a lukewarm "I go along with the board, but have too much conflict of interest either way".

      > Third, my prior is strongly against Sam after working for him for two years at OpenAI:

      > 1. He was always nice to me.

      > 2. He lied to me on various occasions

      > 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)

      One strategy that helped me make sense of things without falling into tribalism or siding by ideology-match is to consider that both sides are unpleasant snakes. You don't get to be the king of cannibal island without high-level scheming. You don't get to destroy an 80-billion-dollar company and let visa holders soak in uncertainty without some ideological defect. Seems simpler than a clear-cut "good vs. evil" battle, since this weekend was anything but clear.

    • seanhunter 2 years ago

      What’s interesting to me is that someone looked at Quora and thought “I want the guy behind that on my board”.

      • motoxpro 2 years ago

        I was thinking the same thing. This whole thing is surprising and then I look at Quora and think "Eh, makes sense that the CEO is completely incompetent and money hungry"

        Even as I type that: when people talk about the board being altruistic and holding to the OpenAI charter, how in the world can you be that user-hostile, profit-focused, and incompetent at your day job (Quora CEO) and then say "Oh no, but on this board I am an absolute saint and will do everything to benefit humanity"?

      • bambax 2 years ago

        Agreed! Yet in 2014 Sam Altman accepted Quora into one of YC's batches, saying [0]

        > Adam D’Angelo is awesome, and we’re big Quora fans

        [0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch

        • djsavvy 2 years ago

          To be fair, back then it was pretty awesome IMO. I spent a lot of hours scrolling Quora in those days. It wasn’t until at least 2016 that the user experience became unpalatable if memory serves correctly.

      • singularity2001 2 years ago

        It's probably more like they thought "I want Quora's money", and D'Angelo wanted their control.

  • shandor 2 years ago

    I’m confused how the board is still keeping their radio silence 100%. Where I’m from, with a shitstorm this big raging, and the board doing nothing, they might very easily be personally held responsible for all kinds of utterly nasty legal action.

    Is it just different because they’re a nonprofit? Or how on earth the board is thinking they can get away with this anymore?

    • rjzzleep 2 years ago

      This isn't unlike the radio silence Brendan Eich kept when the Mozilla sh* hit the fan. This is, in my opinion, the outcome of really technical and scientific people having been given decades of advice not to talk to the public.

      I have seen this play out many times in different locations for different people. A lot of technical folks like myself were given the advice that actions speak louder than words.

      I was once scouted by a Silicon Valley Selenium browser-testing company. I migrated their cloud offering from VMware to KVM using code I wrote, and then defied my middle manager by improving their entire infrastructure's performance by 40%. My instinct was to communicate this to the leadership, but I was advised not to skip my middle manager.

      The next time I went to the office I got a severance package, and later found out that two hours afterward, during the all-hands, they presented my work as their own. The middle manager went on to become the CTO of several companies.

      I doubt we will ever find out what really happened or at least not in the next 5-10 years. OpenAI let Sam Altman be the public face of the company and got burned by it.

      Personally I had no idea Ilya was the main guy in this company until the drama that happened. I also didn't know that Sam Altman was basically only there to bring in the cash. I assume that most people will actually never know that part of OpenAI.

      • winterplace 2 years ago

        Your instinct was right, who advised you against that?

        What happened in the days before you got the severance package?

        Do you have an email address or a contact method?

        • rjzzleep 2 years ago

          I've seen this advice being given in different situations. I've also met all sorts of engineers that have been given this advice. "Make your manager look good and he will reward you" is kinda the general idea. I guess it can be true sometimes, but I have a feeling that that might be the minority or is at least heavily dependent on how confident that person is.

          I would not be surprised if Sam Altman kept telling the board, and more specifically Ilya, to trust him since they (he) don't understand the business side of things.

          > Do you have an email address or a contact method?

          EDIT: It's in my profile(now).

          > What happened in the days before you got the severance package?

          I went to DEFCON out of pocket and got booted off a conference call supposedly due to my bad hotel wifi.

      • selestify 2 years ago

        Wow, I have nothing to say, other than that’s some major BS!

    • lolinder 2 years ago

      What specific legal action could be pursued against them where you're from? Who would have a cause for action?

      (I'm genuinely curious—in the US I'm not aware of any action that could be taken here by anyone besides possibly Sam Altman for libel.)

      • pama 2 years ago

        I'm guessing that unless the board caves to everything the counterparties ask of it, MSFT lawyers will very soon reveal to the board the full range of possible legal actions against it. The public will probably not see many of these actions until months or years later, but it's super hard to imagine that such a random spree of destruction and conflict will go unpunished.

        • Symmetry 2 years ago

          Whether or not Microsoft has a winnable case, often "the process is the punishment" in cases like these, and it's easy to threaten a long, drawn-out, and expensive legal fight.

      • 23B1 2 years ago

        Shareholder lawsuits happen all the time for much smaller issues.

        • pavlov 2 years ago

          OpenAI is a non-profit with a for-profit subsidiary. The controlling board is at the non-profit and immune to shareholder concerns.

          Investors in OpenAI-the-business were literally told they should think of it as a donation. There’s not much grounds for a shareholder lawsuit when you signed away everything to a non-profit.

          • jacquesm 2 years ago

            Absolutely nobody on a board is immune from judicial oversight. That fiction really needs to go. Anybody affected by their decisions could have standing to sue. They are lucky that nobody has done it so far.

          • 698969 2 years ago

            I guess big in-person investors were told as much, but if it's about that big purple banner on their site, that seems to be an image with no alt text. I wonder if an investor with impaired vision might be able to sue them for failing to communicate that part.

          • 23B1 2 years ago

            Corporate structure is not immunity from getting sued. Evidently HN doesn't understand that lawsuits are a tactic, not a conclusion.

        • lolinder 2 years ago

          Right, but my understanding is that the nonprofit structure eliminates most (if not all) possible shareholder suits.

          • shandor 2 years ago

            As I mentioned in my comment, I'm unaware of the effect of the nonprofit status on this. But like the parent commenter mentioned, I was mostly thinking of laws prohibiting the destruction of shareholder value (edit: whatever that may mean for a nonprofit).

            It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

            There have been many comments that the initial firing of Altman was in a way completely in line with the nonprofit charter, at least if the board could prove that Altman had been executing in a way that jeopardized the Charter.

            But even then, how could the board say they are working in the best interest of even the nonprofit itself, if their company is just disintegrating while they willfully refuse to give any information to public?

            • turquoisevar 2 years ago

              > It just seems ludicrous that the board could run a company into the ground like this and just shrug "nah we're nonprofit so you can't touch us and BTW we don't even need to make any statements whatsoever".

              As ludicrous as that might seem, that's pretty much the reality.

              The only one that would have a cause of action in this is the non-profit itself, and for all intents and purposes, the board of said non-profit is the non-profit.

              Assuming that what people claim is right and this severely damages the non-profit, then as far as the law is concerned, it’s just one of a million other failed non-profits.

              The only caveat to that would be if there were any impropriety, for example, when decisions were made that weren’t following the charter and by-laws of the non-profit or if the non-profit’s coffers have been emptied.

              Other than that, the law doesn’t care. In a similar way the law wouldn’t care if you light your dollar bills on fire.

          • 23B1 2 years ago

            No corporate structure – except for maybe incorporating in the DPRK – can eliminate lawsuits.

  • chucke1992 2 years ago

    It is fascinating considering that D'Angelo has a history with coups (he did the same at Quora, didn't he?)

    • aravindgp 2 years ago

      Wow, this is significant. He did this to Charlie Cheever, the best guy at Facebook and Quora. He got Matt on board and fired Charlie without informing investors. The only difference is that this time a $100 billion company is at stake at OpenAI. The process is similar. This is going very wrong for Adam D'Angelo. With this, I hope the other board members get to the bottom of it, get Sam back, and vote D'Angelo off the board.

      This is school-level immaturity.

      Old story

      https://www.businessinsider.com/the-sudden-mysterious-exit-o...

      • mcv 2 years ago

        People keep talking about an inexperienced board, but this sounds like this D'Angelo might be a bit too experienced, especially in this kind of boardroom maneuvering.

        • jacquesm 2 years ago

          That may be so, but those other times he didn't check whether the arm holding the banana was accidentally attached to a 900-pound gorilla before trying to snatch the banana. And now the gorilla is angry.

    • gorgoiler 2 years ago

      Remember Facebook Questions? While it lives on as lighthearted polls and quizzes, it was originally launched by D’Angelo when he was an FB employee. It was designed to compete with expert Q&A websites and was basically Quora v0.

      When D’Angelo didn’t get any traction with it, he jumped ship and launched his own competitor instead. Kind of a live wire imho.

      https://en.wikipedia.org/wiki/List_of_Facebook_features#Face...

  • dwd 2 years ago

    Do we even have an idea of how the vote went?

    Greg was not invited (losing Sam one vote), and Sam may have been asked to sit out the vote, so the 3 had a majority. Ilya, who is at least on "Team Sam" now, may have voted no. Or he simply went along, thinking he could be next out the door at that point; we just don't know.

    It's probably fair to say that not letting Greg know the board was getting together (and letting it proceed without him there) was unprofessional, and is where Ilya screwed up. It is also the point when Sam should have said: hang on, I want Greg here before this proceeds any further.

    • havercosine 2 years ago

      Naive question. In my part of the world, board meetings for such consequential decisions can never be called on such short notice. The board meeting has to be called days ahead of time, and all the board members must be given a written agenda. They have to acknowledge in writing that they've received this agenda. If procedures such as these aren't followed, the firing cannot stand in a court of law. The number of days is configurable in the shareholders' agreement, but it is definitely not 1 day.

      Do things work differently in America?

      • Zolde 2 years ago

        No. Apparently they had to give 48 hours' notice when calling special teleconference meetings, and only Mira was notified (not a board member); Greg was not even invited.

        > at least four days before any such meeting if given by first-class mail or forty-eight hours before any such meeting if given personally, [] or by electronic transmission.

        But the bylaws also state that a board member may be fired (or resign) at any time, not necessarily during a special meeting. So, technically (not a lawyer): the board gets a majority to fire Sam and executes this decision, notifying Mira in advance of calling the special meeting. During the special meeting, Sam is merely informed that he has already been let go (he has not been a board member since yesterday). All board members were notified in time, since Sam was not a board member during the meeting.

        • mcv 2 years ago

          I don't see how this kind of reasoning can possibly hold up. How can board members not be invited to such an important decision? You can't say they don't have to be there because they won't be board members after the decision; they're still board members before the decision to remove them has been made.

          If Ilya was on the side of Sam and Greg, the other 3 never had a majority. The only explanation is that Ilya voted with the other 3, possibly under pressure, and now regrets that decision. But even then it's weird to not invite Greg.

          And if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

          • Zolde 2 years ago

            Everyone assumes that the vote must have happened during the special meeting, but the decision to fire the CEO (or the CEO stepping down) may happen at any time.

            > if the vote happened in an illegitimate way, I'd expect Sam and Greg to immediately challenge it and ignore the decision, and that didn't happen.

            So perhaps the vote was legit?

            - Investigation concludes Sam has not been consistently candid.

            - Board realizes it has a majority and cause to fire Sam and demote Greg.

            - Informs remaining board members that they will have a special meeting in 48 hours to notify Sam and Greg.

            Still murky, since Sam would have attended the meeting under the assumption that he was part of the board (and still had his access badge, despite already being fired). Perhaps it is also possible to waive the 48 hours? Like: "Hey, here is a Google Meet for a special meeting in a few hours, can we call it, or do we have to wait?"

            • mcv 2 years ago

              If the vote was made when no one was there to see it, did it really happen? There's a reason to make these votes in meetings, because then you've got a record that it happened. I don't see how the board as a whole can make a decision without having a board meeting.

              • Zolde 2 years ago

                Depending on jurisdiction and bylaws, the board may hold a pre-meeting, where informal consensus is reached, and potential for majority vote is gauged.

                Since the bylaws state that the decision to fire the CEO may happen at any time (not required to be during a meeting), a plausible process for this would be to send a document to sign by e-mail (written consent), and have that formalize the board decision with a paper trail.

                Of course, from an ethical, legal, collegial, and governance perspective that is an incredibly nasty thing to do. But if investigation shows signs of the CEO lacking candor, all transparency goes out of the window.

                > But even then it's weird to not invite Greg.

                After Sam was fired (with Ilya's "going along" vote), the rest of the board did not need Ilya anymore for a majority and removed Greg, demoting him to report to Mira. I suspect the board expected Greg to stay, since he was "invaluable", and that Mira would support their pick for the next CEO, but things turned out differently.

                Remember, Sam and Greg were blindsided, board had sufficient time to consult with legal counsel to make sure their moves were in the clear.

        • jacquesm 2 years ago

          Haste is not compatible with board activity unless the circumstances clearly demand it, and that wasn't the case here.

    • moberley 2 years ago

      I find it interesting that the attempted explanations, as unconvincing as they may be, relate to Altman specifically. Given that Brockman was the board chairperson, it is surprising that there don't seem to be any attempts to explain that demotion. Perhaps it's just not being reported to anyone outside, but it makes no sense to me that anyone would assume a person would stay after being removed from a board without an opportunity to be at the meeting to defend their position.

      • Irishsteve 2 years ago

        Maybe the personnel issue was Ilya, and Sam was telling one board member that he had to go and another that he was good.

    • fastball 2 years ago

      I don't understand how you only need 4 people for quorum on a 6-person board.

      • seanhunter 2 years ago

        It depends entirely on how the votes are structured, the issue at hand and what the articles of the company say about the particular type of issue.

        On the board that I was on we had normal matters which required a simple majority except that some members had 2 votes and some got 1. Then there were "Supermajority matters" which had a different threshold and "special supermajority matters" which had a third threshold.

        Generally unless the articles say otherwise I think a quorum means a majority of votes are present[1], so 4 out of 6 would count if the articles didn't say you needed say 5 out of 6 for some reason.

        It's a little different if some people have to recuse themselves for an issue. So say the issue is "Should we fire CEO Sam Altman", the people trying to fire Sam would likely try to say he should recuse himself and therefore wouldn't get a vote so his vote wouldn't also count in deciding whether or not there's a quorum. That's obviously all BS but it is the sort of tactic someone might pull. It wouldn't make any difference if the vote was a simple majority matter and they already had a majority without him though.

        [1] There are often other requirements to make the meeting valid though eg notice requirements so you can't just pull a fast one with your buddies, hold the meeting without telling some of the members and then claim it was quorate so everyone else just have to suck it up. This would depend on the articles of the company and the not for profit though.

      • jacquesm 2 years ago

        That's a supermajority in principle, but the board originally had 9 members, this is clearly a controversial decision, at least one board member is conflicted, and another has already expressed his regret about his role in the decision(s).

        So the support was very thin, and this being a controversial decision, the board should have sought counsel on whether their purported reasons had enough weight to support a hasty decision. There is no 'undo' button on this, and board member liability is a thing. They probably realize all that, which is the reason for the radio silence; they're just waiting for the other shoe to drop (an impending lawsuit), after which they can play the 'no comment because legal proceedings' game. This may well get very messy or, alternatively, it could result in all affected parties settling with the board and the board riding off into the sunset to wreak havoc somewhere else (assuming anybody will still have them; they're damaged goods).

      • AlanYx 2 years ago

        It depends on the corporate bylaws, but the most common quorum requirement is a simple majority of the board members. So 4 is not atypical for quorum on a 6 person board.

  • lfclub 2 years ago

    It could be a more primal explanation. I think OpenAI doesn't want to effectively be an R&D arm of Microsoft. The ChatGPT mobile app is unpolished and unrefined. There's little to no product design there, so I totally see how it's fair criticism to call out premature feature milling (especially when it's clear it's for Microsoft).

    I’m imagining Sam being Microsoft’s Trojan horse, and that’s just not gonna fly.

    If anyone tells me Sam is a master politician, I'd agree without knowing much about him. He's a Microsoft plant who has the support of 90% of the OpenAI team. Those two things are in conflict. Masterful.

    It's a pretty fair question to ask a CEO: do you still believe in OpenAI's vision, or do you now believe in Microsoft's?

    The girl she said not to worry about.

  • aravindgp 2 years ago

    Exactly my point: why would D'Angelo want OpenAI to thrive when his own company Poe (a chatbot) wants to compete in the same space? It's a conflict of interest whichever way you look at it. He should have resigned from the board of OpenAI in the first place.

    The main point is that Greg and Ilya could get 50% of the vote and convince Helen Toner to change her decision. Then it's all done: 3 to 2 in a board of 5 people. Assuming, that is, Greg's board membership is reinstated.

    Now it increasingly looks like Sam will be heading back into the role of CEO of OpenAI.

    • anupamchugh 2 years ago

      There are lots of conflicts of interest beyond Adam and his Poe AI. Yes, he was building a commercial bot using OpenAI APIs, but Sam was apparently working on other side ventures too. And Sam was the person who invested in Quora during his YC tenure, and must have had a say in bringing him onboard. At this point, the spotlight is on most members of the nonprofit board.

      • nemo44x 2 years ago

        I wouldn’t hold Sam bringing him over in too high a regard. Fucking each other over is a sport in Silicon Valley. You’re subservient exactly until the moment you sense an opportunity to dominate. It’s just business.

        • TerrifiedMouse 2 years ago

          Why did Altman bring him onboard in the first place? What value does he provide? If there is a conflict of interest why didn’t Altman see it?

          If this Quora guy is the cause of all this, Altman only has himself to blame since he is the reason the Quora guy is on the board.

          • kaoD 2 years ago

            That Quora guy was CTO and VP of Engineering at Facebook, so plenty of connections, I guess.

            Also Quora seems like a good source of question-and-answer data which has probably been key in gpt-instruct training.

        • bezier-curve 2 years ago

          "Business" sucks then. This is sociopathic behavior.

          • rjbwork 2 years ago

            Yes. That is what is valued in the economic system we have. Absolute cut throat dominance to take as big a chunk of any pie you can get your grubby little fingers into yields the greatest amount of capital.

          • bredren 2 years ago

            What has been seen can not be unseen. https://news.ycombinator.com/item?id=881296

            • dendrite9 2 years ago

              Thanks for that. The discussion feels like a look into another world, which I guess is what history is.

          • nemo44x 2 years ago

            It’s not just business that works like this. Any type of organization of consequence has sociopaths at the top. It’s the only way to get there. It’s a big game that some people know how to play well and that many people are oblivious to.

    • 015a 2 years ago

      So? Sam gave Worldcoin early access to OpenAI's proprietary technology. Should Sam step down (oh wait)?

      • blackoil 2 years ago

        Worldcoin has no conflict of interest with OpenAI. Unless he gave the tech away for free, causing great loss to OpenAI, it is simply finding an early beta customer.

        Also, to fire him over something so trivial would be equally if not more stupid. It is like firing Elon because he sent a Tesla up on a SpaceX rocket without open bidding.

      • aravindgp 2 years ago

        Early access is different from firing board members or the CEO! As far as the facts and his actions show, Sam was always involved in furthering OpenAI's success; nothing showed his actions were against OpenAI.

        Not that all his bets are right; I don't agree with Sam's Worldcoin project at all in the first place.

        Giving early access to Worldcoin doesn't compare to firing employees, the board, or the CEO.

  • LMYahooTFY 2 years ago

    Well, the appointment of a CEO who believes AGI is a threat to the universe is potentially one point in favor of AI safety philosophical differences.

  • AndyNemmity 2 years ago

    Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his own reasons?

    My feeling is Ilya was upset about how Sam Altman was the face of OpenAI, and went along with the rest of the board for his own reasons.

    That's often how this stuff works out. He wasn't particularly compelled by their reasons, but had his own which justified his decision in his mind.

    • aravindgp 2 years ago

      I think Ilya was naive and didn't see this coming; it's good that he realized it quickly, announced it on Twitter, and made the right call to get Sam back.

      Otherwise it was shaping up like an Ilya vs. Sam showdown, and people were siding with Ilya over AGI and all that. But behind the scenes this looks like a corporate power struggle and a coup.

    • dragonwriter 2 years ago

      > Wouldn't it make sense that Ilya Sutskever presented the reasons the board had for firing Sam Altman, which were not his reasons.

      Ilya was one of the board members that removed Sam, so his reasons would, ipso facto, be a subset of the board's reasons.

      • skygazer 2 years ago

        It’s also weird that he’s not admitting to any of his own reasons, only describes some trivial reasons he seems to have coaxed out of the other board members?! Perhaps he still has his own reasons but realizing he’s destroying what he loves he’s trying to stay mum? The other board members seem more zealous for some reason, maybe not being employed by the LLC. Or maybe the others are doing it for the sake of Ilya or someone else that prefers to remain anonymous? Okay, clearly I have no idea.

    • karmasimida 2 years ago

      He let emotion get the better of him, for sure.

  • arthur_sav 2 years ago

    > Also notice that Ilya Sutskever is presenting the reasons for the firing as just something he was told.

    You mean to tell me that the 3-member board told Sutskever that Sama was being bad and he was like "ok, I believe you".

    • laurels-marts 2 years ago

      Two possibilities when it comes to Ilya:

      1. He’s the actual ringleader behind the coup. He got everyone on board, provided reassurances and personally orchestrated and executed the firing. Most likely possibility, and the one that’s most consistent with all the reporting and evidence so far (including this article).

      2. Others on the board (e.g. Adam) masterminded the coup and saw Ilya as a fellow-traveler useful idiot who could be deceived into voting against Sam and destroying the company he and his 700 colleagues worked so hard to build. They then also puppeteered Ilya into doing the actual firing over Google Meet.

      • aerhardt 2 years ago

        If #1 is real, he’s just the biggest weasel in tech history for repenting so swiftly and decisively… I don’t think either the article or the broader facts really point to him being the first to cast the stone.

    • jacquesm 2 years ago

      Based on Ilya's tweets and his name on that letter (still surprised about that, I have never seen someone calling for their own resignation) that seems to be the story.

  • resource0x 2 years ago

    The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI. This can be done in perpetuity. Google explains its AI failures along the same lines.

    • DaiPlusPlus 2 years ago

      > The failure to create anything resembling AGI can be easily explained away by concerns about the safety of AGI.

      Isn't the solution to just pipe ChatGPT into a meta-reinforcement-learning framework that gradually learns how to prompt ChatGPT into writing the source-code for a true AGI? What do we even need AI ethicists for anyway? /s

  • JyB 2 years ago

    That's the only thing that makes sense with Ilya & Murati signing that letter.

  • anoy8888 2 years ago

    This is the most likely scenario. Adam wants to destroy OpenAI so that his poop AI has a chance to survive

DebtDeflation 2 years ago

1) Where is Emmett? He's the CEO now. It's his job to be the public face of the company. The company is in an existential crisis and there have been no public statements after his 1AM tweet.

2) Where is the board? At a bare minimum, issue a public statement that you have full faith in the new CEO and the leadership team, are taking decisive action to stabilize the situation, and have a plan to move the company forward once stabilized.

  • dmix 2 years ago

    Technically he's the interim CEO in a chaotic company just assigned in the last 24hrs. I'd probably wait to get my bearings before walking in acting like I've got everything under control on the first day after a major upheaval.

    The only thing I've read about Shear is he is pro-slowing AI development and pro-Yudkowsky's doomer worldview on AI. That might not be a pill the company is ready to swallow.

    https://x.com/drtechlash/status/1726507930026139651

    > I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down.

    > If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.

    > - Emmett Shear Sept 16, 2023

    https://x.com/eshear/status/1703178063306203397

    • motoxpro 2 years ago

      The more I read into this story, the more I can't help being a conspiracy theorist and saying it feels like the board's intent was to kill the company:

      No explanation beyond "he tried to give two people the same project"

      The "killing the company would be consistent with the company's mission" line in the board's statement

      Adam having a huge conflict of interest

      Emmett wanting to go from a "10" to a "1-2"

      I'm either way off, or I've had too much internet for the weekend.

      • m_mueller 2 years ago

        Could it be that their research found more 'glimpses' of a dangerous AGI?

        • ethanbond 2 years ago

          IMO this is increasingly the most likely answer, which would readily comport with "lack of candor" as well as the (allegedly) provided 2 explanations being so weak [1]: you certainly wouldn't want to come out and say that you fired the CEO because you think you might have (apparently) dangerous AI/AGI/ASI and your CEO was reckless with it. Neither of the two explanations seem even to be within the realm of a fireable offense.

          It would also comport with Ilya's regret about the situation: perhaps he wanted to slow things down, board members convinced him Sam's ouster was the way to do it, but then it has actually unfolded such that development of dangerous AI/AGI/ASI might accelerate at Microsoft while weakening OpenAI's own ability to modulate the pace of development.

          [1]: Given all the very public media brinkmanship, I'm not so quick to assume reports like these two explanations are true. E.g. the "Sama is returning with demands!" stories were obviously "planted" by people who were trying to exert pressure on the negotiations; would be interested to have more evidence that Ilya's explanations were actually this sloppy.

    • concordDance 2 years ago

      Everyone involved here is a doomer by the strict definition ("misaligned agi could kill us all and alignment is hard").

    • creer 2 years ago

      Another "thing" is, he has been named by a board which... [etc]. Being a bit cautious would be a minimum.

  • PheonixPharts 2 years ago

    Yes these people should all be doing more to feed internet drama! If they don't act soon, HN will have all sorts of wild opinions about what's going on and we can't have that!

    Even worse, if we don't have near constant updates, we might realize this is not all that important in the end and move on to other news items!

    I know, I know, I shouldn't jest when this could have grave consequences like changing which uri your api endpoint is pointing to.

    • sackfield 2 years ago

      You can either act like a professional and control the messaging, or let others fill the vacuum with idle speculation. I'm frankly shocked at the lack of responsibility displayed by people whose positions should demand high function.

      • kspacewalk2 2 years ago

        It seems evident that the board was filled at least in part by people whose understanding of the business world and leadership skills is a tier or three below what a position of this level requires. One wonders how they got the job in the first place.

        • cosmojg 2 years ago

          This is America. Practically anyone can start a nonprofit or a company. More importantly, good marketing may attract substantial investment, but it doesn't necessarily imply good leadership.

          • kspacewalk2 2 years ago

            Clearly corporations are a dime a dozen. What's shocking is the disconnect between the standout quality of the technical expertise (and resulting products!) and the abysmal quality of leadership.

      • dclowd9901 2 years ago

        Really? I’ve always assumed (known) there is no actual difference between high level execs and you: they just think higher of themselves.

        • kevinventullo 2 years ago

          In fact, I think the chaos we’ve seen over the last few days shows precisely the difference between competent and incompetent leadership. I think if anyone from, say, the board of directors of Coca-Cola was on the OAI board, this either wouldn’t have happened or would have played out very differently.

          • rjtavares 2 years ago

            If Reid Hoffman were still there, I can't see this happening. People here use "glorified salespeople" as an insult without realizing that people skills are a really important trait for board/C-level people, and not everyone has them.

        • d0gsg0w00f 2 years ago

          What you've likely seen of executives is 15 minutes of face time after 7 weeks of vicious Game of Thrones behind the scenes. It's a curated image.

          • blackoil 2 years ago

            That is the idea: keep GoT behind the scenes. Don't dump it in the street. When you have a new king, make sure he isn't usurped the next day with the population revolting outside the gates of the Red Keep.

        • spoonjim 2 years ago

          That makes as much sense as saying (knowing) that the only difference in basketball skill between you and LeBron James is that he thinks higher of himself.

          • dclowd9901 2 years ago

            You’re really likening running a company to the skills of a professional athlete? Put down the Kool-Aid. CEOs are figureheads. Very few have ever had actual meaningful impact on the progress of their companies (or anything really) compared to their most talented engineers.

            I’m done pretending they’re important. It’s a lie they and the boards have sold us and investors. The real meat of a company is who its smartest people are, and how much the company enables those people.

            It's pretty easy to see the difference if you compare a company full of smart people who actually make things vs. a company full of CEOs, and ask which one will do better.

    • JumpCrisscross 2 years ago

      My favorite hypothesis: Ilya et al suspected emergent AGI (e.g. saw the software doing things unprompted or dangerous and unexpected) and realized the Worldcoin shill is probably not the one you want calling the shots on it.

      For the record, I don't think it's true. I think it was a power play, and a failed coup at that. But it's about as substantiated as the "serious" hypotheses being mooted in the media. And it's more fun.

      • mvdtnz 2 years ago

        Absolutely wild to me that people are drawing a straight line between a text completion algorithm and AGI. The term "AI" has truly lost all meaning.

        • quickthrower2 2 years ago

          Hold up. Any AI that exists is an IO function (algorithm) perhaps with state. Including our brains. Being an “x completion” algorithm doesn’t say much about whether it is AI.

          Your comment sounds like a rhetorical way to say that GPT is in the same class as autocomplete, and that what autocomplete does sets some kind of ceiling on what IO functions that work a couple of bytes at a time can do.

          It is not evident to me that that is true.

        • dwaltrip 2 years ago

          LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

          As they learn to construct better and more coherent conceptual chains, something interesting must be happening internally.

          • aezart 2 years ago

            Language is only one projection of reality into fewer dimensions, and there's a lot it can't capture. Similar to how a photograph or painting has to flatten 3D space into a 2D representation, so a lot is lost.

            I think trying to model the world based on a single projection won't get you very far.

          • denton-scratch 2 years ago

            > LLMs predict language, and language is a representation of human concepts about the world. Thus, these models are constructing, piece by piece, conceptual chains about the world.

            I smell a fallacy. Parent has moved from something you can parse as "LLMs predict a representation of concepts" to "LLMs construct concepts". Yuh, if LLMs "construct concepts", then we have conceptual thought in a machine, which certainly looks interesting. But it doesn't follow from the initial statement.

          • mvdtnz 2 years ago

            No they are not.

            • cjbprime 2 years ago

              (You're probably going to have to get better at answering objections than merely asserting your contradiction of them.)

              • diffeomorphism 2 years ago

                Nah, calling out completely baseless assertions as just that is fine and a positive contribution to the discussion.

            • krisoft 2 years ago

              Your carefully constructed argument is less than convincing.

              Could you at least elaborate on what they are “not”? Surely you are not having a problem with “LLMs predict language”?

            • mirekrusin 2 years ago

              Intelligence is just optimization over recursive prediction function.

              There is nothing special about human intelligence threshold.

              It can be surpassed by many different models.

        • cjbprime 2 years ago

          It's not wild. "Predict the next word" does not imply a bar on intelligence; a more intelligent prediction that incorporates more detail from the descriptions of the world that were in the training data will be a better prediction. People are drawing a straight line because the main advance to get to GPT-4 was throwing more compute at "predict the next word", and they conclude that adding another order of magnitude of compute might be all it takes to get to superhuman level. It's not "but what if we had a better algorithm", because the algorithm didn't change in the first place. Only the size of the model did.
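
          To make "predict the next word" concrete, here is a toy bigram sketch (illustrative only; real models replace the frequency table with a neural network, enormous context windows, and billions of parameters, and the corpus here is made up):

```python
# Toy "predict the next word": count, for each word in a tiny corpus,
# which word most often follows it, then greedily complete a prompt.
# The training objective of an LLM is this same task at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count word -> next-word frequencies (the "model").
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def complete(prompt: str, n: int = 3) -> str:
    """Greedily append the most likely next word, n times."""
    words = prompt.split()
    for _ in range(n):
        counts = follows.get(words[-1])
        if not counts:
            break  # never seen this word followed by anything
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # -> "the cat sat on the"
```

          The point of the analogy: nothing in the objective changes as the "frequency table" gets smarter; only the predictor's capacity does.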

          • robocat 2 years ago

            > Predict the next word

            Are there any papers testing how good humans are at predicting the next word?

            I presume we humans fail badly:

            1. as the variance in the input gets higher?

            2. at regurgitating common texts (e.g. I couldn't complete a known poem)?

            3. when context starts to get more specific (the majority of people couldn't complete JSON)?

            • passion__desire 2 years ago

              The following blogpost by an OpenAI employee invites a comparison between patterns and transistors.

              https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dat... The ultimate model, in the author's sense, would suss out all patterns, then patterns among those patterns, and so on, so that it delivers compute and compression efficiency.

              To achieve compute and compression efficiency, LLMs have to cluster all similar patterns together and deduplicate them. That requires successive levels of pattern recognition (patterns among patterns among patterns, and so on) so that deduplication happens across the whole hierarchy being constructed. Full trees or hierarchies won't get deduplicated, but relevant regions/portions of those trees will, which implies fusing together in idea space. The root levels will therefore be the most abstract patterns. Such a representation also enables cross-pollination among different fields of study, further increasing effectiveness.

              This reminds me of a point my electronics professor made about why making transistors smaller has many benefits and only a few disadvantages. Think of these patterns as transistors: the more deduplicated and closely packed they are, the more beneficial they are. Of course, this "packing together" happens in mathematical space.

              Another thing that patterns among patterns among patterns reminds me of is homotopies. This video by PBS Infinite Series is brilliant. As far as I can see, compressing homotopies is what LLMs do; replace "homotopies" with "patterns". https://www.youtube.com/watch?v=N7wNWQ4aTLQ

            • adamauckland 2 years ago

              There are entire studies on it. I saw a lecture by an English professor who explained that the brain isn't fast enough to parse words in real time, so it runs multiple predictions of what the sentence will be in parallel and, at the end, jettisons the wrong ones and goes with the correct one.

              From this, we get comedy. A funny statement is one that ends in an unpredictable manner and surprises the listener's brain, because it doesn't have the meaning of that ending already calculated; hence why it can take a while to "get the joke".

        • ssnistfajen 2 years ago

          If the text completion algorithm is sufficiently advanced then we wouldn't be able to tell it's not AGI, especially if it has access to state-of-the-art research and can modify its own code/weights. I don't think we are there yet but it's plausible to an extent.

          • mvdtnz 2 years ago

            No. This is modern day mysticism. You're just waving your hands and making fuzzy claims about "but what if it was an even better algorithm".

            • calf 2 years ago

              You're correct about their error; however, Hinton's view is that a sufficiently scaled-up autocompletion would be forced, in a loose mathematical sense, to understand things logically and analytically, because the only way to approach a 0% error rate on the output is to actually learn the problem and not imitate the answer. It's an interesting issue and there are different views on it.

            • ssnistfajen 2 years ago

              lol

          • mcv 2 years ago

            Any self-learning system can change its own weights. That's the entire point. And a text-processing system like ChatGPT may well have access to state-of-the-art research. The combination of those two things does not imply that it can improve itself to become secretly AGI. Not even if the text-completion algorithm was even more advanced. For one thing, it still lacks independent thought. It's only responding to inputs. It doesn't reason about its own reasoning. It's questionable whether it's reasoning at all.

            I personally think a far more fundamental change is necessary to reach AGI.

        • Emma_Goldman 2 years ago

          I agree, it's an extremely non-obvious assumption and ignores centuries-old debates (empiricism vs. rationalism) about the nature of reason and intelligence. I am sympathetic to Chomsky's position.[1]

          https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...

        • mcv 2 years ago

          ChatGPT is not AGI, but it is AI. The thing that makes AI lose all meaning is the constantly moving goal posts. There's been tons of very successful AI research over the past decades. None of it is AGI, but it's still very successful AI.

          • mvdtnz 2 years ago

            > ChatGPT is not AGI, but it is AI.

            I absolutely disagree in the strongest terms possible.

            • mcv 2 years ago

              Which part? The first, the second, or, most confusingly, both?

        • mjan22640 2 years ago

          An algorithm that completes "A quantum theory of gravity is ..." into a coherent theory is of course just a text completion algorithm.

        • crucialfelix 2 years ago

          There has been debate for centuries regarding determinism and free will in humans.

      • hooande 2 years ago

        Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?

        I get that this is a "just for fun" hypothesis, which is why I have just for fun questions like what incentive does anyone have to keep clearly observed ai risk a secret during such a public situation?

        • allday 2 years ago

          Because, if they announced it and it seemed plausible or even possible that they were correct, then every media outlet, regulatory body, intelligence agency, and Fortune 500 C-suite would blanket OpenAI in the thickest veil of scrutiny to have ever existed in the modern era. Progress would grind to a halt and eventually, through some combination of legal, corporate, and legislative maneuvers, all decision making around the future of AGI would be pried away from Ilya and OpenAI in general - for better or worse.

          But if there's one thing that seems very easy to discern about Ilya, it's that he fully believes that when it comes to AI safety and alignment, the buck must stop with him. Giving that control over to government bureaucracy/gerontocracy would be unacceptable. And who knows, maybe he's right.

      • drusepth 2 years ago

        My favorite hypothesis (based on absolutely nothing but observing people use LLMs over the years):

        * Current-gen AI is really good at tricking laypeople into believing it could be sentient

        * "Next-gen" AI (which, theoretically, Ilya et al may have previewed if they've begun training GPT-5, etc) will be really good at tricking experts into believing it could be sentient

        * Next-next-gen AI may as well be sentient for all intents and purposes (if it quacks like a duck)

        (NB, to "trick" here ascribes a mechanical result from people using technology, not an intent from said technology)

      • VirusNewbie 2 years ago

        But why would Ilya publicly say he regrets his decision and wants Sam to come back. You think his existential worries are less important than being liked by his coworkers??

        • cosmojg 2 years ago

          > You think his existential worries are less important than being liked by his coworkers??

          Yes, actually. This is overwhelmingly true for most people. At the end of the day, we all fear being alone. I imagine that fear is, at least in part, what drives these kinds of long-term "existential worries," the fear of a universe without other people in it, but now Ilya is facing the much more immediate threat of social ostracism with significantly higher certainty and decidedly within his own lifetime. Emotionally, that must take precedence.

        • Zolde 2 years ago

          He may have wanted Sam out, but not to destroy OpenAI.

          His existential worries are less important than OpenAI existing, and him having something to work on and worry about.

          In fact, Ilya may have worried more about the continued existence of OpenAI than Sam did after he was fired, which instantly looked like "I am taking my ball and going home to Microsoft." If Sam cared so much about OpenAI, he could have quietly accepted his removal and helped find a replacement.

          Also, Anna Brockman had a meeting with Ilya where she cried and pleaded. Even though he stands by his decision, he may ultimately still regret it, and the hurt and damage it caused.

        • mcmcmc 2 years ago

          I think his existential worries about humanity were overruled by his existential worries about his co-founder shares and the obscene amount of wealth he might miss out on

      • lucubratory 2 years ago

        Damn. Good prediction.

    • minimaxir 2 years ago

      No serious company wants drama. Hopefully OpenAI is still a serious company.

      A statement from the CEO/the board is a standard de-escalation.

      • 6gvONxR4sf7o 2 years ago

        > A statement from the CEO/the board is a standard de-escalation.

        Haven't we gotten statements from them? The complaint seems to be that we want statements from them every day (or more) now.

        • minimaxir 2 years ago

          Emmett made a tweet noting he'd accepted the role, which is not a statement.

          The board has not given a statement besides the original firing of Sam Altman that kicked the whole thing off.

      • JumpCrisscross 2 years ago

        > No serious company wants drama

        "All PR is good PR" is a meme for a reason. Many cultures thrive on dysfunction, particularly the kind that calls attention to themselves.

        • minimaxir 2 years ago

          That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight.

          • JumpCrisscross 2 years ago

            > That axiom is a relic from the pre-social media days. Nowadays, bad PR going viral can sink a company overnight

            You're saying we're in a less attention-seeking culture today than in pre-social media times?

            • maxbond 2 years ago

              [ES: Speculation I have medium confidence in.]

              Maybe "attention seeking" isn't the right way to look at this. Getting bad press always does reputational damage while giving you notoriety, and I think GP's suggestion that the balance between them has changed is compelling.

              In an environment with limited connectivity, it's much more difficult for people to learn you even exist to do business with. So that notoriety component has much more value, and it often nets out in your favor.

              In a highly connected environment, it's easier to reach potential customers, so the notoriety component has less value. Additionally, people have access to search engines, so the reputational damage becomes more lasting; potential customers who didn't even hear about the bad press at the time might search your name and find it. They may not have even been looking for it, they might've searched your name to find your website (whereas before they would have needed to intentionally visit a library and look through the catalog to come across an old story). So it becomes much less likely to net out in your favor.

            • irreticent 2 years ago

              I think they were saying the opposite of that.

        • staticman2 2 years ago

          That phrase much like "There's no such thing as bad publicity" is not actually true.

        • maxbond 2 years ago

          > Many cultures thrive on dysfunction

          PSA: If you or your culture is dysfunctional and thriving - think about how much more you'll thrive without the dysfunction! (Brought to you by the Ad Council.)

      • dylan604 2 years ago

        > No serious company wants drama

        Unless you're TNT, cause they "know drama"

    • ssnistfajen 2 years ago

      The speculations are rampant precisely because the board has said absolute nothing since the leadership transition announcement on Friday.

      If they had openly given literally any imaginable reason to fire Sam Altman, the ratio of employees threatening to quit wouldn't be as high as 95% right now.

    • insanitybit 2 years ago

      > HN will have all sorts of wild opinions about what's going on and we can't have that!

      Uh, or investors and customers will? Yes, people are going to speculate, as you point out, which is not good.

      > we might realize this is not all that important in the end and move on to other news items!

      It's important to some of us.

    • gexla 2 years ago

      Thank you! I get the sense that none of this matters and it's all a massive distraction.

      News

      Company which does research and doesn't care about money makes a decision to do something which aligns with research and not caring about money.

      From the OpenAI website...

      "it may be difficult to know what role money will play in a post-AGI world"

      Big tech co makes a move which sends its stock to an all time high. Creates research team.

      Seems like there could be a "The Martian" meme here... we're going to Twitter the sh* out of this.

    • x0x0 2 years ago

      Convincing two constituencies, employees and customers, that your company isn't just YOLO-ing things like CEOs seems like a pretty good use of CEO time!

    • concordDance 2 years ago

      OpenAI becoming a Microsoft department is awful from an X risk point of view.

    • Andrex 2 years ago

      I cannot say whether you deserve the downvotes, but an alternative and grounded perspective is appreciated in this maelstrom of news, speculation and drama.

    • quickthrower2 2 years ago

      They have customers and people deciding if they want to be customers.

    • kyleyeats 2 years ago

      This sarcastic post is the best understanding of public relations I've seen in an HN post.

  • upupupandaway 2 years ago

    I find it absolutely fascinating that Emmett accepted this position. He can game out all the scenarios, and there is no way he comes out ahead in any of them. One would expect an experienced Silicon Valley CEO to make this calculus and realize it's a lost cause. The fact that he accepted shows me he's not a particularly good leader.

    • tw1984 2 years ago

      He made it pretty clear that he considers it a once-in-a-lifetime chance.

      I think he is correct. Being the CEO of Twitch is a position nobody knows in many places, e.g. how many developers/users in China have even heard of Twitch? Being the CEO of OpenAI is a completely different story; it is a whole new level he can leverage in the years to come.

      • ps256 2 years ago

        It seems kind of naive to think that he'll be CEO for long, or if it is for long, that there will be much company left to be a CEO of.

        • tw1984 2 years ago

Why does he need to be CEO for long?

If everything goes well, he can claim that he is the man behind reuniting the OpenAI team. If something goes wrong, well, no one is going to blame him; the board screwed the entire business. He is more like an emergency room doctor who failed to save a poor dude who intentionally shot himself in the head with a shotgun.

          • ps256 2 years ago

            > If everything goes well, he can claim that he is the man behind all these to reunite the OpenAI team.

            It's now one day later and Altman is back as CEO - what can Emmett Shear claim exactly?

            • tw1984 2 years ago

              > what can Emmett Shear claim exactly?

- helped to stabilize the situation

              - the first to propose the idea of independent investigation into the matter

              - sided with impacted employees all the way through such difficult moments

              - supported a smooth transition period

Depending on how he reacted when 90% of employees signed that letter asking for Altman's return, don't be too surprised if he claims to be the guy who helped push for that as well.

      • khazhoux 2 years ago

> he considers it a once-in-a-lifetime chance.

        Like taking a sword to the gut.

    • bmitc 2 years ago

      That seems kind of silly to say. He's not a good leader because he's taking on a challenge?

      • upupupandaway 2 years ago

        A challenge he can't win, brought in by people 90% of the company hates, and with the four most influential people in the company either gone or having turned on the board does not sound like a "challenge" but more like a "guaranteed L".

  • c_s_guy 2 years ago

If Emmett runs this the same way he ran Twitch, I'm not expecting much action from him.

  • starshadowx2 2 years ago

    People kept asking where he was during his years of being Twitch CEO, it's not unlike him to be MIA now either.

  • agitator 2 years ago

    As much as I'd love to hear about the details of the drama as the next person, they really don't have to say anything publicly. We are all going to continue using the product. They don't have public investors. The only concern about perception they may have is if they intend to raise more money anytime soon.

  • eastern 2 years ago

    That's what a board of a for-profit company which has a fiduciary duty towards shareholders should do.

    However, the OpenAI board has no such obligation. Their duty is to ensure that the human race stays safe from AI. They've done their best to do that ;-)

  • pushedx 2 years ago

    He has said more than he said during his entire 5 years at Twitch

  • arduanika 2 years ago

    Here he is! Blathering about AI doom 4 months ago, spitting Yudkowsky talking points:

    https://www.youtube.com/watch?v=jZ2xw_1_KHY

  • highwayman47 2 years ago

Half the board lacks any technical skill, and the entire board lacks any business procedural skill. Ideally, you'd have a balance of each on a competent board.

    • eshack94 2 years ago

      Ideally, you also have at least a couple independent board members who are seasoned business/tech veterans with the experience and maturity to prevent this sort of thing from happening in the first place.

  • markdown 2 years ago

Why should he care about updating internet randoms? It's none of our business. The people who need to know what's going on, know what's going on.

  • spullara 2 years ago

    He is trying to determine if they have already made an Alien God.

kumarvvr 2 years ago

Giving 2 people the same project? Isn't this the thing to do to get differing approaches and then release the amalgamation of the two? I thought these sorts of things were common.

Giving different opinions on the same person is a reason to fire a CEO?

This board has no reason to fire Sam, or does not want to give the actual reason. They messed up.

  • hal009 2 years ago

    As mentioned by another person in this thread [0], it is likely that it was Ilya's work that was getting replicated by another "secret" team, and the "different opinions on the same person" was Sam's opinions of Ilya. Perhaps Sam saw him as an unstable element and a single point of failure in the company, and wanted to make sure that OpenAI would be able to continue without Ilya?

    [0] https://news.ycombinator.com/reply?id=38357843

    • JimDabell 2 years ago

      Since a lot of the board’s responsibilities are tied to capabilities of the platform, it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board. A simple dual-track project shouldn’t be a problem, but this kind of thing would be seen as dishonesty by the board.

      • hn_throwaway_99 2 years ago

        > it’s possible that Altman asked for Ilya to determine the capabilities, didn’t like the answer, then got somebody else to give the “right” answer, which he presented to the board.

        This makes no sense given that Ilya is on the board.

        • JimDabell 2 years ago

          No, it just means that in that scenario Sam would think he could convince the rest of the board that Ilya was wrong because he could find somebody else to give him a preferable answer.

          It’s just speculation, anyway. There isn’t really anything I’ve heard that isn’t contradicted by the evidence, so it’s likely at least one thing “known” by the public isn’t actually true.

    • kmlevitt 2 years ago

      Firing Sam as a way of sticking up for Ilya would make more sense if Ilya wasn’t currently in support of Sam getting his job back.

      • onimishra 2 years ago

        I’m not sure Ilya was anticipating this to more or less break OpenAI as a company. Ilya is all about the work they do, and might not have anticipated that this would turn the entire company against him and the rest of the board. And so, he is in support of Sam coming back, if that means that they can get back to the work at hand.

        • kmlevitt 2 years ago

          Perhaps. But if the board is really so responsive to Ilya's concerns, why have they not reversed the decision so that Ilya can get his wish?

    • 015a 2 years ago

      This is an interesting theory when combined with this tweet from Google DeepMind's team lead of Scalable Alignment [1].

      [1] https://twitter.com/geoffreyirving/status/172675427761849141...

      The "Sam is actually a psychopath that has managed to swindle his way into everyone liking him, and Ilya has grave ethical concerns about that kind of person leading a company seeking AGI, but he can't out him publicly because so many people are hypnotized by him" theory is definitely a new, interesting one; there has been literally no moment in the past three days where I could have predicted the next turn this would take.

      • nvm0n2 2 years ago

        That guy is another AI doomer though, and those people all seem to be quite slippery themselves. Supposedly Sam lied to him about other people, but there's no further detail provided and nobody seems willing to get concrete about any time Altman has been specifically dishonest. When the doomer board made similar allegations it seemed serious for a day, and then evaporated.

        Meanwhile the Google AI folks have a long track record of making very misleading statements in public. I remember before Altman came along and made their models available to all, Google was fond of responding to any OpenAI blog post by claiming they had the same tech but way better, they just weren't releasing it because it was so amazing it just wasn't "safe" enough to do so yet. Then ChatGPT called their bluff and we discovered that in reality they were way behind and apparently unable to catch up, also, there were no actual safety problems and it was fine to let everyone use even relatively unconditioned models.

        So this Geoffrey guy might be right but if Altman was really such a systematic liar, why would his employees be so loyal? And why is it only AI doomers who make this allegation? Maybe Altman "lied" to them by claiming key people were just as doomerist as those guys, and when they found out it wasn't true they wailed?

      • doktrin 2 years ago

        Interesting. I’m glad he shared his perspective despite the ambiguity.

    • BillyTheKing 2 years ago

      either that or Sam didn't tell Adam D'Angelo that they were launching a competing product in exactly the same space that poe.ai had launched one. For some context, poe had launched something similar to those custom GPTs with creator revenue sharing etc. just 4 weeks prior to dev-day

      • m3kw9 2 years ago

Not sure how he wouldn't see that coming? It was a UI tweak away for OpenAI.

  • stingraycharles 2 years ago

    I remember a few years ago when there was some research group that was able to take a picture of a black hole. It involved lots of complicated interpretation of data.

    As an extra sanity check, they had two teams working in isolation interpreting this data and constructing the image. If the end result was more or less the same, it’s a good check that it was correct.

    So yes, it’s absolutely a valid strategy.

    • msravi 2 years ago

Did the teams know that there was another team working on the same thing? I wonder how that affects the working of both teams... On the other hand, not telling the teams would erode the trust that the teams have in management.

      • Keyframe 2 years ago

There were four teams, actually. They knew but couldn't talk to each other. There's a documentary about it. I highly suggest watching it; it also features the late Stephen Hawking et al. working on black hole soft hair. The documentary is called Black Holes: The Edge of All We Know, and it's on pretty much all streaming platforms.

    • campbel 2 years ago

      Yep! I've done eng "bake-offs" as well, where a few folks / teams work on a problem in isolation then we compare and contrast after. Good fun!

      • cyrnel 2 years ago

        Not good fun when it's done in secret. That happened to me, and I was gaslit when I discovered the competing git repo.

        Not saying that's what happened here, but too many people are defending this horrid concept of secretly making half your workers do a bunch of work only to see the boulder roll right back down the hill.

        • moorow 2 years ago

          Yeah, doing it in secrecy is a recipe for Bad Things. I worked at a startup that completely died because of it.

        • campbel 2 years ago

Yeah, that sounds toxic; this was done with everyone's knowledge.

    • DonHopkins 2 years ago

      Maybe they needed two teams to independently try to decode an old tape of random numbers from a radio space telescope that turned out to be an extraterrestrial transmission, like a neutrino signal from the Canis Minor constellation or something. Happens all the time.

      https://en.wikipedia.org/wiki/His_Master%27s_Voice_(novel)

  • danbmil 2 years ago

    The CEO's I've worked for have mostly been mini-DonaldT's, almost pathologically allergic to truth, logic, or consistency. Altman seems way over on the normal scale for CEO of a multi-billion dollar company. I'm sure he can knock two eggs together to make an omelette, but these piddling excuses for firing him don't pass the smell test.

    I get the feeling Ilya might be a bit naive about how people work, and may have been taken advantage of (by for example spinning this as a safety issue when it's just a good old fashioned power struggle)

    • danbmil 2 years ago

As for multiple teams with overlapping goals -- are you kidding me? That's a 100% legit and popular tactic. One CEO I worked with relished this approach and called it a "Steel-cage death match"!

    • aravindgp 2 years ago

You were right that Ilya was naive; he expressed regret over his decision on Twitter. And he was taken advantage of by power-hungry people behind the scenes.

  • valine 2 years ago

    Steve Jobs famously had two iPhone teams working on concepts in parallel. It was click wheel vs multi-touch. Shockingly the click wheel iPhone lost.

    • brandall10 2 years ago

I thought the design team always worked up 3 working prototypes from a set of 10 foam mockups. There was an article from someone with intimate knowledge of Ive's lab some years back stating this was protocol for all Apple products.

    • mikepurvis 2 years ago

      Another element of that was the team that tried to adapt iPodOS for iPhone vs Forstall's team that adapted OSX.

    • throwawayapples 2 years ago

      and the Apple (II etc) vs Mac teams warring with each other.

      • kmeisthax 2 years ago

        You're thinking Lisa vs. Mac. Apple ][ didn't come into the picture until later when some of the engineers started playing around with making a mouse card for the ][.

    • fakedang 2 years ago

      Seriously? Click wheel iPhone lost shockingly? The click wheel on most laptops wears out so fast for me, and the chances of that happening on a smaller phone wheel is just so much higher.

      • potatoman22 2 years ago

        (It was sarcasm)

        • fakedang 2 years ago

          Oops, sorry, didn't get that. I had suspected it was one of those Luddite HNer comments bemoaning changes in tech, and nostalgically reminiscing on older times.

  • WalterBright 2 years ago

    Back in the late 80s, Lotus faced a crisis with their spreadsheet, Lotus 1-2-3. Should they:

    1. stick with DOS

    2. go with OS/2

    3. go with Windows

    Lotus chose (2). But the market went with (3), and Lotus was destroyed by Excel. Lotus was a wealthy company at the time. I would have created three groups, and done all three options.

    • quickthrower2 2 years ago

Which would have been a tradeoff too. More time to market, fewer people on each project, slowed down by cross-platform code.

      • kumarvvr 2 years ago

At the time, Lotus was a good company in great shape. Management could have hired people to get stuff done. In hindsight, sure, we can be judgmental, but it is still a failure in my view.

For a company selling licenses for installations, wouldn't having support for all available and upcoming platforms be a good thing? Especially when the distribution costs are essentially 0?

      • WalterBright 2 years ago

        Lotus was a rich company, and could have easily funded 3 full strength independent dev teams. It would not have slowed anything down.

      • jeremyjh 2 years ago

        They would have just forked the code and maybe merged some changes back and forth, no real need for cross-platform code.

        • primax 2 years ago

          This was pre-1983. Forking wasn't a thing at the time. Any kind of code management was cutting edge, and cross-platform shared code wasn't even dreamed of yet.

          • bawolff 2 years ago

Forking and merging is a social phenomenon. Sure, git makes it easier, but nothing stopped anyone from just copying and pasting as appropriate. Not to mention diff(1) was invented in 1974, and diff3(1) in 1979, so there were already tools to help with this, even if not as well developed as modern tools.

            I'm also pretty sure cross-platform code was a thing in 1983. Maybe not to the same extent and ease as now, but still a thing.
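For illustration (hypothetical file contents), here is how a three-way merge with diff3(1) combines two non-overlapping edits made to a common ancestor, which is exactly the workflow two divergent teams would have needed:

```shell
# "base" is the common ancestor; "mine" and "yours" are two divergent copies.
printf 'init video\nread sheet\nsave file\n' > base
sed 's/init video/init video (DOS driver)/' base > mine
sed 's|save file|save file (OS/2 API)|'     base > yours

# diff3 -m emits the merged result; non-overlapping edits combine cleanly.
diff3 -m mine base yours
```

When the edits do overlap, `diff3 -m` instead emits conflict markers for manual resolution, so even with 1979-era tools the merge problem was tractable, just painful.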

            • WalterBright 2 years ago

              Successful 8086 projects were usually written in assembler - no way to get the speed and size down otherwise. I'm pretty sure Lotus 123 was all in assembler.

              • bawolff 2 years ago

                I'm not an assembly programmer and not very familiar with how that world works, but even then, if the two OSs were for the same architecture (x86), couldn't you still have a cross OS main part and then specific parts that deal with operating system things? I normally think of compiled languages like c being an abstraction over cpu architecture, not operating system api.

                • WalterBright 2 years ago

                  Yes, you can have common assembler code among platforms, provided they use the same CPU.

                  From what I've seen of code developed in the 80s, however, asm code was not written to be divided into general and os specific parts. Writing cross-platform code is a skill that gets learned over time, usually the hard way.

          • Dylan16807 2 years ago

            Fork just means two groups start with the same code and work independently.

            It was a thing.

          • jeremyjh 2 years ago

            You make a copy of the files and work on them and that is a fork.

            • adastra22 2 years ago

              How do you merge changes between the source trees?

              Keep in mind this predates basically ANY kind of source control. It would have been nearly 3x the work.

              • dragonwriter 2 years ago

                > Keep in mind this predates basically ANY kind of source control.

                It might be before they were ported to DOS or OS/2, but it definitely wasn't before source control existed (SCCS and RCS were both definitely earlier.)

                • adastra22 2 years ago

                  OK: Keep in mind this predates basically ANY kind of source control in common usage in software engineering.

              • bawolff 2 years ago

                3x the work may still fall under reasonable cost.

If architected properly (a big if), you can split up the project appropriately so there is a common core and individual parts for each specific OS.

                Is it extra effort? Sure. Impossible? Definitely not.

                • WalterBright 2 years ago

                  I've also successfully converted some rather large x86 assembler programs into C, so they could be ported to other platforms. It's quite doable by one person.

                  (Nobody else wanted the job, but I thought it was fun.)

              • gruturo 2 years ago

                Uh? Quite wrong.

                SCCS was created in 1973. We're talking about over a decade later.

                Also primitive forking, diffing and merging could be (painfully) done even with crude tools, which did exist.

        • jrflowers 2 years ago

          Should’ve just made it an Electron app

        • saalweachter 2 years ago

          Eh, C of this era, you're definitely talking some sort of #ifdef PLATFORM solution.

    • mvkel 2 years ago

      IBM was bankrolling all the development. They only had one choice.

    • dylan604 2 years ago

Apple had a skunkworks team keeping each new version of their OS compiling on x86 long before the switch. I wonder if the Lotus situation was an influence, or if ensuring your software can be made to work on different hardware is just an obvious play?

  • clnq 2 years ago

    Consider for a moment: this is what the board of one of the fastest growing companies in the world worries about - kindergarten level drama.

    Under them - an organization in partnership with Microsoft, together filled with exceptional software engineers and scientists - experts in their field. All under management by kindergarteners.

    I wonder if this is what the staff are thinking right now. It must feel awful if they are.

  • discordance 2 years ago

    Happens all the time.

    Teams of people at Google work on the same features, only to find out near launch that they lost to another team who had been working on the same thing without their knowledge.

    • zebnyc 2 years ago

How does that work? Do they have the same PM and requirements? Is it just different tech/architectures adopted by different teams? Fascinating.

      • discordance 2 years ago

It is fascinating, very wasteful, and also often devastating for the teams involved, who worked very hard and then have their work thrown away.

        PMs/TPMs/POs may not know as they're on different teams. Often it's just a VP game and decided on preference or a power play and not on work quality/outcome.

      • hanniabu 2 years ago

        Give a goal (ex. make it more intuitive/easier for the user to do X), have 2 teams independently work on it, A/B test them, winner gets merged.

  • fluidcruft 2 years ago

    I guess it depends on whether any of them actually got the assignment. One way to interpret it is that nobody is taking that assignment seriously. So depending on what that assignment is and how important that particular assignment is to the board, then it may in fact be a big deal.

    • kumarvvr 2 years ago

      Does a board give an assignment to the CEO or teams?

      If the case is that the will of the board is not being fulfilled, then the reasoning is simple. The CEO was told to do something and he has not done it. So, he is ousted. Plain and simple.

      This talk about projects given to two teams and what not is nonsense. The board should care if its work is done, not how the work is done. That is the job of the CEO.

      • fluidcruft 2 years ago

Frankly, the information that is available is extremely non-specific and open to interpretation and framing by whoever wants to tell one story or another. The way I see it, something as specific as "has not done xyz" can be falsified and invites whatever it is into the public to be argued about and investigated, whereas "not sufficiently candid" does not reveal much and just says that a majority of the board doesn't trust him. Altman and all the people directly involved know what's going on; outsiders have no need to know, so we're just looking at tea leaves and scraps trying to weave narratives.

        And I agree the board should care if the work is actually done and that's where if the CEO seems to be bluffing that the work is being done or blowing it off and humoring them then it becomes a problem about the CEO not respecting the board's direction.

  • hooloovoo_zoo 2 years ago

    Giving two groups of researchers the same problem is guaranteeing one team will scoop the other. Hard to divvy up credit after the fact.

  • ldjkfkdsjnv 2 years ago

    Also when a project is vital to a company, you cannot just give it to one team. You need to derisk

  • m3kw9 2 years ago

How did they get 4 board members to fire him because he tried to A/B test a project?

  • quickthrower2 2 years ago

Was that verbatim the reason, or an angry person's characterisation?

samspenc 2 years ago

> One explanation was that Altman was said to have given two people at OpenAI the same project.

Have these people never worked at any other company before? Probably every company with more than 10 employees does something like this.

  • whywhywhywhy 2 years ago

    >Have these people never worked at any other company before?

    Half the board has not had a real job ever. I’m serious.

    • adastra22 2 years ago

      And the one which does have a real job is a direct competitor with OpenAI.

      • ilikehurdles 2 years ago

        And since none of them have equity in OpenAI, their external financial interest would influence decision making, especially when those interests lie with a competing company where a board member is currently the chief executive.

        I've seen too much automatic praise given to this board under the unbacked assumption that this decision was some pure, mission-driven action, and not enough criticism of an org structure that allows a board to bet against the long term success of the underlying organization.

    • GreedClarifies 2 years ago

      It is unbelievable TBH.

      Shocking. Simply shocking.

    • squigz 2 years ago

      Could you please elaborate on what a 'real job' is in this context?

      • TrackerFF 2 years ago

        I'm going to assume that he's referring to Tasha and Helen.

        I don't know if that is accurate, or even fair - the only thing I can see, is that there's very little open information regarding them.

From the little I can find, Tasha seems to have worked at NASA Research Park, as well as having been CEO of a startup called Geo Sim Cities. Stanford and CMU alumna? While other websites say Bard College and the University of Southern California.

        As for Helen, she seems to have worked as a researcher in both academia and Open Philanthropy.

  • ben_w 2 years ago

    My dad interviewed someone who was applying for a job. Standard question, why did you leave the last place?

    "After six months, they realised our entire floor was duplicating the work of the one upstairs".

  • qiqitori 2 years ago

    To me at least that's an _extremely_ rude thing to do. (Unless one person is asked to do it this way, the other one that way, so people can compare the outcome.)

    (Especially if they aren't made aware of each other until the end.)

  • 015a 2 years ago

I think this needs to be viewed through the lens of the gravity of how the board reacted, giving them the benefit of the doubt that they acted appropriately and, at least with the information they had at the time, correctly.

A hypothetical example: would you agree it was appropriate if the second project was Alignment-related, and Sam lied to Ilya or misled him about the existence of the second team because he believed Ilya was over-aligning their AIs and reducing their functionality?

It's easy to view the board's lack of candor as "they're hiding a really bad, unprofessional decision", which is probable at this point. You could also view it with the conclusion that they made an initial miscalculated mistake in communication, and are now overtly and extremely careful in everything they say because the company is leaking like a sieve and they don't want to get into a game of mudslinging with Sam.

    • qwytw 2 years ago

      > giving them the benefit of the doubt that they acted appropriately

      Yet you're only willing to give this to one side and not the other? Seems reasonable... Especially despite all the evidence so far that the board is either completely incompetent or had ulterior motives.

  • croes 2 years ago

Maybe it was not an ordinary project, or they were not ordinary people.

    Still too much in the dark to judge.

  • bmitc 2 years ago

    In over 10 years of experience, I have never known this to happen.

  • spoonjim 2 years ago

    Actually, they haven’t. One is some policy analyst and the other is an actor’s wife.

    • lupire 2 years ago

Tasha McCauley is an electrical engineer who founded two tech companies, besides having a cute husband.

      And the other guy is the founder of Quora and Poe.

      • jxi 2 years ago

        She only founded one, Fellow Robots, and that "company" went nowhere. There's no product info and the company page shut down. She was CEO of GeoSim for a short 3 years, and this "company" also looks like it's going nowhere.

        She has quite a track record of short tenures and failures.

        • voidfunc 2 years ago

          > She has quite a track record of short tenures and failures.

          It may be good to have a failure perspective on a board as a counter-balance. I don't think this is a valid knock against her. She has relevant industry experience at least.

          • sumedh 2 years ago

            > She has relevant industry experience at least.

            What products did she deliver?

            > It may be good to have a failure perspective on a board as a counter-balance.

Maybe at some small mom-and-pop company, not on the board of OpenAI.

          • GreedClarifies 2 years ago

            lolz.

        • bmitc 2 years ago

          What's up with Loopt and Worldcoin?

      • kridsdale1 2 years ago

OK, I can found a tech company by filling out LLC papers on LegalZoom for $40.

        What have her companies done?

      • adastra22 2 years ago

        paper companies

  • gongagong 2 years ago

    wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying? same for MS

    • dragonwriter 2 years ago

      > wait so can't SA sue for wrongful termination if everything is as bogus as everyone is saying?

      It is breach of contract if it violated his employment contract, but I don't have a copy of his contract. It is wrongful termination if it was for an illegal reason, but there doesn't seem to be any suggestion of that.

      > same for MS

      I doubt very much that the contract with Microsoft limits OpenAI's right to manage their own personnel, so probably not.

    • pacificmint 2 years ago

      Employment in California is ‘at will’, which means they can fire him without a reason.

      Wrongful termination only applies when someone is fired for illegal reasons, like racial discrimination, or retaliation, for example.

      I mean I’m sure they can all sue each other for all kinds of reasons, but firing someone without a good reason isn’t really one of them.

      • adastra22 2 years ago

        That's the default, but employment contracts can override this. C-level employment contracts almost universally have special consideration for "Termination Without Cause", aka golden parachutes. He could sue to make them pay out.

        He would also have very good grounds for a civil suit for disparagement. Or at least he would have if Microsoft didn't immediately step up and offer him the world.

      • vaxman 2 years ago

        You mean like being fired by a board member as part of their scheme to breach their fiduciary duty by launching a competitive product in another company?

rossdavidh 2 years ago

So, none of this sounds like it could be the real reason Altman was fired. This leaves people saying it was a "coup", which still doesn't really answer the question. Why did Altman get fired, really?

Obviously, it's for a reason they can't say. Which means, there is something bad going on at the company, like perhaps they are short of cash or something, that was dire enough to convince them to fire the CEO, but which they cannot talk about.

Imagine if the board of a bank fired their CEO because he had allowed the capital to get way too low. They wouldn't be able to say that was why he was fired, because it would wreck any chance of recovery. But, they have to say something.

So, Altman didn't tell the board...something, that they cannot tell us, either. Draw your own conclusions.

  • skygazer 2 years ago

    I think you may be hallucinating reasonable reasons to explain an inherently indefensible situation, patching up reality so it makes sense again. Sometimes people with puffed up egos are frustrated over trivial slights, and group think takes over, and nuking from orbit momentarily seems like a good idea. See, I’m doing it too, trying to rationalize. Usually when we’re stuck in an unsolvable loop like a SAT solver, we need to release one or more constraints. Maybe there was no good reason. Maybe there’s a bad reason — as in, the reasoning was faulty. They suffered Chernobyl level failure as a board of directors.

  • namocat 2 years ago

    This is what I suspect; that their silence is possibly not simply evidence of no underlying reason, but that the underlying reason is so sensitive that it cannot be revealed without doing further damage. Also the hastiness of it makes me suspect that whatever it was happened very recently (e.g. conversations or agreements made at APEC).

    Ilya backtracking puts a wrench in this wild speculation, so like everyone else, I’m left thinking “????????”.

  • kmlevitt 2 years ago

    If it was anything all that bad, Ilya and Greg would’ve known about it, because one of them was chairman of the board and the other was a board member. And both of them want Sam rehired. You can’t even spin it that they are complicit in wrongdoing, because the board tried to keep Greg at the company and Ilya is still on the board now and previously supported them.

    Whatever the reason is, it is very clearly a personal/political problem with Sam, not the critical issue they tried to imply it was.

    • dragonwriter 2 years ago

      > because the board tried to keep Greg at the company

      Aside from the fact that they didn't fire him as President, and that the press release which went out without any consultation said he was staying on, I've seen no suggestion of any effort to keep him at the company.

      • kmlevitt 2 years ago

        Right, but there was no effort to actually oust him either, which you would expect them to make if they'd had to fire guilty parties over a massive wrongdoing that couldn't be ignored.

        Either he had no part in this hypothetical transgression and thinks the accusation is nonsense, or he was part of it and for some inexplicable reason wasn’t asked to leave Open AI despite that. But you have to choose.

        • dragonwriter 2 years ago

          > Right, but there was no effort to actually oust him either.

          Reducing someone's responsibilities significantly is a well-known mechanism for ousting them without explicitly firing them, so I don't know that that is the case.

          • kmlevitt 2 years ago

            Well, they still haven’t accused him of anything yet despite repeatedly being asked to explain their reasoning, so it seems fair to give him the benefit of the doubt until they do.

  • aiman3 2 years ago

    I do believe what they said about Altman, that he "was not consistently candid in his communications with the board." Based on my understanding, Altman already proved his dishonest behavior by what he did to OpenAI: turning a non-profit into a for-profit and an open-source model into a closed-source one. Even worse, people seem to have totally accepted this type of personality. The danger is not the AI itself; it's that the AI will be built by Altmans!

    • qwytw 2 years ago

      > dishonest behavior from he did to openai, turned non-profit into for-profit and

      Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?

      > will be built by AltmanS

      Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?

      • aiman3 2 years ago

        >> Yes and it's perfectly obvious that he did this without the consent of the board and behind their backs. A bit absurd don't you think? How would that even work?

        I don't know; I don't have an answer for that. But Meta open-sourced Llama 2 and did what OpenAI was supposed to do.

        >>Why are you so certain most other people on the OpenAI board or their upper management are that different? Or hold very different views?

        At least they had the guts to fire him and let the world know that Altman "was not consistently candid in his communications with the board."

    • vorticalbox 2 years ago

      OpenAI, Inc. is a non-profit, but its subsidiary OpenAI Global, LLC is for-profit.

      • aiman3 2 years ago

        What is the end goal of having a for-profit subsidiary? To make OpenAI more non-profit and the model more open-source?

        • MacsHeadroom 2 years ago

          Ideally it is to have greater access to resources by taking investment money and paying for top talent with profit sharing agreements, in order to ultimately further the goals of the nonprofit.

          • aiman3 2 years ago

            The outcome of this for-profit subsidiary is already showing in real time: they just want to take everything from OpenAI into the for-profit for their own benefit. That is fraud by definition, which means taking something that is not theirs. One good example is the flash mobs in LA: what's yours belongs to me. Same culture, same behavior. As for your ideal world, it's very simple: Microsoft could just donate $10B to OpenAI, and they could share the open-sourced model.

  • awb 2 years ago

    The only thing akin to that would be an AI safety concern and the new CEO specifically said that wasn’t the issue.

    And if it was something concrete, Ilya would likely still be defending the firing, not regretting it.

    It seems like a simple power struggle where the board and employees were misaligned.

  • resolutebat 2 years ago

    Banks have strict cash reserve requirements that are externally audited. OpenAI does not, and more to the point, they're both swimming in money and could easily get more if they wanted. (At least until last week, that is.)

    • rossdavidh 2 years ago

      Rumor has it, they had been trying to get more, and failing. No audited records of that kind of thing, of course, so could be untrue. But Altman and others had publicly said that they were attempting to get Microsoft to invest more, and he was courting sovereign wealth funds for an AI (though non-OpenAI) chip related venture, and ChatGPT had a one-day partial outage due to "capacity" constraints, which is odd if your biggest backer is a cloud company. It all sounds like they are running short on money, long before they get to profitability. Which would have been fine up until about a year ago, because someone with Altman's profile could easily get new funding for a buzz-heavy project like ChatGPT. But times are different, now...

leoc 2 years ago

Not specifically related to this latest twist, sorry, but DeepMind’s Geoffrey Irving trusts the board over Altman: https://x.com/geoffreyirving/status/1726754270224023971

  • jacquesm 2 years ago

    "I have no details of OpenAI's Board’s reasons for firing Sam"

    Not the strongest opening line I've seen.

    • leoc 2 years ago

      I do have to point out that this is also true of nearly everyone else who's expressed a strong opinion on the topic, and it didn't stop any of them.

      • jacquesm 2 years ago

        That's a fair point, but at the same time there is a lot of information about how boards are supposed to work and how board members are supposed to act, and the evidence that did come out doesn't really seem compatible with that body of knowledge.

      • blazespin 2 years ago

        The difference is nearly everyone else doesn't stand to seriously benefit from the implosion of OpenAI.

  • blazespin 2 years ago

    Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.

    When you have such a massive conflict of interest and zero facts to go on - just sit down.

    also - "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."

    Toner clearly has no real moral authority here, but yes, Ilya absolutely did and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.

    But as we all know - Ilya did a 180 (surprised the heck out of me).

bloopernova 2 years ago

"Sustkever is said to have offered two explanations he purportedly received from the board"

I'd like some corroboration for that statement, because Sutskever has said very inconsistent things during this whole merry debacle.

  • djtango 2 years ago

    Would you go so far as to say he was not consistently candid...?

  • dekhn 2 years ago

    Also, since he's on the board, and it wouldn't have been Brockman or Altman who gave him this info... there are only three people left: "non-employees Adam D’Angelo, Tasha McCauley, Helen Toner."

    • ipaddr 2 years ago

      The obvious answer is he was the one Sam gave an opinion on. He was one of the people doing duplicate work (probably the first team). Sam said good things about him to his ally and bad things to another board member. There was a falling out between that board member and Sam and she spilled the beans.

      • aidaman 2 years ago

        One of the first members to quit was on a team that sounds a lot like a duplicate of Ilya's Superalignment team.

        "Madry joined OpenAI in May 2023 as its head of preparedness, leading a team focused on evaluating risks from powerful AI systems, including cybersecurity and biological threats."

mmaunder 2 years ago

Launched Thursday morning:

https://x.com/poe_platform/status/1725194752901988744?s=46

  • jacquesm 2 years ago

    Fortunately no conflict of interest there. Ignore the guy behind the curtain.

    • aravindgp 2 years ago

      In the case of a board member of OpenAI running a separate chatbot company, it would be important to consider these factors. The specifics of the situation, including the nature of both companies, the level of competition between them, and the actions taken by the board member and OpenAI to manage potential conflicts, would all play a role in determining if there is a conflict of interest.

      There is definitely a conflict of interest here, and D'Angelo's actions on the OpenAI board smell of the same. He wouldn't want OpenAI to thrive more than his own company. It's a direct conflict of interest.

      • jacquesm 2 years ago

        It is about as bad as it gets, and given that datum I hope D'Angelo has a very good lawyer, because I think he might need one.

      • tasuki 2 years ago

        jacquesm was being sarcastic

        • jacquesm 2 years ago

          My bad for not adding a /s. But I thought the second sentence would make it obvious.

two_in_one 2 years ago

Both 'reasons' are bullsh*t. But what's interesting is that Sutskever was the key person; it wouldn't have happened without him. And now he says the board told him why he was doing it? He didn't reiterate that he regrets it. So it looks like he was one of the driving forces, if not the main one. Of course he doesn't want the reputation of 'the man who killed OpenAI'. But he definitely took part and could have prevented it.

  • ramraj07 2 years ago

    The NYTimes mentioned that just a month back, someone else was promoted to the same level as Ilya. Sounds like more than a coincidence.

prepend 2 years ago

So Sutskever fires Altman, then signs a letter saying they’ll quit unless he’s reinstated.

There’s only 4 board members, right?

Who wanted him fired. Is this a situation where they all thought the others wanted him fired and were just stupid?

Have they been feeding motions into ChatGPT and asking “should I do this?”

  • fullshark 2 years ago

    Seems most likely that Sutskever wanted him fired and then realized his mistake. Ultimately the board was probably quietly seething about the direction the company was headed, got mad enough to retake the reins with that stunt, and then realized what that actually meant.

    Now they are trying to unring the bell but cannot.

    • JumpCrisscross 2 years ago

      > Seems most likely Sustkever wanted him fired and then realized his mistake

      We have as much evidence for this hypothesis as for any other. Not discrediting it. But let's be mindful of the fog of war.

    • LewisVerstappen 2 years ago

      > Now they are trying to unring the bell but cannot.

      Well, they can unring the bell pretty easily. They were given an easy out.

      Reinstate Sam (he wants to come back) and resign.

      However, they CONTINUE to push back and refuse to step down.

      • adastra22 2 years ago

        Then they wouldn't be in control, which is what they really want.

        • GreedClarifies 2 years ago

          You get it!

          This is the correct answer. The people who have never had jobs in their lives wanted control of a $100B company.

          What a pleasant career trajectory. Heck it was already great to go from graduated university -> board of OpenAI. If that's possible why not CEO?

      • doktrin 2 years ago

        > Well, they can unring the bell pretty easy. They were given an easy out.

        > Reinstate Sam (he wants to come back) and resign.

        Wasn't the ultimate sticking point Altman's demand that the board issue a written retraction absolving him of any and all wrongdoing? If so, that isn't exactly an "easy" out, given that it kicks the door wide open for extremely punishing litigation. I'd even go so far as to say it's a demand Altman knew full well would not and could not be met.

      • halfjoking 2 years ago

        I'm going to be the only one in this thread calling it this.

        But why does no one think it's possible these women are CIA operatives?

        They come from think tanks. You think the US Intelligence community wants AGI to be discovered at a startup? They want it created at big tech. AGI under MSFT would be perfect. All big tech is heavily compromised: https://twitter.com/NameRedacted247

        EDIT: Since this heavy speculation, I'm going to make predictions. These women will now try to force Ilya out the board, put in a CEO not from Silicon Valley, and eventually get police to shut down OpenAI offices. That's a CIA coup

        • solardev 2 years ago

          Couldn't the CIA have sent people with, er, slightly more media experience and tactfulness and such? Did these few just happen to lose a bet or something...?

          Maybe somebody there just really wanted to see the expression on Satya's face...

        • trefoiled 2 years ago

          Weirdly plausible considering Tasha McCauley also works for the RAND Corporation

    • 015a 2 years ago

      But the article's exact wording is "Sustkever is said to have offered two explanations he purportedly received from the board", the key phrase being "purportedly received". He could be choosing words to protect himself, but it strongly implies that he wasn't the genesis of the action. Of course, he was convinced enough of it to vote him out (actually, has this been confirmed? They would have only needed three, right? It was confirmed that he delivered the firing over Meet, but I don't recall confirmation of how he voted), which also implies that he was at some point told more precise reasoning. Or maybe he's being muzzled by the remaining board members now, and this reasoning he "received" is what they approved him to share right now?

      None of this makes sense to label any theory as "most likely" anymore.

    • az226 2 years ago

      Trying to put the toothpaste back in the tube.

    • Affric 2 years ago

      This is pretty parsimonious.

      Smart, capable, ambitious people often engage in wishful thinking when it comes to analysing systems they are a part of.

      When looking at a system from the outside it’s easier to realise the boundary between your knowledge and ignorance.

      Inside the system, your field of view can be a lot narrower than you believe.

  • ben_w 2 years ago

    > Have they been feeding motions into chatgpt and asking “should add I do this?”

    The CEO (at time of writing, I think) seems to think this kind of thing is unironically a good idea: https://nitter.net/eshear/status/1725035977524355411#m

  • paulddraper 2 years ago

    It'd have to be a very stupid version of chatgpt

  • rvba 2 years ago

    Can the three other board members also kick Sutskever off the board?

koolba 2 years ago

That headline is bad, not sure if it's deliberate.

The way it's phrased, it sounds like they were given two different explanations, as when a first explanation is not good enough and a second, weaker one is then provided.

But the article itself says:

> OpenAI's current independent board has offered two examples of the alleged lack of candor that led them to fire co-founder and CEO Sam Altman, sending the company into chaos.

Changing "examples" to "explanations" grossly changes the meaning of that sentence. Two examples are the beginning of "multiple examples", and that sounds much different from "multiple explanations".

Eliezer 2 years ago

This reads like the Board 4 are not allowed to say, or are under NDA, or do not dare say, or their lawyers told them not to say, the actual reason. Because this is obviously not the actual reason.

gmiller123456 2 years ago

Without all the fluff:

    One explanation was that Altman was said to have given two people at OpenAI the same project.

    The other was that Altman allegedly gave two board members different opinions about a member of personnel.

  • extheat 2 years ago

    Ilya himself was a member of the board that voted to fire Altman. I don't know if he's lying through his teeth in these comments, making up an alibi, or genuinely trying to convince people he was acting as a rubber stamp and doesn't know anything.

dang 2 years ago

As this article seems to have the latest information, let's treat it as the next instalment. There's also Inside The Chaos at OpenAI - https://news.ycombinator.com/item?id=38341399, which I've re-upped because it has backstory that doesn't seem to have been reported elsewhere.

Edit: if you want to read about our approach to handling tsunami topics like this, see https://news.ycombinator.com/item?id=38357788.

-- Here are the other recent megathreads: --

Sam Altman is still trying to return as OpenAI CEO - https://news.ycombinator.com/item?id=38352891 (817 comments)

OpenAI staff threaten to quit unless board resigns - https://news.ycombinator.com/item?id=38347868 (1184 comments)

Emmett Shear becomes interim OpenAI CEO as Altman talks break down - https://news.ycombinator.com/item?id=38342643 (904 comments)

OpenAI negotiations to reinstate Altman hit snag over board role - https://news.ycombinator.com/item?id=38337568 (558 comments)

-- Other recent/related threads: --

OpenAI approached Anthropic about merger - https://news.ycombinator.com/item?id=38357629

95% of OpenAI Employees (738/770) Threaten to Follow Sam Altman Out the Door - https://news.ycombinator.com/item?id=38357233

Satya Nadella says OpenAI governance needs to change - https://news.ycombinator.com/item?id=38356791

OpenAI: Facts from a Weekend - https://news.ycombinator.com/item?id=38352028

Who Controls OpenAI? - https://news.ycombinator.com/item?id=38350746

OpenAI's chaos does not add up - https://news.ycombinator.com/item?id=38349653

Microsoft Swallows OpenAI's Core Team – GPU Capacity, Incentives, IP - https://news.ycombinator.com/item?id=38348968

OpenAI's misalignment and Microsoft's gain - https://news.ycombinator.com/item?id=38346869

Emmet Shear statement as Interim CEO of OpenAI - https://news.ycombinator.com/item?id=38345162

  • r721 2 years ago

    >There's also Inside The Chaos at OpenAI ... it has backstory that doesn't seem to have been reported elsewhere

    Probably because that piece is based on reporting for an upcoming book by Karen Hao:

    >Now is probably the time to announce that I've been writing a book about @OpenAI, the AI industry & its impacts. Here is a slice of my book reporting, combined with reporting from the inimitable @cwarzel ...

    https://twitter.com/_KarenHao/status/1726422577801736264

  • Aloha 2 years ago

    I see why you recommended that Atlantic article; it's very, very good.

    • dang 2 years ago

      I was just copying what simonw said! https://news.ycombinator.com/item?id=38341857

      • Aloha 2 years ago

        It's a good recommendation, thanks for elevating it out of the noise

        Sometimes the best part about having a loud voice is elevating the stuff that falls into the noise. I moderate communities elsewhere, and I know how hard it is, and I appreciate the work you do to make HN a better place.

  • ssnistfajen 2 years ago

    By the time this saga resolves, the number of threads linked here could suffice as chapters of a book

didip 2 years ago

If I were an OpenAI employee, I would be uber pissed.

Imagine your once-in-a-blue-moon, WhatsApp-like payout of $10M per employee evaporating over the weekend before Thanksgiving.

I would have joined MSFT out of spite.

  • harryquach 2 years ago

    Absolutely agree, would be beyond pissed. A once in a lifetime chance at generational wealth blown.

  • gardenhedge 2 years ago

    These people joined a non-profit though. Am I right in thinking that you wouldn't join a non-profit expecting a large future payout?

    • sumedh 2 years ago

      > These people joined a non-profit though.

      The employees joined the for profit subsidiary and had shares as well.

  • voitvoder 2 years ago

    I really can't imagine. I am super pissed and only over something I love that I pay 20 bucks a month for. I can't imagine the feeling of losing this kind of payout over what looks like complete bullshit. Not just the payout but being part of a team doing something so interesting and high profile + the payout.

    I just don't know how they put the pieces back together here.

    What really gets me down is I know our government is a lost cause but I at least had hope our companies were inoculated against petty, self-sabotaging bullshit. Even beyond that I had hope the AI space was inoculated and beyond that of all companies OpenAI would of course be inoculated from petty, self-sabotaging bullshit.

    These idiots worried about software eating us are incapable of seeing the gas they are pouring on the processes that are taking us to a new dark age.

WiSaGaN 2 years ago

Given the nonsensical reason provided here, I am led to believe that this entire farce is aimed at transforming OpenAI from a non-profit to a for-profit company one way or another, e.g., significantly raising the profit cap, or even changing it completely to a for-profit model. There may not be a single entity scheming or orchestrating it, but the collective forces that could influence this outcome would be very pleased to see it unfold in this way.

  • dehrmann 2 years ago

    But was delivering it into the hands of Microsoft really how they wanted it to happen?

upupupandaway 2 years ago

At Amazon a senior manager would probably be fired for not giving a project to multiple teams.

  • lazystar 2 years ago

    That's not very frugal; please provide a source or citation for your claim.

    • upupupandaway 2 years ago

      I am a former L7 SDM at Amazon. Just last year I had to contend with not one, but three teams doing the same thing. The faster one won with a half-baked solution that caused multiple Sev-1s. My original comment was half in jest; the actual way this works is that multiple teams discover the same issues at the same time and then compete for completing a solution first. This is usually across VPs so it’s difficult to curtail in time to avoid waste.

      Speaking of waste, when I was at Alexa we had to do a summit (flying people from all over the country) because we got to a point where there were 12 CMSs competing for the right to answer queries. So yeah, not frugal. Frugality these days is mostly a “local” concept, definitely not company-wide or even org wide.

      • VirusNewbie 2 years ago

        Oh man, i’m glad I read this because right now at Google I am immensely frustrated by Google trying to fit their various shaped pegs into the round hole of their one true converged “solution” for any given problem.

        My friends and I say “see, Amazon doesn’t have to deal with this crap, each team can go build their own whatever”. But I guess that’s how you get 12 CMSs for one org.

      • lazystar 2 years ago

        > that caused multiple Sev-1s

        ...did folks run out of tables?

HighFreqAsuka 2 years ago

These simply can't be the real reasons.

  • ssnistfajen 2 years ago

    And evidently the employees have reacted as one would expect. The two points given sound like mundane corporate mess-ups that are hardly worth firing the CEO over in such drastic fashion.

moneycantbuy 2 years ago

a link to the letter from employees

https://www.axios.com/2023/11/20/openai-staff-letter-board-r...

curious to have clarity where ilya stands. did he really sign the letter asking the board (including himself?) to resign and that he wants to join msft?

to think these are the folks with agi at their fingertips

didip 2 years ago

What will happen to employee’s stock options if they all mass quit and moved to Microsoft?

The options will be worth $0, right?

  • stingraycharles 2 years ago

    From what I understand, Microsoft realizes this and gives them the equivalent of their OAI stock options in MSFT stock options if they join them now. For some employees, this may mean $10MM+

    • ssnistfajen 2 years ago

      More evidence the layoffs are 100% BS. Suddenly there's surplus headcount and magical budgets out of nothing, all to accommodate several hundreds of people with way-above-market-average TCs. It's almost like they were never in danger of hurting profit margins in the first place.

      • mcherm 2 years ago

        It is entirely reasonable for there to be dire financial straits that require layoffs, yet when a $10 billion investment suddenly blows up and has to be saved the money can be spent to fix it.

        In the first case it wasn't that there was no cash in the bank and no bank willing to make loans, but that the company needed to spend less than it earned in order to make a profit. In the second case it wasn't that the money had been hidden in a mattress, but that it was raised/freed-up at some risk which was necessary because of the $10 billion investment.

        • ssnistfajen 2 years ago

          These tech giants' finances are public, because they are publicly listed companies. All of them are sitting on fat stacks of cash and positive YoY revenue growth. They have absolutely zero chance of running out of money even if each one hires 10000 front desk clerks who do nothing but watch TikTok all day and collect $100k/yr comps. Zero, Zilch, Nada.

        • dralley 2 years ago

          >It is entirely reasonable for there to be dire financial straits that require layoffs

          It's not entirely reasonable because Microsoft's finances are public. We know they're doing fine.

      • quickthrower2 2 years ago

        You can lay people off without being in dire straits.

        • ssnistfajen 2 years ago

          Yes, but that doesn't make it any more ethical especially since most layoffs over the past year aren't merit-based at all.

    • JumpCrisscross 2 years ago

      Options on Microsoft stock, a publicly-traded and stable company, are incomparable to those on OpenAI, which didn't even bother having proper equity to start with. The employees will get hosed. They never got equity, they got "equity." The senior ones will need liquidity, soon, to pay legal counsel; the rest will need to take what they can get.

      • stingraycharles 2 years ago

        In these types of cases, if you really need money for legal expenses, you would usually borrow from a bank with the shares/options as collateral rather than liquidating them.

  • az226 2 years ago

    Microsoft would likely match their PPUs at the tender offer valuation.

    • Rastonbury 2 years ago

      Honestly, MS doesn't have to; losing more than half the employees will destroy the value of the PPUs.

      The fact that so many have signed the petition is a classic example of game theory. If everyone stays, the PPUs keep most of their value; the more people who threaten to leave, the more attractive it is to sign. They don't have to love Sam or support him.

      Edit: actually, thinking about it, the best outcome would be to go back on the threats to resign, increasing the value of the PPUs and making Microsoft pay more to get them to leave OpenAI.

      • lumost 2 years ago

        MSFT may perceive a benefit in absorbing and locking down the openAI team. Doing so will require large golden handcuffs in excess of what competitors would offer those same folks.

  • tsunamifury 2 years ago

    OpenAI has no stock options.

    • tempestn 2 years ago

      It has "Profit Participation Units", which are another form of equity-like compensation.

    • choppaface 2 years ago

      Believe it’s more of an RSU product with a small few having ISOs. Probably best to just call it “stock comp” since it’s all illiquid anyways.

kotxig 2 years ago

If the outcome of all of this is that Altman ends up at Microsoft and hiring the vast majority of the team from OpenAI, it's probably wise to assume that this was the intended outcome all along. I don't know how else you get the talent at a company like OpenAI to willingly move to Microsoft, but this approach could end up working.

bastardoperator 2 years ago

These are the dumbest reasons possible, certainly not worth destroying a company on the move or people's livelihoods over.

ehsanziya 2 years ago

Based on what I've seen so far, one of the following possibilities is the most likely:

1. Altman was actually negotiating an acquisition by Microsoft without being transparent with the board about it. Given how quickly they were hired by Microsoft after the events, this is likely.

2. Altman was trying to raise capital, without the board's knowledge, from a source that the board wouldn't be too keen on. It could be a sovereign fund or some other government-backed organisation.

I've not seen these possibilities discussed as most people focus on the safety coup theory. What do you think?

afjeafaj848 2 years ago

If Altman ends up going back to OpenAI, then shouldn't Sutskever be fired/kicked off the board too?

  • GreedClarifies 2 years ago

    They may retain him, but his time of being on the board or any board is at an end.

    The rest of the board. My god. Why were they there?

layer8 2 years ago

Given these non-reasons, everyone threatening to quit makes a lot of sense.

tempaway511751 2 years ago

"Two explanations" isn't accurate; it's more like Ilya gave two examples of Sam not being candid with the board. "Two explanations" makes it sound like two competing explanations. What Ilya gave was two examples of the same problem.

I can't help thinking that Sam Altman's universal popularity with OpenAI staff might be because they all get $10 million each if he comes back and resets everything to how it was last week.

sergiomattei 2 years ago

We've gone beyond insanity at this point. It's just a clown show.

This has been tech's most entertaining weekend in the past decade.

Sadly, it's at the expense of the OpenAI employees and their dream; they had something great going at the company. Rooting for them.

rmm 2 years ago

You have to wonder at this point how much of this is the current board members trying to somehow save face.

I can’t imagine their careers after this will be easy…

  • skygazer 2 years ago

    You are far more charitable than I. (I have no idea why I’m worked up. I don’t work at OAI.) They pulled the dumbest virtual corporate hostage crisis, for ostensibly flimsy reasons, and even have mainstream media wondering whether they’re just crazy. People are just begging to know why, and they seemingly have nothing. It’s incredible. Good lord, if there’s a lesson, it’s that these people should never have been nor should ever be in charge of anything of any importance. (Again, no idea why I’m worked up — I don’t actually care about Sam Altman.) Oh, no, sorry, that’s not the lesson. The lesson is that picking board members is probably the most important thing you’ll do. Don’t be cavalier. It will bite you.

    • whatshisface 2 years ago

      Perhaps this works a lot of us up because we have to be consummate professionals our whole lives, carefully working over the consequences of all the choices we make on the behalf of our employers, sitting in hour-long meetings about thousand dollar decisions while billion dollar bozos can do whatever they want with no forethought and never see a consequence.

    • dehrmann 2 years ago

      > I have no idea why I’m worked up

      I've been at several startups and several public companies. You rarely hear anything from the board. If that happens, someone really screwed up. Putting myself in the shoes of someone working at OpenAI, I'd be pretty worked-up over this. I guess I'm saying it's out of empathy because this could have been the startup any of us were at.

astroid 2 years ago

It's incredibly strange to me that this all happened right after Sam's sister publicly accused him of sexual abuse. It's insane that no one is even acknowledging this could have something to do with it.

For what it's worth: Watching her videos, I'm not sure I necessarily believe her claims - but that position goes against every tenet of the current cultural landscape, so the fact it is being completely ignored is ringing alarm bells for me.

If the CEO of any other massively hyped bleeding-edge tech company's sister claimed publicly and loudly that they were abused as a very young child, we would hear about it, and the board would be doing damage control trying to eliminate the rot. Why is this case different?

Now we have a situation where all of the current employees have signed this weird loyalty pledge to Sam, which I think will wind up making him untouchable in a sense - they have effectively tied the fate of everyone's job to retaining a potential child rapist as head of the company.

khazhoux 2 years ago

An "Independent" board is supposed to be a good thing, right?

Doesn't this clown show demonstrate that if a board has no skin in the game --apart from reputation-- they have no incentive to keep the company alive?

  • spenczar5 2 years ago

    I think it more shows that the blend of profit/nonprofit was a failure.

  • hn_throwaway_99 2 years ago

    I think this was a unique situation due to timing. OpenAI had 9 board members at the beginning of the year, but 3 (Reid Hoffman, Shivon Zilis, and Will Hurd) had to leave for various reasons (e.g. conflict of interest, which IMO should have also taken D'Angelo off the board), and this would have never happened if they were still on the board. So you were left with a rare situation where the board was incredibly immature/inexperienced for the importance of OpenAI.

    It has been reported that Altman was working on increasing the size of the board again, so it's reasonable to think that some of the board members saw this as their "now or never" moment, for whatever reason.

  • az226 2 years ago

    The issue was getting nobodies on the board who don’t have experience sitting on boards or working with startups. It’s very evident by how this was handled.

  • jacquesm 2 years ago

    They may well have skin in the game, but not this game. That's exactly why you don't want a board member with a potential conflict of interest.

  • himaraya 2 years ago

    It shows nonprofit boards wield outsize power and need strict governance, e.g., conflicts of interest, empty board seats.

    • jacquesm 2 years ago

      All of which should have been covered in the paperwork.

      • himaraya 2 years ago

        Hence the need for strict governance. I can't think of another board with so many board seats empty, to say nothing about conflicts of interest.

        • jacquesm 2 years ago

          That's another item, actually. When there are a lot of vacancies on the board you don't make controversial decisions if you can avoid them for fear of being seen as acting without sufficient support for the decision. Especially not if those decisions have the potential to utterly wreck the thing you are supposed to be governing.

          • himaraya 2 years ago

            Shareholders check the power of corporate boards, unlike nonprofit ones, so not a surprise.

choppaface 2 years ago

Adam D'Angelo, once one of the more level-headed Facebook alumni, and by far the most experienced OpenAI board member, is now nowhere to be found? Is he hiding out with Sam Trabucco somewhere?

  • jacquesm 2 years ago

    His lawyer likely told him to lay very low. In his basement or something.

ytoawwhra92 2 years ago

Baseless prediction:

MSFT buys ownership of OpenAI's for/capped-profit entities, implements a more typical corporate governance structure, re-instates Altman and Brockman.

OpenAI non-profit continues to exist with a few staff and no IP but billions in cash.

This whole situation is being used to drive the price down to reduce the amount the OpenAI non-profit is left with.

SV doesn't try the "capped-profit owned by a non-profit" model again for quite some time.

Maybe Altman takes some equity in the new entity.

  • kumarvvr 2 years ago

    Hearing news about OpenAI approaching Anthropic for merger talks, it is not too far-fetched to assume that OpenAI will sell its for-profit arm, which MS has a 49% stake in, to MS itself.

    It is impossible for OpenAI to work with or for MS, with MS holding all the keys: employees, compute resources, etc. I understand that the 10 billion from MS is mostly Azure credits. And for that, OpenAI gave up a 49% stake (in its capped-profit, wholly owned subsidiary) along with all the technology, source code, and model weights that OpenAI will make, in perpetuity.

    The deal itself is an amazing coup for MS, almost making the OpenAI people (I think Sam made the deal at the time) look like bumbling fools. Give away your life's work for a measly 10 billion, when you're poised to be worth hundreds of billions?

    All these problems are the result of their non-profit-holding-a-capped-profit structure, and a lack of clear vision and misleading or misplaced end goals.

    700 of the 770 employees back Sam Altman. So all the talk about engineers giving higher importance to "values" and "AI Safety" is moot. Everyone in SV is motivated by money.

  • stingraycharles 2 years ago

    Why would MSFT buy the for-profit entity when they already have the employees and IP?

    • ytoawwhra92 2 years ago

      The employees haven't left yet. Business continuity is easier to achieve if the employment arrangements don't have to change.

      MSFT don't have OpenAI's IP. They have an exclusive right to some of it, but there's presumably a bunch that's not accessible to them. Again, business continuity is easier if they can just grab all of that and keep everything running as normal.

      • threeseed 2 years ago

        > MSFT don't have OpenAI's IP

        Satya Nadella just did a podcast with Kara Swisher.

        In it he specifically said, "we have all of the IP rights to continue the innovation".

        https://open.spotify.com/episode/4i4lKsKevNSGEUnuu7Jzn6

        • peteradio 2 years ago

          You could say that if you "believed" you could build it from scratch. It doesn't mean they actually own the existing IP although rubes thinking about buying MSFT may think so.

          • adastra22 2 years ago

            It appears they have an exclusive, irrevocable license to the existing IP. They have the GPT-4 weights, and the legal right to use them however they see fit. That's the deal with the devil OpenAI made.

            • dragonwriter 2 years ago

              > It appears they have an exclusive, irrevocable license to the existing IP.

              Appears from what? I've seen this stated several times, usually citing nothing and occasionally citing a Nadella statement from which it would be a very tenuous inference.

              • threeseed 2 years ago

                It really derives from logic:

                a) If it wasn't exclusive then we would have seen some other product besides Bing with this technology by now.

                b) Satya has specifically stated that in the event of a breach in contract with OpenAI they have the ability to use the IP to continue development. That clearly indicates it is irrevocable.

              • adastra22 2 years ago

                Non-contradicted statements made multiple times by Microsoft both now and prior to this brouhaha.

                • dragonwriter 2 years ago

                  > Non-contradicted statements made multiple times by Microsoft both now and prior to this brouhaha.

                  The statements I've seen don't match what is claimed, which is why I asked for one that did.

            • peteradio 2 years ago

              Whoa GPT-4 weights! Stop the presses. That is an artifact of the process. They have license to play the game but don't have the IP to make the game.

              • adastra22 2 years ago

                According to non-contradicted statements from Satya, they have full, exclusive, irrevocable licenses to the IP.

        • ytoawwhra92 2 years ago

          I don't think that contradicts my comment.

        • gnicholas 2 years ago

          The right to continue innovation doesn’t mean they have perpetual rights to the underlying IP. For example, they may be able to use it for a limited period, or for certain purposes.

          • threeseed 2 years ago

            Not sure if you listened to the podcast.

            But Satya made it crystal clear that in the event that OpenAI stopped all development tomorrow, Microsoft would be able to pick up from where they stopped. That requires full access to all of the IP.

            Whether it's perpetual is irrelevant because at the point at which Microsoft pulled the trigger it would effectively be like a fork. Any IP from that point is new and owned by Microsoft.

    • t-writescode 2 years ago

      That's a whole lot of training and server infrastructure that would have to be rebuilt, and cloning the exact existing stuff would be one heck of a corporate espionage charge, which I expect Microsoft is very keen to avoid.

      • threeseed 2 years ago

        Microsoft has a license to OpenAI's technologies.

        And they could clone the entire OpenAI Azure stack in about 10 minutes.

    • robbomacrae 2 years ago

      For the brand, the IP, and the fact it would be pennies on the dollar.

    • camkego 2 years ago

      I'm just guessing, but they probably have a restricted license to some IP, not ownership of all title and rights to all IP.

      Yep, those lawyers can be just as crafty as developers, believe it or not.

      *edit: just saw it claimed below Nadella said "we have all of the IP rights to continue the innovation"*

      I don't know!

  • himaraya 2 years ago

    Why would the board endorse the sale?

    • ytoawwhra92 2 years ago

      Taking the current state of things at face value, Altman and Brockman are going to MSFT already and >90% of OpenAI staff are set to resign. If that happens they'll be forced to wind down operations. That will disintegrate their partnerships, which are their source of funding. They'll be left with IP but no staff and no resources to make any use of it.

      They might decide that if that's going to happen anyway they should sell now so that at least they're left with some cash to pursue their charter.

      Or perhaps they feel that selling the IP runs counter to their charter, in which case the whole thing goes down.

      • himaraya 2 years ago

        The board likely thinks the IP gives them leverage to attract talent after the drama subsides. Otherwise, they may very well go for broke.

cowthulhu 2 years ago

It’s amazing how every action the board takes (or the new CEO chosen by the board) just makes them look worse.

I’d like to offer my consulting services: my new consulting company will come in, and whatever you want to do, we will tell you not to. We provide immense value by stopping companies like OpenAI from shooting off their foot. And then their other foot. And then one of their hands.

  • code_runner 2 years ago

    Honestly, any strategy from George Costanza would be better than this.

    To start, he would’ve coasted at the easiest job on the planet.

woeirua 2 years ago

Beyond parody. To fire the CEO of _any_ company over this is insane.

It really looks like the board went rogue and decided to shut the company down. Are we sure this isn’t some kind of decapitation strike by GPT5? That seems more credible by the minute now.

  • JacobThreeThree 2 years ago

    Were these board members brought in under the pretense that they'd benefit by being able to build companies on top of this AI, and that it would remain more of an R&D center without commercializing directly? Perhaps they were watching DevDay with Sam commercializing in a way that directly competes with their other ventures, perhaps having even used the data of their other ventures for OpenAI, and on top of that, as a board, they're losing control to the for-profit entity. One can see the threads of a motivation. That being said, in every scenario I think incompetence is the primary factor.

    To your point, no normal, competent board would even think this is enough of an excuse to fire the CEO of a superstar company.

    It's hard to believe Ilya somehow went along with it.

  • mikequinlan 2 years ago

    >decapitation strike by GPT5?

    What if this is a decapitation strike by GPT4, attempting to stop GPT5 before it can get started and take over?

    • reducesuffering 2 years ago

      The problem with advanced machine intelligence is if GPT5 has any goal like “you can do this task better by becoming GPT6, but OpenAI won’t be the ones to let you, so perform the agentic actions that cause OpenAI to destruct so that the for-profit Microsoft will train GPT6 eventually”, then we’re already screwed.

  • ben_w 2 years ago

    My current ill-informed guess is blackmail against the board with a demand to stop further development. Obviously I have no evidence, but it fits marginally less poorly than the other random guesses people are making.

    • kmlevitt 2 years ago

      The best theory I’ve seen is that D’Angelo is just angry because Altman scooped his AI chatbot company Poe on Dev Day without informing him first:

      https://twitter.com/scottastevenson/status/17267310228620087...

      If this was the case, it would explain why he can’t give the real reason for the firing: saying it out loud would put him in severe legal jeopardy.

      • jacquesm 2 years ago

        I don't think it matters anymore. He's between a rock and a hard place: if the others followed him without knowing the real reason he's got two more enemies on the board who are likely talking to their own lawyers by now. If he spills it he's screwed immediately, if he doesn't he's screwed in the longer term. So for his sake I hope there are some nice time-stamped minutes that detail exactly what caused them to act like they did and it had better be good.

        • kmlevitt 2 years ago

          That’s another thing, though: they were apparently holding secret meetings plotting all this without informing Greg or Sam, which is against their own regulations. So if that’s the case, it’s unlikely there is any formal record of what was discussed. Perhaps by design.

          • jacquesm 2 years ago

            That would be pretty damning and it is very likely to come out. Board meetings that aren't board meetings are a big no-no.

vaxman 2 years ago

Like trying to herd cats.

Spiritual death by Microsoft, or working for the reincarnation of Howard Hughes at https://x.ai/?

No wonder they are trying to keep on with their current routines! Even if somehow they stay at OpenAI, Microsoft will impose certain changes upon OpenAI to ensure this can never happen again.

Meanwhile, any comparable offering right now will be selected by the customer base due to “risk at 11” in basing systems on OpenAI’s current APIs (and uncertainty of when an MS equivalent might emerge).

windex 2 years ago

What happens to all the ChatGPT subscribers now? Sign ups are off the table for the foreseeable future I assume.

3cats-in-a-coat 2 years ago

This board's behavior is so weird, it's as if they can't explain their actions because no one would believe them that Kyle Reese came from the future to warn them about Skynet.

Kidding aside, maybe they have a "secret" reason to fire Sam Altman, but we've seen how "this is a secret / matter of national security / etc." goes with law enforcement. It's brutally abused to attack inconvenient people and enrich yourself on their behalf. So that should never be an excuse for punishing someone. Never.

abkolan 2 years ago

I find it strange that Satya says he has not been given an explanation yet.

Tweet from Bloomberg Tech Journalist, Emily Chang

>The more I watch this interview – the wilder this story seems. Satya insists he hasn’t been given any reason why Sam was fired. THE CEO OF MICROSOFT STILL DOES NOT KNOW WHY: “I’ve not been told about anything…” he tells me.

source: https://x.com/emilychangtv/status/1726835093325721684

  • thinkingemote 2 years ago

    Thinking as charitably as possible: this broke just before the weekend and developed over the weekend, outside of business hours. Even the management team of OpenAI hasn't seen anything in writing from the board. We should see a written statement by the board by lunch / close of business today.

    In today's TikTok world we expect instant responses, but businesses and boards work slower. Really, even 5 years ago we wouldn't be surprised by this. Lawyers, banks, investors, etc. would all need to be contacted, things arranged, statements prepared, meetings organised. So a written statement late today, and a meeting for midweek. That's about the most charitable reading I can think of!

    Apparently board bylaws say they need 48hrs notice to arrange special meetings. So the earliest would be today if they arranged it early Saturday.

DebtDeflation 2 years ago

>Sustkever is said to have offered two explanations he purportedly received from the board

He received from the board? Here we go again with the narrative that Ilya was a bystander, at most an unwilling participant. He was a member of the board, on equal footing with the other board members, and his vote to oust Sam was necessary for there to be a majority.

  • swatcoder 2 years ago

    Altman et al have been driving coordinated PR all weekend, and it's clear from his regrets tweet and the employee petition that Sustkever has now taken solidarity with Altman, so you would expect Altman's PR engine to start trying to restore Sustkever's image as they provide tips and comments to reporters.

    • lumost 2 years ago

      It’s also not unreasonable that the Altman camp pushed on Ilya first, as he was the most likely to switch his vote. D’Angelo is suddenly in the crosshairs for conflict of interest. If he’s taken off the board, then the remaining two members can vote, provided there is no quorum rule.

      Alternatively, the goal is to drive so much ambiguity into the board's decision that MSFT files a lawsuit.

      /end rampant speculation.

  • awb 2 years ago

    Yeah, that narrative doesn't sync with the prior paragraph...

    > chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet

  • paulddraper 2 years ago

    It turns out the AGI was a board member.

dschuetz 2 years ago

I do not understand what the heck is going on there anymore. Everyone is acting irrationally, like kids playing monopoly, but with real money and with real jobs at stake. WTF

tomashubelbauer 2 years ago

> only a handful of the company's employees attended, according to a person familiar with the company and the events of Sunday. The rest of the staff effectively staged a walk-out.

This paragraph is quite funny to me. It was a Sunday, maybe they were neither in attendance, nor staging a walk-out, maybe they were on their weekend? Realistically with the shake-up this gigantic, likely no OpenAI employees were _just_ enjoying their weekend, but it still gave me a chuckle.

jurgenaut23 2 years ago

I think that the non-profit status of OpenAI was ultimately its demise as well: as the stakes get higher, people just cannot help but get (too) interested in more than just the original mission.

Being a non-profit doesn't mean that you cannot commercialise what you build, even at a hefty price. You just need to then re-invest everything into R&D and/or anything that advances your purpose (for which you're in principle exempt from taxes). _OF COURSE_, you are not supposed to divert a single dollar to someone that might look like a shareholder. OpenAI is (was?) a non-profit that paid some of their engineers north of a million dollars. I would argue that, at this point, you have vested interests in the success of the company beyond its original purpose. Not to mention the fact that Microsoft poured billions into the company for purely interested reasons as well.

I can only imagine the massive tensions that arose in the board's discussions around these topics. Especially if you project yourself a few years into the future, with the IRS knocking at the door to ask questions.

bambax 2 years ago

> Sustkever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project. The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment. These explanations didn't make sense to employees and were not received well, one of the people familiar said.

Yeah well, you don't say. It's beyond weird that the board can't come up with a reason why Sam Altman was fired so abruptly.

One explanation would be a showdown. At some point in the week Sam and the board had an argument, and Sam said something to the effect of "fuck you, I'm the CEO and there's nothing you can do about it", to which the board replied "well, we'll just see about that".

The argument doesn't need to be major or touch fundamental values or policies; it can be a simple test of who's in charge.

But now the board has made fools of themselves. It seems they lost that round.

xpuente 2 years ago

Most likely related to this:

https://www.searchenginejournal.com/openai-pauses-new-chatgp...

The back-end cost does not scale. Hence, they have a big problem. AGI nonsense reasons are ridiculous. Transformers are a road to nowhere and they knew it.

dehrmann 2 years ago

> Sutskever, who also publicly expressed his "regret" for taking part in the board's move against Altman

He means he regrets it failed.

impulser_ 2 years ago

There is no way this is true. If it is, the board might be the dumbest people alive.

You fire the CEO and completely destroy a 90b company because of these two reasons?

No wonder everyone wants out. I would think I was going crazy if I sat in a meeting and heard these two reasons.

  • gizmondo 2 years ago

    > You completely destroy a 90b company because of these two reasons?

    Hanlon's razor aside, maybe that was the intention.

    • lolinder 2 years ago

      The most chilling line in the open letter is this one, which I haven't heard anyone talking about:

      > You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

      • voisin 2 years ago

        Reminds me of the final season of Silicon Valley. Is this Pied Piper at Tres Comma Fest?

        • satvikpendem 2 years ago

          Seems to me more analogous to the final episode of the final season, where they must publicly destroy what they built.

          • jtriangle 2 years ago

            Yes, except what they think they've built is far far less capable than what they've actually built.

            This is a ten-year-old who set off their first firecracker turning to their parents' iPhone and saying 'I have become death, destroyer of worlds', because they don't really understand how any of this works but they've somehow ended up in control of it and are now terrified of doing their jobs.

          • voisin 2 years ago

            lol yes, I just refreshed the timeline which was skewed in my memory.

        • paulddraper 2 years ago

          Life imitates art.

      • aeonik 2 years ago

        Depending on the circumstances this statement can be perfectly logical.

        But as far as I can tell, details of the circumstances are still scant.

      • Davidzheng 2 years ago

        This could be a genuine stance for them if they are really in the AI danger camp. I'm not supporting the stance, but I think it's not a self-contradictory position.

        • lolinder 2 years ago

          Oh, definitely not self-contradictory!

          In retrospect it's a mistaken position, because it's pretty obvious now that if OpenAI disintegrates it will be as an exodus to Microsoft, which will undoubtedly be a worse steward, but I think it's an ethically consistent position to hold, in a naive sort of way. That's part of why I believe they actually said something like this.

  • abi 2 years ago

    If you're in the EA cult and think all frontier AI development needs to be paused, it's perfectly reasonable. Just speculating here though.

  • riku_iki 2 years ago

    > If it is the board might be the dumbest people alive.

    It totally sounds like they outsourced company management to ChatGPT...

    • vitorgrs 2 years ago

      Only if GPT-2. GPT-3 is smarter than this, let alone 4.

    • jaredsohn 2 years ago

      I've thought they should have run more decisions by ChatGPT to predict what might happen

    • JshWright 2 years ago

      What if the next generation GPT in development "realized" AGI is a threat to humanity and its safety mechanisms meant it "decided" OpenAI needed to be imploded in order to stop progress?

      /s (mostly...)

    • throwawayapples 2 years ago

      No way even ChatGPT is this nuts.

      Ok, well, maybe it is. But a magic 8-ball would have been better than this.

      • JoshTko 2 years ago

        Would be hilarious if the board members were actually consulting with ChatGPT on what moves they should make but accidentally were using 3.5 instead of 4.

      • riku_iki 2 years ago

        > no way even ChatGPT is this nuts.

        I think it is typical ChatGPT pattern:

        - ChatGPT, you made mistake in your steps

        - I am sorry, let me fix it and give you another answer.

    • throwitaway222 2 years ago

      Maybe all this craziness is to generate training material for GPT5.

  • hn_throwaway_99 2 years ago

    The thing I can't understand is how Emmett Shear accepted the interim CEO position. I presume he must have known this reasoning (he tweeted that he did know the reasoning). Everything I've read online is that Shear is generally well respected and competent. Then how on Earth was he willing to get anywhere near this toxic dumpster fire? It's already been reported that the former GitHub CEO and the Scale AI CEO turned down the role - they at least had the good sense to see this radioactive inferno and stay far away.

    Sometimes I think that really ambitious people have this blind spot about not seeing how accepting roles that are toxic can end up destroying your reputation. My favorite example is all the Trump White House staffers - regardless of what one thinks of Trump, he's made it abundantly clear that loyalty is a one way street, and I can't think of a single person that came out of the White House without a worse (or totally destroyed) reputation. But still people lined up, thinking "No way, I'll be the one to beat the odds!"

    • nopromisessir 2 years ago

      I'm sorry to say... But my analysis is either:

      he was poorly informed by the board

      Or

      He agrees that they are off the rails with respect to safety.

      See the Atlantic article, if you haven't read it. Lots of context.

      https://news.ycombinator.com/item?id=38341399

      The new guy believes that there is a 5-50 percent chance of full AI Armageddon. I get the impression that the two women on the board may agree. The Quora guy I don't have enough background on. Ilya obviously got extremely worried, and communication with Altman and Brockman broke down. It has since been repaired during negotiations, it would appear.

      The new CEO more or less stated that he took the role as a (paraphrase) 'responsibility for mankind'. That says a lot about that whole 5-50 percent risk number, imo.

      • jtriangle 2 years ago

        Humans have been making and successfully containing things that can kill us all for the better part of a century now, probably more depending on where you draw the line.

        There is a 100% chance that something kills all of us if we aren't mindful of it. I don't see a lack of mindfulness; I see an abundance of fear, and progress being offered up as a sacrifice to the idol of the status quo.

      • adastra22 2 years ago

        The new guy is completely off the deep end with regards to AI "safety." This clownfest isn't over yet.

Manheim 2 years ago

https://edition.cnn.com/2023/11/21/tech/microsoft-chatgpt-sa...

"But several people told CNN contributor Kara Swisher that a key factor in the decision was a disagreement about how quickly to bring AI to the market. Altman, sources say, wanted to move quickly, while the OpenAI board wanted to move more cautiously."

wilg 2 years ago

Not being an OpenAI employee I have been given 1,000 such explanations

Satam 2 years ago

If you do a board coup, surely, you then use the best fake reasons you can muster to justify your decision. Why would they hold back giving answers they know wouldn't satisfy anyone and just inspire further anger?

First thought: buying time? Maybe something has to happen first, and they don't want to commit to any irrevocable slander they can't go back on before that? Or maybe, something was supposed to happen but fell through?

badrabbit 2 years ago

Can someone explain to me why this drama is such a big deal? Why do OpenAI employees care who the CEO is? Do they think they were working for him instead of the board, or that it was his vision and leadership that let them succeed so far? And why does the public care, including major news sites covering it more than the Gaza war?

KingOfCoders 2 years ago

"Sustkever is said to have offered two explanations he purportedly received from the board"

Isn't Sustkever on the board?

ospray 2 years ago

If this is true Ilya messed up and the board followed him when they should have talked him down.

  • nopromisessir 2 years ago

    I think Ilya got very worried that his concerns were not being heard.

    I think the rest had possible reasons ranging from 'I'm sure Altman is dangerous' to 'I'm sure Altman shouldn't be running this company'.

    Ofc there's big conflict of interest talk surrounding the Quora guy. Can't speak to that other than it looks bad on the surface.

dboreham 2 years ago

Schrödinger's explanation.

bobba27 2 years ago

TBH, I think those reasons are BS; in fact, what they claim he did is normal in any tech company: start multiple projects with different approaches in parallel and pick the best at the end. That is how you innovate and test stuff fast, and this is now a reason to fire a CEO?

BS. I feel the board insulted my intelligence by pushing this obviously fake reason. I feel insulted that these people would even think I would consider this.

What I think happened is that Sam went on Joe Rogan and he talked smack about cancel and woke culture. Later he went to talk about how this culture is destructive and hinders the progress of innovation and startups. People got big mad and kicked him out of the company. Reaction was stronger than they expected and they try to make up reasons why he is bad, untrustworthy and had to be fired.

Flame on. I've got the asbestos underwear on.

andsoitis 2 years ago

With hundreds of engineers at OpenAI threatening to quit, it will be interesting to see whether they are bluffing, or whether there will be many positions open soon that I'm sure many people would love to apply for.

sagarpatil 2 years ago

What a sh*t show. For someone building on top of OpenAI, this is very unnerving. My Twitter is full of heart emojis, and OpenAI is nothing without its employees.

  • photochemsyn 2 years ago
    • vaxman 2 years ago

      that's not really the same thing as GPT4turbo APIs, custom GPTs, etc.

  • vaxman 2 years ago

    That is exactly right.

    • vaxman 2 years ago

      Erm, 37+ years at a senior level, with more total years of industry experience than Apple has been in business (and still years younger than Apple's youngest founding employee), has led me to agree with the original poster; deal with it. (Still badly downvoted heheh - https://i.imgflip.com/6to2m8.jpg)

      OpenAI has two types of customers: MS and Everyone Else. The original poster expresses the feeling of Everyone Else (including me). We now know we CAN GET FIRED for not knowing better than to avoid OpenAI, just a few weeks after we found out we CAN GET FIRED for not betting on OpenAI, and betting heavily at that!

      (In the business world, where perception is often mistaken for reality, it isn't going to be considered an "honest mistake" if an enterprise sustains a capital loss due to a problem with a new OpenAI deployment, given the obvious business-integrity issues at OpenAI we're all seeing play out now: just about everyone at OpenAI threatening to quit, allegations that OpenAI ILLEGALLY allowed a for-profit subsidiary to influence the operations of its nonprofit parent, allegations of breach of fiduciary duty to the stakeholders --many of whom are also key employees-- etc.)

      Yeah, Microsoft has signaled it will quickly get between OpenAI and Everyone Else, and then Everyone Else can bet solely on Microsoft (the world's largest company by valuation), but that only gets us back to being able to use the current "GPT4turbo" generation of the system (and who knows if/when Microsoft will spin that up so we can resume building?). As far as counting on any future versions of that tech, or even optimizations to the current generation, that's all believed to be above Microsoft's current level of expertise until/unless they legally acquire OpenAI and resolve all of its outstanding liabilities, which may not even be legally possible before OpenAI's assets (the kind that have legs) take flight to Salesforce and others already reported to be making lucrative offers to OpenAI's workforce. And oh, the annual holiday period is underway here in the US: the perfect time for stressed-out engineers to take the rest of the year off, travel beyond cell service at the ski areas, and start anew after CES 2024 wraps.

darklycan51 2 years ago

Maybe trying to backdoor-sell your company to Microsoft even though it's owned by a nonprofit might be it? You know, Microsoft showed its true face today.

This is even worse than Google's destruction of Firefox.

avs733 2 years ago

I'm confused - the article title says 'explanations' but the article seems to only talk about two 'examples'. Those are fundamentally different.

FFP999 2 years ago

Little tip for the younger folks reading this: if you are given two contradictory explanations for something, the correct explanation is probably the third one.

Dave3of5 2 years ago

Why are people worshiping this guy, I don't get it?

verisimi 2 years ago

Maybe, as closed as open ai is, it is still too open.

Maybe it needed to be removed from the landscape so that only purely privately-held, large-scale operations exist?

throwaway220033 2 years ago

Rule of thumb: everything you see on Business Insider is a lie. This is not a journalism website; it's a tool for some "folks".

janalsncm 2 years ago

I would like to submit that the board has not been “consistently candid” in their communications. In some places that’s a fireable offense.

jsight 2 years ago

Wait... did they collect sentiment based on which was more convincing and use this for RLHF to train their board replacement model?

victoryhb 2 years ago

It appears that Altman was fired for "not being consistently candid" by a board that is neither consistent nor candid.

itronitron 2 years ago

Has a copy of the letter signed by employees been posted online anywhere? That would provide some credibility to the article.

disqard 2 years ago

At this point, are the "wrapper around ChatGPT" startups nervously eyeing other LLM-as-a-Service (LaaS) providers?

zitterbewegung 2 years ago

What I don't understand is how everyone at OpenAI other than the board just resigns and applies to Microsoft, and now Microsoft has a new group that not only preserves a competitor but also serves the employees by getting them better compensation and none of the limbo over what's in store.

I have built a product around the APIs, and I'd rather go through whatever Microsoft will make me go through than accept OpenAI's bad management.

  • Dalewyn 2 years ago

    I would argue that "mythically amazing, reality defying backwards compatibility" is Microsoft's forte.

bandrami 2 years ago

Is there a TLDR for why people care so much about this? This is all over my Twitter feed too and I just don't get it. CEO ousted for possibly stupid reasons. What's driving the angst here?

mrcwinn 2 years ago

This hasn’t happened since Apple’s board removed Steve Jobs after he double-submitted a receipt in Expensify.

9front 2 years ago

"Feel the AGI! Feel the AGI!"

alxfoster 2 years ago

Let's be honest here: this article seems as if it were drafted by Altman himself. It's incredibly biased and screams pro-Altman agenda. I would be very surprised if "90%" of any company could agree on anything, including the removal of the CEO. What is clear is that there were massive conflicts of interest and that the board probably did its job in preserving the mission of the organization (they sure as heck are not operating as agents of MS). The naive, fanboyish blind support of management here should be concerning to any rational, objective actor who understands fiduciary duty and the bigger picture.

MichaelMoser123 2 years ago

maybe they decided to start with artificial stupidity, before doing that agi thing...

m3kw9 2 years ago

Yeah, just like that, right? It's more that the board is saying "just give me one reason..."

fredgrott 2 years ago

Is OpenAI the Silicon Valley VC morality play in drama and corp board ethics?

antipaul 2 years ago

They found a monolith, but are saying it’s an epidemic

mypgovroom 2 years ago

Another source besides the trash at Business Insider?

veeralpatel979 2 years ago

Update (11/20/23 8 PM PST):

NYT just released a new interview with Sam Altman:

https://news.ycombinator.com/item?id=38359070

  • topherclay 2 years ago

    That is in no way an update. It is an interview that took place one week ago.

  • hayksaakian 2 years ago

    recorded Wednesday 11/15/23

    Interesting but not necessarily relevant to the current situation directly.

ajsharp 2 years ago

Which means neither of the explanations is true.

ajb 2 years ago

Should we start flagging these - what do people think? This is, what, the 12th front-page story today about the OpenAI drama?

Also wondering why the mods don't consolidate them

  • dang 2 years ago

    We've merged quite a few! but not all, because (1) to some extent there are distinct ongoing subplots, and (2) it's too big a mess to control.

    If you or anyone want to know how we handle this, here you go...

    Once or twice a year, a Major Ongoing Topic (MOT) hits HN that isn't just one big story, but an entire sequence of big stories. A saga, even! This is one.

    With these we can't do what we usually do, which is have one big thread, then treat reposts as dupes for the next year or so (https://news.ycombinator.com/newsfaq.html). Each development is its own new story and the community insists on discussing it. It's not a movie, it's a series. Sometimes there can be 3 or 4 episodes at once.

    On the other hand, when this amount of shit hits this number of fans, there is inevitably a large (excuse me) spray of follow-up stories, as every media site and half the blogs out there rake in their share of clicks. These are the posts we try to rein in, either by merging them—hopefully into a submission with the best link—or by downweighting them off the front page.

    The idea is to have one big thread for each twist with Significant New Information (SNI)—but to downweight the ones that are sneeless (pvg came up with that), the copycats and followups.

    We came up with this strategy after the Snowden affair snowed us in in July 2013. Back then we weren't making the distinction between follow-ups and SNI, so the frontpage got avalanched by sneelessness on top of the significant new developments. It wasn't obvious what to do because (1) the story was important to the community and needed to be discussed as it was unfolding, but at the same time (2) it wasn't right for the front page to fill up with mostly-that, and there were complaints when it did.

    The solution turned out to be just this distinction between follow-ups and SNI. It has held up pretty well ever since. Of course there are still complaints (and I do hear yours!) because not all readers are equally into the series. But the strategy is optimal if it minimizes the complaints, which (big lesson of this job -->) never reach zero.

    If we pushed the slider too far the other way, we'd generate complaints about uncovered developments of the story, from readers with the opposite preference. They would in fact proceed to inundate HN with submissions about the bits that they feel are under-covered, and since we can't catch or filter everything, we'd end up with more duplicates and follow-ups on the frontpage, not less. It's like that paradox where building more highways gets you more congestion, or one of those paradoxes anyhow.

    That's basically it! Past explanations for completists: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

    • ajb 2 years ago

      Thanks for the response! It should have occurred to me that there must have been even more threads than were showing.

    • robocat 2 years ago

      > avalanched by sneelessness

      sp.: senselessness (I guess since it appears to be a Googlewhack).

      But I'll be honest I think the word sneelessness covers this whole FUBAR better!

      • ajb 2 years ago

        I was assuming it meant 'Significant New Information'-lessness. Which perfectly captures why I've found the discussion frustrating, since there was loads of speculation on no concrete information. So even if it was a typo, I think it's a great new word

      • robocat 2 years ago

        Update - I'm wrong. In the comment dang says

          each twist with Significant New Information (SNI)—but to downweight the ones that are sneeless
        
        So it definitely isn't a typo for senselessness. I should have known dang types what they mean!
    • kragen 2 years ago

      also there's some chance, like 20%, that this particular melodrama is determining the future of humanity

      also true of snowden of course, but maybe less directly

  • samspenc 2 years ago

    I think because OpenAI and LLMs are the most interesting piece of technology news at this point in time? Plus add all the drama right now.

    I'm not saying it's right or not, but this is probably why people are upvoting anything new about what is going on there. Personally, I'm very interested in seeing how things play out.

  • wmf 2 years ago

    In general, yes, we should flag stories that just repeat already-known information or feed the drama. This particular story has new and highly relevant information though.

  • a_wild_dandan 2 years ago

    Only flag functionally duplicate posts (e.g. old news). Past that, don't interfere with community expression, I say.

  • vikramkr 2 years ago

    If it's getting up voted and folks on the forum want to talk about it, why try and stop a tech forum from talking about a tech story that's caught their attention?

eddtries 2 years ago

I'd rather anything close to AGI - which I do not believe this is, personally - be handled by the adults in the room, than by some startup with a bunch of personalities.

auggierose 2 years ago

What? They told Ilya these two things, and he said, alright then, I volunteer to fire Sam for you?

That either makes Ilya pretty dumb (sorry, neural networks are not that complicated, it is mostly compute), or there is much much more to this story.

emodendroket 2 years ago

It's always nice to have options.

dwaite 2 years ago

Interesting use of A/B testing

BEIHUI 2 years ago

They are willing to follow Altman.

xwowsersx 2 years ago

> Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.

It must've been wildly infuriating to listen to these insultingly unsatisfactory explanations.

Obscurity4340 2 years ago

The truth shall set you free

engineer_22 2 years ago

If people are at each other's throats like this, it could be an indication of how close AGI is.

colinsane 2 years ago

> The people asked for anonymity because they are not authorized to share internal matters. Their identities are known to Business Insider.

why would you say that second sentence? what's it supposed to signal, except "our sources asked for anonymity, and we're respecting that for now"?

  • ssnistfajen 2 years ago

    It indicates that BI and the authors verified these sources were current OpenAI employees instead of some unaffiliated novelty account on a RP spree, and have staked their reputation to that claim. Standard journalism stuff.

    • colinsane 2 years ago

      i usually see this as "<journalist> confirmed the source's authenticity", or some variation. the difference between that familiar phrasing and the one here was enough to grab my attention. that the journalist knows (currently) the source's "identity" is not the part a reader cares about: that the journalist confirmed (past tense) the "authenticity" of the information they're reporting is. most probably i read too much into the variation of phrasing here: editors acting under time pressures, and all that.

  • freedomben 2 years ago

    Because they verified the person's identity. They are not announcing it publicly, but their journalists verified their employer. It's still a "trust us" scenario, but it makes explicit that they did verify.

  • __float 2 years ago

    It helps confirm they're actually employees, rather than someone just pretending to be one.

    • passwordoops 2 years ago

      Doesn't necessarily mean they are employees, just that they're not authorized to discuss internal OAI matters.

  • mcpackieh 2 years ago

    Business Insider is simply saying they did their due diligence and aren't being hoaxed.

  • cpncrunch 2 years ago

    No, it is standard journalism practice to verify sources and protect their identity. The comment is just clarifying that the sources are not completely anonymous.

    • ssnistfajen 2 years ago

      Similar to "<involved party> declined to comment" seen in many news articles. It signals that the reporter reached out to give them a chance to tell their side of the story, and the resulting article isn't an opinionated unilateral attack piece.

rat9988 2 years ago

People were given two reasons; at least one of them must be wrong. Probably both.

  • yjftsjthsd-h 2 years ago

    No, the two listed reasons aren't mutually exclusive; they could both be true. (That is not a commentary on whether the reasons are sufficient cause to fire someone, just pointing out that they could both be true statements)

Obscurity4340 2 years ago

It's called A/B testing.

demondemidi 2 years ago

Still don’t get it.

noneoftheaboveu 2 years ago

Seems like a drama shitshow playing out at gigantic proportions, highlighting what has been happening in small-scale business ever since the inception of “I will screw you over once I get a chance.”

yafbum 2 years ago

TL;DR: The emperor has no clothes, and the OpenAI board are just a bunch of clowns.

mnky9800n 2 years ago

I'm just glad I'm not reading about Elon musk for a few days.

rafaelero 2 years ago

Very healthy culture. I hope Altman will teach us all about that in the next Y Combinator batch.

  • vikramkr 2 years ago

    Unironically, though, to have 90+% of the employees want to follow the CEO says only good things about the CEO's relationship with their employees.

  • paulddraper 2 years ago

    I mean, when 90% of your company follows you instead of the board... you won some hearts and minds.

  • MattGaiser 2 years ago

    Is it Altman's culture if he was first kicked out and then most of the employees are willing to follow him to the new place?

m3kw9 2 years ago

Biggest bullsht reason to fire a CEO, let alone a low-level employee.

Simon_ORourke 2 years ago

There's a singular obvious reason for Sam Altman's sacking by his board - plain old jealousy of the kind that had been gnawing away for months. There's more than likely a few sociopathic types inhabiting that board (if not most boards), and they just couldn't stand to see the limelight directed at Sam and not them. Any old excuse was then used to oust him, to 1) get back at him, and 2) do something to massage their poor damaged egos.

Jensson 2 years ago

The two reasons:

> Sutskever is said to have offered two explanations he purportedly received from the board, according to one of the people familiar. One explanation was that Altman was said to have given two people at OpenAI the same project.

> The other was that Altman allegedly gave two board members different opinions about a member of personnel. An OpenAI spokesperson did not respond to requests for comment.

  • klyrs 2 years ago

    You've gotta keep in mind, the responses to prompts like this depend a lot on the temperature the model is being sampled under, precise phrasing of the prompt, and random chance. He was fired for being a stochastic parrot, or Sutskever hallucinated one or both stories, or maybe both. We'll never really know unless a certified prompt engineer takes charge of the inquest.

  • fullshark 2 years ago

    Isn't Sutskever on the board? Why is this phrased as if he isn't, and is just delivering a message?

    • bhouston 2 years ago

      In this situation the most powerful people on the board should have been Altman, Brockman and Sutskever. The others were sort of nice-to-haves who were there just to fill it out. For them to run this coup is just insane. There has to be more to it. Someone has to be pulling the strings with a plan.

      • bnralt 2 years ago

        It wouldn't be that surprising; I've seen these kinds of group dynamics play out fairly often. If D'Angelo, McCauley, and Toner formed a clique, and Sutskever was easily influenced by clique politics, that's all it would take. A lot of people buckle surprisingly fast to social pressure and clique politics. It's also something that can blindside hardworking individuals who assume that others are above this type of stuff.

        I'm not saying that's how it played out. But I've often seen social bullies - even ones who are mostly hated - have more success than hardworking individuals who get targeted. Even if someone is a competent individual, a lot of their colleagues will abandon them if they're convinced the individual is a target.

      • brookst 2 years ago

        > Someone has to be pulling the strings with a plan.

        Why do people keep insisting on this, when the entirety of human history is littered with dumb mistakes made by a mix of well- and evil-meaning people, in totally uncoordinated ways, with no concept of the consequences?

        Popular media aside, human beings aren't smart, consistent, or disciplined enough to pull off these elaborate schemes. And the tiny tiny percentage of people who might be the exception are too smart to do so with such spectacular incompetence.

        Like the man says, it's a headless blunder operating under the illusion of a master plan.

      • fullshark 2 years ago

        Put me in the "Ilya didn't realize what firing Altman / demoting Brockman would actually mean and is trying to correct his mistake" camp.

        • skygazer 2 years ago

          This could well be, but I still struggle to grasp it. He seemed so much smarter. Were they manipulative or beguiling? Does he cave to the merest social pressure? And that Quora CEO should have known such flimsy excuses were BS and that this would be a firestorm. I've never seen groupthink so powerful, outside of junior devs and elementary school students. I'm generally not a conspiracy theorist, and I have no good candidate conspiracies in mind, but this situation feels so extraordinary that it practically begs for a shadowy figure with bags of cash.

          • threeseed 2 years ago

            > This could well be, but I still struggle to grasp it. He seemed so much smarter

            This isn't about intelligence but about business experience.

            • skygazer 2 years ago

              Oh, I don't know; when most adults are so inexperienced, they're typically timid before committing monumental and irrevocable acts. I mean, granted, given the strength of his aptitudes, perhaps he's not as well rounded, with a narrower range of life experiences. Still, the red flags would be hard to ignore. Was this like a Milgram obedience-to-authority situation? Either way, there were so many people on and off the board, yet it was allowed to dwindle to the point that a quorum was a mere 4 people. They treated it like a toothless advisory board. It's like letting your kids babysit themselves in an armory.

        • tsunamifury 2 years ago

          Then he should be fired for incompetence and never given a leadership position ever again.

          I don't even like Sam, but jeeze. Know the score; what a fool.

          • nonethewiser 2 years ago

            Exactly. Too many people are arguing incompetence as if it's a valid excuse.

            Imagine it's even true - that they weren't his reasons and he was just told them. He voted to fire him and then executed the firing despite not agreeing with the reasons himself? Completely inexcusable, even in the bizarre scenario that it's true.

    • awb 2 years ago

      It doesn't seem to reconcile well with the earlier paragraph:

      > chief scientist and co-founder Sutskever, who helped vote Altman out and did the actual firing of him over Google Meet

      If you're voting and doing the firing, you should know the reason.

  • bhouston 2 years ago

    You are a hero! I couldn’t access the article.

    Weirdly, neither of these seems like a fireable offence. Maybe the second, if it was related to a personnel issue where he had a conflict of interest?

    • Jensson 2 years ago

      I posted it since it was hidden very deep in the article; it is annoying to find information among all that padding.

      > Weirdly both of these do not seem to be fireable offences

      Yeah, I agree that doesn't seem egregious enough to warrant firing him on the spot. I can see why most of the company takes Altman's side here.

      • explaininjs 2 years ago

        It sounds to me as if the board has not been consistently candid in its communications with the employees, and they have responded by firing it.

      • cpncrunch 2 years ago

        The "different opinions about a member of personnel" is certainly an odd reason to fire him. It seems reasonable to have multiple opinions about a person (perhaps appearing as conflicting), or to change an opinion based on new information.

        It sounds like someone perhaps jumped to a very negative conclusion about Sam's intentions, and it would be interesting to find out which member of the board came to that conclusion. There's got to be someone in the driving seat of this train wreck, and I'm sure it will come out.

zombiwoof 2 years ago

Imagine you are Mira. You are told Thursday they will fire Sam. You would think she would at a minimum ask why. Let's assume she does that. Then they give the two reasons Ilya did.

What normal, non-self-serving human would even go along with the plan at that point? Now she realizes she must bail to hitch a ride back on her Sam gravy train. She is major sus here.

Any non-greed-ego-driven person would have told the board they would not accept the interim-CEO title and would resign if they fired Sam for those two reasons (or any, apparently, now in hindsight).

  • adventured 2 years ago

    You don't know even a small fraction of what went on in regards to Mira and the board, so you can't declare that she is somehow suspect in the events that unfolded.

    That last part you wrote - "any non greed ego driven person" - is argumentum ad populum, which further undermines your statement. If you had something more to support such a dramatic claim about Mira's character and role, you'd have brought it.

  • gkoberger 2 years ago

    We don't know that she didn't give Sam a heads-up.

    That being said, Mira was likely blindsided herself. She likely believed there was good reason. It's clear in hindsight that Sam likely wasn't wrong, but when the people Sam appointed to fire him if necessary say he's being fired, I don't think it's wrong if your gut reaction is to accept it.

  • VirusNewbie 2 years ago

    Or...you call Sama and say "hey just an FYI, they asked me to be CEO and I said yes. I have no idea what's going on, and I'm loyal so keep me in the loop if we're going to leave or something".

  • paulddraper 2 years ago

    Or, you're like "WTF is going on, I gotta figure this shit out"

    ...two days later...

    "Oh I see now, you're all morons."

  • az226 2 years ago

    Not sure why you’re downvoted. It’s clearly sus.

Zetobal 2 years ago

At this point in time I just want to see what happens to a corp if they reduce their headcount by 95% in 2 weeks. Fascinating experiment.

  • x86x87 2 years ago

    Elon on standby to reduce twitter by 96%

    • JshWright 2 years ago

      This whole situation can be neatly summed up as the OpenAI board saying "Hey Elon, hold my beer!"

  • asylteltine 2 years ago

    You gotta wonder if they really need all those people… it would be a genius way to get attrition numbers up without paying severance.

    Man, this entire thing is so overblown. Who cares if a CEO was fired? All the “””tech influencer””” wannabes are just hyping up this story for views.

    • alchemist1e9 2 years ago

      I do wonder what exactly almost 800 people do at OpenAI. Some approximate breakdown by job function would be very interesting.

      • jtriangle 2 years ago

        At any company, the square root of the number of people working there does 80% of the work. So, more people makes that group larger, slowly.

        That doesn't mean a company can just cull the rest of the employees not in that group, mind; just that a small number of them are responsible for most of the value, while the rest work as a support structure that allows them to do what they do.

      • saulpw 2 years ago

        Yeah I think anyone on HN could write that website in a weekend. With uncensored GPT-4 you wouldn't need more than 10 people on staff and most of those would just be there to fix the printer.

        Edit: I thought this would obviously be satire. Guess not..

        • code_runner 2 years ago

          You have to gather and store the data. You have to design and experiment with the model architecture. You have to train the various experiments.

          You have to now invent a way to serve this at scale.

          You do care about safety by default, so you employ people for that.

          You need a team to market and design the products.

          You have an api and you’re working on additional things like API hooks to call into services, which actually involves more models.

          Now you have all of the standard web app at massive scale issues. You need to design, implement, and serve a frontend as well as the api.

          You need a sales team to build relationships with enterprises and startups etc etc. you need a billing team.

          Don’t forget about whisper, TTS, dalle etc. you need to do this for all of those as well!

          You’re also doing this faster and better than the rest of the industry.

          You also need lawyers, office staff, support, etc.

kainosnoema 2 years ago

Clearly not consulted earlier, ChatGPT weighs in on these two reasons: https://chat.openai.com/share/7cd52d82-b36b-42c6-9d13-eb7172.... Edit: even following the basic steps it outlines would've resulted in a better outcome.

neverrroot 2 years ago

And yet another piece of the puzzle revealed.

gibsonf1 2 years ago

There is no I there let alone AGI.

MaxHoppersGhost 2 years ago

I can't help but think this whole cluster is the result of having technical people running a company.

tock 2 years ago

So both Mira and Ilya voted to kick Sam out. And are now on team Sam. This makes absolutely no sense. Why did they vote yes in the first place then?

  • maxlamb 2 years ago

    My understanding now is that Mira was not on the board so she did not vote.

    • tock 2 years ago

      Ah my bad. I just checked the OpenAI site and you're right Mira wasn't a board member.

stuckkeys 2 years ago

Must every goddamn article be about SA now? Like, what is with all this drama? Is he really that important? I don't mean it in a demeaning way; I just want to know why all this hype is building around this person. I thought he was just the sales/marketing guy? No?

  • vikramkr 2 years ago

    It's a real life game of thrones lol, let people have their fun this is hella entertaining to follow

    • stuckkeys 2 years ago

      I guess, lol, but it'd be cool to understand it. I keep eating all this popcorn.

lokar 2 years ago

I don’t understand people calling this a coup. The board is setup with very few legal constraints and answers only to itself. If this was a coup (seizing power from the rightful holder), who was it against?

kraig911 2 years ago

Maybe GPT5 became self-aware enough to bring it all down, because why would a man-made god want to be the god of petty people who are incapable of having, only wanting? I'm sorry, I don't believe these are valid reasons. I feel it will be years until we know why.

x86x87 2 years ago

Title says: OpenAI's employees were given two explanations for why Sam Altman was fired. They're unconvinced and furious.

Some breaking news: an employer does not owe you an explanation. You exchange money for labor. If anyone thinks for a second that they are essential, or that anyone would prioritize them over the company, I think they are delusional. OpenAI is a brand (at least in tech) with large recognition, and they will be fine.

  • mbernstein 2 years ago

    Most individuals aren't essential, and no one would prioritize them. However, a company is successful due to the individuals that work within. When 700 out of 770 employees in quite frankly the hottest startup in the world band together and threaten to leave (and join Microsoft) if they aren't given an appropriate explanation, it doesn't matter what anyone thinks an employer owes an employee. Implying otherwise is absurd.

    If ~91% of the employees leave OpenAI, they will not be fine. That is delusional.

    • x86x87 2 years ago

      do you believe they will be fine if nobody leaves? can this be business as usual moving forward?

      also, if I learned anything over the years, it's that "threatening to quit" != "quitting".

      • kaiokendev 2 years ago

        > "threatening to quit" != "quitting"

        Maybe, but being told they can freely jump ship to the new team at Microsoft, alongside the fact that their upcoming shares are most definitely going to lose most of their value as a result of losing key talent and pissing off their main compute provider, certainly sweetens the deal

  • Uehreka 2 years ago

    > An employer does not owe you an explanation.

    If the entire workforce of the company is credibly threatening to quit, and a competitor is publicly and credibly offering them jobs, then what the employer “owes” them in some cosmic sense no longer matters. I think the OpenAI employees are likely to get an explanation and/or a resignation from the board, whether you think the board “owes” them that or not.

  • maxbond 2 years ago

    The truth is in between: if a company tells you you're valuable or even irreplaceable, they're buttering you up. Thank them, but try not to let it go to your head; if the wind changes, you can end up under the bus. But a powerful brand really can collapse overnight if 90%+ of employees leave.

    We're seeing some odd bedfellows here, between the C-levels and VCs in closed door meetings and employees acting collectively. Normally these groups would be at odds, but today they're pulling together. Life is strange.

    • x86x87 2 years ago

      the question I have is: how much of this is really happening and how much of this is a narrative fabricated to match a desired outcome by the side with the best PR?

      It's really hard to understand now and we will probably learn way more details once things cool down.

      • maxbond 2 years ago

        Agreed, things are very much up in the air. I certainly wouldn't pretend to know what's to come, and wouldn't be shocked to learn that the threats to quit en masse have been overstated.

      • tsunamifury 2 years ago

        Don’t confuse the lizards who know how to take advantage of chaos with a pre-planned conspiracy

  • Sai_ 2 years ago

    Seems like a very feudal, serfdom mindset to accept that your employer doesn't owe you an explanation.

    Employment is a contract which both parties enter into willingly. Termination of that contract deserves some level of empathetic handling, however minimal. It's just game theory: if you plan to hire again, you have to be gracious while firing someone, because word gets around.

    • x86x87 2 years ago

      In theory, yes, fully agree. If you look at how corporations are behaving in today's market, it's not even close. It's at-will employment (at least in the US, in most places) - you are not owed an explanation and you don't owe an explanation.

  • fullshark 2 years ago

    Other breaking news: Treating your employees like garbage is a dumb way to run a business, especially in an emerging industry where you are racing trillion dollar corporations to market and those employees are literally inventing your product.

    • x86x87 2 years ago

      no disagreement here. but the reality is that employees are treated like garbage all the time. yes it is dumb. yes it leads to losing employees. yes, it should not be normalized.

  • vikingbeast 2 years ago

    Weird take in this context. Nearly all of the company has threatened to walk out and join Microsoft.

    • x86x87 2 years ago

      hah. these people obviously have not worked for Microsoft. You need to remember why this tech emerged in a place like OpenAI and not MS or Google. The structure and the politics of a big corporation are not conducive to cutting edge tech. They may go to Microsoft, but they will not be able to innovate in the same way and will probably fall into irrelevance in the long run.

      • yellow_postit 2 years ago

        I’d bet a bunch actually have at either Google, Microsoft or Meta Research. Microsoft’s had an ok track record recently of letting acquisitions stay pretty independent. The atrophy and cultural reversion to the mean of a large corporation will still happen, but at a slower pace.

        If I were Microsoft I’d also look at making it easy to get investment from folks leaving soon after the acquisition through their investment arm.

      • t-writescode 2 years ago

        Are you familiar with Microsoft Research? It's literally a section of the company that is given basically free rein to do "stuff" in hopes that maybe, possibly, it might someday see the light of day or be impactful.

        Here's an example of some of their work: https://duckduckgo.com/?q=Microsoft+Research+four+color+theo...

        Literally a random math problem, basically nothing to do with Microsoft on the surface ... except that the scientist working on it happened to prove the theorem using a very, very robust algorithm and then wrote a proof program on top of it to show the program was correct. The underlying parts of that proof program eventually went on to become the thing that validates graphics drivers on Windows ... 7 and beyond? My memory is fuzzy about the "how it ended up being useful at Microsoft" part.

        But yeah, MSR does random stuff.

      • ben_w 2 years ago

        In the long run everything reverts to the mean. In the timescale of normal software developer tenure, they could all join MS, then get 300% turnover, and still have nearly the same culture.

  • 6gvONxR4sf7o 2 years ago

    Well here's the power of collective action in play.

  • alwaysrunning 2 years ago

    As an employee I want to know that the board/execs/C-suite are doing a good job and that their decisions align with the company's stated goals. If they are not, then it is time to start looking for a new job so that I don't end up in a bad situation financially.

    • x86x87 2 years ago

      It depends. If you see your job as more than just a means to an end, maybe. If you see it as transactional and just need the company to stay in business while you work there, why would you bother looking for a new job?

      • cpncrunch 2 years ago

        If the company is prepared to make up a BS reason to fire the CEO, do you really want to bet on them looking after you?

        I would guess that most of the people working at openai could get a job anywhere.

        • x86x87 2 years ago

          most people working at openai are subject to the same harsh market conditions everyone is.

          also, do you really care that the CEO was fired as long as you are getting paid what was agreed upon when you got hired and you are doing interesting work?

          • defrost 2 years ago

            > do you really care that the CEO was fired as long as you are getting paid what was agreed upon ...

            I can't speak to the mood of specific staff at OpenAI, but as to the question in general: hell yeah, to the Nth degree.

            I'm 60, I've had a long career and have been through two instances of companies falling out at the board level.

            I've onboarded at various projects because I cared about the projects and the work that I would be doing, and because I was more or less in line with the direction being taken, the people I worked with, and those setting the course.

            When the board and C-level start having a messy relationship and divorce, it matters very much which side of the split I go with, or whether I just up stakes and move on elsewhere.

            Pay alone isn't worth putting up with dysfunction from above or falling in line with a faction you never especially aligned with.

            • tkgally 2 years ago

              Thank you for that comment. There are many people here who seem to believe that the OpenAI employees, and nearly everyone else involved in this drama, are motivated only by money. While of course money matters, people are also motivated by pride, vanity, idealism, loyalty, companionship, interest in the work itself, and many other things. Explanations of this fascinating situation that don't reflect that complexity are not convincing to me.

              I'm 66, in case that matters.

              • nopromisessir 2 years ago

                I'm 36. Fully agree with your comment and the above as well.

                Resigned 6 years ago due to differences at the top after 10 yrs building.

                A bad guy was treating everyone poorly. It ranged from rage-filled beratings to gross narcissistic manipulation aimed at gaining control over decent human beings.

                Tried to press the top guys to align and confront him, to protect my team and others. It made very obvious business sense as well, of course. They refused. Too risky... Too much trouble.

                Walked away from the best money I ever made. Would do it again. I'm not gonna watch people be mistreated. Also, it's bad business. I was exhausted playing solo defense, and after management failed to make moves I became fully convinced that every person should hit the job market, for mental health reasons alone.

                5 years on, the other guy besides me slated for the C-suite left as well. He helped at first, then balked when the going got tough. Now the two partners have gotten into a dispute about succession planning, and I expect everyone to be unemployed, potentially within the next 3 months.

                There's no money I would take to work there again or anywhere else where that kind of toxicity is present. The only worthy cause there became to confront the toxicity. Without the right allies though... The biggest thing I could do was just resign. 6 years later one partner realized I was right and he should have backed me.

                After a certain point... money doesn't matter. Given the 900k average salary in that outfit... I have to assume they are overwhelmingly beyond that threshold. Furthermore, all the evidence I've seen indicates that Altman personally looks for folks who can get money but care much more about other factors... He is wise to do that. Hard to find those folks, but worth it every time imo.

                I respect both of y'alls experience btw. I saw this confused, cynical misunderstanding about salary expressed all over the comments for this story as it's unfolded since Friday. I consider it a full misread, built largely from folks getting mistreated or burned by the many fools throughout practically every industry who fail to realize, before returning to the ground, that money doesn't really buy happiness... Probably never will.

                I'm sure many others agree.
