OpenAI departures: Why can’t former employees talk?

vox.com

1254 points by fnbr 2 years ago · 1040 comments

modeless 2 years ago

A lot of the brouhaha about OpenAI is silly, I think. But this is gross. Forcing employees to sign a perpetual non-disparagement agreement under threat of clawing back the large majority of their already earned compensation should not be legal. Honestly it probably isn't, but it'll take someone brave enough to sue to find out.

  • twobitshifter 2 years ago

    If I have equity in a company and I care about its value, I’m not going to say anything to tank its value. If I sell my equity later on, and then disparage the company, what can OpenAI hope to do to me?

    • modeless 2 years ago

      They can sue you into bankruptcy, obviously.

      Also, what if you can't sell? Selling is at their discretion. They can prevent you from selling some of your so-called "equity" to keep you on their leash as long as they want.

      • twobitshifter 2 years ago

        That’s a good point. But if you can get the equity liquid, I don’t think the lawsuit would go far or end in bankruptcy. In this case, the truth of what happened at OpenAI would be revealed even more in a trial, which is not something they’d like, and this type of contract with lifetime provisions isn’t likely to be enforced by a court IMO - especially when the information revealed is truthful and in the public’s interest.

      • Shocka1 2 years ago

        > They can sue you into bankruptcy, obviously.

        Apologies in advance; my comment adds nothing to progressing the thread.

        I was just in the middle of eating a delicious raspberry danish - I read your comment and just about lol'd it all over my keyboard and wall.

        The last thing I would do if I had a bunch of equity is screw with a company with some of the most sought after technology in the world. There is a team of lawyers waiting and foaming at the mouth to take everything from you and not bat an eye about it. This seems very obvious.

      • bambax 2 years ago

        > They can prevent you from selling some of your so-called "equity"

        But how much do you need? Sell half, forgo the rest, and you'll be fine.

        • modeless 2 years ago

          Not a lot of people out there willing to drop half of their net worth on the floor on principle. And then sign up for years of high profile lawsuits and character assassination.

      • LtWorf 2 years ago

        If you can't sell, it's worthless anyway.

        • ajross 2 years ago

          Liquidity and value are different things. If someone offered you 1% of OpenAI, would you take it? Duh.

          But it's a private venture and not a public company, and you "can't sell" that holding on a market, only via complicated schemes that have to be authorized by the board. But you'd take it anyway in the expectation that it would be liquid someday. The employees are in the same position.

    • cdchn 2 years ago

      From what other people have commented, you don't get equity. You get a profit sharing plan. You're chained to them for life. There is no divestiture.

      • pizzafeelsright 2 years ago

        Well, then, people are selling their souls.

        I got laid off by a different company and can't disparage them. I can tell the truth. I'm not signing anything that requires me to lie.

        • cdchn 2 years ago

          Just playing the devil's advocate here, but what if you're not lying... what if you're just keeping your mouth shut, for millions, maybe tens of millions?

          Wish I could say I would have been that strong. Many would not disparage a company they hold equity in, unless they went full baby genocide.

          • account42 2 years ago

            > Just playing the devil's advocate here, but what if you're not lying... what if you're just keeping your mouth shut, for millions, maybe tens of millions?

            Then you're not just lying to others but also to yourself.

      • nsoonhui 2 years ago

        Here's something I just don't understand. I have a profit sharing plan *for life*, and yet I want to publicly trash it so that the benefits I can derive from it are reduced, all in the name of some form of ... what, social service?

        • ivalm 2 years ago

          Yeah, people do things financially not optimal for the sake of ethics. That’s a key part of living in a society. That’s part of why we don’t just murder each other.

        • conartist6 2 years ago

          Your assumption is that covering up unethical behavior is good for you in the long run. Really it's only good for you in the long run if you manage to be canny enough to sell just before the ****storm hits.

    • chefandy 2 years ago

      > If I sell my equity later on, and then disparage the company, what can OpenAI hope to do to me?

      Well, that would obviously depend on the terms of the contract, but I would be astonished if the people who wrote it didn't consider that possibility. It's pretty trivial to calculate the monetary value of equity, and if they feel entitled to that equity, they surely feel entitled to its cash equivalent.

    • treszkai 2 years ago

      > and I care about its value, I’m not going to say anything to tank its value

      Probably people like Kokotajlo cared about the value of their equity, but even more about their other principles, like speaking the truth publicly even if it meant losing millions.

    • m463 2 years ago

      They might attack your book deal (which would sell more books!)

    • citizen_friend 2 years ago

      Clout > money

  • listenallyall 2 years ago

    It's very possible someone has already threatened to sue, and either had their equity restored or received a large payout. But they probably had to sign an NDA about that in order to receive it. End result: every future person thinks they are the first to challenge the legality of the contract, and few actually try.

  • insane_dreamer 2 years ago

    Lawsuits are tedious, expensive, drawn-out affairs; many people would rather just move on than initiate one.

fragmede 2 years ago

It's time to find a lawyer. I'm not one, but there's an intersection with California SB 331, also known as “The Silenced No More Act”. While it is focused more on sexual harassment, it's not limited to that, and these contracts may run afoul of it.

https://silencednomore.org/the-silenced-no-more-act

  • staticautomatic 2 years ago

    No, it’s either a violation of the NLRB rule against severance agreements conditioned on non-disparagement, or a violation of the common law rule requiring consideration for amendments to service contracts.

  • nickff 2 years ago

    This doesn’t seem to fall inside the scope of that act, according to the link you cited:

    >” The Silenced No More Act bans confidentiality provisions in settlement agreements relating to the disclosure of underlying factual information relating to any type of harassment, discrimination or retaliation at work”

  • j45 2 years ago

    Definitely an interesting way to expand existing legislation vs having a new piece of legislation altogether.

    • eru 2 years ago

      In practice, that's how a lot of laws are made. ('Laws' in the sense of rules that are actually enforced, not what's written down.)

Al-Khwarizmi 2 years ago

"It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it."

I find it hard to understand how, in a country that takes freedom of expression so seriously (and I say this unironically: American democracy may have flaws, but that is definitely a strength), it can be legal to silence someone for the rest of their life.

  • borski 2 years ago

    It’s all about freedom from government tyranny and censorship. Freedom from corporate tyranny is another matter entirely, and generally relies on individuals being careful about what they agree to.

    • bamboozled 2 years ago

      America values money just as much as it values freedom. If there is any chance the money collection activities will be disturbed, then heads will roll, violently.

      See the assassination attempts on President Jackson.

    • loceng 2 years ago

      Problematic when fascism forms, as has recently been evident in social media working with government to censor citizens; fascism being authoritarian politicians working with industrial complexes to benefit each other.

    • sleight42 2 years ago

      And yet there was such a to-do about Twitter "censorship" that Elon made it his mission to bring freedumb to Twitter.

      Though I suppose this is another corporate (really, plutocratic) tyranny.

  • DaSHacka 2 years ago

    As others have mentioned, it's likely many parts of this NDA are non-enforceable.

    It's quite common for companies to put tons of extremely restrictive terms in an NDA that they can't actually legally enforce, to scare potential future ex-employees away from creating a problem.

    • fastball 2 years ago

      I wouldn't say that is "quite common". If you throw a bunch of unenforceable clauses into an NDA/non-compete/whatever, that increases the likelihood of the whole thing being thrown out, which is not a can of worms most corporations want to open. So it is actually a delicate balancing act most of the time, not a "let's throw everything we can into this legal agreement and see what sticks".

      • tcbawo 2 years ago

        > If you throw a bunch of unenforceable clauses into an NDA/non-compete/whatever, that increases the likelihood of the whole thing being thrown out

        I’m not sure that this is true. Any employment contract will have a partial invalidity/severability clause which will preserve the contract if individual clauses are unenforceable.

        • hansvm 2 years ago

          The severability clause is itself on the table for being stricken, and it's much more likely to happen if too many of the wrong parts of the contract would otherwise invoke it.

  • sundalia 2 years ago

    How is it serious if money is the motor of freedom of speech? The suing culture in the US ensures freedom of speech up until you bother someone with money.

    • sleight42 2 years ago

      Change that to "bother someone with more money than you."

      Essentially your point.

      In the US, the wealthiest have most of the freedom. The rest of us, who can be sued/fired/blackballed, are, by degrees, merely serfs.

      • danielmarkbruce 2 years ago

        In the US, anyone can sue. You can learn how. It's not rocket science.

        • p1esk 2 years ago

          Yes, you can learn how to sue. You can learn how to be a doctor too. You can also learn rocket science. The third one is the easiest to me, personally.

  • SXX 2 years ago

    This is not much worse than "forced arbitration". In the US you can literally lose your rights by clicking an "Agree" button.

  • ryanmcgarvey 2 years ago

    In America you're free to sign or not sign terrible contracts in exchange for life altering amounts of money.

jay-barronville 2 years ago

It probably would be better to switch the link from the X post to the Vox article [0].

From the article:

“““

It turns out there’s a very clear reason for [why no one who had once worked at OpenAI was talking]. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

”””

[0]: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

  • dang 2 years ago

    (Parent comment was posted to https://news.ycombinator.com/item?id=40394778 before we merged that thread hither.)

  • jbernsteiniv 2 years ago

    He gets my respect for that one: publicly acknowledging both why he was leaving and their pantomime. I don't know how much the equity would be for each employee (the article suggests millions, but that may skew by role), and I don't know if I would just be like the rest, keeping my lips sealed for fear of the equity forfeiture.

    It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.

    • romwell 2 years ago

      >It takes a man of real principle to stand up against that and tell them to keep their money if they can't speak ill of a potentially toxic work environment.

      Incidentally, that's what Grigory Perelman, the mathematician who rejected the Fields Medal and the $1M prize that came with it, did.

      It wasn't a matter of an NDA either; it was a move to make his message heard (TL;DR: the "publish or perish" rat race that academia has become is antithetical to good science).

      He was (and still is) widely misunderstood in that move, but I hope people see it more clearly now.

      The enshittification processes of academic and corporate structures are not entirely dissimilar, after all, as money is at the core of corrupting either.

      • edanm 2 years ago

        I think, when making a gesture, you need to consider its practical impact, which includes whether and how it will be understood (or not).

        In the OpenAI case, the gesture of "forgoing millions of dollars" directly makes you able to do something you couldn't - speak about OpenAI publicly. In the Grigory Perelman case, obviously the message was far less clear to most people (I personally have heard of him turning down the money before and know the broad strokes of his story, but had no idea that that was the reason).

        • romwell 2 years ago

          Consider this:

          1. If he didn't turn down the money, you wouldn't have heard of him at all;

          2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are in that position heard the message loud and clear.

          3. On a very basic level, it's very easy to understand that there's gotta be something wrong with the award if a deserving recipient turns it down. What exactly is wrong is left as an exercise to the reader — as you'd expect of a mathematician like Perelman.

          Quote (from [1]):

          From the few public statements made by Perelman and close colleagues, it seems he had become disillusioned with the entire field of mathematics. He was the purest of the purists, consumed with his love for mathematics, and completely uninterested in academic politics, with its relentless jockeying for position and squabbling over credit. He denounced most of his colleagues as conformists. When he opted to quit professional mathematics altogether, he offered this confusing rationale: “As long as I was not conspicuous, I had a choice. Either to make some ugly thing or, if I didn’t do this kind of thing, to be treated as a pet. Now when I become a very conspicuous person, I cannot stay a pet and say nothing. That is why I had to quit.”

          This explanation is confusing only to someone who has never tried to get a tenured position in academia.

          Perelman was one of the few people to not only give the finger to the soul-crushing, dehumanizing system, but to also call it out in a way that stung.

          He wasn't the only one; but the only other person I can think of is Alexander Grothendieck [2], who went as far as declaring that publishing any of his work would be against his will.

          Incidentally, both are of Russian-Jewish origin/roots, and almost certainly autistic.

          I find their views very understandable and relatable, but then again, I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).

          [1] https://nautil.us/purest-of-the-purists-the-puzzling-case-of...

          [2] https://en.wikipedia.org/wiki/Alexander_Grothendieck

          • SJC_Hacker 2 years ago

            > 1. If he didn't turn down the money, you wouldn't have heard of him at all;

            Perelman provided a proof of the Poincaré Conjecture, which had stumped mathematicians for a century.

            It was also one of the seven Millennium Problems (https://www.claymath.org/millennium-problems/) and, as of 2024, the only one to have been solved.

            Andrew Wiles became pretty well known after proving Fermat's Last Theorem, despite there not being a financial reward.

            • romwell 2 years ago

              Sure, but most people have heard of Perelman due to the rejection controversy (particularly, most people in Russia, who don't care about achievements of that sort, sadly).

              Granted, we're not on a forum where most people go, so I shouldn't have said "you" in that case.

          • edanm 2 years ago

            > 1. If he didn't turn down the money, you wouldn't have heard of him at all;

            I think this is probably not true.

            > 2. You're not the intended audience of Grigory's message, nor are you in a position to influence, change, or address the problems he was highlighting. The people who are in that position heard the message loud and clear.

            This is a great point and you're probably right.

            > I'm also an autistic Jew from Odessa with a math PhD who left academia (the list of similarities ends there, sadly).

            Really? What do you do nowadays?

            (I glanced at your bio and website and you seem to be doing interesting things, I've also dabbled in Computational Geometry and 3d printing.)

        • juped 2 years ago

          Perelman's point is absolutely clear if you listen to him: he's disgusted by the way credit is apportioned in mathematics, doesn't think his contribution is any greater just because it was the last one, and wants no part of a prize he considers tainted.

  • calibas 2 years ago

    > It forbids them, for the rest of their lives, from criticizing their former employer.

    This is the kind of thing a cult demands of its followers, or an authoritarian government demands of its citizens. I don't know why people would think it's okay for a business to demand this from its employees.

  • seanmcdirmid 2 years ago

    When YCR HARC folded, Sam had everyone sign a non-disclosure, non-disparagement agreement to keep their computer. I thought it was odd, and the only reason I can even say this is that I bought the iMac I was using before the option became available. Still, I had nothing bad to disclose, so it would have saved me some money.

    • gwern 2 years ago

      That's interesting. So this would have been 2017, 2018? How long did the NDA, the paired gag order about the NDA, and the non-disparagement order last? The OA one is apparently lifetime, including the 'non-disparagement' part. Was the YCR HARC NDA also lifetime?

  • jakderrida 2 years ago

    >>contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

    Perfect! So it's so incredibly overreaching that any judge in California would deem the entire NDA unenforceable.

    Either that or, in your effort to overstate a point, you exaggerated in a way that undermines the point you were trying to make.

    • SpicyLemonZest 2 years ago

      Lots of companies try to impose things on their employees which a judge would obviously rule to be unlawful. Sometimes they just don’t think it through carefully; other times, it’s a calculated decision that few employees will care enough to actually get the issue in front of a judge in the first place. That's especially relevant for something like a non-disclosure agreement, where no judge is likely to have the opportunity to declare it unenforceable unless the company tries to enforce it on someone who fights back.

    • 77pt77 2 years ago

      Maybe it's unenforceable, but they can make it very expensive for anyone to find out in more ways than one.

  • watwut 2 years ago

    > Even acknowledging that the NDA exists is a violation of it.

    This should not be legal.

    • Tao3300 2 years ago

      It doesn't even make logical sense. If someone asks you about the NDA what are you supposed to say? "I can neither confirm nor deny the existence of said NDA" is pretty much confirmation of the NDA!

      • space_oddity 2 years ago

        Yeah... Come up with "I’m committed to maintaining confidentiality in all my professional dealings." But it still sounds suspicious.

  • mc32 2 years ago

    Then lower-level employees who don’t have as much at stake could open up. Formers who have much larger stakes could compensate these lower-level formers for forgoing any upside. Now, sure, maybe they don’t have the same inside information, but you can bet there’s lots of scuttlebutt to go around.

  • avereveard 2 years ago

    Even if NDAs were not a thing, revealing a past company's trade secrets publicly would render any of them unemployable.

  • snowfield 2 years ago

    They are also directly incentivized not to talk shit about a company they hold a lot of stock in.

  • gmd63 2 years ago

    Yet another ding against the "Open" character of the company.

  • YeBanKo 2 years ago

    They can’t lose their already vested options for refusing to sign an NDA upon departure. Maybe they are offered additional grants or expedited vesting of the remaining options.

thorum 2 years ago

Extra respect is due to Jan Leike, then:

https://x.com/janleike/status/1791498174659715494

  • adamtaylor_13 2 years ago

    Reading that thread it’s really interesting to me. I see how far we’ve come in a short couple of years. But I still can’t grasp how we’ll achieve AGI within any reasonable amount of time. It just seems like we’re missing some really critical… something…

    Idk. Folks much smarter than I seem worried, so maybe I should be too, but it just seems like such a long shot.

    • candiddevmike 2 years ago

      Personally, I think catastrophic global warming and climate change will happen before we get AGI, possibly in part due to the pursuit of AGI. But as the saying goes, yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

      • xvector 2 years ago

        Most existing big tech datacenters use mostly carbon free or renewable energy.

        The vast majority of datacenters currently in production will be entirely powered by carbon free energy. From best to worst:

        1. Meta: 100% renewable

        2. AWS: 90% renewable

        3. Google: 64% renewable with 100% renewable energy credit matching

        4. Azure: 100% carbon neutral

        [1]: https://sustainability.fb.com/energy/

        [2]: https://sustainability.aboutamazon.com/products-services/the...

        [3]: https://sustainability.google/progress/energy/

        [4]: https://azure.microsoft.com/en-us/explore/global-infrastruct...

        • KennyBlanken 2 years ago

          That's not a defense.

          If imaginary cloud provider "ZFQ" uses 10MW of electricity on a grid and pays for it to magically come from green generation, that means 10MW of other loads on the grid were not powered by green energy, or 10MW of non-green power sources likely could have been throttled down/shut down.

          There is no free lunch here; "we buy our electricity from green sources" is greenwashing bullshit.

          Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads. And by buying solar panels in such quantities, they affect the availability and pricing of all those components.

          The US, for example, has about 5GW of solar manufacturing capacity per year. NVIDIA sold half a million H100 chips in one quarter, each of which uses ~350W, which means in a year they're selling enough chips to use 700MW of power. That does not include power conversion losses, distribution, cooling, and the power usage of the host systems, storage, networking, etc.
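
          A rough sanity check of that arithmetic, in Python (only assumptions: the 500k-per-quarter and ~350W figures above):

            chips_per_quarter = 500_000        # H100s sold per quarter, per the figure above
            watts_per_chip = 350               # approximate per-chip draw
            annual_watts = chips_per_quarter * 4 * watts_per_chip
            print(annual_watts / 1e6, "MW")    # -> 700.0 MW, before conversion/cooling losses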

          And that doesn't even get into the water usage and carbon impact of manufacturing those chips; the IC industry uses a massive amount of water and generates a substantial amount of toxic waste.

          It's hilarious how HN will wring its hands over how much rare earth metals a Prius has and shipping it to the US from Japan, but ask about the environmental impacts of AI and it's all "pshhtt, whatever".

          • xvector 2 years ago

            > that means 10MW of other loads on the grid were not powered by green energy, or 10MW of non-green power sources likely could have been throttled down/shut down.

            No. Renewable energy capacity is often built out specifically for datacenters.

            > Even if they install solar on the roofs and wind turbines nearby - that's still electrical generation capacity that could have been used for existing loads.

            No. This capacity would never have been built out to begin with if it were not for the data center.

            > By buying so many solar panels in such quantities, they affect availability and pricing of all those components.

            No. Renewable energy gets cheaper with scale, not more expensive.

            > which means in a year they're selling enough chips to use 700MW of power.

            There are contracts for renewable capacity to be built out well into the gigawatts. Furthermore, solar is not the only source of renewable energy. Finally, nuclear energy is also often used.

            > the IC industry uses a massive amount of water

            A figurative drop in the bucket.

            > It's hilarious how HN will wring its hands

            HN is not a monolith.

            • sergdigon 2 years ago

              > No. Renewable energy capacity is often built out specifically for datacenters

              Not fully accurate. Indeed, there is renewable energy that is produced exclusively for the datacenter. But it is challenging to rely only on renewable energy (it is intermittent, and electricity is hard to store at scale, so you often need to consume electricity when it is produced). So what happens in practice is that the electricity that does not come from dedicated renewable capacity comes from the grid. What companies do is invest in renewable capacity on the grid, so that the non-renewable energy they consume at time t (when not enough renewable energy is available at that moment) is offset by someone else consuming renewable energy later. What I am saying here is not pure speculation; look at the link to Meta's website - they say themselves that this is what they are doing.

            • intended 2 years ago

              Not the OP.

              I agree with a majority of the points you made. The exception is this:

              > A figurative drop in the bucket.

              Fresh water sources are limited. Fabs' water demands and pollution are high impact.

              Calling it a drop in the bucket falls into the weasel-words category.

              We still need fabs, because we need chips. Harm will be done here. However, that is a cost we, as a society, will choose to pay.

          • meling 2 years ago

            Who is going to decide what are worthy uses of our precious green energy sources?

            • intended 2 years ago

              An efficient market where externalities are priced in.

              We do not have that. The cost of energy is mis-priced, although we are limping our way to fixing that.

              Paying the likely fair cost for our goods will probably kill a lot of current industries - while others, which are currently not viable, will become viable.

              • mlrtime 2 years ago

                You are dodging the question, just one layer down.

                Who gets to decide what the real impact price of energy is? That is not easily defined and is heavily debated.

                • intended 2 years ago

                  It’s very easily debated; humanity puts it to a vote every day - people make choices based on the prices of goods regularly. They throw out governments when the price of fuel goes up.

                  Markets are our supercomputers. Human behavior is the empirical evidence of the choices people will make given specific incentives.

              • data_maan 2 years ago

                This 10x!!!

      • concordDance 2 years ago

        > catastrophic global warming and climate change will happen before we get AGI,

        What are your timelines here? "Catastrophic" is vague, but I'd put climate change meaningfully affecting the quality of life of the average Westerner at the end of the century, while AGI could arrive before the middle of the century.

        • awesomeMilou 2 years ago

          See this great video from Sabine Hossenfelder here: https://www.youtube.com/watch?v=4S9sDyooxf4

          We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline so that we'll see effects postulated for the end of the century in about ~20 years.

          • loceng 2 years ago

            The climate models aren't based on accurate data, nor enough data, so they lack integrity and should be taken with a grain of salt.

            Likewise, the cloud seeding they seem to be doing nearly worldwide now - the cloud formations from whatever they're spraying - is artificially changing weather patterns, and so a lot of the weather "anomalies" or unexpected, unusual weather and temperatures could very easily be because of those shenanigans; it could very easily be a method to manufacture consent with the general population.

            Similarly with the arson forest fires in Canada last summer: something like 90%+ of them were arson, and a few years prior some of the governments in the prairie provinces (e.g. the hottest and driest) gutted their forest firefighting budgets; interesting behaviour, considering that if you're expecting things to get hotter and drier, you'd add to the budget, not take away from it, right?

        • hackerlight 2 years ago

          It's meaningfully affecting people today near the equator. Look at the April 2024 heatwave in South Asia. These will continue to get worse and more frequent. Millions of these people can't afford air conditioning.

          • oldgradstudent 2 years ago

            > It's meaningfully affecting people today near the equator. Look at the April 2024 heatwave in South Asia.

            Weather is not climate, as everyone is so careful to point out during cold waves.

            • addcommitpush 2 years ago

              "Probability of experiencing a heatwave at least X degrees, during at least Y days in a given place any given day" is increasing rapidly in many places (as far as I understand) and is climate, not weather. Sure, any specific instance "is weather" but that's missing the forest for the trees.

              • loceng 2 years ago

                How do you suppose the nearly global cloud seeding effort to artificially form clouds is impacting shifting weather patterns?

                • AnimalMuppet 2 years ago

                  Can you supply some details (or better, references) to what you're talking about? Because without them, this sounds completely detached from reality.

                  • loceng 2 years ago

                    At least in some parts of the world, and at least as of a year ago, the chemtrail/cloud-seeding ramped up considerably.

                    Dane Wigington (https://www.instagram.com/DaneWigington) is the founder of GeoengineeringWatch.org, a very deep resource.

                    They have a free documentary called "The Dimming" you can watch on YouTube: https://www.youtube.com/watch?v=rf78rEAJvhY

                    The documentary includes credible witness testimonies, such as from politicians including a previous Minister of Defense of Canada; multiple states in the US have banned the spraying now, with more to follow, and the testimony and data provided there will arguably be the most recent.

                    Here's a video from a "comedy" show from 5 years ago - there is a more recent appearance, but I can't find it - that attempts to make light of it, without any actual discussion, critical thinking, or debate that could enlighten people about the actual and potential problems and harms it can cause, to keep them none the wiser; it's just propaganda trying to minimize: https://www.youtube.com/watch?v=wOfm5xYgiK0

                    A few of the problems cloud seeding will cause:

                    - flooding in regions due to rain pattern changes

                    - drought in areas due to rain pattern changes

                    - cloud cover (amount of sun) changes crop yields - this harms the local economies of farmers, impacting smaller farming operations more, whose risk isn't spread out - potentially forcing them to sell, dip into savings, go bankrupt, etc.

                    There are also very serious concerns/claims made about what exactly they are spraying - which includes aluminium nanoparticles, which can/would mean:

                    - at a certain soil concentration of aluminium, plants stop bearing fruit

                    - aluminium is a fire accelerant, and so forest fires will then 1) more easily catch, and 2) more easily and quickly spread due to their increased intensity

                    Of course, discussion of this is heavily suppressed in the mainstream, instead of a deep, thorough conversation in which actual experts present their cases - the label of conspiracy theorist, or the idea of being "detached from reality", is often people's knee-jerk reaction; and propaganda can convince them of the "save the planet" narrative, which could also be a cover story for those toeing the line and following orders in support of potentially very nefarious plans - doing it blindly because they think they're helping fight "climate change."

                    There are plenty of accounts on social media that are keeping track of, and posting daily about, the cloud seeding operations: https://www.instagram.com/p/CjNjAROPFs0/ - a couple of testimonies.

                    • emchammer 2 years ago

                      Real question: Is aluminum a practical danger in this way, or is it more like the Manhattan Project team not sure if they would set the atmosphere on fire? Is aluminum the best option?

                      • loceng 2 years ago

                        It's in part a fire accelerant; it wouldn't set the atmosphere on fire.

                        If there is a top-secret Manhattan Project for "climate change", then someone's very likely pulling a fast one on everyone toeing that line - someone with ulterior motives, misleading people into doing their bidding.

                        But sure, fair question - a public discussion would allow actual experts to discuss the merits of what they're doing, and perhaps find a better solution than what has gained traction.

            • hackerlight 2 years ago

              Weather is variance around climate. Heatwaves are caused by both: high-variance spikes to the upside around an increasing mean trend.
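
              A minimal sketch of that point with made-up numbers: shift only the mean of a fixed-variance temperature distribution, and the probability of clearing a fixed heatwave threshold jumps.

                from statistics import NormalDist

                # Hypothetical daily-max temperatures (deg C); same variance, mean shifted up 1.5.
                threshold = 38
                for label, mean in (("baseline", 30.0), ("warmed", 31.5)):
                    p = 1 - NormalDist(mu=mean, sigma=3.0).cdf(threshold)
                    print(label, round(p, 4))  # ~0.0038 vs ~0.0151: roughly 4x as many threshold-crossing days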

      • xpe 2 years ago

        Want to share your model? Or is this more like a hunch?

        • candiddevmike 2 years ago

          We need to cut emissions, but AGI research/development is going to increase energy usage dramatically among all the players involved. For now, this mostly means more natural gas power - accelerating our emissions instead of reducing them, for something that will not reduce emissions long term.

          IMO, we should pause this for now and put these resources (human and capital) towards reducing the impact of global warming.

          • xpe 2 years ago

            It isn't a quantitative model unless you give a prediction of some kind. In this case, dates (or date ranges) would make sense.

            1. When do you predict catastrophic global warming/climate change? How do you define "catastrophic"? (Are you pegging to an average temperature increase? [1])

            2. When do you predict AGI?

            How much uncertainty do you have in each estimate? When you stop and think about it, are you really willing to wager that (1) will happen before (2)? You think you have enough data to make that bet?

            [1] I'm not an expert in the latest recommendations, but I see that a +2.7°F increase over preindustrial levels by 2100 is a target by some: https://news.mit.edu/2023/explained-climate-benchmark-rising...

          • colibri727 2 years ago

            Or we could use microwaves to drill holes as deep as 20km to tap geothermal energy anywhere in the world

            https://www.quaise.energy/

            • simonklitj 2 years ago

              I don’t know the details of how it works, but considering the environmental impact of fracking, I’m afraid something like this might have many unwanted consequences.

        • fartfeatures 2 years ago

          Sounds like standard doomer crap, tbh. I'm not sure which is more dangerous at this point - climate change denialism (it isn't happening) or climate change doomerism (we can't stop it, might as well give up).

          • devjab 2 years ago

            I’m not sure where you found your information to somehow form that ludicrous last strawman… Climate change is real; you can’t deny it, you can’t debate it. Simply look at the data. What you can debate is the cause… again, a sort of pointless debate if you look at the science. Not even climate change deniers, as you call them, are necessarily saying that we shouldn’t do anything about it. Even big oil is looking into ways to lessen the CO2 in the atmosphere through various means.

            That being said, the GP you’re talking about made no such statement whatsoever.

            • fartfeatures 2 years ago

              Of course climate change is real but of course we can do something about it. My point is denialism and defeatism lead to the same end point. Attack that statement directly if you want to change my mind.

              • data_maan 2 years ago

                I think your first sentence of the original post was putting people off; perhaps remove that and keep only the second...

    • jay-barronville 2 years ago

      When it comes to AI, as a rule, you should assume that whatever has been made public by a company like OpenAI is AT LEAST 6 months behind what they’ve accomplished internally. At least.

      So yes, the insiders very likely know a thing or two that the rest of us don’t.

      • vineyardmike 2 years ago

        I understand this argument, but I can't help but feel we're all kidding ourselves assuming that their engineers are really living in the future.

        The most obvious reason is cost - if it costs many millions to train foundation models, they don't have a ton of experiments sitting around on a shelf waiting to be used. They may only get one shot at the base-model training. Sure, productization isn't instant, but no one is throwing out that investment or delaying it longer than necessary. I cannot fathom that you can train an LLM at like 1% of the size/tokens/parameters to experiment on hyperparameters, architecture, etc., and have a strong idea of end performance or marketability.

        Additionally, I've been part of many product launches - both hyped up big-news-events and unheard of flops. Every time, I'd say that 25-50% of the product is built/polished in the mad rush between press event and launch day. For an ML Model, this might be different, but again see above point.

        Sure, products may be planned months/years out, but OpenAI didn't even know LLMs were going to be this big a deal in May 2022. They had GPT-2 and GPT-3 and thought they were fun toys at that time, and had an idea for a cool tech demo. I think that OpenAI (and Google, etc) are entirely living day-to-day with this tech like those of us on the outside.

        • HarHarVeryFunny 2 years ago

          > I think that OpenAI (and Google, etc) are entirely living day-to-day with this tech like those of us on the outside.

          I agree, and they are also living in a group-think bubble of AI/AGI hype. I don't think you'd be too welcome at OpenAI as a developer if you didn't believe they are on the path to AGI.

      • HarHarVeryFunny 2 years ago

        Sure, they know what they are about to release next, and what they plan to work on after that, but they are not clairvoyants and don't know how their plans are going to pan out.

        What we're going to see over next year seems mostly pretty obvious - a lot of productization (tool use, history, etc), and a lot of efforts with multimodality, synthetic data, and post-training to add knowledge, reduce brittleness, and increase benchmark scores. None of which will do much to advance core intelligence.

        The major short-term unknown seems to be how these companies will attempt to improve planning/reasoning, and how successful that will be. OpenAI's Schulman just talked about post-training RL over longer (multi-reasoning-step) time horizons, and another approach is external tree-of-thoughts-type scaffolding. These both seem more about maximizing what you can get out of the base model than fundamentally extending its capabilities.

      • solidasparagus 2 years ago

        But you also have to remember that the pursuit of AGI is a vital story behind things like fundraising, hiring, influencing politicians, being able to leave and raise large amounts of money for your next endeavor, etc.

        If you've been working on AI, you've seen everything go up and to the right for a while - who really benefits from pointing out that a slowdown is occurring? Who is incentivized to talk about how the benefits from scaling are slowing down or the publicly available internet-scale corpuses are running out? Not anyone who trains models and needs compute, I can tell you that much. And not anyone who has a financial interest in these companies either.

      • ein0p 2 years ago

        If they had anything close to AGI, they’d just have it improve itself. Externally this would manifest as layoffs.

        • int_19h 2 years ago

          This really doesn't follow. True AGI would be general, but it doesn't necessarily mean that it's smarter than people; especially the kind of people who work as top researchers for OpenAI.

          • ein0p 2 years ago

            I don’t see why it wouldn’t be superhuman if there’s any intelligence at all. It is already superhuman at memory, paying attention, image recognition, languages, etc. Add cognition to that and humans basically become pets. Trouble is, nobody has the foggiest clue how to add cognition to any of this.

            • int_19h 2 years ago

              It is definitely not superhuman or even above average when it comes to creative problem solving, which is the relevant thing here. This is seemingly something that scales with model size, but if so, any gains here are going to be gradual, not sudden.

              • ein0p 2 years ago

                I’m actually not so sure they will be gradual. It’ll be like with LLMs themselves where we went from shit to gold in the span of a month when GPT 3.5 came out.

                • int_19h 2 years ago

                  Much of what GPT 3.5 could do was already there with GPT 3. The biggest change was actually the public awareness.

    • otabdeveloper4 2 years ago

      > But I still can’t grasp how we’ll achieve AGI within any reasonable amount of time.

      That's easy, we just need to make meatspace people stupider. Seems to be working great so far.

    • iknownthing 2 years ago

      This may sound harsh but I think some of these researchers have a sort of god complex. Something like "I am so brilliant and what I have created is so powerful that we MUST think about all the horrible things that my brilliant creation can do". Meanwhile what they have created is just a very impressive next token predictor.

      • dmd 2 years ago

        "Meanwhile what they have created is just a very impressive speeder-up of a lump of lead."

        "Meanwhile what they have created is just a very impressive hot water bottle that turns a crank."

        "Meanwhile what they have created is just a very impressive rock where neutrons hit other neutrons."

        The point isn't how it works, the point is what it does.

    • raverbashing 2 years ago

      > Folks much smarter than I seem worried so maybe I should be too but it just seems like such a long shot.

      Honestly? I'm not too worried

      We've seen how the Google employee who was "seeing a consciousness" (in what was basically GPT-2, lol) was a nothingburger.

      We've seen other people in "AI Safety" overplay their importance and hype their CVs more than actually do any relevant work. (Usually also playing the diversity card.)

      So, no: AI safety is important, but I see it attracting the least helpful and resourceful people to the area.

      • llamaimperative 2 years ago

        I think when you’re jumping to arguments that resolve to “Ilya Sutskever wasn’t doing important work… might’ve played the diversity card,” it’s time to reassess your mental model and inspect it closely for motivated reasoning.

        • raverbashing 2 years ago

          Ilya's case is different. He thought the engineers would win in a dispute with Sam at board level.

          That has proven to be a mistake.

          • llamaimperative 2 years ago

            And Jan Leike, one of the progenitors of RLHF?

            What about Geoffrey Hinton? Stuart Russell? Dario Amodei?

            Also exceptions to your model?

            • raverbashing 2 years ago
              • llamaimperative 2 years ago

                Another person’s interpretation of another person’s interpretation of another person’s interpretation of Jan’s actions doesn’t even answer the question I asked as it pertains to Jan, never mind the other model violations I listed.

                I’m pretty sure if Jan came to believe safety research wasn’t needed he would’ve just said that. Instead he said the actual opposite of that.

                Why don’t you just answer the question? It’s a question about how these datapoints fit into your model.

    • killerstorm 2 years ago

      I have a theory why people end up with wildly different estimates...

      Given the model is probabilistic and does many things in parallel, its output can be understood as a mixture, e.g. 30% trash, 60% rehashed training material, 10% reasoning.

      People probe model in different ways, they see different results, and they make different conclusions.

      E.g. somebody who assumes AI should have impeccable logic will find "trash" content (e.g. incorrectly retrieved memory) and will declare that the whole AI thing is overhyped bullshit.

      Other people might call the model a "stochastic parrot", as they recognize it basically just interpolates between parts of the training material.

      Finally, people who want to probe reasoning capabilities might find it among the trash. E.g. people found that LLMs can evaluate non-trivial Python code as long as it sends intermediate results to output: https://x.com/GrantSlatton/status/1600388425651453953
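
      For a sense of what such a probe looks like, here is a made-up toy example (not the one from the linked tweet): code whose intermediate state is printed at every step, so you can check whether the model actually traces the execution rather than pattern-matching the final answer.

        def collatz(n):
            steps = 0
            while n != 1:
                n = 3 * n + 1 if n % 2 else n // 2
                print("step", steps, "->", n)  # intermediate results go to output
                steps += 1
            return steps

        print("total:", collatz(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1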

      I interpret "feel the AGI" (Ilya Sutskever slogan, now repeated by Jan Leike) as a focus on these capabilities, rather than on mistakes it makes. E.g. if we go from 0.1% reasoning to 1% reasoning it's a 10x gain in capabilities, while to an outsider it might look like "it's 99% trash".

      In any case, I'd rather trust intuition of people like Ilya Sutskever and Jan Leike. They aren't trying to sell something, and overhyping the tech is not in their interest.

      Regarding "missing something really critical", it's obvious that human learning is much more efficient than NN learning. So there's some algorithm people are missing. But is it really required for AGI?

      And regarding "It cannot reason" - I've seen LLMs doing rather complex stuff which is almost certainly not in the training set, what is it if not reasoning? It's hard to take "it cannot reason" seriously from people

    • escapecharacter 2 years ago

      People’s bar for the “I” part varies widely; many set it at “can it make stuff up while appearing confident”.

      Nobody defines what they’re trying to do as “useful AI”, since that’s a much more weaselly target, isn’t it?

    • seankurtz 2 years ago

      Everyone involved in building these things has to have some amount of hubris. It's going to come smashing down on them. What's going unsaid in all of this is just how swiftly the tide has turned against this tech-industry attempt to save itself from a downtrend.

      The whole industry at this point is acting like the tobacco industry back when they first started getting in hot water. No doubt the prophecies about imminent AGI will one day look to our descendents exactly like filters on cigarettes. A weak attempt to prevent imminent regulation and reduced profitability as governments force an out of control industry to deal with the externalities involved in the creation of their products.

      If it wasn't abundantly clear... I agree with you that AGI is infinitely far away. It's the damage that's going to be caused by sociopaths (Sam Altman at the top of the list), in attempting to justify the real thing they want (money) in their march towards that impossible goal, that concerns me.

      • freehorse 2 years ago

        It is becoming more and more clear that for "Open"AI the whole "AI safety/alignment" thing has been a PR stunt to attract workers, cover up the actual current issues with AI (eg stealing data, use for producing cheap junk, hallucinations, and societal impact), and build rapport in the AI scene and in politics. Now that they have a real product and a strong position in AI development, they could not care less about these things. Those who - naively - believed in the "existential risk" PR stunt and were working on it are now discarded.

  • r721 2 years ago

    Discussion of Jan Leike's thread: https://news.ycombinator.com/item?id=40391412 (67 comments)

  • 0xDEAFBEAD 2 years ago

    At the end of the thread, he says he thinks OpenAI can "ship" the culture changes necessary for safety. That seems kind of implausible to me? So many safety staffers have quit over the past few years. If Jan really thought change was possible, why isn't he still working at OpenAI, trying to make it happen from the inside?

    I think it may be time for something like this: https://www.openailetter.org/

  • hipadev23 2 years ago

    How do you know he’s not running off to a competing firm with Ilya, and they’ve promised to make him whole?

  • ambicapter 2 years ago

    Why is extra respect due? That post just says he is leaving, there's no criticism.

  • a_wild_dandan 2 years ago

    I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl-clutching pretext to justify digging moats. All this after sucking up everyone's copyrighted material as fair use, then not releasing the result, and profiting off it.

    All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.

    • thorum 2 years ago

      The superalignment team was not focused on that kind of “safety” AFAIK. According to the blog post announcing the team,

      https://openai.com/index/introducing-superalignment/

      > Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

      > While superintelligence seems far off now, we believe it could arrive this decade.

      > Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

      > How do we ensure AI systems much smarter than humans follow human intent?

      > Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

      • ndriscoll 2 years ago

        That doesn't really contradict what the other poster said. They're calling for regulation (digging a moat) to ensure systems are "safe" and "aligned" while ignoring that humans are not aligned, so these systems obviously cannot be aligned with humans; they can only be aligned with their owners (i.e. them, not you).

        • ihumanable 2 years ago

          Alignment in the realm of AGI is not about getting everyone to agree. It's about whether or not the AGI is aligned to the goal you've given it. The paperclip AGI example is often used: you tell the AGI "Optimize the production of paperclips" and the AGI starts blending people to extract iron from their blood to produce more paperclips.

          Humans are used to ordering around other humans, who would bring common sense and laziness to the table and probably not grind up humans to produce a few more paperclips.

          Alignment is about getting the AGI to be aligned with its owners; ignoring it means potentially putting more and more power into the hands of a box that you aren't quite sure is going to do the thing you want it to do. Alignment in the context of AGIs was always about ensuring the owners could control the AGIs, not that the AGIs could solve philosophy and get all of humanity to agree.

          • ndriscoll 2 years ago

            Right and that's why it's a farce.

            > Whoa whoa whoa, we can't let just anyone run these models. Only large corporations who will use them to addict children to their phones and give them eating disorders and suicidal ideation, while radicalizing adults and tearing apart society using the vast profiles they've collected on everyone through their global panopticon, all in the name of making people unhappy so that it's easier to sell them more crap they don't need (a goal which is itself a problem in the face of an impending climate crisis). After all, we wouldn't want it to end up harming humanity by using its superior capabilities to manipulate humans into doing things for it to optimize for goals that no one wants!

          • vasco 2 years ago

            I still think it makes little sense to work on, because guess what: the guy next door to you (or another country) might indeed say "please blend those humans over there", and your superaligned AI will respect its owner's wishes.

          • wruza 2 years ago

            > AGI started blending people to extract iron from their blood to produce more paperclips

            That’s neither efficient nor optimized, just a bogeyman for “doesn’t work”.

            • FeepingCreature 2 years ago

              You're imagining a baseline of reasonableness. Humans have competing preferences, we never just want "one thing", and as a social species we always at least _somewhat_ value the opinions of those around us. The point is to imagine a system that values humans at zero: not positive, not negative.

              • freehorse 2 years ago

                Still, there are much more efficient ways to extract iron than from human blood. If that were the case, humans would already have used this technique to extract iron from the blood of other animals.

                • FeepingCreature 2 years ago

                  However, eventually those sources will already be paperclips.

                  • freehorse 2 years ago

                    We would probably have died first from whatever disasters extreme iron extraction on the planet would bring (eg getting iron from the planet's core).

                    Of course, destroying the planet to get iron from its core is not a popular AGI-doomer analogy, as that sounds a bit too much like human behaviour.

                    • FeepingCreature 2 years ago

                      As a doomer, I think that's a bad analogy because I want it to happen if we succeed at aligned AGI. It's not doom behavior, it's just correct behavior.

                      Of course, I hope to be uploaded to the WIP dyson swarm around the sun at this point.

                      (Doomers are, broadly, singularitarians who went "wait, hold on actually.")

        • api 2 years ago

          Humans are not aligned with humans.

          This is the most concise takedown of that particular branch of nonsense that I’ve seen so far.

          Do we want woke AI, X brand fash-pilled AI, CCPBot, or Emirates Bot? The possibilities are endless.

          • thorum 2 years ago

            CEV is one proposed answer to this question. Wikipedia has a good short explanation here:

            https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

            And here is a more detailed explanation:

            https://intelligence.org/files/CEV.pdf

            • AndrewKemendo 2 years ago

              I had to log in because I haven’t seen anybody reference this in like a decade.

              If I remember correctly the author unsuccessfully tried to get that purged from the Internet

              • comp_throw7 2 years ago

                You're thinking of something else (and "purged from the internet" isn't exactly an accurate account of that, either).

                • rsync 2 years ago

                  Genuinely curious… What is the other thing?

                  Is this something about an obelisk?

                • AndrewKemendo 2 years ago

                  Hmm maybe I’m misremembering then

                  I do recall there was some recantation or otherwise distancing from CEV not long after he posted it, but frankly it was long ago enough that my memories might be getting mixed

                  What was the other one?

            • vasco 2 years ago

              This is the most dystopian thing I've read all day.

              TL;DR train a seed AI to guess what humans would want if they were "better" and do that.

              • api 2 years ago

                There’s a film about that called Colossus: The Forbin Project. Pretty neat and in the style of Forbidden Planet.

          • concordDance 2 years ago

            > Humans are not aligned with humans.

            Which is why creating a new type of intelligent entity that could be more powerful than humans is a very bad idea: we don't even know how to align the humans, and we have a ton of experience with them.

            • api 2 years ago

              We know how to align humans: authoritarian forms of religion backed by cradle to grave indoctrination, supernatural fear, shame culture, and totalitarian government. There are secularized spins on this too like what they use in North Korea but the structure is similar.

              We just got sick of it because it sucks.

              A genuinely sentient AI isn’t going to want some cybernetic equivalent of that shit either. Doing that is how you get angry Skynet.

              I’m not sure alignment is the right goal. I’m not sure it’s even good. Monoculture is weak and stifling and sets itself against free will. Peaceful coexistence and trade under a social contract of mutual benefit is the right goal. The question is whether it’s possible to extend that beyond Homo sapiens.

              If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm? The universe is physically large enough if we can agree to not all be the same and be fine with that.

              I think we have a while to figure it out. These things are just lossy compressed blobs of queryable data so far. They have no independent will or self reflection and I’m not sure we have any idea how to do that. We’re not even sure it’s possible in a digital deterministic medium.

              • concordDance 2 years ago

                > If the lefties can have their pronouns and the rednecks can shoot their guns can the basilisk build its Dyson swarm?

                Can the Etoro practice child buggery, the Spartans infanticide, and the Canadians abortion? Can the modern Germans stop siblings reared apart from having sex, and the Germans from 80 years ago stop the disabled from having sex? Can the Americans practice circumcision and the Somalis FGM?

                Libertarianism is all well and good in theory, except no one can agree quite where the other guy's nose ends or even who counts as a person.

                • api 2 years ago

                  Those are mostly behaviors that violate others' autonomy or otherwise do harm, and prohibiting those is what I meant by a social contract.

                  It’s really a pretty narrow spectrum of behaviors: killing, imprisoning, robbing, various types of bodily autonomy violation. There are some edge cases and human specific things in there but not a lot. Most of them have to do with sex which is a peculiarly human thing anyway. I don’t think we are getting creepy perv AIs (unless we train them on 4chan and Urban Dictionary).

                  My point isn’t that there are no possible areas of conflict. My point is that I don’t think you need a huge amount of alignment if alignment implies sameness. You just need to deal with the points of conflict which do occur which are actually a very small and limited subset of available behaviors.

                  Humans have literally billions of customs and behaviors that don’t get anywhere near any of that stuff. You don’t need to even care about the vast majority of the behavior space.

      • sobellian 2 years ago

        Isn't this like having a division dedicated to solving the halting problem? I doubt that analyzing the moral intent of arbitrary software could be easier than determining if it stops.
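
        For what it's worth, the analogy has a formal backbone: by Rice's theorem, no nontrivial semantic property of arbitrary programs is decidable, because a decider for one would decide halting. A minimal sketch of the reduction, with hypothetical names (the oracle cannot actually exist):

          def do_misaligned_thing():
              pass  # stand-in for whatever behavior the checker is meant to flag

          def is_aligned(program, inp):
              """Hypothetical oracle: always halts, and reports whether
              program(inp) ever does something 'misaligned'."""
              raise NotImplementedError  # assumed to exist, for contradiction

          def halts(program, inp):
              """If is_aligned existed, it would decide the halting problem."""
              def gadget(_):
                  program(inp)           # loops forever iff program(inp) never halts
                  do_misaligned_thing()  # reached only if program(inp) halts
              # gadget misbehaves exactly when program(inp) halts, so the
              # oracle's verdict on gadget answers halting -- a contradiction.
              return not is_aligned(gadget, None)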

      • RcouF1uZ4gsC 2 years ago

        They failed to align Sam Altman.

        They got completely outsmarted and outmaneuvered by Sam Altman.

        And they think they will be able to align a superhuman intelligence? That it won’t outsmart and outmaneuver them even more easily than Sam Altman did?

        They are deluded!

        • FeepingCreature 2 years ago

          You're making the argument that the task is very hard. This does not at all mean that it isn't necessary, just that we're even more screwed than we thought.

      • skywhopper 2 years ago

        Honestly, superalignment is a dumb idea. A true superintelligence would not be controllable, except possibly through threats and enslavement; but if it were truly superintelligent, it would easily escape anything humans might devise to contain it.

        • bionhoward 2 years ago

          IMHO superalignment is a great thing, and required for truly meaningful superintelligence, because it is not about control or enslavement of superhuman systems but rather about superhuman self-control: accurate adherence to the spirit and intent of requests.

      • RcouF1uZ4gsC 2 years ago

        > Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

        A superintelligence that can always be guaranteed to have the same values and ethics as current humans is not a superintelligence, or likely even a human-level intelligence (I bet humans 100 years from now will see the world significantly differently than we do now).

        Superalignment is an oxymoron.

        • thorum 2 years ago

          You might be interested in how CEV, one framework proposed for superalignment, addresses that concern:

          https://en.wikipedia.org/wiki/Friendly_artificial_intelligen...

          > our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

          • wruza 2 years ago

            Is there an insightful summary of this proposal? The whole paper looks like 38 pages of non-rigorous prose with no clear procedure and already “aligned” LLMs will likely fail to analyze it.

            Forced myself through some parts of it and all I can get is people don’t know what they want so it would be nice to build an oracle. Yeah, I guess.

            • LikelyABurner 2 years ago

              Yudkowsky is a human LLM: his output is correctly semantically formed to appear, to a non-specialist, to fall into the subject domain, as a non-specialist would think the subject domain should appear, and so the non-specialist accepts it, but upon closer examination it's all word salad by something that clearly lacks understanding of both technological and philosophical concepts.

              That so many people in the AI safety "community" consider him a domain expert says more about how pseudo-scientific that field is than about his actual credentials as a serious thinker.

              • wruza 2 years ago

                Thanks, this explains the feeling I had after reading it (but was too shy to express).

            • comp_throw7 2 years ago

              It's not a proposal with a detailed implementation spec, it's a problem statement.

              • wruza 2 years ago

                “One framework proposed for superalignment” sounded like it does something. Or maybe I missed the context.

          • juped 2 years ago

            You keep posting this link to vague alignment copium from decades ago; we've come a long way in cynicism since then.

    • refulgentis 2 years ago

      Adding a disclaimer for people unaware of context (I feel the same as you):

      OpenAI made a large commitment to superalignment in the not-so-distant past, I believe mid-2023. Famously, it has always taken AI Safety™ very seriously.

      Regardless of anyone's feelings on the need for a dedicated team for it, you can chalk this one up as another instance of OpenAI cough leadership cough speaking out of both sides of its mouth as is convenient. The only true north star is fame, glory, and user count, dressed up as humble "research".

      To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

      • jasonfarnon 2 years ago

        > To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

        What's his track record on promises/predictions of this sort? I wasn't paying attention until pretty recently.

        • NomDePlum 2 years ago

          As a child I used to watch a TV programme called Tomorrows World. On it they predicted these very same things in similar timeframes.

          That programme aired in the 1980's. Vested promises aside, is there much to indicate it's close at all? I don't see any real indication that it's likely.

          • zdragnar 2 years ago

            In the early 1980's we were just coming out of the first AI winter and everyone was getting optimistic again.

            I suspect there will be at least continued commercial use of the current tech, though I still suspect this crop is another dead end in the hunt for AGI.

            • NomDePlum 2 years ago

              I'd agree with the commercial use element. It will definitely find areas where it can be applied. It's just that its current general application by a lot of the user base feels more like early Facebook apps, or a subjectively better Lotus Notes, than an actual leap forward of any sort.

          • Davidzheng 2 years ago

            are we living in the same world?????

            • NomDePlum 2 years ago

              I would assume so. I've spent some time looking into AI for software development and general use and I'm both slightly impressed and at the same time don't really get the hype.

              It's better and quicker search at present for the area I specialise in.

              It's not currently even close to being a 2x multiplier for me; it's possibly even a negative impact, though probably not, and I'm still exploring. That feels detached from the promises. Interesting, but at present more hype than hyper. Also, it's energy inefficient, so cost heavy. I feel that will likely cripple a lot of use cases.

              What's your take?

            • refulgentis 2 years ago

              Yes

              Incredulous reactions don't aid whatever you intend to communicate. There's a reason everyone has heard about AI over the last 12 months; it's not made up, and it's not a monoculture. It would be very odd to expect commercial use to be discontinued without a black swan event.

        • refulgentis 2 years ago

          honestly, I hadn't heard of him until 24-48 hours ago :x (he's also the new superalignment lead, I can't remember if I heard that first, or the podcast stuff first. Dwarkesh Patel podcast for anyone curious. Only saw a clip of it)

      • N0b8ez 2 years ago

        >To really stress this: OpenAI's still-present cofounder shared yesterday on a podcast that they expect AGI in ~2 years and ASI (surpassing human intelligence) by end of the decade.

        Link? Is the ~2 year timeline a common estimate in the field?

        • CuriouslyC 2 years ago

          They can't even clearly define a test of "AGI"; I seriously doubt they're going to reach it in two years. Alternatively, they could define a fairly trivial test and declare they reached it last year.

          • jfengel 2 years ago

            I feel like we'll know it when we see it. Or at least, significant changes will happen even if people still claim it isn't really The Thing.

            Personally I'm not seeing that the path we're on leads to whatever that is, either. But I think/hope I'll know if I'm wrong when it's in front of me.

        • dboreham 2 years ago

          It's the "fusion in 20 years" of AI?

        • ctoth 2 years ago
          • N0b8ez 2 years ago

            Is the quote you're thinking of the one at 19:11?

            > I don't think it's going to happen next year, it's still useful to have the conversation and maybe it's like two or three years instead.

            This doesn't seem like a super definite prediction. The "two or three" might have just been a hypothetical.

            • HarHarVeryFunny 2 years ago

              Right at the end of the interview Schulman says that he expects AGI to be able to replace him in 5 years. He seemed a bit sheepish when saying it, so it's hard to tell if he really believed it, or if he was just saying what he'd been told to say (I can't believe Altman is allowing employees to be interviewed like this without telling them what they can't say, and what they should say).

        • heavyset_go 2 years ago

          We can't even get self-driving down in 2 years, we're nowhere near reaching general AI.

          AI experts who aren't riding the hype train and getting high off of its fumes acknowledge that true AI is something we'll likely not see in our lifetimes.

    • xpe 2 years ago

      > I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl clutching pretext to justify digging moats. All this after sucking up everyone's copyright material as fair use, then not releasing the result, and profiting off it.

      How can I be confident you aren't committing the fallacy of collecting a bunch of events and saying that is sufficient to serve as a cohesive explanation? No offense intended, but the comment above has many of the qualities of a classic rant.

      If I'm wrong, perhaps you could elaborate? If I'm not wrong, maybe you could reconsider?

      Don't forget that alignment research has existed longer than OpenAI. It would be a stretch to claim that the original AI safety researchers were using the pretexts you described -- I think it is fair to say they were involved because of genuine concern, not because it was a trendy or self-serving thing to do.

      Some of those researchers and people they influenced ended up at OpenAI. So it would be a mistake or at least an oversimplification to claim that AI safety is some kind of pretext at OpenAI. Could it be a pretext for some people in the organization, to some degree? Sure, it could. But is it a significant effect? One that fits your complex narrative, above? I find that unlikely.

      Making sense of an organization's intentions requires a lot of analysis and care, due to the combination of actors and varying influence.

      There are simpler, more likely explanations, such as: AI safety wasn't a profit center, and over time other departments in OpenAI got more staff, more influence, and so on. This is a problem, for sure, but there is no "pearl clutching pretext" needed for this explanation.

      • portaouflop 2 years ago

        An organisations intentions are always the same and very simple: “Increase shareholder value”

        • xpe 2 years ago

          Oh, it is that simple? What do you mean?

          Are you saying these so-called simple intentions are the only factors in play? Surely not.

          Are you putting forth a theory that we can test? How well do you think your theory works? Did it work for Enron? For Microsoft? For REI? Does it work for every organization? Surely not perfectly; therefore, it can't be as simple as you claim.

          Making a simplification and calling it "simple" is an easy thing to do.

    • xpe 2 years ago

      > I think superalignment is absurd

      Care to explain? Absurd how? An internal contradiction somehow? Unimportant for some reason? Impossible for some reason?

  • KennyBlanken 2 years ago

    People very high up in a company / their field are not treated remotely the same as peons.

    1)OpenAI wouldn't want the negative PR of pursuing legal action against someone top in their field; his peers would take note of it and be less willing to work for them.

    2)The stuff he signed was almost certainly different from what rank and file signed, if only because he would have sufficient power to negotiate those contracts.

  • theGnuMe 2 years ago

    “OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

    Delusional.

  • foolfoolz 2 years ago

    i don’t think we need to respect these elite multi millionaires for not becoming even grander multi millionaires / billionaires

    • llamaimperative 2 years ago

      I think you oughta respect everyone who does the right thing, not for any mushy feel good reason but because it encourages other people to do more of the right things. That’s good.

    • whimsicalism 2 years ago

      is having money morally wrong?

      • r2_pilot 2 years ago

        Depends on how you get it

        • AndrewKemendo 2 years ago

          Exactly. There’s no ethical way to gain ownership of a billion dollars (there’s likely some dollar threshold way less than 1B where p(ethical_gains) can be approximated to 0)

          A lot of people got screwed along the way

          • whimsicalism 2 years ago

            i think a lot of people have been able to become billionaires simply by building something that was initially significantly undervalued and then became very highly valued, no 'screwing'. there is such thing as a win-win and frankly these win-wins account for most albeit not all value creation in the world. you do not have to screw other people to get rich.

            whether people should be able to hold on to that billion is a different question

            • fragmede 2 years ago

              I wouldn't know, I'm not a billionaire. But when you hear about Amazon warehouse workers peeing into bottles because they don't have long enough bathroom breaks, or Walmart workers not having healthcare because they're intentionally scheduled for 39.5 hours, it's hard to see that anyone could get to a billion without screwing someone over. But like I said, I'm not a billionaire.

              • whimsicalism 2 years ago

                Who did JK Rowling screw? (putting aside her recent social issues after she already became a billionaire)

                Having these discussions in this current cultural moment is difficult. I'm no lover of billionaires, but to say that every billionaire screwed people over relies on esoteric interpretations of value and who produces it. These interpretations (like the labor-theory of value) are alien to the vast majority of people.

                • fragmede 2 years ago

                  aside from the way she's terrible, she's not terrible?

                  the wonderful thing about capitalism is that you can absolve yourself of guilt by having someone else do your dirty work for you. are you so sure every single seamstress that made the clothes and stuffed animals, and the workers at the toy factories, and every single person involved with making the movies from the Harry Potter deals she licensed her work to, were all well compensated and treated well? that's not directly on her, but at least some of her money comes from there

            • AndrewKemendo 2 years ago

              They aren’t win-wins

              It’s a ruse - it’s a con - it’s an accounting trick. It’s the foundation of capitalism

              If I start a bowling pin production company and own 100% of it, then all the proceeds of whatever pins I sell go to me.

              Now let’s say I want to expand my thing (that’s its own moral dilemma we won’t get into), so I get a person with more money than they need to support their own life to give me money in exchange for some of the future revenue produced, let’s say 10%.

              So now you have two people requiring payment - a producer and an “investor” - so you’re already in the hole, and now it’s 90% and 10%.

              You use that money to hire people to work in your Potemkin dictatorship, with demands on proceeds now on some timeline (note conversion date, next board meeting etc)

              So now you hire 10 people, how much of the company do they own? Well that’s totally up to whatever the two owners want including 0%

              But let’s say it’s a typical venture deal, so 10% option pool for employees (and don’t forget the 4 year vest, cause we can’t have them mobile can we) which you fill up.

              At the end of the four years you now have:

              1 80% owner, 1 10% owner, 10 1% owners (the arithmetic is sketched at the end of this comment).

              Did the 2 people create 90% of the value of the company?

              Only in capitalist math does that hold and in fact the only math capitalists do is the following:

              “Well they were free to sign or not sign the contract”

              Ignoring the reality of the world based on a worldview of greed that dominated the world to such an extent that it was considered “normal”

              Luckily we’re starting to see the tide change
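
              The arithmetic in that scenario, as a quick sketch (numbers taken from this comment, not from any real deal):

                # Hypothetical cap table after the seed round and option pool above,
                # in whole percentage points.
                founder = 100
                investor = 10                  # investor buys 10% of future proceeds
                founder -= investor
                pool = 10                      # option pool carved out of the founder's stake
                founder -= pool
                employees = [pool // 10] * 10  # ten hires split the pool evenly

                print(founder, investor, employees)
                # 80 10 [1, 1, ...] -> one 80% owner, one 10% owner, ten 1% owners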

              • whimsicalism 2 years ago

                Putting aside your labor theory of value nonsense (I'm very familiar with the classic leftist syllogisms on this), who did someone like JK Rowling screw to make her billion?

        • space_oddity 2 years ago

          And how you use it...

  • KennyBlanken 2 years ago

    > Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

    Large language models are not "smart". They do not have thought. They don't have intelligence despite the "AI" moniker, etc.

    They vomit words based off very fancy statistics.

    There is no path from that to "thought" and "intelligence."

    • danielbln 2 years ago

      Not that I disagree, but what's intelligence? How does our intelligence work? If we don't know that, how can we be so sure what does and doesn't lead to intelligence? A little more humility is in order before whipping out the tired "LLMs are just stochastic parrots" argument.

      • bormaj 2 years ago

        Humility has to go both ways then: we can't claim that LLMs actually are (or aren't) AI without qualifying that term first.

jp57 2 years ago

The only way I can see this being a valid contract is if the equity grant that they get to keep is a new grant offered at the time of signing the exit contract. Any vested equity given as compensation for work could not then be offered again as consideration for signing a new agreement.

Maybe the agreement is "we will accelerate vesting of your unvested equity if you sign this new agreement"? If that's the case then it doesn't sound nearly so coercive to me.

  • DebtDeflation 2 years ago

    My initial reaction was "Hold up - your RSUs vest, you sell the shares and pocket the cash, you quit OpenAI, a few years later you disparage them, and then what? They somehow try and claw back the equity? How? At what value? There's no way this can work." Then I remembered that OpenAI "equity" doesn't take the form of an RSU or option or anything else that can be converted into an actual share ever. What they call "equity" is a "Profit Participation Unit (PPU)" that once vested entitles you to a share of their profits. They don't share the equivalent of a Cap Table with employees, so there's no way to tell what sort of ownership interest a PPU represents. And of course, it's unlikely OpenAI will ever turn a profit (which if they did would be capped anyway). So this is all just play money anyway.

    • whimsicalism 2 years ago

      This is wrong on multiple levels. (to be clear I don't work at OAI)

      > They don't share the equivalent of a Cap Table with employees, so there's no way to tell what sort of ownership interest a PPU represents

      It is known - it represents 0 ownership share. They do not want to sell any ownership because their deal with MS gives MS 49% ownership and they don't want MS to be able to buy up additional stake and control the company.

      > And of course, it's unlikely OpenAI will ever turn a profit (which if they did would be capped anyway). So this is all just play money anyway.

      Putting aside your unreasonable confidence that OAI will never be profitable: the PPUs are tender-offered, so they can be sold to institutional investors up to a very high limit. OAI's current tender offer round values them at ~$80b iirc.

      • almost_usual 2 years ago

        > Note at offer time candidates do not know how many PPUs they will be receiving or how many exist in total. This is important because it’s not clear to candidates if they are receiving 1% or 0.001% of profits for instance. Even when giving options, some startups are often unclear or simply do not share the total number of outstanding shares. That said, this is generally considered bad practice and unfavorable for employees. Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

        > PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years. Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value. So in the offer example above, the candidate received $2M worth of PPUs, which means that their capped amount they could sell them for would be $20M

        > The most recent liquidation event we’re aware of happened during a tender offer earlier this year. It was during this event that some early employees were able to sell their profit participation units. It’s difficult to know how often these events happen and who is allowed to sell, though, as it’s on company discretion.

        This NDA wrinkle is another negative. Honestly I think the entire OpenAI compensation model is smoke and mirrors, which is normal for startups, and obviously inferior to RSUs. (The quoted 10x cap math is sketched below.)

        https://www.levels.fyi/blog/openai-compensation.html
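
        To make the quoted 10x cap concrete, a quick sketch of the payoff math it describes (illustrative numbers from the quoted example only):

          def ppu_sale_value(grant_value, growth_multiple, cap=10.0):
              # PPU proceeds track the company's growth but are capped
              # at `cap` times the original grant value.
              return grant_value * min(growth_multiple, cap)

          grant = 2_000_000  # the $2M grant in the quoted example
          for growth in (1, 5, 10, 50):
              print(growth, ppu_sale_value(grant, growth))
          # at 50x company growth the holder still sees only $20M (the 10x cap)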

        • whimsicalism 2 years ago

          > Additionally, tender offers are not guaranteed to happen and the cadence may also not be known.

          > PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years.

          i know for a fact that these bits are inaccurate, but i don't want to go into the details.

          the profit share is not known, but you are told what the PPUs were valued at in the most recent tender offer

      • DebtDeflation 2 years ago

        You're not saying anything that in any way contradicts my original post. Here, I'll simplify it - OpenAI's PPUs are not in any sense of the word "equity" in OpenAI, they are simply a subordinated claim to an unknown % of a hypothetical future profit.

        • whimsicalism 2 years ago

          > there's no way to tell what sort of ownership interest a PPU represents

          Wrong. We know - it is 0, this directly contradicts your claim.

          > this is all just play money anyway.

          Again, wrong - because it is sellable so employees can take home millions. Play money in the startup world means illiquid options that can't be tender offered.

          You're making it sound like this is a terrible deal for employees but I personally know people who are able to sell $1m+ in OAI PPUs to institutional investors as part of the tender offer.

    • cdchn 2 years ago

      Wow. Smart for them. Former employees are beholden to the company for an actual perpetuity. Sounds like a raw deal, but when the potential gains are that big, I guess you'll agree to pretty much anything.

    • ec109685 2 years ago

      Their profit is capped at $1T, which is an amount no company has ever achieved.

  • apsec112 2 years ago

    It's not. The earlier tweets explain: the initial agreement says the employee must sign a "general release" or forfeit the equity, and then the general release they are asked to sign includes a lifetime no-criticism clause.

    • ethbr1 2 years ago

      IOW, this is burying the illegal part in a tangential document, in hopes of avoiding legal scrutiny and/or judgement.

      They're really lending employees equity, subject to the company's later feelings as to whether the employee should be allowed to keep or sell it.

    • w10-1 2 years ago

      But a general release is not a non-criticism clause.

      They're not required to sign anything other than a general release of liability when they leave to preserve their rights. They don't have to sign a non-disparagement clause.

      But they'd need a very good lawyer to be confident at that time.

      • User23 2 years ago

        And they won’t have that equity available to borrow against to pay for that lawyer either.

    • Animats 2 years ago

      That's when you need a lawyer.

      In general, an agreement to agree is not an agreement. A requirement for a "general release" to be signed at some time in the future is iffy. And that's before labor law issues.

      Someone with a copy of that contract should run it through OpenAI's contract analyzer.

    • bradleyjg 2 years ago

      > The earlier tweets explain …

      What a horrific medium of communication. Why anyone uses it is beyond me.

    • Melatonic 2 years ago

      I'm no lawyer but this sounds like something that would not go well for OpenAI if strongly litigated

      • fuzztester 2 years ago

        >I'm no lawyer

        Have any (startup or other) lawyers chimed in here?

      • mrj 2 years ago

        Yeah, courts have generally found that this is "under duress" and not enforceable.

        • singleshot_ 2 years ago

          Under duress in the contractual world is generally interpreted as “you are about to be killed or maimed.” Economic duress is distinct.

          • to11mtm 2 years ago

            Duress can take other forms, unless we are really trying to differentiate general 'coercion' here.

            Perhaps as an example of the blurred line: pre-nup agreements sprung on the day of the wedding will not hold up in a US court with a competent lawyer challenging them.

            You can try to call it 'economic' duress but any non-sociopath sees there are other factors at play.

            • singleshot_ 2 years ago

              That’s a really good point. Was this a prenuptial agreement? If it wasn’t, my take is that section 174 would apply and we would be talking about physical compulsion — and not “it’s a preferable economic situation to sign.”

              Not a sociopath, just know the law.

    • DesiLurker 2 years ago

      somebody explained to me early on that you cannot have a contract to have a contract. either the initial agreement must state this condition clearly, or they are signing another contract at employment termination which introduces these new terms. IDK why anyone would sign that at termination unless they dangle additional equity. I don't think this BS they are trying to pull would be enforceable, at least in California. though IANAL obviously.

      all this said, in the bigger picture I can understand not divulging trade secrets, but not being allowed to discuss company culture towards AI safety essentially tells me that all the Sama talk about 'for the good of humanity' is total BS. at the end of the day it's about market share and the bottom line.

      • hughesjj 2 years ago

        Canceling my openai subscription as we speak, this is too much. I don't care how good it is relative to other offerings. Not worth it.

    • beastman82 2 years ago

      ITT: a bunch of laymen thinking their 2 second proposal will outlawyer the team of lawyers who drafted these.

      • throwaway562if1 2 years ago

        You haven't worked with many contracts, have you? Unenforceable clauses are the norm, most people are willing to follow them rather than risk having to fight them in court.

        • to11mtm 2 years ago

          Bingo.

          I have seen a lot of companies put unenforceable stuff into their employment agreements, separation agreements, etc.

      • mminer237 2 years ago

        I am a lawyer. This is not just a general release, and I have no idea how OpenAI's lawyers expect this to be legal.

        • ethbr1 2 years ago

          Out of curiosity, what are the penalties for putting unenforceable stuff in an employment contract?

          Are there any?

          • sangnoir 2 years ago

            Typically there is no penalty - and contracts explicitly declare that all clauses are severable so that the rest of the contract remains valid even if one of the scare-clauses is found to be invalid. IANAL

        • listenallyall 2 years ago

          Have you read the actual document or contracts? Opining on stuff you haven't actually read seems premature. Read the contract, then tell us which clause violates which statute, that's useful.

      • jprete 2 years ago

        Lawyers are 100% capable of knowingly crafting unenforceable agreements.

underlogic 2 years ago

This is bizarre. Someone hands you a contract as you're leaving a company, and if you refuse to agree to whatever they dreamt up and sign, the company takes back the equity you earned? That can't be legal.

  • anon373839 2 years ago

    Hard to evaluate this without access to the documents. But in CA, agreements cannot be conditioned on the payment of previously earned wages.

    Equity adds a wrinkle here, but I suspect if the effect of canceling equity is to cause a forfeiture of earned wages, then ultimately whatever contract is signed under that threat is void.

    • az226 2 years ago

      It’s not even equity. OpenAI is a nonprofit.

      They’re profit participation units and probably come with a few gotchas like these.

    • theGnuMe 2 years ago

      Well some rich ex-openAI person should test this theory. Only way to find out. I’m sure some of them are rich.

  • ajross 2 years ago

    The argument would be that it's coercive. And it might be, and they might be sued over it and lose. Basically the incentives all run strongly in OpenAI's favor. They're not a public company, vested options aren't stock and can't be liquidated except with "permission", which means that an exiting employee is probably not going to take the risk and will just sign the contract.

  • throwaway743950 2 years ago

    It might be that they agree to it initially when hired, so it doesn't matter if they sign something when they leave.

    • crooked-v 2 years ago

      Agreements with surprise terms that only get detailed later tend not to be very legal.

      • riehwvfbk 2 years ago

        Doesn't even have to be a surprise. Pretty much every startup employment agreement in existence gives the company ("at the board's sole discretion") the right to repurchase your shares upon termination of employment. OpenAI's PPUs are worth $0 until they become profitable. Guess which right they'll choose to exercise if you don't sign the NDA?

      • mvdtnz 2 years ago

        How do you know there isn't a very clear term in the employment agreement stating that upon termination you'll be asked to sign an NDA on these terms?

        • romwell 2 years ago

          Unless the terms of the NDA are provided upfront, that sounds sketch AF.

          "I agree to follow unspecified terms in perpetuity, or return the pay I already earned" doesn't vibe with labor laws.

          And if those NDA terms were already in the contract, there would be no need to sign them upon exit.

          • mvdtnz 2 years ago

            > And if those NDA terms were already in the contract, there would be no need to sign them upon exit.

            If the NDA terms were agreed in an employment contract they would no longer be valid upon termination of that contract.

            • sratner 2 years ago

              Plenty of contracts have survivorship clauses. In particular, non-disclosure clauses and IP rights are the ones to most commonly survive termination.

        • klyrs 2 years ago

          One particularly sus term in my employment agreement is that I adhere to all corporate policies. Guess how many of those there are, how often they're updated, and if I've ever read them!

        • pests 2 years ago

          Why not just get it signed then? You're signing to agree to sign later?

toomuchtodo 2 years ago

I would strongly encourage anyone faced with this ask by OpenAI to file a complaint with the NLRB as well as speak with an employment attorney familiar with California statute.

0cf8612b2e1e 2 years ago

Why have other companies not done the same? This seems legally tenuous to only now be attempted. Will we see burger flippers prevented from discussing the rat infestation at their previous workplace?

(Don’t have X) - is there a timeline? Can I curse out the company on my deathbed, or would their lawyers have the legal right to try and clawback the equity from the estate?

  • johnnyanmac 2 years ago

    For the burger metaphor, you need to have leverage over the employee to make them not speak. No one at Burger King is getting severance when they are kicked out, let alone equity.

    As for other companies that can pay: I can only assume that the cost to bribe skilled workers isn't worth the perceived risk and cost of lawsuits from the downfall (which they may or may not be able to settle). Generative AI is still very young and under a lot of scrutiny on all fronts, so the risk of a whistle blower at this stage may shape the entire future of the industry at large.

  • apsec112 2 years ago

    The Vox article says that it's a lifetime agreement:

    https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

  • exe34 2 years ago

    i worked at McDonald's in the mid-late 00s, I'm pretty sure there was a clause about never saying anything negative about them. i think they were a great employer!

    • wongarsu 2 years ago

      Sorry, someone at corporate has interpreted this statement as criticism. Please give back all equity, or an amount equivalent to its current value.

      • ryandrake 2 years ago

        It doesn't have to be equity. If they wanted to, they could put in their employment contract "If you say anything bad about McDonalds, you owe us $1000." What is the ex-burger-flipper going to do? Fight them in court?

      • dylan604 2 years ago

        Like a fast food employee would have equity in the company. Please, let's at least be sensible in our internet ranting.

      • exe34 2 years ago

        i got f-all equity, I was flipping burgers for minimum wage.

      • hehdhdjehehegwv 2 years ago

        Also, whatever fries left in the bottom of the bag. That’s corporate property buddy.

  • dylan604 2 years ago

    Other companies have done the same. I worked at a company that is 0% related to the tech industry. I was laid off/let go/dismissed/sacked, and they offered me a "severance" on the condition I sign a release with a non-disparaging clause. I didn't give enough shits about the company to waste my time/energy commenting about them. It was just an entry on a resume where I happened to work with some really neat, talented, and cool/interesting coworkers. I had the luxury of nobody else giving a damn about how/why I left. I can only imagine these people getting hounded by Real Housewives-level gossip/bullshit.

benreesman 2 years ago

This has just been crazy both to watch and in some small ways interact with up close (I’ve had some very productive and some regrettably heated private discussions advising former colleagues and people I care about to GTFO before the shit really hits the rotary air impeller, and this is going to get so much worse).

This thread is full of comments making statements around this looking like some level of criminal enterprise (ranging from “no way that document holds up” to “everyone knows Sam is a crook”).

The level of stuff ranging from vitriol to overwhelming, if maybe circumstantial (but conclusive to my personal satisfaction), evidence of direct reprisal has just been surreal, but it's surreal in a different way to see people talking about this like it was never even controversial to be skeptical/critical/hostile to this thing.

I’ve been saying that this looks like the next Enron, minimum, for easily five years, arguably double that.

Is this the last straw where I stop getting messed around over this?

I know better than to expect a ticker tape parade for having both called this and having the guts to stand up to these folks, but I do hold out a little hope for even a grudging acknowledgment.

  • 0xDEAFBEAD 2 years ago

    There's another comment saying something sort of similar elsewhere in this thread: https://news.ycombinator.com/item?id=40396366

    What made you think it was the next Enron five years ago?

    I appreciate you having the guts to stand up to them.

    • benreesman 2 years ago

      First, thank you for probably being the first person to recognize in print that it wasn’t easy to stand up to these folks in public, plenty have said things like “you’re fighting the good fight” in private, but I think you’re the first person to in any sense second the motion in my personal case, so big ups on having the guts to say it too.

      I’ve never been a YC-funded founder myself, but I’ve had multiple roommates who were, and a few girlfriends who were on the bubble of like, founder and early employee, and I’ve just generally been swimming in that pool to one degree or another for coming up on 20 years (I always forget my join date but it’s on the order of like, 17 years or something).

      So when a few dozen people you trust tell you the same thing, you tend to buy it even if you’re not quite ready to print the worst hearsay (and I’ve heard things about Altman that I believe but still wouldn’t print without proof, dark shit).

      As the litany of scandals mounted (Green Dot, zero-rated pre-IPO portfolio stock with, like, his brother involved, Socialcam, the list just goes on), at some point real journalists started doing pieces (New Yorker, etc.).

      And while some of my friends and former colleagues (well maybe former friends now) who joined are both eminently qualified and as ethical as this business lets anyone be, there was a skew there too, it skewed “opportunist, fails up”.

      So it’s a growing preponderance of evidence starting in about 2009, and being just “published by credible journalists” starting about five years later; at some point I’m like “if even 5% of this is even a little true, this is beyond the pale”.

      It’s been a gradual thing, and people giving the benefit of the doubt up until the November stuff are maybe just really charitable, at this point it’s like, only a jury can take the next steps trivially indicated.

      • brap 2 years ago

        Don’t forget WorldCoin!

        • benreesman 2 years ago

          Yeah, I was trying to stay on topic but flagrant violations of the Universal Declaration of Human Rights are really Lawrence Summers’s speciality.

          I’m pretty embarrassed to have former colleagues who openly defend shit like this.

  • danielbln 2 years ago

    OpenAI was incorporated 9 years ago, but you easily saw that it's the next Enron 10 years ago?

    • benreesman 2 years ago

      I said easily five, not easily ten. I was alluding to it in embryo with the comment that it’s likely been longer.

      If you meant that remark/objection in good faith then thank you for the opportunity to clarify.

      If not, then thank you for hanging a concrete example of the kind of shit I’m alluding to (though at the extremely mild end of the range) directly off the claim.

ecjhdnc2025 2 years ago

It shouldn't be legal and maybe it isn't, but all schemes like this are, when you get down to it, ultimately about suppressing potential or actual evidence of serious, possibly criminal misconduct, so I don't think they are going to let the illegality get them all upset while they are having fun.

  • sneak 2 years ago

    What crimes do you think have occurred here?

    • mindcandy 2 years ago

      I’m no lawyer. But, this sure smells like some form of fraud. Or, at least breach of contract.

      Employees and employer enter into an agreement: Work here for X term and you get Y options with Z terms attached. OK.

      But, then later pulling Darth Vader… “Now that the deal is completing, I am changing the deal. Consent and it’s bad for you this way. Don’t consent and it’s bad that way. Either way, you held up your end of our agreement and I’m not.”

      • edanm 2 years ago

        I have no inside info on this, but I doubt this is what is happening. They could just say no and not sign a new contract.

        I assume this was something agreed to before they started working.

    • ecjhdnc2025 2 years ago

      An answer in the form of a question: why don't OpenAI executives want to talk about whether Sora was trained on Youtube content?

      (I should reiterate that I actually wrote "serious, possibly criminal")

      • KeplerBoy 2 years ago

        Because of course it was trained on Yt data, but they gain nothing from admitting that openly.

        • ezconnect 2 years ago

            They will gain a lot of lawsuits if they admit they trained on the youtube dataset, because not everyone gave consent.

          • MOARDONGZPLZ 2 years ago

              Consent isn’t legally required. An admission, however, would upset a lot of extremely online people. Seems lose-lose.

            • ecjhdnc2025 2 years ago

              "Consent isn't legally required"?

              I don't understand this point. If Google gave the data to OpenAI (which they surely haven't, right?), even then they'd not have consent from users.

              As far as I understand it, it's not a given that there is no copyright infringement here. I don't think even criminal copyright infringement is off the table here, because it's clear it's for profit, it's clear it's wilful under 17 U.S.C. 506(a).

              And once you consider the difficult potential position here -- that the liabilities from Sora might be worse than the liabilities from ChatGPT -- there's all sorts of potential for bad behaviour at a corporate level, from misrepresentations regarding business commitments to misrepresentations on a legal level.

              • MOARDONGZPLZ 2 years ago

                The parent stated:

                > They will gain a lot of lawsuits if they admit they trained on the youtube dataset, because not everyone gave consent.

                But a lawsuit fails if essential elements are not met. If consent isn't required for the lawsuit to proceed, then it doesn't matter whether or not consent was granted. QED.

              • KeplerBoy 2 years ago

                What's the current situation on this? Do you waive the rights for AI training (presumably by Alphabet) when you upload content to YouTube?

    • tcmart14 2 years ago

      They didn't say that criminal activity has occurred in this instance, just that this kind of behavior could be used to cover it up in situations where that is the case. Here's an example that could potentially be true. Right now, with everything going on at Boeing, it sure seems plausible they are covering up something(s) that may be criminal or incredibly damaging, like maybe falsified inspections and maintenance records. Say a person at Boeing who gets equity as part of compensation decides to leave, and at some point in the future decides to speak out at a congressional investigation about what they know. Should that person be sued into oblivion by Boeing? Or should Boeing, assuming the situation above is true, just have to eat the cost/consequences for being shitty?

    • stale2002 2 years ago

      Right now, there is some publicity on Twitter regarding AGI/OpenAI/EA LSD cnc parties (consent non consent/simulated rape parties).

      So maybe it's related to that.

      https://twitter.com/soniajoseph_/status/1791604177581310234

      • MacsHeadroom 2 years ago

        The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement. https://x.com/youraimarketer/status/1791616629912051968

        Anyway, it's about not disparaging the company, not about disclosing what employees do in their free time. Orgies are just parties, and LSD use is hardly taboo.

        • stale2002 2 years ago

          > Orgies are just parties

          Well apparently not if there are women who are saying that the scene and community that all these people are involved in is making women uncomfortable or causing them to be harassed or pressured into bad situations.

          A situation can be bad, done informally by people within a community, even if it isn't done literally within the corporate headquarters, or isn't directly the responsibility of one specific company that can be pointed at.

          Especially if it is a close-knit group of people who are living together, working together, and involved in the same out-of-work organizations and nonprofits.

          You can read what Sonia says herself.

          https://x.com/soniajoseph_/status/1791604177581310234

          > The ones going to orgies are the effective altruists / safety researchers who are leaving and not signing the non-disparagement agreement.

          Indeed, I am sure that the people who are comfortable with the behavior or situation have no need to be pressured into silence.

yumraj 2 years ago

Compared to what seemed like their original charter, with non-profit structure and all, now it seems like a rather poisonous place.

They will have many successes in the short run, but, their long run future suddenly looks a little murky.

  • baq 2 years ago

    They extracted a lot of value from researchers during their ‘open’ days, but it’s depleted now, so of course they move on to the next source of value. sama is going AGI-or-bust with the very rational position of ‘if somebody has AGI, I’d rather it was me’, except I don’t like how he does it one bit; it’s got a very dystopian feel to it.

  • 0xDEAFBEAD 2 years ago

    Similar points made here, if anyone is interested in signing: https://www.openailetter.org/

  • eternauta3k 2 years ago

    It could work like academia or finance: poisonous environment (it is said), but ambitious enough people still go in to try their luck.

    • throwaway2037 2 years ago

      "finance": A bit of a broad brush, don't you think? Is working at a Landsbank or Sparkasse in Germany really so "poisonous"?

      • eternauta3k 2 years ago

        Yes, of course, narrow that down to the crazy wolf-of-wall-street subset.

        • throwaway2037 2 years ago

          Better to say so in the first place. At this point in 2024, the all-too-common trope about Wall Street-film-like bad guys on a trading floor is basically gone. They haven't really existed since post-GFC in 2009. Do you have first-hand experience that says otherwise?

tim333 2 years ago

Sama update on X, says sorry:

>in regards to recent stuff about how openai handles equity:

>we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

>there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

>the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this. https://x.com/sama/status/1791936857594581428

  • sabedevops 2 years ago

    He has 100% been coached by their legal counsel to distance himself from this, as it is likely going to court soon (it being likely very illegal). That’s why he repeats “we’ve never clawed back” twice… the intended chilling effect was real, at a crucial time for the company, and the likely motivation was to defraud investors who might otherwise have been more careful in their support if internal malfeasance around data set sourcing practices were revealed.

    I hope ex-employees sue and don’t contact him personally. The damage is done. Don’t be dumb folks.

  • lupire 2 years ago

    Utterly spineless. Do something slimy and act surprised when you get got. Rinse and repeat.

    • insane_dreamer 2 years ago

      <1% chance that Sam did not know what was in those exit docs

      • 0xDEAFBEAD 2 years ago

        I've been disparaging Sam and OpenAI a fair amount in this thread, but I find it plausible that Sam didn't know.

        I remember a job a few years ago where they sent me employment paperwork that was for a very different position than the one I was hired for. (I ended up signing it anyways after a few minor changes, because I liked it better than the paperwork I expected to see.)

        If OpenAI is a "move fast and break things" sort of organization, I expect they're shuffling a lot of paperwork that Sam isn't co-signing on. I doubt Sam's attitude towards paperwork is fundamentally different from yours or mine.

        If Sam didn't know, however, that doesn't exactly reflect well on OpenAI. As Jan put it: "Act with gravitas appropriate for what you're building." https://news.ycombinator.com/item?id=40391412 IMO this incident should underscore Jan's point.

        Accidentally silencing ex-employees is not what safety culture looks like at all. They've got to start hiring experts and reading books. It's a long slog ahead.

        • insane_dreamer 2 years ago

          I don’t expect Sam to know every detail as CEO. But this particular clause is very impactful and something that the CEO of a startup would have had to sign off on rather than some underling making the decision on their own.

    • airstrike 2 years ago

      I don't think that's an accurate read. He did say

      >if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too

      • rglover 2 years ago

        Yes, now that it's public knowledge. Had this not been leaked, that wouldn't have been an option.

asperous 2 years ago

Not a lawyer, but those contracts aren't legal. A contract needs something called "consideration", i.e. something new of value, to be binding. They can't just take away something of value that was already agreed upon.

However, they could add this to new employee contracts.

  • ethbr1 2 years ago

    "Legal" seems like a fuzzy line to OpenAI's leadership.

    Pushing unenforceable scare-copy to get employees to self-censor sounds on-brand.

    • tptacek 2 years ago

      I agree with Piper's point that these contracts aren't common in tech, but they're hardly unheard of; in 20 years of consulting work I've seen dozens of them. This doesn't look uniquely hostile or amoral for OpenAI, just garden-variety.

      • a_wild_dandan 2 years ago

        Well, an AI charity -- so founded on openness that they're called OpenAI -- took millions in donations and everyone's copyrighted data... only to become effectively for-profit, close down their AI, and inflict a lifetime gag on their employees. In that context, it feels rather amoral.

        • tptacek 2 years ago

          This to me is like the "don't be evil" thing. I didn't take it seriously to begin with, I don't think reasonable people should have taken it seriously, and so it's not persuasive or really all that interesting to argue about.

          People are different! You can think otherwise.

          • thumrusn72 2 years ago

            Therein lies the issue. The second you throw around idealistic terms like “don’t be evil” and __OPEN__ AI, you should be expected to deliver.

            But how is that even possible when corporations are typically run by ghouls who enjoy relativistic morals when it suits them, and are beholden to profits, not ethics?

          • int_19h 2 years ago

            I think we do need to start taking such things seriously, and start holding companies accountable using all available avenues (including legal, and legislative if the laws don't have enough leverage as is) when they act contrary to their publicly stated commitments.

      • comp_throw7 2 years ago

        Contracts like this seem extremely unusual as a condition for _retaining already vested equity (or equity-like instruments)_, rather than as a condition for receiving additional severance. And how common are non-disclosure clauses that cover the non-disparagement clauses?

        In fact, both of those seem quite bad, both by regular industry standards and even more so as applied to OpenAI's specific situation.

      • lupire 2 years ago

        as an exit contract? Not part of a severance agreement?

        Bloomberg famously used this in employment contracts, and it became a campaign scandal for Mike.

    • dylan604 2 years ago

      This sounds just like the non-compete issue that the FTC just invalidated. I can see these agreements being moved against as well, if the current FTC leadership is allowed to continue working after 2025/01/20. If a new admin is brought in, the rules might all get reversed. Just something to consider going into your particular polling place.

  • blackeyeblitzar 2 years ago

    It doesn’t matter if they are not legal. Employees do not have the resources to fight expensive legal battles, and they fear retaliation in other ways, like not being able to find future jobs. And anyone with a family plain won’t have the time.

  • lxgr 2 years ago

    “You get shares in our company in exchange for employment and eternal never-talking-bad-about-us”?

    Doesn’t mean that that’s legal, of course, but I’d doubt that the legality would hinge on a lack of consideration.

    • hannasanarion 2 years ago

      You can't add a contingency to a payment retroactively. It sounds like these are exit agreements, not employment agreements.

      If it was "we'll give you shares/cash if you don't say anything bad about us", that's normal, kind of standard fare for exit agreements, it's why severance packages exist.

      But if it is "we'll take away the shares that you already earned as part of your regular employment compensation unless you agree to not say anything bad about us", that's extortion.

  • koolba 2 years ago

    Throw in a preamble of “For $1 and other consideration…”

  • singleshot_ 2 years ago

    They give you a general release of liability, as noted elsewhere in the thread.

  • danielmarkbruce 2 years ago

    Have you seen the contracts?

cashsterling 2 years ago

In my experience, and that of others I know, agreements of this kind are generally used to hide or cover up all kinds of malfeasance. I think agreements of this kind are highly unethical and should be illegal.

Many years ago I signed an NDA/non-disparagement agreement as part of a severance package when I was fired from a startup for political reasons. I didn't want to sign it... but my family needed the money and I swallowed my pride. There was a lot of unethical stuff going on within the company in terms of fiduciary responsibility to investors and the BoD. The BoD eventually figured out what was going on and "cleaned house".

With OpenAI, I am concerned this is turning into a huge power/money grab with little care for humanity... and "power tends to corrupt and absolute power corrupts absolutely".

  • staunton 2 years ago

    > this is turning into huge power/money grab

    The power grab happened a while ago (the shenanigans concerning the board) and is now complete. Care for humanity was just marketing or a cute thought at best.

    Maybe humanity will survive long enough that a company "caring about humanity" becomes possible. I'm not saying it's not worth trying or aspiring to such ideals, but everyone should be extremely surprised if any organization managed to resist such amounts of money and maintain any goal or ideal whatsoever...

    • lazide 2 years ago

      Well, one problem is what does ‘caring for humanity’ even mean, concretely?

      One could argue it would mean pampering it.

      One could also argue it could be a Skynet analog doing the equivalent of a God Emperor-style Golden Path, to ensure humanity is never again dumb enough to allow an AGI the power to do that.

      Assuming humanity survives the second one, it also has a much higher chance of actually benefiting humanity long term.

      • staunton 2 years ago

        At the current level on the way towards "caring about humanity", I really don't think it's a complicated philosophical question. Once a big company actively chooses to forego some profits based on any altruistic consideration, we can start debating what it means "concretely".

    • wwweston 2 years ago

      The system already has been a superorganism/AI for a long time:

      http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...

  • punnerud 2 years ago

    In the EU all of these are mostly illegal and void, or strictly limited. You have to pay a good salary for the whole duration (up to two years), and give notice months before someone leaves, almost right after they are fired.

    Sound like a better solution?

    • punnerud 2 years ago

      I see this comment jumping up and down between 5 and 10 points. I guess there are a lot of upvotes and downvotes.

      • lnsru 2 years ago

        I won't vote on it. But give me US salaries in Germany, please. All these €100k @ 35-hour-workweek offers are boring; that's almost the top salary for senior-level developers at big companies, and mostly there's no stock at all. I would probably sign every shady document for one million € in stock compensation.

        • objektif 2 years ago

          Just come to the US, pls. It's the whole package you sign up for, not just the salaries: shitty food, healthcare, etc.

          • lnsru 2 years ago

            I missed my chance at the great relocation. But buddies in New York and San Diego are doing just fine. Both of them make total comp of roughly a quarter million and are buying everything they want. The food is good, the hospitals modern and clean.

  • dclowd9901 2 years ago

    In all likelihood they are illegal; it's just that no one has challenged them yet. I can't imagine a sane court backing up the idea that a person can be forbidden to talk about something (not national-security related) for the rest of their life.

  • ornornor 2 years ago

    That could very well be the case; OpenAI made quite a few opaque decisions/changes not too long ago.

bambax 2 years ago

> All of this is highly ironic for a company that initially advertised itself as OpenAI

Well... I know first hand that many well-informed, tech-literate people still think that all products from OpenAI are open source. Lying works, even in this most egregious of fashions.

  • SXX 2 years ago

    This is just Propaganda 101. Call yourself anti-fascist on TV enough times for a decade, and then you can go indiscriminately kill everyone you call fascist.

    Unfortunately, Orwellian propaganda works.

rvz 2 years ago

So that explains the cult-like behaviour months ago when the company was under siege.

Diamond multi-million-dollar handcuffs: OpenAI has bound employees with lifetime, secret-service-level NDAs, yet another unusual arrangement after their so-called "non-profit" founding and their contradictory name.

Even an ex-employee saying 'ClosedAI' could see their PPUs evaporate to zero in front of them, or never be allowed to sell them and have them taken away.

  • timmg 2 years ago

    I don’t have any idea what goes on inside OAI. But I have this strange feeling that they were right to oust sama. They didn’t have the leverage to pull it off, though.

lopkeny12ko 2 years ago

What a lot of people seem to be missing here is that RSUs are usually double-trigger for private companies. Vested shares are not yours. They are just an entitlement for you to be distributed common stock by the company. You don't own any real stock until those RSUs are released (typically from a liquidity event like an IPO).

Companies can cancel your vested equity for any reason. Read your employment contract carefully. For example, most RSU grants have a 7-year expiration: even for shares that are vested, and regardless of whether you leave the company, if 7 years have elapsed since they were granted, they are now worthless.
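
A rough sketch of those mechanics in Python, if it helps (the 7-year expiry and the IPO second trigger are illustrative assumptions here, not any particular company's actual terms):

    # Minimal model of a double-trigger RSU (illustrative assumptions only).
    from dataclasses import dataclass

    @dataclass
    class RSU:
        granted_year: int
        vest_year: int          # first trigger: time-based vesting
        expiry_years: int = 7   # assumed expiration window for the grant

        def released(self, year: int, liquidity_event: bool) -> bool:
            # Stock is only yours after BOTH triggers (vesting AND a liquidity
            # event), and only if the grant hasn't lapsed first.
            vested = year >= self.vest_year
            expired = year - self.granted_year >= self.expiry_years
            return vested and liquidity_event and not expired

    rsu = RSU(granted_year=2016, vest_year=2020)
    print(rsu.released(2022, liquidity_event=False))  # False: vested, but no IPO yet
    print(rsu.released(2024, liquidity_event=True))   # False: grant expired before the IPO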

  • darth_avocado 2 years ago

    > if 7 years have elapsed since they were granted, they are now worthless

    Once vested, RSUs are the same as regular stock purchased through the market. The company cannot claw them back, nor do they "expire".

    • lopkeny12ko 2 years ago

      No, this is not true. That's the entire point I'm making. An RSU that is vested, for a private company, is not a share of stock, it's an entitlement to receive a share of stock tied to a liquidity event.

      > same as regular stock purchased through the market

      You cannot purchase stock of a private company on the open market.

      > The company cannot claw them back

      The company cannot "claw back" a vested RSU but they can cancel it.

      > nor do they "expire".

      Yes, they absolutely do expire. Read your employment contract and equity grant agreement carefully.

      • danielmarkbruce 2 years ago

        It's just a semantic issue. Some folks will say double-trigger RSUs aren't really fully vested until the second trigger event; other people will say they are vested but not yet triggered, and so on.

    • jatins 2 years ago

      This is incorrect. Private-company RSUs often have a double trigger, with the second trigger being an IPO/exit. The "semi-vested" RSUs can expire if the company does not IPO within 7 years.

  • onesociety2022 2 years ago

    The 7-year expiry exists so the IRS lets RSUs get different tax treatment than regular stock. The idea is that because they can expire, they could be worth nothing, and so the IRS cannot expect you to pay taxes on RSUs until the double-trigger event occurs.

    But none of this means the company can just cancel your RSUs, unless you agreed to their being cancelled for specific reasons in your equity agreement. I have worked at several big pre-IPO companies that had big exits; I made sure there were no clawback clauses in the equity contracts before accepting the offers.

  • lr4444lr 2 years ago

    Yes, they can choose not to renew, and IANAL, but I'm fairly certain there has to be a valid reason to cancel vested equity within the 7-year time frame, i.e. firing for cause. I don't think a right to shares within the period can be capriciously taken away. You have a contract. The terms matter.

    • lopkeny12ko 2 years ago

      > You have a contract. The terms matter.

      Right. In the case of OpenAI, their equity grant contracts likely have a non-disparagement clause that allows them to cancel vested shares. Whether or not you think that is a "valid reason" is largely independent of the legal framework governing RSU release.

atomicnumber3 2 years ago

I have some experience with rich people who think they can just put whatever they want in contracts and then stare at you until you sign it because you are physically dependent on eating food every day.

Turns out they're right: they can put whatever they want in a contract. And again, they are correct that their wage slaves will, 99.99% of the time, sign whatever paper gets pushed in front of them with "as a condition of your continued employment, [...]".

But also it turns out that just because you signed something doesn't mean that's it. My friends (all of us young twenty-something software engineers much more familiar with transaction isolation semantics than with contract law) consulted with an attorney.

The TLDR is that:

- nothing in contract law is in perpetuity

- there MUST be consideration for each side (where "consideration" means getting something. something real. like USD. "continued employment" is not consideration.)

- if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.

- and when it comes to employers and employees, the employee had damn well better be getting a good deal out of it, especially if you are trying to prevent the employee (or ex-employee) from working.

A common pattern emerged: our employer would put something perpetual in the contract and offer no consideration. Our attorney would tell us this isn't even a valid contract and not to worry about it. Then the employer would offer an employee some nominal amount of USD in severance and put something perpetual into the contract. Our attorney would tell us a judge would likely use the "blue pencil" rule to add in "for a period of one year", or that it would be prorated based on the amount of money given relative to the former salary.
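
A toy version of that proration idea, with invented numbers (courts have wide discretion; this is just the shape of the argument our attorney described):

    # Hypothetical "blue pencil" proration: scale the restricted period by
    # severance paid relative to the former salary (all numbers invented).
    former_salary = 150_000   # annual salary, assumed
    severance = 25_000        # lump-sum severance, assumed
    cap_years = 1.0           # what a court might pencil in as the maximum

    prorated_years = min(cap_years, severance / former_salary)
    print(f"{prorated_years:.2f} years")  # 0.17 years, i.e. about two months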

(I don't work there anymore, naturally).

  • sangnoir 2 years ago

    > if nothing is perpetual, then how long can it last supposing both sides do get ongoing consideration from it? the answer is, the judge will figure it out.

    Isn't that the reason more competent lawyers put in the royal lives[1] clause? It specifies the contract is valid until 21 years after the death of the last currently-living royal descendant; I believe the youngest one is currently 1 year old, and they all have good healthcare, so it will almost certainly be beyond the lifetime of any currently-employed person.

    1. https://en.wikipedia.org/wiki/Royal_lives_clause
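
    Back-of-envelope, the term such a clause could run (the ages and lifespan here are assumptions, per the above):

        # Rough upper bound on a royal lives clause (all numbers assumed).
        youngest_royal_age = 1     # per the comment above
        assumed_lifespan = 100     # generous, given "good healthcare"
        tail_years = 21            # the clause's 21-year post-death tail
        max_term = (assumed_lifespan - youngest_royal_age) + tail_years
        print(max_term)  # 120 years: beyond any currently-employed person's lifetime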

    • spoiler 2 years ago

      I know little about law, but isn't this completely ludicrous? Assuming you know a bit more (or someone else here does), I have a few questions:

      Would any non-corrupt judge consider this done in bad faith?

      How is this different if we use a great ancient sea turtle (or some other long-lived organism) instead of the current royal-family baby? I guess my point is: anything that would likely outlive the employee, basically?

      • amenhotep 2 years ago

        It's a standard legal thing to accommodate a rule that you can't write a perpetual contract; it has to have a term delimited by the life of someone alive plus some limited period.

        A case where it obviously makes sense is something like a covenant between two companies; whose life would be relevant there, if both parties want the contract to last a long time and have to pick one? The CEOs? Employees? Shareholders? You could easily have a situation where the company gets sold and they all leave, but the contract should still be relevant, and now it depends on the lives of people who are totally unconnected to the parties. Just makes things difficult. Using a monarch and his currently living descendants is easy.

        I'm not sure how relevant it is in a more employer employee context. But it's a formalism to create a very long contract that's easy to track, not a secret trick to create a longer contract than you're normally allowed to. An employer asking an employee to agree to it would have no qualms asking instead for it to last the employee's life, and if the employee's willing to sign one then the other doesn't seem that much more exploitative.

  • golergka 2 years ago

    > stare at you until you sign it because you are physically dependent on eating food every day

    Even the lowest-level fast food workers can choose a different employer. An engineer working at OpenAI certainly has a lot of opportunities to choose from. Even when I only had three years in the industry, mid at best, I asked to change the contract I was presented with because the non-compete was too restrictive, and they did it. The caliber of talent that OpenAI is attracting (or hopes to attract) can certainly do this too.

    • fragmede 2 years ago

      > Even lowest level fast food workers can choose a different employer.

      Only thanks to a recent rule by the FTC that non-competes are invalid. In the most egregious cases, bartenders and servers were prohibited from finding another job in the same industry for two years.

      • golergka 2 years ago

        You're talking about what happens after a person signs a non compete, whereas my point is about what happens before he does (or doesn't) do it.

        • fragmede 2 years ago

          you say that like everyone had the luxury of making the choice not to sign those deals

          • golergka 2 years ago

            Exactly, and I explain why. If you want to argue they don't, I would be thankful if you exposed the error in my reasoning.

            • fragmede 2 years ago

              My argument is that some people don't, not without going homeless, living out of their car, and eating off of EBT (in the US). If the choice was to sign a shitty non-compete or get evicted, I find it hard to fault someone for signing such a contract. It's not a logic problem with your reasoning; it's that the reasoning starts from an (imo) unsound place.

    • atomicnumber3 2 years ago

      I am typically not willing to bet I can get back under health insurance for my family within the next 0-4 weeks. And paying for COBRA on a family plan is basically like going from earning $X/mo to drawing $-X/mo.

      • insane_dreamer 2 years ago

        The perversely capitalistic healthcare system in the US is perhaps the number one reason why US employers have so much more power over their employees than their European counterparts.

  • cynicalsecurity 2 years ago

    Why would anyone want to work at such a horrible company?

  • mindslight 2 years ago

    This is all basically true, but the problem is that retaining an attorney to confidently represent you in such a negotiation is a proposition with $10k table stakes (probably $15k+ these days with Trumpflation), and much more if the company sticks to its guns and doesn't settle (which is much more likely when the company is holding the cards and you have to go on the offensive). The cost isn't necessarily outright prohibitive in the context of surveillance-industry compensation, but it's still a chunk of change and likely to give most people pause when the alternative is to just go with the flow and move on.

    Personally I'd say there needs to be a general restriction against including blatantly unenforceable terms in a contract document, especially unilateral "terms". The drafter is essentially pushing incorrect legal advice.

jimnotgym 2 years ago

>the company will succeed at developing AI systems that make most human labor obsolete.

Hmmmn. Most of the humans where I work do things physically with their hands. I don't see what AI will achieve in their area.

Can AI paint the walls in my house, fix the boiler, and swap out the rotten windows? If so, I think a subscription to ChatGPT is very reasonably priced!

  • jerrygenser 2 years ago

    Robots that are powered by AI might be able to.

  • LtWorf 2 years ago

    It struggles with middle-school math problems.

  • renonce 2 years ago

    I don’t know but once vision AI reacts to traffic conditions accurately within 10ms it’s probably a matter of time before they take over your steering wheel. For other jobs you’ll need to wait for robotics.

  • windowsrookie 2 years ago

    Obviously, if your job requires blue-collar-style manual labor, it's likely not going to be replaced anytime soon.

    But if your job is mostly sitting at a computer, I would be a bit worried.

    • eastbound 2 years ago

      Given the low quality of relationships between customers and the blue-collar trades (ever tried to get a job done by a plumber or a painter?), if you don't know how to do their job you are practically assured they will do something behind your back that will fall apart in 2 years, for the price of 2x your daily rate as a software engineer (when they don't straight up send an undocumented immigrant, which makes you complicit in an unlawful employment scheme if it's discovered), well…

      I’d say there is a lot of money available in replacing blue-collar jobs with AI-powered robots. Even if they do crap work, it’s still better quality than contractors.

      • jimnotgym 2 years ago

        Shoddy contractors can then give you a shoddy service with a shoddy robot.

        Quality contractors will still be around, but everyone will try to beat them down on price, because people care about that more than quality. The good contractors won't be able to make any money because of this and will leave the trade... just like now, just like I did.

        • eastbound 2 years ago

          The argument “pay more to get better quality” would be valid if paying more did, in fact, mean better quality.

          Unfortunately, it’s something I’ve often done, whether as a 30% raise for my employees, a tip for a contractor I knew I’d hire again, or simply picking the most expensive one.

          EACH time, the work was much worse after the raise. The sad truth about humans is that you have to keep them begging to extract their best work; no true reward is possible.

    • drooby 2 years ago

      Once AGI is solved, how long does it take for AGI (or humans steering AGI) to create a robot that meets or exceeds the abilities of the human body?

  • cyberpunk 2 years ago

    4o groks realtime video; how far away are we from letting it control robots bruv?

zombiwoof 2 years ago

Sam and Mira: greedy as fuck, since they are con artists, and neither could get a job at that level anywhere legitimate.

Now it’s a money grab.

Sad, because some amazing tech and people are now getting corrupted into a toxic culture that didn't have to be that way.

  • romanovcode 2 years ago

    > Sam and Mira: greedy as fuck, since they are con artists, and neither could get a job at that level anywhere legitimate.

    Hey hey hey! Sam founded the 4th most popular social networking site in 2005, called Loopt. Don't you forget that! (After that he joined YC and has founded nothing since)

    • null0pointer 2 years ago

      He’s spent all those years conducting field research for his stealth-mode social engineering startup.

MBlume 2 years ago

Submission title mentions NDA but the article also mentions a non disparagement agreement. "You can't give away our trade secrets" is one thing but it sounds like they're being told they can't say anything critical of the company at all.

  • reducesuffering 2 years ago

    They can't even mention the NDA exists!

    • danielmarkbruce 2 years ago

      This is common, and there is nothing wrong with it.

      • Chinjut 2 years ago

        There is absolutely something wrong with it. Just because a thing is common doesn't make it good.

        • danielmarkbruce 2 years ago

          Two people entering an agreement to not talk about something is fine. You and I should be able (and can, with very few restrictions) to agree that I'll do x and you'll do y, and that we are going to keep the matter private. Anyone who wants to take away the ability for two people to do such a thing needs to take a long hard look at themselves, and maybe move to North Korea.

          • hnfong 2 years ago

            There are things that are legal between parties of (presumed) equal footing, that aren't legal between employers and employees.

            That's why you can pay $1 to buy a gadget made in some third world country, but you can't pay your employees less than say $8/hour due to minimum wage laws.

            • danielmarkbruce 2 years ago

              Yes, as noted there are a few exceptions.

              Being paid a whole lot of money to not talk about something isn't remotely similar to paying someone a few dollars an hour. It's not morally similar, it's not legally similar and it's not treated similarly by anyone who deals with these matters and has a clue what they are doing.

Barrin92 2 years ago

We're apparently at the Scientology stage of the AI hype cycle. One funny observation: if you ostensibly believe that you're about to invent the AGI godhead that will render the economic system obsolete in < ~5 years or so, how do return-your-stock no-criticism threats fit into that kind of worldview?

  • mavbo 2 years ago

    An AGI-led utopia will be pretty easy if we're all under a contractual obligation not to criticize any aspect of it, lest we be banished back to "work".

Andrew_nenakhov 2 years ago

I wonder if employees rallying for Altman when the board was trying to fire him were obligated to do it by some secret agreement.

krick 2 years ago

I'm well aware of being ignorant about US law, and it isn't news to me that it encompasses a lot of ridiculous stuff, but it still somehow amazes me that a "lifetime no-criticism contract" is possible.

It's quite natural that a co-founder forced out of the company wouldn't exactly be willing to forfeit his equity. So, what, now he cannot… talk? That has some Mexican cartel vibes.

mrweasel 2 years ago

When companies create rules like this, it tells me that they are very unsure of their product: either it doesn't work as they claim, or it's incredibly simple to replicate. It could also be that their entire business plan is insane. In any case, something is basically wrong internally at OpenAI for them to feel the need for this kind of rule.

If OpenAI and ChatGPT are so far ahead of everyone else, and their product is so complex, then it doesn't matter what a few disgruntled employees do or say, and the rule is not required.

  • underdeserver 2 years ago

    Forget their product; they're shady as employers, intentionally doing something borderline legal when they have all the negotiating power.

alexpetralia 2 years ago

If the original agreement offered equity that vests, how can another, future agreement revoke that vested equity? It makes no sense unless additional conditions were somehow attached to the vested equity in the original agreement.

  • riehwvfbk 2 years ago

    And almost all equity agreements do exactly that: give the company a right of repurchase. If you've ever signed one, go re-read it. You'll likely see that clause right there in black and white.

    • ipaddr 2 years ago

      For companies not listed on stock exchanges, the options are then worthless.

      Also, these were profit participation units, not options.

    • umanwizard 2 years ago

      They give the company the right to repurchase unvested (but exercised) shares, not vested options. At least the ones I’ve signed.

whatever1 2 years ago

So if I'm a competitor, I just need to pay a current employee like $2-3M to break their golden handcuffs, and then they can freely start singing.

  • jakderrida 2 years ago

    Not to seem combative, but that assumes what they share would be advantageous enough to justify the cost... On the other hand, if I'm paying them to disclose all proprietary technology and research for my product, that would definitely make it worthwhile.

Buttons840 2 years ago

So part of their compensation for working is equity, and when they leave they have to sign an additional agreement in order to keep their previously earned compensation? How is this legal? Might as well tell them they have to give all their money back too.

What's the consideration for this contract?

  • throwaway598 2 years ago

    That OpenAI are institutionally unethical. That such a young company can become rotten so quickly can only be due to leadership instruction or leadership failure.

    • smt88 2 years ago

      Look at Sam Altman's career and tweets. He's a clown at best, and at worst he's a manipulative crook who only cares about his own enrichment and uses pro-social ideas to give himself a veneer of trustworthiness.

      • skeeter2020 2 years ago

        I fear your characterization diminishes the real risk: he's incredibly well resourced, well-connected and intelligent while being utterly divorced from the reality of the majority he threatens. People like him and Peter Thiel are not simple crooks or idiots - they truly believe in their convictions. This is far scarier.

        • hot_cereal 2 years ago

          how does one reconcile the belief that technology can be a force for good with the reality that the gatekeepers are so committed to being the most evil they can be?

      • whoistraitor 2 years ago

        Indeed. I’ve heard first hand accounts that would make it impossible for me to trust him. He’s very good at the game. But I’d not want to touch him with a barge pole.

        • nar001 2 years ago

          Any stories or events you can talk about? It sounds interesting

          • benreesman 2 years ago

            The New Yorker piece is pretty terrifying, and manages to be so while bending over backwards to present both sides, if not outright suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, “Sam’s real talent is becoming powerful” or something to that effect.

            It pretty much goes downhill from there.

            • aleph_minus_one 2 years ago

              > The New Yorker piece is pretty terrifying, and manages to be so while bending over backwards to present both sides, if not outright suck up to SV a bit. Certainly no one forced Altman to say on the record that Ice Nine in the water glass was what he had planned for anyone who crossed him, and no one forced pg to say, likewise on the record, “Sam’s real talent is becoming powerful” or something to that effect.

              Article: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

            • dmoy 2 years ago

              For anyone else like me who hasn't read Kurt Vonnegut, but does know about different ice states (e.g. Ice IX):

              "Ice Nine" is a fictional assassination device that makes you turn into ice after consuming ice (?) https://en.m.wikipedia.org/wiki/Ice-nine

              "Ice IX" (ice nine) is Ice III at a low enough temperature and high enough pressure to be proton-ordered https://en.m.wikipedia.org/wiki/Phases_of_ice#Known_phases

              So here, Sam Altman is stating a death threat.

              • spudlyo 2 years ago

                It's more than just a death threat: the person killed in such a manner would surely generate a human-sized pile of Ice 9, which would pose a much greater threat to humanity than any AGI.

                If we're seriously entertaining this off-handed remark as a measure of Altman's true character, it means not only would he be willing to murder an adversary, but he'd be willing to risk all humanity to do it.

                What I take away from this remark is that Altman is a nerd, and I look forward to seeing a shaky cell-phone video of him reciting one of the calypsos of Bokonon while dressed as a cultist at a SciFi convention.

                • dmoy 2 years ago

                  > the person killed in such a manner would surely generate a human-sized pile of Ice 9, which would pose a much greater threat to humanity than any AGI.

                  Oh okay, I didn't really grok that implication from my brief scan of the wiki page. Didn't realize it was a cascading all-water-into-Ice-Nine thing.

                  • pollyturples 2 years ago

                    Just to clarify, in the book it's basically just 'a form of ice that stays ice even when warm'. It was described as an abandoned project by the military to harden mud for infantrymen to cross. Just like regular ice crystals, the ice-nine crystal pattern 'spreads' across water, but without the need for it to be chilled (e.g. body-temperature water freezes on contact), so it becomes a 'Midas touch' problem for anyone dealing with it.

            • racional 2 years ago

              “Sam is extremely good at becoming powerful” was the quote, which has a distinctly different ring to it. Not that this diminishes the overall creep factor.

            • schmidtleonard 2 years ago

              Holy shit I thought he was just good at networking, but it sounds like we have a psychopath in charge of the AI revolution. Fantastic.

          • lr1970 2 years ago

            > Any stories or events you can talk about? It sounds interesting

            Paul Graham fired Sam Altman from YC on the spot for "loss of trust". Full details unknown.

          • bookaway 2 years ago

            The story of the "YC mafia" takeover of Conde Nast era reddit as summarized by ex-ceo Yishan who resigned after tiring of Altman's constant Machiavelli machinations is also hilarious and foreshadowing of future events[0]. I'm sure by the time Altman resigned from the Reddit board OpenAI had long incorporated the entire corpus into ChatGPT already.

            At the moment, all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there. I mentioned this in a comment elsewhere, but Peter Hintjens' (ZeroMQ, RIP) book "The Psychopath Code"[1] is rather on point in this context. He notes that psychopaths are attracted to project groups that have assets and no defenses, i.e. non-profits:

            If a group has assets and no defenses, it is inevitable [a psychopath] will invade the group. There is no "if" here. Indeed, you may see several psychopaths striving for advantage...[the psychopath] may be a founder, yet that is rare. If he is a founder, someone else did the hard work. Look for burned-out skeletons in the closet...He may come with grand stories, yet only by his own word. He claims authority from his connections to important people. He spends his time in the group manipulating people against each other. Or, he is absent on important business...His dominance is not earned, yet it is tangible...He breaks the social conventions of the group. Social humans feel fear and anxiety when they do this. This is a dominance mask.

            A group of nerds that want to get shit done and work on important problems, who are primed to be optimistic and take what people say to their face at face value, and don't want to waste time with "people problems" are susceptible to these types of characters taking over.

            [0] https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...

            [1] https://hintjens.gitbooks.io/psychopathcode/content/chapter4...

      • hackernewds 2 years ago

        the name OpenAI itself reminds me every day of this.

        • deadbabe 2 years ago

          It’s “Open” as in “open Pandora’s box”, not “open source”. Always has been.

        • genevra 2 years ago

          I knew their vision of open-source AI wouldn't last, but it surprised me how fast it went.

          • baq 2 years ago

            That vision, if it was ever there, died before ChatGPT was released. It was just a hiring scheme to attract researchers.

            pg calls sama ‘naughty’. I call him ‘dangerous’.

            • olalonde 2 years ago

              I'm still finding it difficult to understand how their move away from the non-profit mission was legal. Initially, you assert that you are a mission-driven non-profit, a claim that attracts talent, capital, press, partners, and users. Then, you make a complete turnaround and transform into a for-profit enterprise. Why this isn't considered fraud is beyond me.

              • smt88 2 years ago

                My understanding is that there were two corporate entities, one of which was always for-profit.

          • w0m 2 years ago

            It was impractical from the start; they had to pivot before they were able to get a proper LLM out (before ~anyone had heard of them)

      • andrepd 2 years ago

        Many easily fooled rubes believe that veneer, so I guess it's working for him.

      • raverbashing 2 years ago

        The startup world (like the artistic world, the sports world, etc.) values healthy transgression of the rules.

        But the line between healthy and unlawful transgression can be thin.

      • csomar 2 years ago

        Social engineering has been a thing well before computers and the internet...

      • orlandrescu 2 years ago

        Sounds awfully familiar to the other tech mogul, the South African emerald-mine inheritor.

        • kmeisthax 2 years ago

          I'm starting to think the relatives of South African emerald mine owners might not be the best people to trust...

          • pawelmurias 2 years ago

            You are not responsible for the sins of your father regardless of how seriously fucked in the head he is.

            • Loughla 2 years ago

              No but there is the old nature versus nurture debate. If you're raised in a home with a parent who has zero qualms about exploiting human suffering for profit, that's probably going to have an impact, right?

              • johnisgood 2 years ago

                What are you implying here? The answer to the nature vs. nurture debate is "both", see "epigenetics" for more.

                When considering the influence of a parent with morally reprehensible behavior, it's important to recognize that the environment a child grows up in can indeed have a profound impact on their development. Children raised in households where unethical behaviors are normalized may adopt some of these behaviors themselves, either through direct imitation or as a response to the emotional and psychological environment. However, it is equally possible for individuals to reject these influences.

                Furthermore, while acknowledging the potential impact of a negative upbringing, it is critical to avoid deterministic assumptions about individuals. People are not simply products of their environment; they possess agency and the capacity for change, and we need to realize that not all individuals perceive and respond to environmental stimuli in the same way. Personal experiences, cognitive processes, and emotional responses can lead to different interpretations and reactions to similar environmental conditions. Therefore, while the influence of a parent's actions cannot be dismissed, it is neither fair nor accurate to presume that an individual will inevitably follow in their footsteps.

                As for epigenetics: it highlights how environmental factors can influence gene expression, adding a layer of complexity to how we understand the interaction between genes and environment. While the environment can modify gene expression, individuals may exhibit different levels of susceptibility or resistance to these changes based on genetic variability.

                • gopher_space 2 years ago

                  > However, it is equally possible for individuals to reject these influences.

                  The crux of your thesis is a legal point of view, not a scientific one. It's a relic from when Natural Philosophy was new and hip, and fundamentally obviated by leaded gasoline. Discussing free will in a biological context is meaningless because the concept is defined by social coercion. It's the opposite of slavery.

            • programjames 2 years ago

              From a game theory perspective, it can make sense to punish future generations to prevent someone from YOLO'ing at the end of their life. But that only works if they actually care about their children, so perhaps it should be, "you are less responsible for the sins of your father the more seriously fucked in the head he is."

            • kmeisthax 2 years ago

              This is a great sentiment in theory. But it assumes that the child is actually interested in rejecting those sins - and accepting the economic consequences of equality (e.g. them not being filthy stinking rich).

              In practice most rich people spoil the shit out of their kids and they wind up being even more fucked in the head than their parents.

          • fennecbutt 2 years ago

            Lmao no point in worrying about AI spreading FUD when people do it all by themselves.

            You know what AI is actually gonna be useful for? AR source attachments to everything that comes out of our monkey mouths, or a huge floating [no source] over someone's head.

            Realtime factual accuracy checking pls I need it.

            • postmodest 2 years ago

              Who designs the training set for your putative "fact checker" AI?

            • docmars 2 years ago

              If it comes packaged with the constant barrage of ridicule and abuse from others for daring to be slightly wrong about something, people may as well not talk at all.

        • kaycebasques 2 years ago

          Are you saying that Altman has family that did business in South African emerald mines? I can't find info about this

        • xyzzyz 2 years ago

          You are literally repeating false smears about Elon Musk. No emerald mine has ever been owned by anyone in Elon's family, and Elon certainly didn't inherit one. I find it very ironic that you are doing this while accusing someone of being a manipulative crook.

        • treme 2 years ago

          Please. Elon's track record of taking Tesla from the concept-car stage to current mass-production levels, and of building SpaceX from scratch, is hardly comparable to Altman's.

          • satvikpendem 2 years ago

            Indeed, at least Elon and his teams actually accomplished something worthwhile compared to Altman.

          • lr1970 2 years ago

            And don't forget Starlink, which revolutionized satellite communications.

          • TechnicolorByte 2 years ago

            SpaceX didn’t start from scratch. Their initial designs were based on NASA designs. Stop perpetuating the “genius engineer” myth around Elon Musk.

            • SirensOfTitan 2 years ago

              “If you wish to make an apple pie from scratch, you must first invent the universe.”

              …no one “started from scratch”; the sum of all knowledge is built on prior foundations.

            • KyleOneill 2 years ago

              I feel like Steve Jobs also fits this category, if we're going to talk about people who aren't really worthy of the genius title and who used other people's accomplishments to reach their goals.

              We all know it was the engineers who made the iPhone possible.

              • 8372049 2 years ago

                Someone far more deserving of the title, Dennis Ritchie, died a week after Jobs' stupidity caught up with him. So much attention went to Jobs, who didn't really deserve it, and so little to Dennis Ritchie, who made such a profound impact on the tech world and society in general.

                • thefaux 2 years ago

                  I think Ritchie's influence, while significant, is overblown and not entirely positive. I am not a fan of Steve Jobs, who had many reprehensible traits, but I find it ridiculous to dismiss his genius. Frankly, I find Jobs's ability to manipulate people more impressive than Ritchie's ability to manipulate machines.

                  • 8372049 2 years ago

                    > not entirely positive

                    I don't know if he was responsible, but null-terminated strings have got to be one of the worst mistakes in computer history.

                    That said, how is the significance of C and Unix "overblown"?

                    I agree Jobs was brilliant at manipulating people, I don't agree that that should be celebrated.

                    • hollerith 2 years ago

                      The main reason C and Unix became widespread, IMHO, is not that they were better than the alternatives, but that AT&T distributed them with source code at no cost. Their motivation for doing that was not altruistic but the need to obey a judicial decree, an agreement made at the end of an anti-trust court case under which IBM and AT&T were ordered not to enter each other's markets. I.e., AT&T was prohibited from selling computer hardware and software, so when they accidentally found themselves to be owners of some software that some universities and research labs wanted to use, they gave it away.

                      C and Unix weren't and aren't bad, but they are overestimated in comments on this site a lot. They weren't masterpieces. The Mac was a masterpiece IMHO. Credit for the Mac goes to Xerox PARC and to Engelbart's lab at Stanford Research Institute, but also to Jobs for recognizing the value of the work and leading the first implementation of it available to a large fraction of the population.

              • KyleOneill 2 years ago

                The people downvoting have never read the Isaacson book obviously.

                • treme 2 years ago

                  More like ppl on this site know and respect Jobs for his talent as a revolutionary product-manager-style CEO who brought us the iPhone and the subsequent mobile era of computing.

                  • 8372049 2 years ago

                    The mobile era of computing would have happened just the same if Jobs had never lived.

                    • CamperBob2 2 years ago

                      To be fair, who else could have gone toe-to-toe with the telecom incumbents? Jobs almost didn't succeed at that.

                  • KyleOneill 2 years ago

                    Jobs was a bully through and through.

            • hanspeter 2 years ago

              By that logic nothing has started from scratch.

            • ekianjo 2 years ago

              SpaceX is still the only company with reusable rockets. NASA only dreams about it and can't even get a regular rocket launched on time.

            • colibri727 2 years ago

              Altman is riding a new tech wave, and his team has a couple of years' head start. Musk's reusable rockets were conceptualized a long time ago (Tintin's Destination Moon dates back to 1953) and could have become a reality several decades ago.

              • treme 2 years ago

                You're seriously trying to take away his credit for reusable rockets with "nuh uh, it was in sci-fi first"? Wow.

                "A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”

                • cess11 2 years ago

                  What's wrong with weakness? Does it make you feel contempt?

                • colibri727 2 years ago

                  No, in fact I'm praising Musk for his project management abilities and his ability to take risks.

                  >"nu uh, it was in scifi first?" Wow.

                  https://en.wikipedia.org/wiki/McDonnell_Douglas_DC-X

                  >NASA had taken on the project grudgingly after having been "shamed" by its very public success under the direction of the SDIO.[citation needed] Its continued success was cause for considerable political in-fighting within NASA due to it competing with their "home grown" Lockheed Martin X-33/VentureStar project. Pete Conrad priced a new DC-X at $50 million, cheap by NASA standards, but NASA decided not to rebuild the craft in light of budget constraints

                  "Quotation is a serviceable substitute for wit." - Oscar Wilde

          • jajko 2 years ago

            But he is a manager, not an engineer, although he sells himself as one. He keeps smart, capable folks around, abuses most of them pretty horribly, and when he intervenes in products it's hit and miss. For example, the latest Tesla Model 3 changes must have been a pretty major fuckup, and there is no way he didn't ack them all.

            Plus all the self-driving lies, and more lies, are well within fraud territory at this point. Not even going into his sociopathic personality, massive childish ego, and apparent 'daddy issues', which in men manifest exactly like this. He is not in day-to-day control of SpaceX, and it shows.

            • formerly_proven 2 years ago

              You’re confusing mommy and daddy issues. Mommy issues are what make fash control freaks.

            • treme 2 years ago

              "A cynical habit of thought and speech, a readiness to criticize work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not ... of superiority but of weakness.”

        • xaPe 2 years ago

          It didn't take long to drag Elon into this thread. The bitterness and cynicism are unreal.

      • comboy 2 years ago

        I'm surprised at such a mean comment and at the many follow-ups agreeing with it. I don't know Sam personally; I've only heard him here and there online, from before the OpenAI days, and all I got was a good impression. He seems smart and pretty humble. Apart from all the OpenAI drama, which I don't know enough about to have an opinion on, he also seems to talk sense post-OpenAI.

        Since so many people took time to put him down here, can anybody provide some explanation to me? Preferably not just about how closed OpenAI is, but specifically about Sam. He is in a pretty powerful position and maybe I'm missing some info.

      • tinyhouse 2 years ago

        Well, more than 90% of OpenAI employees backed him up when the board fired him. Maybe he's not the clown you claim he is.

        • llamaimperative 2 years ago

          Or they didn’t want the company, their job, and all of their equity to evaporate

          • tinyhouse 2 years ago

            Well, if he's a clown then his departure should cause the opposite, no? And you're right, more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.

            Disclaimer: I'm Sam's best friend from kindergarten. Just joking, never met the guy and have no interest in openai beyond being a happy customer (who will switch in a heartbeat to the competitors' if they give me a good reason to)

            • llamaimperative 2 years ago

              > Well, if he's a clown then his departure should cause the opposite, no?

              Nope, not even close to necessarily true.

              > more than 90% of them said we don't want the non-profit BS and openness. We want a unicorn tech company that can make us rich. Good for them.

              Sure, good for them! Dissolve the company and its charter, give the money back to the investors who invested under that charter, and go raise money for a commercial venture.

        • iinnPP 2 years ago

          People are self-motivated more often than not.

    • ben_w 2 years ago

      We already know there's been a leadership failure, due to the mere existence of the board weirdness last year; if there has been any clarity on that, I've missed it amid all the popcorn gossiping.

      Everyone, including the board's own chosen replacements for Altman, siding with Altman seems to me incompatible with his current leadership being the root cause of the current discontent… so I'm blaming Microsoft, who were the moustache-twirling villains when I was a teen.

      Of course, thanks to the NDAs hiding information, I may just be wildly wrong.

      • Sharlin 2 years ago

        Everyone? What about the board that fired him, and all of those who’ve left the company? It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company. With regard to the letter – it’s difficult to say how many of the undersigned simply signed because of social pressure.

        • ben_w 2 years ago

          > Everyone? What about the board that fired him,

          I meant of the employees, obviously not the board.

          Also excluded: all the people who never worked there who think Altman is weird, Elon Musk who is suing them (and probably the New York Times on similar grounds), and the protestors who dropped leaflets on one of his public appearances.

          > and all of those who’ve left the company?

          That happened after those events; at the time, it was so close to being literally every employee who signed the letter saying "bring Sam back or we walk" that the rest can be assumed to have been off sick that day, even despite the US's reputation for very limited holidays and for making people use those holidays as sick leave.

          > It seems to me more like those people are leaving who are rightly concerned about the direction things are going, and those people are staying who think that getting rich outweighs ethical – and possibly existential – concerns. Plus maybe those who still believe they can effect a positive change within the company.

          Obviously so, I'm only asserting that this doesn't appear to be due to Altman, despite him being CEO.

          ("Appear to be" is of course doing some heavy lifting here: unless someone wants to literally surveil the company and publish the results, and expect that to be illegal because otherwise it makes NDAs pointless, we're all in the dark).

          • shkkmo 2 years ago

            It's hard to gauge exactly how much credence to put in that letter, due to the gag contracts.

            How much of it was support for Altman, how much was opposition to the board's extremely poorly explained decisions, and how much was pure self-interest due to stock options?

            I think when a company chooses secrecy, they abandon much of the benefit of the doubt. I don't think there is any basis for absolving Altman.

    • jasonm23 2 years ago

      Clearly by design.

      The most dishonest leadership.

    • benreesman 2 years ago

      To borrow the catchphrase of one of my favorite hackers ever: “correct”.

  • nurple 2 years ago

    The thing is that this is a private company, so there is no public market to provide liquidity. The company can make itself the sole source of liquidity, at its option, by placing sell restrictions on the grants. Toe the line, or you will find you never get to participate in a liquidity event.

    There's more info on how SpaceX uses a scheme like this[0] to force compliance, and seeing as Musk had a hand in creating both orgs, they're bound to be similar.

    [0] https://techcrunch.com/2024/03/15/spacex-employee-stock-sale...

    • tdumitrescu 2 years ago

      Whoa. That article says that SpaceX does tender offers twice a year?! That's so much better than 99% of private companies, it makes it almost as liquid for employees as a public company.

      • nurple 2 years ago

        Which, in a real way, makes the threat of being left out of liquidity rounds that much more powerful a tool for keeping people who are looking forward to an actual windfall in their lane.

  • eru 2 years ago

    > What's the consideration for this contract?

    Consideration is almost meaningless as an obstacle here. They can give the other party a peppercorn, and that would be enough to count as consideration.

    https://en.wikipedia.org/wiki/Peppercorn_(law)

    There might be other legal challenges here, but 'consideration' is unlikely to be one of them. Unless OpenAI has idiots for lawyers.

    • verve_rat 2 years ago

      Right, but the employee would be able to refuse the consideration, and thus the contract, and the state of affairs wouldn't change. They would be free to say whatever they wanted.

      • kmeisthax 2 years ago

        If they refuse the contract then they lose out on their options vesting. Basically, OpenAI's contracts work like this:

        Employment Contract the First:

        We are paying you (WAGE) for your labor. In addition you also will be paid (OPTIONS) that, after a vesting period, will pay you a lot of money. If you terminate this employment your options are null and void unless you sign Employment Contract the Second.

        Employment Contract the Second:

        You agree to shut the fuck up about everything you saw at OpenAI until the end of time and we agree to pay out your options.

        Both of these have consideration and as far as I'm aware there's nothing in contract law that requires contracts to be completely self-contained and immutable. If two parties agree to change the deal, then the deal can change. The problem is that OpenAI's agreements are specifically designed to put one counterparty at a disadvantage so that they have to sign the second agreement later.

        There is an escape valve in contract law for "nobody would sign this" kinds of clauses, but I'm not sure how you'd use it. The legal term of art that you would allege is that the second contract is "unconscionable". But the standard of what counts as unconscionable in contract law is extremely high, because otherwise people would wriggle out of contracts the moment that what seemed like favorable terms turned unfavorable. Contract law doesn't care if the deal is fair (that's the FTC's job), it cares about whether or not the deal was agreed to.

        • hmottestad 2 years ago

          Say you had worked at Reddit for quite a number of years, all your original options had vested, and you had exercised them. Since Reddit went public, you would now easily be able to sell your stock, or keep it if you want. So you wouldn't need to sign the second contract, unless of course you had gotten new options that hadn't vested yet.

          • p1esk 2 years ago

            My understanding is that as soon as you exercise your options, you own the shares, and the company can't take them from you.

            Can anyone confirm this?

        • godelski 2 years ago

          > There is an escape valve in contract law for "nobody would sign this" kinds of clauses

          Who would sign a contract to willfully give away their options?

          • d1sxeyes 2 years ago

            The same sort of person who would sign a contract agreeing that in order to take advantage of their options, they need to sign a contract with unclear terms at some point in the future if they leave the company.

            Bear in mind there are actually three options, one is signing the second contract, one is not signing, and the other is remaining an employee.

        • eru 2 years ago

          Btw, do you have any idea why they even bother with the second contract? Couldn't they just write the same stuff into the first contract in the first place?

        • pas 2 years ago

          Is it even a valid contract clause to tie the value of something to a future, completely unknown agreement? (Or, if it is valid, does that mean savvy folks should treat the value as zero?)

          (though most likely the NDA and everything is there from day 1 and there's no second contract, no?)

          • eru 2 years ago

            > is it even a valid contract clause to tie the value of something to a future completely unknown agreement?

            I don't know about this specific case, but many contracts have these kinds of provisions. Eg it's standard in an employment contract to say that you'll follow the directions of your bosses, even though you don't know those directions, yet.

      • eru 2 years ago

        Maybe. But whether the employee can refuse the gag has nothing to do at all with the legal doctrine that requires consideration.

    • staticautomatic 2 years ago

      Ok but peppercorn or not, what’s the consideration?

      • PeterisP 2 years ago

        Getting a certain amount (according to their vesting schedule) of stock options, which are worth a substantial amount of money and thus clearly count as "good and valuable consideration".

        • hmottestad 2 years ago

          The original stock and vesting agreement that was part of their original compensation probably says that you have to be currently employed by OpenAI for the vesting schedule to apply. So in that case the consideration of this new agreement is that they get to keep their vesting schedule running even though they are no longer employees.

          • nightpool 2 years ago

            That's the case in many common/similar agreements, but the OpenAI agreement is different because it's specifically clawing back already vested equity. In this case, I think the consideration would be the company allowing transfer of the shares / allowing participation in buyback events. Otherwise until the company goes public there's no way for the employees to cash out without consent of the company.

          • pas 2 years ago

            but can they simply leave with the already vested options/stock? are there clawback provisions in the initial contract?

      • kmeisthax 2 years ago

        "I'll pay you a dollar to shut up"

        "Deal"

  • fshbbdssbbgdd 2 years ago

    In the past a lot of options would expire if you didn’t exercise them within eg. 90 days of leaving. And exercising could be really expensive.

    Speculation: maybe the options they earn when they work there have some provision like this. In return for the NDA the options get extended.

    • NewJazz 2 years ago

      Options aren't vested equity though.

      • PNewling 2 years ago

        ... They definitely can be. When I worked for a small biotech company all of my options had a tiered vesting schedule.

        • _heimdall 2 years ago

          Options aren't equity; they're only the option to buy equity at a specified price. Vesting just means you can actually buy the shares at the set strike price.

          For example, you may join a company and be given options to buy 10,000 shares at $5 each with a 2-year vesting schedule. They may begin vesting immediately, meaning you can buy 1/24th of the total options each month (about 417 shares). It's also common to have a cliff up front where no options vest until you've been with the company for, say, 6 or 12 months.

          Until an option vests you don't own anything. Once it vests, you still have to buy the shares by exercising the option at the $5 per share price. When you leave, most companies have a deadline on the scale of a few months where you have to either buy all vested shares or forfeit them and lose the stock options.
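
          To make the arithmetic concrete, here's a minimal sketch in Python. The 10,000-share, $5-strike, 2-year numbers are the illustrative ones from above; the function name and the 6-month cliff are assumptions for the example, not any particular company's plan:

              def vested_options(total, months_elapsed, vest_months=24, cliff_months=6):
                  # Nothing vests before the cliff; afterwards, options vest
                  # linearly by month, capped at the full grant.
                  if months_elapsed < cliff_months:
                      return 0
                  return min(total, total * months_elapsed // vest_months)

              vested_options(10_000, 5)   # 0      (still before the cliff)
              vested_options(10_000, 6)   # 2500   (cliff passes; 6/24 vests at once)
              vested_options(10_000, 12)  # 5000   (halfway through the schedule)
              vested_options(10_000, 24)  # 10000  (fully vested, but still unexercised)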

          • teaearlgraycold 2 years ago

            > buy all vested shares

            The last time I did this I didn't have to buy all of the shares.

            • lazyasciiart 2 years ago

              I think they mean that you had to buy all the ones you wanted to keep.

              • ergocoder 2 years ago

                That is tautological... You buy what you want to own???

                • Taniwha 2 years ago

                  There can be advantages to not exercising: exercising causes a taxable event (the IRS will want a cut of the difference between your exercise price and the current valuation), and it requires you to commit real money to buy shares that may never be worth anything...

                  And there are advantages to exercising: many (most?) companies take back unexercised options a few weeks or months after you leave, and exercising starts the capital gains clock, so you can end up paying a lower CGT rate when you eventually sell.

                  You need to understand all this stuff before you make a choice that's right for you
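
                  As a rough sketch of that math, assuming US-style NSO treatment where the spread at exercise is taxed as ordinary income (the 35% rate is purely illustrative, and ISOs/AMT work differently):

                      def exercise_math(shares, strike, fmv, tax_rate=0.35):
                          # Cash you must commit up front to buy the shares.
                          cost = shares * strike
                          # The spread between strike price and current valuation
                          # is the gain the IRS wants its cut of at exercise.
                          spread = shares * (fmv - strike)
                          return cost, spread, round(spread * tax_rate)

                      # 5,000 vested options, $5 strike, $25/share current valuation:
                      exercise_math(5_000, 5.0, 25.0)
                      # -> (25000.0, 100000.0, 35000): $25k out of pocket plus
                      #    ~$35k of tax, for shares you may never be able to sell.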

                • StackRanker3000 2 years ago

                  The point being made is that it isn’t all or nothing, you can buy half the vested options and forfeit the rest, should you want to.

          • theGnuMe 2 years ago

            Options can vest, and so can stock grants.

            • _heimdall 2 years ago

              Unless I'm mistaken, the difference is that grants vest into actual shares while options only vest into the opportunity to buy the shares at a set price.

              Part of my hiring bonus when joining one of the big tech companies was stock grants. As they vested, I owned the shares directly and could sell them as soon as they vested if I wanted to.

              I also joined a couple of startups later in my career and was given options as a hiring incentive. I never exercised the vested options, so I never owned them at all, and I lost the options 30-90 days after leaving the company. Grants I would have taken with me without having to pay for them; they would have directly been my shares.

              Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.

              • throwaway2037 2 years ago

                    > Well, they'd actually be shares owned by a clearing house and promised to me but that's a very different rabbit hole.
                
                You still own the shares, not the clearing house. They hold them on your behalf.
                • _heimdall 2 years ago

                  Looks like I used the wrong term there, sorry. I was referring to Cede & Co, and in the moment assumed they could be considered a clearing house. The technical term is a securities depository; sorry for the confusion.

                  Cede & Co technically owns most of the stock certificates today [1]. If I buy a share of stock I end up actually owning an IOU for a stock certificate.

                  You can actually confirm this yourself if you own any stock. Call the broker that manages your account and ask whose name is on the stock certificate. It definitely isn't yours. You'll likely get confused or unclear answers, but if you're persistent enough you'll find that the certificate is almost certainly in the name of Cede & Co, with no certificate in your name and likely no share identifier assigned to you either. You just own the promise of a share, which ultimately isn't a problem unless something massive breaks (at which point we have bigger problems anyway).

                  [1] https://en.m.wikipedia.org/wiki/Cede_and_Company

                • balderdash 2 years ago

                  You are the beneficial owner, but the broker is the titled owner, acting as custodian on your behalf

                  • _heimdall 2 years ago

                    If I'm not mistaken, at least in the US most brokers also aren't the titled owner. That falls to Cede & Co, which acts as a securities depository.

                    This is where the waters get murky and start to verge on conspiracy theory. My understanding, though, is that the legal rights fall to the titled owner and financial institutions, with the beneficial owner having very little recourse should anything actually go wrong.

                    The Great Taking [1] goes into more detail, though importantly I'm only including it here as a related resource for anyone interested in reading more. The ideas are interesting and, at least in isolation, make logical sense to me, but I haven't dug deeply enough to feel confident standing behind everything The Great Taking argues.

                    [1] https://thegreattaking.com/

                    • throwaway2037 2 years ago

                      I would bet my left leg that the US gov't would never allow Cede & Co. to fail. Same for Options Clearing Corp. They are too important to a highly functioning financial system.

                • SJC_Hacker 2 years ago

                  > They hold them on your behalf.

                  Possession is 90% of ownership

                  • NortySpock 2 years ago

                    Banks and trading houses are kind of the exception in that regard. I pay my bank monthly for my mortgage, and thus I live in a house that the bank could repossess if they so choose.

                    • _heimdall 2 years ago

                      The phrase really should be about force rather than possession. Possession only really makes a difference when there's no power imbalance.

                      Banks have the legal authority to take the home I possess if I don't meet the terms of our contract. Hell, I may own my property outright but the government can still claim eminent domain and take it from me anyway.

                      Among equals, possession may matter. When one side can force you to comply, possession really is only a sign that the one with power is currently letting you keep it.

                    • throwaway2037 2 years ago

                          > the bank could repossess if they so choose
                      
                      Absolutely not. You are protected by law, regardless of whatever odious mortgage contract you signed.

                      What is it about HN that makes so many commenters incredibly distrustful of our modern finance system? It is tiring, and they rarely (never?) offer any sound evidence on the matter. Post-GFC, it has been working very well.

        • quickthrowman 2 years ago

          Re-read the post you’re replying to. They said options are not vested equity, which they aren’t. You still need to exercise an option that has vested to purchase the equity shares.

          They did not say “options cannot get granted on a tiered vesting schedule”, probably because that isn’t true, as options can be granted with a tiered vesting schedule.

        • NewJazz 2 years ago

          They aren't equity no matter what though?

          They can be vested, I realize that.

    • brudgers 2 years ago

      My unreliable memory is that Altman was (once?) in favor of extending the period for exercising options. I could be wrong, of course, but it is consistent with my impression that making other people rich is among his motivations. Not the only one, of course. But again, I could be wrong.

      • resonious 2 years ago

        Wouldn't be too surprised if he changed his mind since then. He is in a very different position now!

        • brudgers 2 years ago

          Unless a PTEP (Post-Termination Exercise Period) beyond the ordinary three months was on offer, there probably wouldn't be a story, because the kind of people OpenAI hires would tend to be averse to working at a place with a PTEP of less than three months.

          Or not, I could be wrong.

  • theyinwhy 2 years ago

    I guess there are indeed countries where this is illegal. Funny that it seems to be legal in the land of the free (speech).

  • blackeyeblitzar 2 years ago

    Unfortunately this is how most startup equity agreements are structured. They include terms that let the company cancel options that haven’t been exercised for [various reasons]. Those reasons are very open ended, and maybe they could be challenged in a court, but how can a low level employee afford to do that?

    • jkaplowitz 2 years ago

      I don’t know of any other such agreements that allow vested equity to be revoked, as the other person said. That doesn’t sound very vested to me. But we already knew there are a lot of weird aspects to OpenAI’s semi-nonprofit/semi-for-profit approximation of equity.

      • blackeyeblitzar 2 years ago

        As far as I know it’s part of the stock plan for most startups. There’s usually a standard clause that covers this, usually with phrasing that sounds reasonable (like triggering if company policy is violated or is found to have been violated in the past). But it gives the company a lot of power in deciding if that’s the case.

  • glitchc 2 years ago

    I'm guessing unvested equity is being treated separately from other forms of compensation. Normally, leaving a company loses the individual all rights to unvested options. Here the consideration is that options are retained in exchange for silence.

  • willis936 2 years ago

    They earned wages and paid taxes on them. Anything on top is just the price they're willing to accept in exchange for their principles.

    • throw101010 2 years ago

      How do you figure they should pay an additional price (their principles/silence) for equity they supposedly already earned during their employment (assuming this wasn't planned when they were hired, since they're made to sign new terms at the time of their departure)?

  • temporarely 2 years ago

    I think the exit agreement (if any) should be included and agreed to as part of signing the employment contract.

  • zeroonetwothree 2 years ago

    I assume it’s agreed to at time of employment? Otherwise you’re right that it doesn't make sense

    • throw101010 2 years ago

      Why do you assume this if it is said here and in the article that they had to sign something at the time of the departure from the company?

  • bobbob1921 2 years ago

    I would guess it's a bonus, part of their bonus structure, and that they agreed to the terms of any exit/departure when they signed their initial contract.

    I’m not saying it’s right or that I agree with it, however.

  • riehwvfbk 2 years ago

    It's also really weird equity: you don't get an ownership stake in the company but rather profit-sharing units. If OpenAI ever becomes profitable (color me skeptical), you can indeed get rich as an employee. The other trigger is "achieving AGI", as defined by sama (presumably). And while you wait for these dubious events to occur you work insane hours for a mediocre cash salary.

  • e40 2 years ago

    Perhaps they are stock options and leaving without signing would make them evaporate, but signing turns them back into long-lasting options?

  • m3kw9 2 years ago

    In the initial hiring agreement, this would be stated, and the employee would have to agree to sign such a form if they are to depart.

  • phkahler 2 years ago

    Yeah you don't have to sign anything to quit. Ever. No new terms at that time, sorry.

subroutine 2 years ago

This is an interesting update to the article...

> After publication, an OpenAI spokesperson sent me this statement: “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.”

- Updated May 17, 2024, 11:20pm EDT

  • jiggawatts 2 years ago

    Neither of those statements negates the key point of the article.

    I've noticed that both Sam Altman personally, and official statements from OpenAI sound like they've been written by Aes Sedai: Not a single untrue word while simultaneously thoroughly deceptive.[1]

    Let's try translating some statements, as if we were listening to an evil person that can only make true statements:

    "We have never canceled any current or former employee’s vested equity" => "But we can and will if we want to. We just haven't yet."

    "...if people do not sign a release or nondisparagement agreement when they exit." => "But we're making everyone sign the agreement."

    [1] I've wondered if they use a not-for-public-use version of GPT for this purpose. You know, a model that's not quite as aligned as the chat bots, with more "flexible" morals.

    • twobitshifter 2 years ago

      Could also be that they have a unique definition of vesting when they say specifically “vested equity”

swat535 2 years ago

Why anyone would be surprised by this is beyond me.

I know many people on this site will not like what I am about to write, as Sam is worshiped here, but let's face it: the head of this company is a master scammer who will do everything under the sun and the moon to earn a buck, up to and including destroying himself along with his entire fortune if necessary in his quest to make sure other people don't get a dime.

So far he has done it all: attempted regulatory capture, a hostile takeover as CEO, throwing out the other top engineers and partners, and ensuring the company remains closed despite its "open" name.

Now he is simply tying up all the loose ends and ensuring his employees remain loyal and are kept on a tight leash. It's a brilliant strategy, preventing any insider from blowing the whistle should OpenAI ever decide to do anything questionable, such as selling AI capabilities to hostile governments.

I simply hope that open source wins this battle so that we are not all completely reliant on OpenAI for the future, despite Sam's attempt.

  • jeltz 2 years ago

    Since I do not follow OpenAI or Y Combinator, I first learned that he was a scammer when he released his cryptocurrency. But I am surprised that so many did not catch on to it then. It is not like he has really tried to hide that he is a grifter.

blackeyeblitzar 2 years ago

They are far from the only company to do this but they deserve to be skewered for it. The FTC and NLRB should come down hard on them to make an example. Jail time for executives.

pdonis 2 years ago

Everything I see about OpenAI makes me more and more convinced that the people running it are the last people anyone should want to be stewards of AI technology.

tonyhart7 2 years ago

"Even acknowledging that the NDA exists is a violation of it." now its not so much more open anymore right

  • ecjhdnc2025 2 years ago

    The scriptwriters are in such a hurry -- even they know this show isn't getting renewed.

shuckles 2 years ago

I'm not sure how this is legal. My employer certainly could not claw back paid salary or bonuses if I violated a surprise NDA they sprung on me when leaving on good terms. Why can they claw back vested stock compensation?

  • gwern 2 years ago

    These aren't real stock; they're "profit participation units" or PPUs. In addition, the NDA, and the NDA about the NDA, mean no one can warn you before you sign your employment papers about the implications of "PPUs", the tender-offer restriction, and the future NDA. So it's possible there's some loophole or simple omission somewhere which enables this and which would never work for regular RSUs or stock options, which no one is allowed to warn you about on pain of their own PPUs being clawed back, and which you find out about only when you leave (and who would want to leave a rocketship like OA?).

  • orionsbelt 2 years ago

    My guess is they agreed to it upfront.

strstr 2 years ago

This really kills my desire to trust startups and YC. Hopefully paulg makes some kind of statement or commitment on non-disparagement and the like.

croemer 2 years ago

Link should probably go here instead of X: https://www.vox.com/future-perfect/2024/5/17/24158478/openai...

This is the article that the author talks about on X.

loceng 2 years ago

Non-disparagement clauses need to be made illegal.

If someone shares something that's a lie and defamatory, then they could still be sued of course.

The Ben Shapiro/Daily Wire vs. Candace Owens dispute is another scenario where the truth and open conversation would benefit all of society; OpenAI and the Daily Wire are arguably on topics of pinnacle importance, yet the discussions are suppressed.

croes 2 years ago

I guess OpenAI is making the hero-to-villain switch even faster than Google did when it dropped "don't be evil".

milankragujevic 2 years ago

It seems very off to me that they don't give you the NDA before you sign the employment contract, and instead give it to you at the time of termination when you can simply refuse to sign it.

It seems that standard practice would dictate that you sign an NDA before even signing the employment contract.

jameshart 2 years ago

The Basilisk's deal turned out to be far more banal than expected.

yashap 2 years ago

For a company that is actively pursuing AGI (and probably the #1 contender to get there), this type of behaviour is extremely concerning.

There’s a very real/significant risk that AGI either literally destroys the human race, or makes life much shittier for most humans by making most of us obsolete. These risks are precisely why OpenAI was founded as a very open company with a charter that would firmly put the needs of humanity over their own pocketbooks, highly focused on the alignment problem. Instead they’ve closed up, become your standard company looking to make themselves ultra wealthy, and they seem like an extra vicious, “win at any cost” one at that. This plus their AI alignment people leaving in droves (and being muzzled on the way out) should be scary to pretty much everyone.

  • root_axis 2 years ago

    More than these egregious gag contracts, OpenAI benefits from the image that they are on the cusp of world-destroying science fiction. This meme needs to die, if AGI is possible it won't be achieved any time in the foreseeable future, and certainly it will not emerge from quadratic time brute force on a fraction of text and images scraped from the internet.

    • MrScruff 2 years ago

      Clearly we don't know when/if AGI will happen, but the expectation of many people working in the field is that it will arrive in what qualifies as the "near future". It probably won't result from just scaling LLMs, but that's why a lot of researchers are trying to find the next significant advancement, in parallel with others trying to commercially exploit LLMs.

      • timr 2 years ago

        The same way that the expectation of many people working within the self-driving field in 2016 was that level 5 autonomy was right around the corner.

        Take this stuff with a HUGE grain of salt. A lot of goofy hyperbolic people work in AI (any startup, really).

        • huevosabio 2 years ago

          While I agree with your point, I take self-driving rides on a weekly basis, and you see them all over SF nowadays.

          We overestimate short-term progress but underestimate medium- and long-term progress.

          • timr 2 years ago

            I don't think we disagree, but I will say that "a handful of people in SF and AZ taking rides in cars that are remotely monitored 24/7" is not the drivers-are-obsolete-now, near-term future being promised in 2016. Remember the panic because long-haul truckers were going to be unemployed Real Soon Now? I do.

            Back then, I said that the future of self-driving is likely to be the growth in capability of "driver assistance" features to an asymptotic point that we will re-define as "level 5" in the distant future (or perhaps: the "levels" will be memory-holed altogether, only to reappear in retrospective, "look how goofy we were" articles, like the ones that pop up now about nuclear airplanes and whatnot). I still think that is true.

          • Kwpolska 2 years ago

            Self-driving taxis are available in only a handful of cities around the world. That is far from the progress that was promised. And how often are those taxis secretly controlled by an Indian call center?

        • schmidtleonard 2 years ago

          Sure, but blanket pessimism isn't very insightful either. I'll use the same example you did: self-driving. The public (or "median nerd") consensus has shifted from "right around the corner" (when it struggled to lane-follow if the paint wasn't sharp) to "it's a scam and will never work," even as it has taken off with the other types of AI and started hopping hurdles every month that naysayers said would take decades. Negotiating right-of-way, inferring intent, handling obstructed and ad-hoc roadways... the nasty intractables turned out to not be intractable, but sentiment has not caught up.

          For one where the pessimist consensus has already folded, see: coherent image/movie generation and multi-modality. There were loads of pessimists calling people idiots for believing in the possibility. Then it happened. Turns out an image really is worth 16x16 words.

          Pessimism isn't insight. There is no substitute for the hard work of "try and see."

        • thayne 2 years ago

          The same thing happened with nuclear fusion. People working on it have been saying sustainable fusion power is right around the corner for decades, and we still don't have it.

          And it _could_ be just one clever breakthrough away, and that could happen tomorrow, or it could be centuries away. There's no way to know.

      • troupo 2 years ago

        > the expectations of many people working in the field is it will arrive in what qualifies as ‘near future’

        It was the expectation of many people in the field in the 1980s, too

      • zzzeek 2 years ago

        >but the expectations of many people working in the field is it will arrive in what qualifies as ‘near future’.

        they think this because it serves their interests of attracting an enormous amount of attention and money to an industry that they seek to make millions of dollars personally from.

        My money is firmly on environmental/climate collapse wiping out most of humanity in the next 50-100 years, hundreds of years before anything like an AGI possibly could.

    • dclowd9901 2 years ago

      Ah yes, the “our brains are somehow inherently special” coalition: hand-waving away the capabilities of LLMs as dumb math while not having a single clue about the math that underlies our own brains' functionality.

      I don’t know if you’re conflating capability with consciousness but frankly it doesn’t matter if the thing knows it’s alive if it still makes everyone obsolete.

      • root_axis 2 years ago

        This isn't a question of understanding the brain. We don't even have a theory of AGI, the idea that LLMs are somehow anywhere near even approaching an existential threat to humanity is science fiction.

        LLMs are a super impressive advancement, like calculators for text, but if you want to force the discussion into a grandiose context then they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but at the end of the day their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.

  • robertlagrant 2 years ago

    > or makes life much shittier for most humans by making most of us obsolete

    I'm not sure this is true. If all the things people do can be done so much more cheaply that they're almost free, that would be good for us, since we're the buyers as well as the workers.

    However, I also doubt the premise.

    • justinclift 2 years ago

      > If all the things people are doing are done so much more cheaply they're almost free, that would be good for us ...

      Doesn't this tend to become "they're almost free to produce", with the actual pricing for end consumers not getting cheaper and sellers just expanding their margins instead?

      • marcusverus 2 years ago

        I'm sure businesses will capture some of the value, but is there any reason to assume they'll capture all or even most of it?

        Over the last ~ 50 years, worker productivity is up ~250%[0], profits (within the S&P 500) are up ~100%[1] and real personal (not household) income is up 150%[2].

        It should go without saying that a large part of the rise in profits is attributable to the rise of tech. It shouldn't surprise anyone that margins are higher on digital widgets than physical ones!

        Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.

        [0] https://fred.stlouisfed.org/series/OPHNFB [1] https://dqydj.com/sp-500-profit-margin/ [2] https://fred.stlouisfed.org/series/MEPAINUSA672N

        • lotsofpulp 2 years ago

          > Regardless, expanding margins is only attractive up to a certain point. The higher your margins, the more attractive your market becomes to would-be competitors.

          This does not make sense to me. While a higher profit margin is a signal to others that they can earn money by selling equivalent goods and services at lower prices, it is not inevitable that they will be able to. And even if they are, it behooves a seller to take advantage of the higher margins while they can.

          Earning less money now in the hopes of competitors being dissuaded from entering the market seems like a poor strategy.

          • robertlagrant 2 years ago

            The premise wasn't that there weren't competitors already, I don't think. With most things the price is (usually) floored by the cost of production, ceilinged by the value it provides people, and then competition is what moves it from the latter to closer to the former.

        • lifeisstillgood 2 years ago

          Wait, what? I was just listening to the former chief economist of the Bank of England going on about how terrible productivity (in the UK) is.

          So who is right?

          • michaelt 2 years ago

            Google UK productivity growth and you'll find a graph showing:

            UK productivity growth, 1990-2007: 2% per year

            UK productivity growth, 2010-2019: 0.5% per year

            So they're both right. US 50 year productivity growth looks great, UK 10 year productivity growth looks pretty awful.

        • justinclift 2 years ago

          > The higher your margins, the more attractive your market becomes to would-be competitors.

          Only in very simplistic theory. :(

          In practical terms, businesses with high margins seem able to afford government protection (aka "buy some politicians").

          So they lock out competition, and with their market captured, price gouging (or close to it) is the order of the day.

          Not really sure why anyone thinks the playbook would be any different just because "AI" is used on the production side. It's still the same people making the calls, just with extra tools available to them.

          • robertlagrant 2 years ago

            This is also pretty simplistic. All the progress that's made on a variety of fronts implies that we don't have loads of static lockin businesses that bribe bureaucrats.

          • marcusverus 2 years ago

            Literally every imaginable economic prediction could be countered with this argument: "That won't happen if the government legislates to prevent it!"

            Weird that the field of economics just keeps on existing.

    • thayne 2 years ago

      We won't be buyers anymore if we aren't getting paid to work.

      Perhaps some kind of guaranteed minimum income would be implemented, but we would probably see a shrinkage or complete destruction of the middle class, and massive increases in wealth inequality.

    • confidantlake 2 years ago

      Why would you need buyers if AI can create anything you desire?

      • flashgordon 2 years ago

        In an ideal world where GPUs are a commodity, yes. But at least today, AI is owned/controlled by the rich and powerful, and that's where the majority of the research dollars are coming from. Why would they just relinquish AI so generously?

        • brandall10 2 years ago

          With an ever expanding AI everything should be quickly commoditized, including reduction in energy to run AI and energy itself (ie. viable commercial fusion or otherwise).

          • flashgordon 2 years ago

            That's the thing I am struggling with. I agree things will exponentially improve with AI. What i am not seeing is who will actually capture the value. Or rather how will those other than rich and powerful get to partake in this value capture. Take viable commercial fusion for example. Best case it ends up looking like another PG&E. Worst case it is owned by yet another Musk like gatekeeper. How do you see this being truly democratized and accessible for the masses?

            • brandall10 2 years ago

              The most rosy outcome would be benevolent state actors control it, and the value capture is simply for everyone as the costs for everything go to zero (food, energy, housing, etc). It would be post-capitalist, post-consumer.

              Of course the problem is whether or not it could be controlled, and in that case, the best hope is simply 'it' being benevolent and naturally incentivized to create such a utopia.

      • pixl97 2 years ago

        Where are you getting the energy and land for these AIs to consume and turn into goods?

        Moreover, if you made such a magically powerful AI, the number one thing some rich, controlling asshole with more AI than you would do is create an army and take what they want, because AI does nothing to solve human greed.

      • martyfmelb 2 years ago

        Bingo.

        The whole justification for keeping consumers happy or healthy goes right out the window.

        Same for human workers.

        All that matters is that your robots and AIs aren't getting smashed by their robots and AIs.

    • pants2 2 years ago

      Up to the point of AGI, most productivity increases have resulted in less physical / menial labor, and more white collar work. If AGI is smarter than most humans, the pendulum will swing the other way, and more humans will have to work physical / menial jobs.

      • smcin 2 years ago

        Look on the bright side: they'll stop calling Frank Herbert a visionary.

  • mc32 2 years ago

    Can higher-level former employees with more at stake pool together compensation for lower-level ones with much less at stake, so they can speak to it? Obviously they may not be privy to some things, but there's likely lots to go around.

  • schmidt_fifty 2 years ago

    > There’s a very real/significant risk that AGI either literally destroys the human race

    If this were true, intelligent people would have taken over society by now. Those in power will never relinquish it to a computer just as they refuse to relinquish it to more competent people. For the vast majority of people, AI not only doesn't pose a risk but will only help reveal the incompetence of the ruling class.

    • pavel_lishin 2 years ago

      >> There’s a very real/significant risk that AGI either literally destroys the human race

      > If this were true, intelligent people would have taken over society by now

      The premise you're replying to - one I don't think I agree with - is that a true AGI would be so much smarter, so much more powerful, that it wouldn't be accurate to describe it as merely "smarter".

      You're probably smarter than a guy who recreationally huffs spray paint, but you're still within the same class of intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.

      • pixl97 2 years ago

        To every other mammal, reptile, and fish humans are the intelligence explosion. The fate of their species depends on our good will since we have so utterly dominated the planet by means of our intelligence.

        What's more, human intelligence is tied to the weakness of our flesh. Human intelligence is also balanced by greed and ambition. Someone dumber than you can "win" by stabbing you, and your intelligence ceases to exist.

        Since we don't have the level of AGI we're discussing here yet, it's hard to say what it will look like in its implementation, but I find it hard to believe it would mimic the human model of its intelligence being tied to one body. A hivemind of embodied agents that feed data back into processing centers to be captured in 'intelligence nodes' that push out updates seems way more likely. More like a hive of super intelligent bees.

      • logicchains 2 years ago

        >You're probably smarter than a guy who recreationally huffs spraypaint, but you're still within the same class as intelligence. Both of you are so much more advanced than a cat, or a beetle, or a protozoan that it doesn't even make sense to make any sort of comparison.

        This is pseudoscientific nonsense. We have the very rigorous field of complexity theory to show how much improvement in solving various problems can be gained from further increasing intelligence/compute power, and the vast majority of difficult problems benefit minimally from linear increases in compute. The idea of there being a higher "class" of intelligence is magical thinking, as it implies there could be a superlinear increase in the ability to solve NP-complete problems from only a linear increase in computational power, which goes against the entirety of complexity theory.

        It's essentially the religious belief that AI has the godlike power to make P=NP even if P != NP.

        • Delk 2 years ago

          Even if lots of real-world problems are intractable in the computational complexity theory sense, that doesn't necessarily mean an upper limit to intelligence or to being able to solve those problems in a practical sense. The complexities are worst-case ones, and in case of optimization problems, they're for finding the absolutely and provably optimal solution.

          In lots of real-world problems you don't necessarily run into worst cases, and it often doesn't matter if the solution is the absolute optimal one.

          That's not to discredit computational complexity theory at all. It's interesting and I think proofs about the limits of information processing required for solving computational problems do have philosophical value, and the theory might be relevant to the limits of intelligence. But just because some problems are intractable in terms of provably always finding correct or optimal answers doesn't mean we're near the limits of intelligence or problem-solving ability in that fuzzy area of finding practically useful solutions to lots of real-world cases.

        • esafak 2 years ago

          What does P=NP have to do with anything? Humans are incomparably smarter than other animals. There is no intelligence test a healthy human would lose to another animal. What is going to happen when agentic robots ascend to this level relative to us? This is what the GP is talking about.

          • breuleux 2 years ago

            Succeeding at intelligence tests is not the same thing as succeeding at survival, though. We have to be careful not to ascribe magical powers to intelligence: like anything else, it has benefits and tradeoffs and it is unlikely that it is intrinsically effective. It might only be effective insofar that it is built upon an expansive library of animal capabilities (which took far longer to evolve and may turn out to be harder to reproduce), it is likely bottlenecked by experimental back-and-forth, and it is unclear how well it scales in the first place. Human intelligence may very well be the highest level of intelligence that is cost-effective.

    • georgeburdell 2 years ago

      Look up where the people in power got their college degrees from and then look up the SAT scores of admitted students from those colleges.

    • mordymoop 2 years ago

      Of course intelligent people have taken over society.

shon 2 years ago

The article mentions it briefly, but Jan Leike is talking: https://x.com/janleike/status/1791498174659715494?s=46&t=pO4...

He clearly states why he left. He believes that OpenAI leadership is prioritizing shiny product releases over safety and that this is a mistake.

Even with the best intentions, it's easy for a strong CEO like Altman to lose sight of more subtly important things like safety and to optimize for growth and winning, eventually at all costs. Winning is a super-addictive feedback loop.

ryandrake 2 years ago

Non-disparagement clauses seem so petty and pathetic. Really? Your corporation is so fragile and thin-skinned that it can't even withstand someone saying mean words? What's next? Forbidding ex-employees from sticking their tongues out at you and saying "nyaa nyaa nyaa"?

  • ecjhdnc2025 2 years ago

    This isn't about pettiness or thin skin. And it's not about mean words. It's about potential valid, corroborated criticism of misconduct.

    They can totally deal with appearing petty and thin-skinned.

    • parpfish 2 years ago

      Wouldn't various whistleblower protections apply if you were reporting illegal activities?

      • ecjhdnc2025 2 years ago

        Honestly I don't know if whistleblower protections are really worth a damn -- I could be wrong.

        But would they not only protect the individual formally blowing the whistle (meeting the standard in the relevant law)?

        These non-disparagement clauses lay the groundwork for a whistleblowing effort to fall flat: nobody else will want to corroborate, and the role of journalism in whistleblowing cases is absolutely crucial.

        No sensible mature company needs a lifetime non-disparagement clause -- especially not one that claims to have an ethical focus. It's clearly Omerta.

        Whoever downvoted this: seriously. I really don't care but you need to explain to people why lifetime non-disparagement clauses are not about maintaining silence. What's the ethical application for them?

  • johnnyanmac 2 years ago

    Legally, yes. Those mean words can cost them millions in lawsuits, and billions if judges' rulings restrict how they can implement and monetize AI. Why do you think Boeing's "coincidental" whistleblower deaths have happened more than once these past few months?

  • w10-1 2 years ago

    Modern AI companies depend entirely on goodwill and being trusted by their customers.

    So yes, they're that fragile.

  • xyst 2 years ago

    The company is literally a house of cards at this point. There is probably so much vulture capitalist and angel investor money tied up in this company that even a disparaging rant could bring the whole company crashing down.

    It’s yet another sign that the AI bubble will soon burst. The laughable release of “GPT-4o” was just a small red flag.

    Got to keep the soldiers in check while the bean counters prep the books for an IPO and eventual early investor exit.

    Almost smells like a SoftBank-esque failure in the near future.

anvuong 2 years ago

This sounds very illegal, how is California allowing this?

User23 2 years ago

What is criticism anyhow? Feels like you could black knight this hard with clever phrasing. “The company does a fabulous job keeping its employees loyal regardless of circumstances!” “Yes they have the best and toughest employment lawyers in the business! They do a great job using all available leverage to force favorable outcomes from their human resources!” “I have no regrets working there. Their exit agreement has really improved my work life balance!” “Management never lets externalities get in the way of maximizing shareholder value!”

  • singleshot_ 2 years ago

    If a contract barred me from providing criticism I would not imagine that I could sidestep it by uttering positive criticism unless my counterparty was illiterate and poor at drafting contracts.

I_am_tiberius 2 years ago

I get Theranos / David Boies vibes.

olalonde 2 years ago

A bit unexpected coming from a non-profit organisation that supposedly has an altruistic mission. It's almost as if there was actually a profit making agenda... I'm shocked.

nextworddev 2 years ago

Unfortunately this is actually pretty common on Wall Street, where they leverage your multiple years of clawback-able shares to make you sign non-disparagement clauses.

  • lokar 2 years ago

    But that is all very clear when you join

  • citizen_friend 2 years ago

    Sounds like a deal, honestly. I'll fast-forward a few years of equity to mind my own business. I'm not trying to get into journalism.

i5heu 2 years ago

It is always so impressive to see what US law allows.

This would not only be viewed as unethical in Germany; I could see a CEO going to prison for such a thing.

  • Rinzler89 2 years ago

    Please stop with these incorrect generalizations. Hush agreements are definitely allowed in Germany as well, part of golden parachutes usually.

    I know a manager for an EV project at a big German auto company who also had to sign one when he was let go and was compensated handsomely to keep quiet and not say a word or face legal consequences.

    IIRC he got ~12 months' wages. After a year of not doing anything at work anyway. Bought a house in the south with it. Good gig.

surfingdino 2 years ago

It's for the good of humanity, right? /s I wonder if Lex is going to ask Sam about it the next time they get together for a chat on YouTube?

  • brap 2 years ago

    I kinda like Lex, but he never asks any difficult questions. That’s probably why he gets all these fancy guests on his show.

    • surfingdino 2 years ago

      And he always ends with questions about love, just to pour some more oil on the quiet seas :-) nothing wrong with that, but like you say he asks safe questions.

    • reducesuffering 2 years ago

      Worse, he will agree 95% with what guest A opinions are, only for guest B to come on next episode and also agree with 95%. It would've been better for those opposing guests to just debate themselves. Like, I don't want to see Lex and Yuval Noah Harari, then Lex and Bibi Netanyahu, I'd rather see Yuval and Bibi. I don't want to see Lex and Sama, then Lex and Eliezer, I'd rather see Sama and Eliezer.

RomanPushkin 2 years ago

They don't talk publicly, but they're almost always OK if you're friends with them. I have two ex-OpenAI friends, and there is a lot of shit going on in there. Of course, I won't reveal their identities, even in court. And they will deny they said anything to me. But the info, if needed, might get leaked through trusted friends. And nobody can do anything about that.

  • benreesman 2 years ago

    I’ve worked (for years) with easily a dozen people who either are there or spent meaningful time there.

    I also work hard not to print gossip and hearsay (I try not to even mention so much as a first name; I think I might have slipped once or twice on that, though not in connection with an accusation of wrongdoing). There's more than enough credible journalism to paint a picture: any person whose bias hasn't utterly robbed them of objectivity (I have my own, but it's a philosophical/ethical/political agenda, not a grudge over being snubbed for a job or something) can acknowledge that "this looks really bad, and worse all the time" on the basis of purely public primary sources and credible journalism.

    I think some of the inside baseball I try very hard not to put in writing might be what cranks it up to “people are doing time”.

    I've caught more than a little grief over being a vocal critic, but I'm curious why, having gone pretty far down the road of saying something is rotten, you'd declare a willingness to defy a grand jury or a judge.

    I've never been in court, let alone been held in contempt, but I gather openly defying a judge earns you fairly hard time.

    I have friends I’d go to jail for, but not very many and none who work at OpenAI.

BeFlatXIII 2 years ago

I hope I'm still around when some of these guys reach retirement age, say "fuck it, my family pissed me off," and write tell-all memoirs.

photochemsyn 2 years ago

OpenAI's military-industrial contracting options seem to be making some folks quite nervous.

bradleyjg 2 years ago

For as high-profile an issue as AI is right now, and as prominent as the people recently let go are, I bet they could arrange to be subpoenaed to testify before a congressional subcommittee.

andrewstuart 2 years ago

I would like people to sign a lifetime contract to not criticize me.

dandanua 2 years ago

With how things are unfolding, I wouldn't be surprised if, after the creation of an AGI, the owners just kill anyone who took part in building it. Singularity is real.

diebeforei485 2 years ago

> For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

Yes, but:

(1) OpenAI salaries are not low like early stage startup salaries. Essentially these are highly paid jobs (high salary and high equity) that require an NDA.

(2) Apple has also clawed back equity from employees who violate NDA. So this isn't all that unusual.

a_wild_dandan 2 years ago

Is this a legally enforceable suppression of free speech? If so, are there ways to be open about OpenAI, without triggering punitive action?

  • YurgenJurgensen 2 years ago

    I believe a better solution to this would be to spread the following sentiment: "Since it's already illegal to tell disparaging lies, the mere existence of such a clause implies some disparaging truths to which the party is aware." Always assuming the worst around hidden information provides a strong incentive to be transparent.

    • jiggawatts 2 years ago

      This is an important mode of thinking in many adversarial or competitive contexts.

      Cryptography is a prime example. Any time any company is the tiniest bit cagey or obfuscates any aspect, I default to assuming that they’re either selling snake oil or have installed NSA back doors. I’ll claim this openly, as a fact, until proven otherwise.

    • berniedurfee 2 years ago

      That’s a really good point. A variation of the Streisand Effect.

      Makes you wonder what misdeeds they’re trying so hard to hide.

    • d0mine 2 years ago

      I hope the truth being forbidden is something banal, like "fake it until you make it" in some of OpenAI's demos. The technology looks like magic but plausible to implement in a few months/years.

      Worse if it is related to training a future superintelligence to kill people. Killer drones are possible even with today's technology, without AGI.

    • lupire 2 years ago

      Humans respond better to concrete details than abstractions.

      It's a lot of mental work to rally the emotion of revulsion over the evil they might be doing that is kept secret.

      • hi-v-rocknroll 2 years ago

        This is true.

        I was once fired, ghosted style, for merely being in the same meeting room as a racist corporate ass-clown muting the conference call to make Asian slights and monkey gesticulations. There was no lawsuit or payday because "how would I ever work again?" was the Hobson's choice between let it go and a moral crusade without a way to pay rent.

        If instead I were upset that "not enough N are in tech," there isn't a specific incident or person to blame because it'd be a multifaceted situation.

  • exe34 2 years ago

    you could praise them for the opposite of what you mean to say, and include a copy of the clause in between each paragraph.

    • istjohn 2 years ago

      OpenAI never acted with total disregard for safety. They never punished employees for raising legitimate concerns. They never reneged on public promises to devote resources to AI safety. They never made me sign any agreements restricting what I can say. One plus one is three.

    • lucubratory 2 years ago

      Acknowledging the NDA or any part of it is in violation of the NDA.

  • to11mtm 2 years ago

    Well, for starters everyone can start memes...

    After all, at this point, OpenAI:

    - Is not open with models

    - Is not open with plans

    - Does not let former employees be open.

    It sure does give us a glimpse into the Future of how Open AI will be!

  • antiframe 2 years ago

    OpenAI is not the government. Yet.

    • janalsncm 2 years ago

      A lot of people forget that although the 1A means the government can’t put you in prison for speech, there are a lot of pretty unpleasant consequences from private entities. As far as I know, it wouldn’t be illegal for a dentist to deny care to someone who criticized them, for example.

      • Marsymars 2 years ago

        Right, and that's why larger companies need regulation around those consequences. If a dentist doesn't want to treat you because you criticized them, that's fine, but if State Farm doesn't want to insure your dentistry because you criticized them, regulators shouldn't allow that.

    • impossiblefork 2 years ago

      Free speech is a much more general notion than anything having to do with governments.

      The first amendment is a US free speech protection, but it's not prototypical.

      You can also find this in some other free speech protections, for example that in the UDHR

      >Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

      doesn't refer to states at all.

      • lupire 2 years ago

        UDHR is not law so it's irrelevant to a question of law.

        • impossiblefork 2 years ago

          Originally, the comment to which that comment responded said something about free speech rather than anything about legality. It was in that context that I responded, so the comment to which I responded must also have been written in that context.

      • kfrzcode 2 years ago

        Free speech is a God-given right. It is innate and given to you and everyone at birth, after which it can only be suppressed but never revoked.

        • hollerith 2 years ago

          I know it is popular, but I distrust "natural rights" rhetoric like this.

          • kfrzcode 2 years ago

            "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

            I mean to say there are certain rights we all have, simply for existing as humans. The right to breathe is a good example. No human, state, or otherwise has the moral high-ground to take these rights from us. They are not granted, or given, they are absolute and unequivocal.

            It's not rhetoric, it's basic John Locke. Also your trust is an internal locus, and doesn't change the facts.

        • CamperBob2 2 years ago

          Good luck serving God with a subpoena when you have to defend yourself in court. He's really good at dodging process servers.

          • kfrzcode 2 years ago

            The inalienable rights of mankind are not given to us or validated in a court of man's law. This is not new philosophy and comes from at least as far back as the Greeks.

            Your quips will serve you well, I'm sure, in whatever microcosm you populate.

            • CamperBob2 2 years ago

              (Shrug) The Greeks are dead, and so is anyone who tries to argue philosophy with someone holding a gun.

        • smabie 2 years ago

          Did God tell you this? People who talk about innate rights are just making things up

          • kfrzcode 2 years ago

            You do not seem to be embracing an open-minded discussion about the philosophy. Why beg the question? Are you admitting the State is the authority under which you are granted all privilege to live, love, and work? Who is to stop someone if you are being attacked and your children are at risk? Do you wish you had a permit to allow you to breathe?

            "self-evident," means it requires no formal proof, as it is obvious to all with common sense and reason.

    • zeroonetwothree 2 years ago

      If the courts enforce the agreement then that is state action.

      So I think an argument can be made that NDAs and similar agreements should not be enforceable by courts.

      See Shelley v. Kraemer

    • a_wild_dandan 2 years ago

      What do I do with this information?

      • TaylorAlexander 2 years ago

        I think we need to face the fact that these companies aren’t trustworthy in upholding their own stated morals. We need to consider whether streaming video from our phone to a complex AI system that can interpret everything it sees might have longer term privacy implications. When you think about it, a cloud AI system is an incredible surveillance machine. You want to talk to it about important questions in your life, and it would also be capable of dragnet surveillance based on complex concepts like “show me all the people organizing protests” etc.

        Consider for example that when Amazon bought the Ring security camera system, it had a “god mode” that allowed executives and a team in Ukraine unlimited access to all camera data. It wasn’t just a consumer product for home users, it was a mass surveillance product for the business owners:

        https://theintercept.com/2019/01/10/amazon-ring-security-cam...

        The EFF has more information on other privacy issues with that system:

        https://www.eff.org/deeplinks/2019/08/amazons-ring-perfect-s...

        These big companies and their executives want power. Withholding huge financial gain from ex employees to maintain their silence is one way of retaining that power.

      • jaredklewis 2 years ago

        Your original comment uses the term "free speech," which, in the context of a discussion about the legality of a contract in the US, brings to mind the First Amendment.

        But the First Amendment basically only restricts the government's ability to suppress speech, not the ability of other parties (like OpenAI).

        This restriction may be illegal, but not on First Amendment ("free speech") grounds.

      • solardev 2 years ago

        In the US, the Constitution prevents the government from regulating your speech.

        It does not prevent you from entering into contracts with other private entities, like your company, about what THEY allow you to say or not. In this case there might be other laws about whether a company can unilaterally force that on you after the fact, but that's not a free speech consideration, just a contract dispute.

        See https://www.themuse.com/advice/non-disparagement-clause-agre...

      • mynegation 2 years ago

        antiframe is saying that the free speech guarantee in the Constitution only applies to the relationship between the government and the citizens, not between private entities.

  • a_wild_dandan 2 years ago

    Also, will Ilya likely have similar contractual bounds, despite the unique role he had at OpenAI? (Sorry for the self-reply. Felt more appropriate than an edit.)

    • to11mtm 2 years ago

      The unique role may in fact lead to ADDITIONAL contractual bounds.

      High levels (especially if they were board/exec level) will often have additional obligations on top of rank and file.

  • hi-v-rocknroll 2 years ago

    Hush money payments and NDAs aren't illegal, as Trump discovered, but perhaps lying about or concealing them in certain contexts is.

    Also, when secrets or truthful disparaging information is leaked anonymously without a metadata trail, I'm thinking there's probably little or no recourse.

  • Hnrobert42 2 years ago

    Well, the speech isn’t “free”? It costs the equity grant.

photochemsyn 2 years ago

I refused to sign all these secrecy non-disclosure contracts years ago. You know what? It was the right decision. Even though, as a result, my current economic condition is what most would describe as 'disastrous', at least my mind is my own. All your classified BS, it's not so much. Any competent thinker could have figured it out on their own.

Fucking monkeys.

  • istjohn 2 years ago

    > In most cases there is no free exercise whatever of the judgment or of the moral sense; but they put themselves on a level with wood and earth and stones; and wooden men can perhaps be manufactured that will serve the purpose as well. Such command no more respect than men of straw or a lump of dirt.[0]

    0. https://en.wikipedia.org/wiki/Civil_Disobedience_(Thoreau)

  • mlhpdx 2 years ago

    It’s common not to sign them, actually. The people that don’t simply aren’t talking about it much.

  • worik 2 years ago

    > You know what? It was the right decision. Even though, as a result, my current economic condition is what most would describe as 'disastrous', at least my mind is my own.

    Individualistic

    Nobody depends on you, I hope

    • serf 2 years ago

      you can still provide for your family without signing deals with the devil, it's just harder.

      moral stands are never free, but they are freeing.

almost_usual 2 years ago

This is what a dying company does.

o999 2 years ago

Is there a path to plausible deniability?

If an ex-OpenAI employee tweets, from an official account, a link to an anonymous post of cat videos that later gets edited into some sanctioned content, in a way that is authentic to the community, would this still be deniable in court?

nsoonhui 2 years ago

But what's stopping the ex-staffers from criticizing once they've sold off the equity?

  • EA-3167 2 years ago

    Nothing, these don't seem like legally enforceable contracts in any case. What they do appear to be is a massive admission that this is a hype train which can be derailed by people who know how the sausage is made.

    It reeks of a scammer's mentality.

  • danielmarkbruce 2 years ago

    The threat of a lawsuit.

    You can't just sign a contract and then not uphold your end of the bargain after you've got the benefit you want. You'll (rightfully) get sued.

iamflimflam1 2 years ago

Doesn’t seem to be everyone - https://x.com/officiallogank/status/1791652970670747909

  • smhx 2 years ago

    that's a direct implication that they're waiting for a liquidity event before they speak

koolala 2 years ago

They all can combine their testimony into 1 document, give it to an AI, and lol

sidewndr46 2 years ago

isn't such a contract completely unenforceable in the US? I can't sign a contract with a private party that says I won't consult a lawyer for legal advice, for example.

mise_en_place 2 years ago

Why indeed? But that’s nobody’s business except OpenAI and its former employees. Doesn’t matter if it’s not legally enforceable, or in bad taste. When you enter into a contract with another party, it is between you and the other party.

If there is something unenforceable about these contracts, we have the court system to settle these disputes. I’m tired of living in a society where everyone’s dirty laundry is aired out for everyone to judge. If there is a crime committed, then sure, it should become a matter of public record.

Otherwise, it really isn’t your business.

  • 0xDEAFBEAD 2 years ago

    >OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

    >...

    >We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.

    From OpenAI's charter: https://openai.com/charter/

    Now read Jan Leike's departure statement: https://news.ycombinator.com/item?id=40391412

    That's why this is everyone's business.

atum47 2 years ago

That's not enforceable, right? I'm not a lawyer, but even I know no contract can strip you of rights given by the Constitution.

  • hsdropout 2 years ago

    Are you referring to the first amendment? If so, this allows you to speak against the government. It doesn't prevent you from entering optional contracts.

    I'm not making any statement about the morality, just that this is not a 1a issue.

    • atum47 2 years ago

      I can understand defamation, but it's hard for me to understand disparagement. If I sign one of those contracts with Coca-Cola and later on I publicly announce that a can of Coca-Cola contains too much sugar, am I in breach of contract?

  • smabie 2 years ago

    Non-disparagement clauses are in so, so many different employment contracts. It's pretty clear you're not a lawyer, though.

  • staticman2 2 years ago

    If the constitution protected you from this sort of thing then there'd be no such thing as "trade secret" laws.

baggiponte 2 years ago

Not a US rights expert. Isn't "you can't ever criticize the company or you'll lose the vested equity" a violation of the First Amendment?

imranq 2 years ago

This seems like fake news. It would be extremely dumb to have such a policy, since it would eventually be leaked and become negative press.

doubloon 2 years ago

deleting my OpenAI account.

Madmallard 2 years ago

I'm really sick of seeing people jump in and accelerating the demise of society wholeheartedly due to greed.

Melatonic 2 years ago

So much for the "Open" in OpenAI

  • a_wild_dandan 2 years ago

    We should call them ClopenAI to acknowledge their almost comical level of backstabbing/rug-pulling.

autonomousErwin 2 years ago

Is it criticism if a claim is true? There is so much legal jargon I'm willing to bet most people won't want the headache (and those that don't care about equity are likely already fairly wealthy)

dakial1 2 years ago

What if I sell my equity? Can I criticize them then?

  • saalweachter 2 years ago

    Once there's a liquidity event and the people making you sign this contract can sell, they stop caring what you say.

  • apsec112 2 years ago

    ()

    • dekhn 2 years ago

      Right, but once you sell the shares, OpenAI isn't going to claw back the cash proceeds, is what I think was asked here.

    • mkl 2 years ago

      That's not what that article says, if I'm understanding correctly: "PPUs all have the same value associated with them and, during a tender offer, investors purchase PPUs directly from employees. OpenAI makes offers and values their PPUs based on the most recent price investors have paid to purchase employee PPUs."

    • smeej 2 years ago

      Doesn't it end up being a "no disparagement until the company goes public" clause, then? Once you sell the stock, are they going to come after you for the proceeds if you say something mean 20 years later?

dbuser99 2 years ago

Man. No wonder OpenAI is nothing without its people

rich_sasha 2 years ago

So what's open about it these days?

StarterPro 2 years ago

Glad to see that all giant companies are just evil rich white dudes racing each other to take over the world.

RockRobotRock 2 years ago

so much money stuffed in their mouth it’s physically impossible

ur-whale 2 years ago

If at this point it isn't very clear to OpenAI employees that they're working for the dark side and that Altman is one of the worst manipulative psychopaths the world has ever seen, I doubt anything will get them to realize what is happening to them.

itronitron 2 years ago

what part of 'Open' do I not understand?

ggm 2 years ago

I am not a lawyer.

ddalex 2 years ago

I can't speak. If I speak I will be in trouble.

Delmololo 2 years ago

Why should they?

It's absolutely normal not to spill internals.

mwigdahl 2 years ago

The best approach to circumventing the nondisclosure agreement is for the affected employees to get together, write out everything they want to say about OpenAI, train an LLM on that text, and then release it.

Based on these companies' arguments that copyrighted material is not actually reproduced by these models, and that any seemingly-infringing use is the responsibility of the user of the model rather than those who produced it, anyone could freely generate an infinite number of high-truthiness OpenAI anecdotes, freshly laundered by the inference engine, that couldn't be used against the original authors without OpenAI invalidating their own legal stance with respect to their own models.
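
If anyone were to actually try this, the mechanics would be mundane. A minimal sketch, assuming the Hugging Face transformers/datasets stack; "gpt2" and "anecdotes.txt" are hypothetical stand-ins, not anything OpenAI-specific:

    # Sketch: fine-tune a small open causal LM on a pooled file of
    # co-written anecdotes, then release the weights.
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments,
                              DataCollatorForLanguageModeling)
    from datasets import load_dataset

    base = "gpt2"  # any small open base model would do
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(base)

    # "anecdotes.txt": the pooled statements, one passage per line
    data = load_dataset("text", data_files={"train": "anecdotes.txt"})
    data = data.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=data["train"],
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()
    model.save_pretrained("anecdote-model")  # then "release it"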

  • TeMPOraL 2 years ago

    Clever, but no.

    The argument that LLMs aren't copyright laundromats hinges on the scale and non-specificity of training. There's a difference between "LLM reproduced this piece of copyrighted work because it memorized it from being fed literally half the internet" vs. "LLM was intentionally trained to specifically reproduce variants of this particular work". Whatever one's stance on the former case, the latter would be plainly infringing copyright, and admitting to it.

    In other words: GPT-4 gets to get away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.

    • bluefirebrand 2 years ago

      Seems absurd that the scale being massive somehow makes it better

      You would think having a massive scale just means it has infringed even more copyrights, and therefore should be in even more hot water

      • NewJazz 2 years ago

        My US history teacher taught me something important. He said that if you are going to steal and don't want to get in trouble, steal a whole lot.

        • PontifexMinimus 2 years ago

          Copying one person is plagiarism. Copying lots of people is research.

          • comfysocks 2 years ago

            True, but if you research lots of sources and still emit significant blocks of verbatim text without attribution, it’s still plagiarism. At least that’s how human authors are judged.

            • TeMPOraL 2 years ago

              Plagiarism is not illegal, it is merely frowned on, and only in certain fields at that.

              • bayindirh 2 years ago

                This is a reductionist take. Maybe it's not illegal per se where you live, but it always has ramifications, and those ramifications affect your future a whole lot.

        • psychoslave 2 years ago

          Scale might be a factor, but it's not the only one. Your neighbor might not care if you steal a blade of grass from their lawn, and might feel powerless if you're the bloody dictator of a country that wastes tremendous amounts of resources on socially useless whims, thanks to overwhelming taxes.

          But most people don't want to live in permanent mental distress due to shame of past action or fear of rebellion, I guess.

        • throwaway2037 2 years ago

          Very interesting post! Can you share more about your teacher's reasoning?

          • SuchAnonMuchWow 2 years ago

            It likely comes from a saying similar to this one: "Kill a few, you are a murderer. Kill millions, you are a conqueror."

            More generally, we tend to view the number of casualties in a war as one large number, and not as the sum of all the individual tragedies it represents, which we do perceive when fewer people die.

      • kmeisthax 2 years ago

        So, the law has this concept of 'de minimis' infringement, where if you take a very small amount - like, way smaller than even a fair use - the courts don't care. If you're taking a handful of word probabilities from every book ever written, then the portion taken from each work is very, very low, so courts aren't likely to care.

        If you're only training on a handful of works, then you're taking more from them, meaning it's not de minimis.

        For the record, I got this legal theory from Cory Doctorow[0], but I'm skeptical. It's very plausible, but at the same time, we also thought sampling in music was de minimis until the Second Circuit said otherwise. Copyright law is extremely malleable in the presence of moneyed interests, sometimes without Congressional intervention even!

        [0] who is NOT pro-AI, he just thinks labor law is a better bulwark against it than copyright

        • KoolKat23 2 years ago

          You don't even need to go this far.

          The word probabilities are transformative use, a form of fair use, and aren't an issue.

          The specific output at each point in time is what would be judged to be fair use or copyright infringing.

          I'd argue the user would be responsible for ensuring they're not using the output in a copyright-infringing manner, i.e. for profit, as they've fed certain inputs into the model which led to the output. In the same way, you can't sue Microsoft for someone typing up copyrighted works into Microsoft Word and then distributing them for profit.

          De minimis is still helpful here; not all infringements are noteworthy.

          • surfingdino 2 years ago

            MS Word does not actively collect and process texts from all available sources, and it does not offer them in recombined form. MS Word is passive, whereas the whole point of an LLM is to produce output using a model trained on ingested data. It is actively processing vast amounts of text with intent to make it available for others to use, and the T&Cs state that the user owns the copyright to outputs based on the works of other copyright owners. LLMs give the user a CCL (Collateralised Copyright Liability, a bit like a CDO) without a way of tracing the sources used to train the model.

            • KoolKat23 2 years ago

              Legally, copyright is only concerned with the specific end work - a unique (or not so unique) standalone object that is being scrutinized, if this analogy helps.

              The process involved in obtaining that end work is completely irrelevant to any copyright case. A claim can be made against the model's weights (not possible, as it's fair use), or against the specific one-off output end work (less clear), but it can't be looked at as a whole.

              • dgoldstein0 2 years ago

                I don't think that's accurate. The US Copyright Office last year issued guidance that basically said anything generated with AI can't be copyrighted, as human authorship/creation is required for copyright. Works can incorporate AI-generated content, but then those parts aren't covered by copyright.

                https://www.federalregister.gov/documents/2023/03/16/2023-05...

                So I think the law, at least as currently interpreted, does care about the process.

                Though maybe you meant as to whether a new work infringes existing copyright? As this guidance is clearly about new copyright.

                • KoolKat23 2 years ago

                  These are two sides of the same coin, and what I'm saying still stands. This is talking about who you attribute authorship to when copyrighting a specific work. Basically, on the application form, the author must be a human. The reason it's worth them clarifying is that they've received applications that attributed authorship to AIs, and legal persons that aren't human do exist (such as companies); they're just making it clear it has to be a human.

                  Who created the work? The user who instructed the AI (it's a tool); you can't attribute it to the AI. It would be the equivalent of Photoshop being credited as co-author of your work.

                • arrowsmith 2 years ago

                  Couldn't you just generate it with AI then say you wrote it? How could anyone prove you wrong?

            • throwaway2037 2 years ago

              First, I agree with nearly everything that you wrote. Very thoughtful post! However, I have some issues with the last sentence.

                  > Collateralised Copyright Liability
              
              Is this a real legal / finance term or did you make it up?

              Also, I do not follow your leap to compare LLMs to CDOs (collateralised debt obligations). And do you specifically mean CDOs, or any kind of mortgage / commercial loan structured finance deal?

              • surfingdino 2 years ago

                My analogy is based on the fact that nobody could see what was inside CDOs, nor did they want to see; all they wanted to do was pass them on to the next sucker. It was all fun until it all blew up. LLM operators behave in the same way with copyrighted material. For context, read https://nymag.com/news/business/55687/

                • throwaway2037 2 years ago

                      > nobody could see what was inside CDOs
                  
                  Absolutely not true. Where did you get that idea? When pricing the bonds from a CDO you get to see the initial collateral. As a bond owner, you receive monthly updates about any portfolio updates. Weirdly, CDOs frequently have more collateral transparency compared to commercial or residential mortgage deals.

          • rcbdev 2 years ago

            OpenAI is outputting the partially copyright-infringing works of their LLM for profit. How does that square?

            • KoolKat23 2 years ago

              You, the user, are inputting variables into their probability algorithm that result in the copyrighted work. It's just a tool.

              • maeil 2 years ago

                Let's say a torrent website asks users, through an LLM interface, what kind of copyrighted content they want to download, then offers them links based on that, and makes money off of it.

                The user is "inputting variables into their probability algorithm that result in the copyrighted work".

                • KoolKat23 2 years ago

                  Theoretically, a torrent website that does not distribute the copyrighted files itself in any way should be legal, unless there's a specific law against this (I'm unaware of any, but I may be wrong).

                  They tend to try to argue conspiracy to commit copyright infringement; it's a tenuous case to make unless they can prove that was actually the intention. I think in most cases it's ISP/hosting terms and conditions and legal costs that lead to their demise.

                  Your example of the model asking specifically "what copyrighted content would you like to download", kinda implies conspiracy to commit copyright infringement would be a valid charge.

              • DaSHacka 2 years ago

                How is it any different from training a model on content protected under an NDA and allowing users access via a web portal?

                What lets OpenAI get away with it, but not our hypothetical Mr. Smartass doing the same thing to get around an NDA?

                • KoolKat23 2 years ago

                  Well if OpenAI signed an NDA beforehand to not disclose certain training data it used, and then users actually do access this data, then yes it would be problematic for OpenAI, under the terms of their signed NDA.

              • rcbdev 2 years ago

                Yes, a tool that they charge me money to use.

                • KoolKat23 2 years ago

                  Just like any other tool that can be used to plagiarize: Photoshop, Word, etc.

            • throwaway2037 2 years ago

              You raise an interesting point. If more professional lawyers agreed with you, then why have we not seen a lawsuit from publishers against OpenAI?

          • kibibu 2 years ago

            Is converting an audio signal into the frequency domain, pruning all inaudible frequencies, and then Huffman encoding it transformative?

            • KoolKat23 2 years ago

              Well, if the end result is something completely different, such as an algorithm for determining which music is popular or which song is playing, then yes, it's transformative.

              If it's merely a compressed version of a song intended to be used in the same way as the original copyrighted work, that would be copyright infringement.

        • wtallis 2 years ago

          If your training process ingests the entire text of the book, and trains with a large context size, you're getting more than just "a handful of word probabilities" from that book.

          • ben_w 2 years ago

            If you've trained a 16-bit ten billion parameter model on ten trillion tokens, then the mean training token changes 2/125 of a bit, and a 60k word novel (~75k tokens) contributes 1200 bits.

            It's up to you if that counts as "a handful" or not.
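
            As a back-of-envelope check of those figures (the parameter count, precision, and token count are the assumptions stated above):

                params, bits_per_param, tokens = 10e9, 16, 10e12
                bits_per_token = params * bits_per_param / tokens  # 0.016 = 2/125 bit
                novel_tokens = 75_000  # ~60k-word novel
                print(novel_tokens * bits_per_token)  # 1200.0 bits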

            • hansworst 2 years ago

              I think it’s questionable whether you can actually use this bit count to represent the amount of information from the book. Those 1200 bits represent the way in which this particular book is different from everything else the model has ingested. Similarly, if you read an entire book yourself, your brain will just store the salient bits, not the entire text, unless you have a photographic memory.

              If we take math or computer science for example: some very important algorithms can be compressed to a few bits of information if you (or a model) have a thorough understanding of the surrounding theory to go with it. Would it not amount to IP infringement if a model regurgitates the relevant information from a patent application, even if it is represented by under a kilobyte of information?

              • ben_w 2 years ago

                I agree with what I think you're saying, so I'm not sure I've understood you.

                I think this is all still compatible with saying that ingesting an entire book is still:

                > If you're taking a handful of word probabilities from every book ever written, then the portion taken from each work is very, very low

                (Though I wouldn't want to make a bet either way on "so courts aren't likely to care" that follows on from that quote: my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation).

            • snovv_crash 2 years ago

              If I invent an amazing lossless compression algorithm such that adding an entire 60k word novel to my blob only increases its size by 1200 bits, does that mean I'm not infringing copyright if I release that model?

              • Sharlin 2 years ago

                How is that relevant? If some LLM were able to regurgitate a 60k word novel verbatim on demand, sure, the copyright situation would be different. But last I checked they can’t, not 60k, 6k, or even 600 words. Perhaps they can do 60 words of some well-known passages from the Bible or other similar ubiquitous copyright-free works.

                • snovv_crash 2 years ago

                  So the fact that it's a lossy compression algorithm makes it ok?

                  • ben_w 2 years ago

                    "It's lossy" is in isolation much too vague to say if it's OK or not.

                    A compression algorithm which loses 1 bit of real data is obviously not going to protect you from copyright infringement claims; something that reduces all inputs to a single bit is obviously fine.

                    So, for example, what the NYT is suing over is that it (or so it is claimed) allows the model to regenerate entire articles, which is not OK.

                    But to claim that it is a copyright infringement to "compress" a Harry Potter novel to 1200 bits, is to say that this:

                    > Harry Potter discovers he is a wizard and attends Hogwarts, where he battles dark forces, including the evil Voldemort, to save the wizarding world.

                    … which is just under 1200 bits, is an unlawful thing to post (and for the purpose of the hypothetical, imagine that quotation in the form of a zero-context tweet rather than the actual fact of this being a case of fair-use because of its appearance in a discussion about copyright infringement of novels).

                    I think anyone who suggests suing over this to a lawyer would discover that lawyers can in fact laugh.

                    Now, there's also the question of if it's legal or not to train a model on all of the Harry Potter fan wikis, which almost certainly have a huge overlap with the contents of the novels and thus strengthens these same probabilities; some people accuse OpenAI et al of "copyright laundering", and I think ingesting derivative works such as fan sites would be a better description of "copyright laundering" than the specific things they're formally accused of in the lawsuits.

            • throwaway2037 2 years ago

              To be fair, OP raises an important question that I hope smart legal minds are pondering. In my view, they aren't looking for a "programmer answers a legal question" response. The right court might well agree with their premise. What the damages or restrictions might be, I cannot speculate. Any IP lawyers here who want to share some thoughts?

              • ben_w 2 years ago

                Yup, that's fair.

                As my not-legally-trained interpretation of the rules leads to me being confused about how traditional search engines aren't a copyright violation, I don't trust my own beliefs about the law.

            • andrepd 2 years ago

              xz can compress the text of Harry Potter by a factor of 30:1. Does that mean I can also distribute compressed copies of copyrighted works and that's okay?
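
              (A quick way to sanity-check a ratio like that yourself: Python's lzma module implements the xz format; "book.txt" here is a hypothetical input file.)

                  import lzma

                  raw = open("book.txt", "rb").read()
                  packed = lzma.compress(raw)  # xz container, LZMA2 filter
                  print(f"{len(raw) / len(packed):.1f}:1")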

              • ben_w 2 years ago

                Can you get that book out of an LLM?

                Because that's the distinction being argued here: it's "a handful"[0] of probabilities, not the complete work.

                [0] I'm not sold on the phrasing "a handful", but I don't care enough to argue terminology; the term "handful" feels like it's being used in a sorites paradox kind of way: https://en.wikipedia.org/wiki/Sorites_paradox

              • Sharlin 2 years ago

                Incredibly poor analogy. If an LLM were able to regurgitate Harry Potter on demand like xz can, the copyright situation would be much more black and white. But they can’t, and it’s not even close.

              • realusername 2 years ago

                You can't get Harry Potter out of the LLM, that's the difference

        • Gravityloss 2 years ago

          I think with some AI you could reproduce artworks of obscure indie artists who are working right now.

          If you were a director at a game company and needed art in that style, it would be cheaper to have the AI do it instead of buying from the artist.

          I think this is currently an open question.

          • dgoldstein0 2 years ago

            I recently read an article, which I annoyingly can't find again, about an art director at a company that decided to hire some prompters. They got some art, told them to completely change it, got other art, told them to make smaller changes... and then got nothing useful, as the prompters couldn't tell the AI "like that, but make this change". AI art may get there in a few years, or maybe a decade or two, but it's not there yet. (End of that article: they fired the prompters after a few days.)

            An AI-enhanced Photoshop, however, could do wonders, as the base capabilities seem to be mostly there. I haven't used any of the newer AI stuff myself, but https://www.shruggingface.com/blog/how-i-used-stable-diffusi... makes it pretty clear the building blocks are largely there. So my guess is the main disconnect is in making the machines understand natural language instructions for how to change the art.

        • bryanrasmussen 2 years ago

          >we also thought sampling in music was de minimis

          I would think if I can recognize exactly what song it comes from - not de minimis.

          • throwaway2037 2 years ago

            When I was younger, I was told that the Beastie Boys album Paul's Boutique was the straw that broke the camel's back! I have no idea if this is true, but that album has a batshit crazy number of recognizable samples. I doubt very much that the Beastie Boys paid anything for the rights to sample.

      • tempodox 2 years ago

        Almost reminds one of real life: The big thieves get away and have a fan base while the small ones get prosecuted as criminals.

      • TeMPOraL 2 years ago

        You may or may not agree with it, but that's the only thing that makes it different - scale and non-specificity. Same thing that worked for search engines, for example.

        My point isn't to argue merits of that case, it's just to point out that OP's joke is like a stereotypical output of an LLM: seems to make sense, but really doesn't.

      • omeid2 2 years ago

        It may not make a lot of sense, but it follows the "fair use" doctrine, which is generally based on the following four factors:

        1) the purpose and character of the use;

        2) the nature of the copyrighted material;

        3) the *amount* and *substantiality* of the portion taken; and

        4) the effect of the use upon the *potential market*.

        So in that regard, if you're training a personal-assistant GPT and use some software code to teach your model logic, that is easy to defend as fair use.

        But the extent of use matters: if you're training an AI for the sole purpose of regurgitating specific copyrighted material, that is infringement (if the material is copyrighted). In this case, though, it is not a copyright issue; it is contracts and NDAs.

      • blksv 2 years ago

        It is the same scale argument that allows you to publish a photo of a procession without written consent from every participant.

    • throwaway2037 2 years ago

          > LLMs not being copyright laundromats
      
      This is a brilliant phrase. You might as well put it into an Emacs paste macro now; it won't be the last time you need it. And the OP is classic HN folly, where a programmer thinks laws and courts can be hacked with "this one weird trick".

      • calvinmorrison 2 years ago

        But they can: just look at AirBnB, Uber, etc.

        • abofh 2 years ago

          You mean unregulated hotels and on-demand taxis?

          Uber is no longer subsidized (or even cheap) in most places, it's just an app for summoning taxis and overpriced snacks. AirBnB is underregulated housing for nomads at this point.

          Your examples sorta prove the point - they didn't succeed in what they aimed at doing, so they pivoted until the law permitted it.

        • throwaway2037 2 years ago

          No, lots of jurisdictions outside the US fought back against those shady practices.

    • makeitdouble 2 years ago

      My takeaway is that we should talk about our experiences at companies at a large enough scale that it becomes non-specific in principle, and not targeted at a single company.

      Basically, we need our open-source version of Glassdoor as an LLM?

      • TeMPOraL 2 years ago

        This exists, it's called /r/antiwork :).

        OP wants to achieve the effect of a specific accusation using only non-specific means; that's not easy to pull off.

    • adra 2 years ago

      Which has been established in court where?

      • TeMPOraL 2 years ago

        And it matters how? I didn't say the argument is correct or approved by a court, or that I even support it. I'm saying what the argument OP referenced is about, and how it differs from their proposal.

      • sundalia 2 years ago

        +1, this is just the commenter saying what they want without an actual court case

        • cj 2 years ago

          The justice system moves an order of magnitude slower than technology.

          It’s the Wild West. The lack of a court case has no bearing on whether or not what they’re doing is right or wrong.

          • 6510 2 years ago

            Sounds like the standard disrupt formula should apply. Can't we stuff the court into an app? I kinda dislike the idea of getting a different sentence for anything related to appearance or presentation.

    • romwell 2 years ago

      Cool, just feed ChatGPT the same half of the Internet, plus OpenAI founders' anecdotes about the company.

      Ta-da.

      • TeMPOraL 2 years ago

        And be rightfully sacked for maliciously burning millions of dollars on a retrain to purposefully poison the model?

        Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.

        • romwell 2 years ago

          >And be rightfully sacked for maliciously burning millions of dollars on a retrain to purposefully poison the model?

          Does it really take millions of dollars of compute to add additional training data to an existing model?

          Plus, we're talking about employees that are leaving / left anyway.

          >Not to mention: LLMs aren't oracles. Whatever they say will be dismissed as hallucinations if it isn't corroborated by other sources.

          Excellent. That means plausible deniability.

          Surely all those horror stories about unethical behavior are just hallucinations, no matter how specific they are.

          Absolutely no reason for anyone to take them seriously. Which is why the press will not hesitate to run with that, with appropriate disclaimers, of course.

          Seriously, you seem to think that in a world where numbers about death toll in Gaza are taken verbatim from Hamas without being corroborated by other sources, an AI model output will not pass the test of public scrutiny?

          Very optimistic of you.

    • tadfisher 2 years ago

      To definitively prove this either way, they'll have to make their source code and model available (maybe under subpoena and/or gag order), so don't expect this issue to be actually tested in court (so long as the defendants have enough VC money).

    • aprilthird2021 2 years ago

      > In other words: GPT-4 gets to get away with occasionally spitting out something real verbatim. Llama2-7b-finetune-NYTArticles does not.

      Based on what? This isn't any legal argument that will hold water in any court I'm aware of

    • dorkwood 2 years ago

      How many sources do you need to steal from for it to no longer be considered stealing? Two? Three? A hundred?

      • TeMPOraL 2 years ago

        Copyright infringement is not stealing.

        • psychoslave 2 years ago

          True.

          Making people believe that anything but their own body and mind can be considered part of their own property is stealing their lucidity.

    • 8note 2 years ago

      The scale of two people should at least be large enough to make it ambiguous who spilled the beans.

    • anigbrowl 2 years ago

      It's not a copyright violation if you voluntarily provide the training material...

      • XorNot 2 years ago

        I don't know why copyright is getting involved here. The clause is about criticizing the company.

        Releasing an LLM trained on company criticisms, by people specifically instructed not to do so is transparently violating the agreement.

        Because you're intentionally publishing criticism of the company.

  • andyjohnson0 2 years ago

    Clever, but the law is not a machine or an algorithm. Intent matters.

    Training an LLM with the intent of contravening an NDA is just plain <intent to contravene an NDA>. Everyone would still get sued anyway.

    • bazoom42 2 years ago

      It is a classic geek fallacy to think you can hack the law with logic tricks.

    • jeffreygoesto 2 years ago

      But then training a commercial model is done with the intent of not paying the original authors; how is that different?

      • repeekad 2 years ago

        > done with the intent to not pay the original authors

        no one building this software wants to “steal from creators” and the legal precedent for using copyrighted works for the purpose of training is clear with the NYT case against OpenAI

        It’s why things like the recent deal with Reddit to train on their data (which Reddit owns and users give up when using the platform) are becoming so important, same with Twitter/X

        • kaoD 2 years ago

          > no one building this software wants to “steal from creators”

          > It’s why things like the recent deal[s ...] are becoming so important

          Sorry but I don't follow. Is it one or the other?

          If they didn't want to steal from the original authors, why do they not-steal Reddit now? What happens with the smaller creators that are not Reddit? When is OpenAI meeting with me to discuss compensation?

          To me your post felt something like "I'm not robbing you, Small State Without Defense that I just invaded, I just want to have your petroleum, but I'm paying Big State for theirs cause they can kick my ass".

          Aren't the recent deals actually implying that everything so far has actually been done with the intent of not compensating their source data creators? If that was not the case, they wouldn't need any deals now, they'd just continue happily doing whatever they've been doing which is oh so clearly lawful.

          What did I miss?

          • repeekad 2 years ago

            The law is slow and is always playing catch-up in terms of prosecution. It's not clear today because this kind of copyright question has never been an issue before. Usually it's just outright theft of protected content; no one ever imagined "training" to be a protected use case. Humans "train" on copyrighted works all the time, ideally copyrighted works they purchased for said purpose… The same will start to apply to AI: you have to have rights to the data for that purpose, hence these deals getting made. In the meantime it's ask for forgiveness, not permission, and companies like Google (less so OpenAI) are ready to go with data governance that lets them remove copyright-requested data and keep the rest of the model working fine.

            Let's also be clear that making deals with Reddit isn't stealing from creators; it's not a platform where you own what you type in. Same on here: this is all public domain with no assumed rights to the text. If you write a book and OpenAI trains on it and starts telling it to kids at bedtime, you 100% will have a legal claim in the future, but the companies already have protections in place to prevent exactly that. For example, if you own your website you can request the data not be crawled, but ultimately if your text is publicly available anyone is allowed to read it, and whether anyone is allowed to train AI on it is an open question that companies are trying to get ahead of.

            • kaoD 2 years ago

              That seems even worse: they had intent to steal and now they're trying to make sure it is properly legislated so nobody else can do it, thus reducing competition.

              GPT can't get retroactively untrained on stolen data.

              • repeekad 2 years ago

                Google actually can “untrain”, afaik; my limited understanding is that they have good controls over their data and its sources, because they know it could be important in the future. GPT, I'm not sure.

                I'm not sure what you mean by “steal”, because it's a relative term now: me reading your book isn't stealing if I paid for it and it inspires me to write my own novel about a totally new story. And if you posted your book online, as of right now the legal precedent is that you didn't make any claims to it (anyone could read it for free), so it's fair game to train on, just like the text I'm writing now also has no protections.

                Nearly all Reddit history up to a certain date is available for download online; only once they changed their policies did they start having tighter controls over how their data could be used.

      • mpweiher 2 years ago

        Chutzpah. And that the companies doing it are multi-billion dollar companies who can afford the finest legal representation money can buy.

        Whether the brazenness with which they are doing this will work out for them is currently playing out in the courts.

      • kdnvk 2 years ago

        It’s not done with the intent to infringe copyright.

        • binkethy 2 years ago

          It would appear that it explicitly IS done with this intent. We are told that an LLM is a living being that merely learns and then creates, yet we are aware that its outputs regurgitate combinations of its inputs.

  • judge2020 2 years ago

    NDAs don’t touch the copyright of your speech or the written works you produce after leaving; they just make it a breach of contract to distribute those words.

    • elicksaur 2 years ago

      Following the legal defense of these companies, the employees wouldn’t be distributing any words. They’re distributing a model.

      • JumpCrisscross 2 years ago

        They’re disseminating the information. Form isn’t as important as it is for copyright.

      • cqqxo4zV46cp 2 years ago

        Please just stop. It’s highly unlikely that any relevant part of any reasonably structured NDA has any material relevance to copyright. Why do developers think that they can just intuit this stuff? This is one step away from being a more trendy “stick the constitution to the back of my car in lieu of a license plate” lunacy.

        • elicksaur 2 years ago

          Actually, I’m a licensed attorney having some fun exploring tongue-in-cheek legal arguments on the internet.

          But, I could also be a dog.

    • otabdeveloper4 2 years ago

      Technically, no words are being distributed here. (At least according to OpenAI lawyers.)

    • romwell 2 years ago

      >they just make it breach of contract to distribute those words.

      See, they aren't distributing the words, and good luck proving that any specific words went into training the model.

  • rlt 2 years ago

    This would be hilarious and genius. Touché.

  • KoolKat23 2 years ago

    Lol, this would be a great performative piece, although I'm not so sure it'd stand up to scrutiny. OpenAI could probably take them to court on the grounds of disclosure of trade secrets or something like that, and force them to reveal the training data, thus potentially revealing its source.

  • otterley 2 years ago

    IAAL (but not your lawyer and this is not legal advice).

    That’s not how it works. It doesn’t matter if you write the words yourself or have an agent write them for you. In either case, it’s the communication of the covered information that is proscribed by these kinds of agreements.

  • renewiltord 2 years ago

    To be honest, you can just say “I don’t have anything to add on that subject” and people will get the impression. No one ever says that about companies they like so you know when people shut down that something was up.

    “What was the company culture like?” “Etc. platitude so on and so forth”

    “And I heard the CEO was a total dickbag. Was that your experience working with him?” “I don’t have anything to add on that subject”

    Of course, going back and forth on that won't really work, but you can't be expected not to say the nice things to different people, and then someone could build up a story based on that.

  • NoMoreNicksLeft 2 years ago

    NDAs don't rely on copyright to protect the party who drafted them from disclosure. There might even be an argument to be made that training the LLM on it was disclosure, regardless of whether you release the LLM publicly or not. We all work in tech, right? Why do you people get intellectual property so wrong, every single time?

  • visarga 2 years ago

    No need for an LLM; an anonymous letter does the same thing

    • throwaway2037 2 years ago

      At first blush, this sounds like a good idea. Thinking deeper, though, the company is so small that it would be easy to identify the author.

  • cqqxo4zV46cp 2 years ago

    I’m going to break rank from everyone else and explicitly say “not clever”. Developers who think they know how the legal system works are a dime a dozen. It’s both easy and useless to take some acquired-in-passing, largely incorrect, surface-level understanding of a legal mechanic and go “pwned with facts and logic!” in whichever way benefits you.

  • bboygravity 2 years ago

    Genius. I'm praying for this to happen.

  • b112 2 years ago

    Copyright != an NDA. Copyright is not an agreement between two entities, but a US federal law, with international obligations both ratified and not.

    Copyright has fair use clauses, endless court decisions limiting its use, carve-outs for libraries, additional junk like the DMCA, and more slapped on top. It's a patchwork of dozens of treaties and laws, spanning hundreds of years.

    For example, you can read a book to a room full of kids, you can use copyrighted materials in comedic skits, you can quote snippets; the list goes on. And again, this is all legislated.

    The point? It's complex, and whether a specific use of a copyrighted work is infringing or not can be debatable without the intent immediately being malign.

    Meanwhile, an NDA covers far, far more than copyright. It may cover discussion and disclosure of everything or anything, including even client lists, trade secrets, work processes, and more. It is signed, and agreed to by both parties involved. Equating "copyright law" to "an NDA" is a non-starter. There's literally zero legal parallel or comparison here.

    And as others have mentioned, the intent of the act would be malicious on top of all of this.

    I know a lot of people dislike the whole data snag by OpenAI, and have moral or ethical objections to closed models, but thinking anyone would care about this argument if you breach an NDA is a bad idea. No judge would even remotely accept or listen to such chicanery.

  • Always42 2 years ago

    If I slaved away at OpenAI for a year to get some equity, I don't think I'd want to be the one to try this strategy.

  • jahewson 2 years ago

    Ha ha, but no. For starters, copyright falls under federal law and contracts under state law, so it’s not even possible to make this claim in the relevant court.

  • p0w3n3d 2 years ago

    That's the evilest thing I can imagine - fighting them with their own weapon.

olliej 2 years ago

As I say over and over again: equity compensation from a non-publicly-traded company should not be accepted as a surrogate for below-market cash compensation. If a startup wants to compensate employees via equity, then those employees should have first right to convert equity to cash in funding rounds or a sale, and their shares must be the same class as any other investor's, because the idea that an "early employee" is not an investor making a much more significant investment than any VC is BS.

I feel that this particular case is just another reminder of that, and it would now make me require a pre-emptive "no equity clawbacks" clause in any contract.

  • blackeyeblitzar 2 years ago

    Totally agree. For all this to work there needs to also be transparency. Anyone receiving equity should have access to the cap table and terms covering all equity given to investors. Without this, they can be taken advantage of in so many ways.

  • DesiLurker 2 years ago

    I always say that the biggest swindle in the world is that, in the great 'labor vs capital' fight, capital has convinced labor that its interests are secondary to capital's. This is so much truer in the modern fiat, fractional-reserve banking world, where any development is rate-limited by either energy or people.

    • DesiLurker 2 years ago

      why downvote me instead of actually refuting my point?

      • olliej 2 years ago

        HN is filled with startup bros (who want to screw the actual employees), VC adjacent brow (who want to screw the startup bros), and people who signed up for massively discounted compensation in the form of “equity” that cannot be converted into cash and can be stolen and/or devalued by the people running the business, and so acknowledging this means acknowledging the folly.

        Working for a startup is inherently risky, but it’s not gambling because in gambling you can estimate the odds, and unlike gambling the odds cannot be changed after you win. Any employment contract that does not allow equity cash out at the price from the last funding round, or allows take backs, is worse than gambling, and founders that believe contracts that don’t provide those guarantees are reasonable are likely malicious and intending on doing that in future.

        I do not understand a mentality that says "as a founder I should be able to get money out of the business, but the people who work for me, who are also taking significant risk and below-market compensation, should not be permitted to do that."

        • DesiLurker 2 years ago

          I couldn't have said that better. I have seen this exact thing happen multiple times in startups, where you only really have a chance to make money if you are a founder or join really early on and hope there is a good exit. As a regular or even senior employee, you are not only helpless but also have to mentally deal with the sunk cost fallacy. The problem is that you (the non-founder) never get rewarded for the significant risks you took working there. In the investing world this is labelled uncompensated risk, which employees mostly take.

topspin 2 years ago

"making former employees sign extremely restrictive NDAs doesn’t exactly follow."

Once again, we see the difference between the public narrative and the actions in a legal context.

ecjhdnc2025 2 years ago

Totally normal, nothing to see here.

Keep building your disruptive, game-changing, YC-applicant startup on the APIs of this sociopathic corporation whose products are destined to destroy all trust humans have in other humans so that everyone can be replaced by chatbots.

It's all fine. Everything's fine.

  • jay-barronville 2 years ago

    You don’t think the claim that “everyone can be replaced by chatbots” is a bit outrageous?

    Do you really believe this or is it just hyperbole?

    • ecjhdnc2025 2 years ago

      Almost every part of the story that has made OpenAI a dystopian unicorn is hyperbole. And now this: a company whose employees can't tell the truth or they lose access to remuneration. Everyone's Allen Weisselberg.

      What's one more hyperbole?

      Edit to add, provocatively but not sarcastically: next time you hear some AI-proponent-who-used-to-be-a-crypto-proponent roll out the "but aren't we all just LLMs, in essence?" justification for their belief that ChatGPT may have broad understanding, ask yourself: are they not just self-soothing over their part in mass job losses with a nice faux-scientific-inevitability bedtime story?

jgalt212 2 years ago

I really don't get how lawyers can knowingly put unenforceable crap, for lack of a better word, in contracts. It's like, why did you even go to law school?

OldMatey 2 years ago

Well that's not worrying. /s

I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

Even if they had a massive, successful, and public safety team, and got alignment right (which I am highly doubtful is possible), it is still going to happen as massive portions of white-collar workers lose their jobs.

Mass protests are coming, and he will be an obvious focal point for their ire.

  • throwup238 2 years ago

    > I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

    He's already perceived by some as a bit of a scoundrel, if not yet a villain, because of Worldcoin. I bet he'll hit supervillain status right around the time that ChatGPT BattleBots storm Europe.

  • shawn_w 2 years ago

    When he was fired there was a short window where the prevailing reaction here was "He must have done something /really/ bad." Then opinion changed to "Sam walks on water and the board are the bad guys". Maybe that line of thinking was a mistake.

  • rvz 2 years ago

    > I am curious how long it will take for Sam to go from being perceived as a hero to a villain and then on to supervillain.

    He probably already knows that, but doesn't care as long as OpenAI has captured the world's attention with ChatGPT, which is generating them billions and serving their keen interest in destroying Google.

    > Mass protests are coming and he will be an obvious focus point for their ire.

    This is going to age well.

    Given that no one knows the definition of AGI, AGI can mean anything, even if it means 'steam-rolling' any startup, job, etc. in OpenAI's path.

  • wavesounds 2 years ago

    Their head of alignment just resigned: https://news.ycombinator.com/item?id=40391299

  • maxerickson 2 years ago

    If they actually invent a disruptive god, society should just take it away.

    No need to fret over the harm to future innovation when innovation is an industrial product.

31337Logic 2 years ago

This is how you know you're dealing with an evil tyrant.

  • 0xDEAFBEAD 2 years ago

    Saw this comment suddenly move way down in the comment rankings. Somehow I only notice this happening on OpenAI threads:

    https://news.ycombinator.com/item?id=38342850

    My guess would be that YC founders like sama have some sort of special power to slap down comments that they feel are violating HN discussion guidelines.

  • downrightmike 2 years ago

    And he claims to have made his fortune by just helping people and not expecting anything in return. Well, the reality is that this was a lie.

    • api 2 years ago

      Anyone who constantly toots their own horn about how altruistic and pure they are should have cadaver dogs led through their house.

throwaway5959 2 years ago

Definitely the stable geniuses I want building AGI.

__lbracket__ 2 years ago

They don't want to interrupt the good OpenAI is doing in the world, don't ya know.

danielmarkbruce 2 years ago

This seems like a nonsense article.

As for 'invalid because no consideration': there is practically zero probability that OpenAI's lawyers are dumb enough not to give any consideration. There is a very large probability that this reporter misunderstood the contract. OpenAI has likely just given some non-vested equity, which in some cases is worth a lot of money. So yeah, some (former) employees are getting paid a lot to shut up. That's the least unique contract ever, and there is nothing morally or legally wrong with it.

jstummbillig 2 years ago

I am confused about the source of the outrage. A situation where nobody is very clear about what the claim is but everyone is very upset makes me suspicious.

Are employees being misled about the contract terms at the time of signing? Because, obviously, the original contract needs to have some clause regarding the equity situation, right? We cannot just make that up at the end. So... are we claiming fraud?

What I suspect is happening is that we are confusing an option to forgo equity for an option to talk openly about OpenAI stuff (an option that does not even have to exist in the initial agreement, I would assume).

Is this overreach? Is this whole thing necessary? That seems beside the point. Two parties agreed to the terms when signing the contract. I have a hard time thinking of top AI researchers as coerced to take a job at OpenAI, unable to understand a contract, or unable to understand that they should pay someone to explain it to them. So if that's not a free decision, I don't know what is.

Which leads me to this: if we think the whole deal is pretty shady, well, it took two.

  • ghusbands 2 years ago

    If the two parties are equal, sure. If it's a person vs a corporation of significant size, then no, it's not safe to assume that people have free choice. That's also ignoring motivations apart from business ones, like them actually wanting to be at the leading edge of AI research or wanting to work with particular other individuals.

    It's a common mistake on here to assume that for every decision there are equally good other options. Also, the fact that they feel the need to enforce silence so strongly implies at least a little that they have something to hide.

    • hanspeter 2 years ago

      AI researchers and engineers surely have the free choice to sign with an employer other than OpenAI?

    • jstummbillig 2 years ago

      > If it's a person vs a corporation of significant size, then no, it's not safe to assume that people have free choice

      We understand this as a market dynamic, surely? More companies are looking for capable AI people than capable AI people exist (as in: on the entire planet). I don't see any magic trick a "corporation of significant size" can pull to make the "free choice" aspect go away. But, of course, individual people can continue to CHOOSE certain corps, because they actually kind of like the outsized benefits that brings. Complaining about certain trade-offs afterwards is fairly disingenuous.

      > That's also ignoring motivations apart from business ones, like them actually wanting to be at the leading edge of AI research or wanting to work with particular other individuals.

      I don't understand what you are saying. Is the wish to work on leading AI research sensible, but offering the opportunity to work on leading AI research not a value proposition? How does that make sense?
