I resigned from OpenAI

twitter.com

233 points by mmaia 17 days ago · 187 comments · 1 min read

https://xcancel.com/kalinowski007/status/2030320074121478618

kennywinker 17 days ago

Anybody who stays at openai is signing on to build machines that will be used to kill innocent people and control people who think that’s a bad idea.

  • slopinthebag 16 days ago

    Considering the agreement Anthropic has with Palantir, you could say the exact same about employees working at Anthropic.

    Edit: Google, too. Microsoft with its Israel and US Gov ties. Probably most of big tech tbh. How do you recommend we view these employees from an ethical perspective?

    • deaux 15 days ago

      Anthropic has willingly left money on the table by taking a stand. They could just not have done this.

      OpenAI so far has done the opposite, instead seizing the above as an opportunity.

      That is a seriously meaningful difference. Their agreement with Palantir (fwiw OpenAI has been partnering with them for even longer) doesn't erase that.

    • andai 16 days ago

      From what I recall, Microsoft has been deploying GPT in Gaza for years. So even if OpenAI said no, wouldn't the same thing end up happening via Microsoft?

      (I understand that domestic and foreign deployment are separate issues — I'd personally object to both — but I'm not sure Microsoft has a reason to take a principled stand on either of those, and they have been working with intelligence for decades.)

    • kennywinker 16 days ago

      Are they working on building tech that is being used for weapons or mass surveillance? Like yes Microsoft has contracts with israel, but their entire business is not centered around those contracts. If you help build a better ai for openai, it will be used for war and control. If you help build a better version of one of the 10,000 things microsoft makes, that’s not definitely going to be used for war and control.

      Not to get all historical on you, but if you worked for IBM in the 1930s-1940s you may have worked on something that was used to perpetrate a holocaust. Was that ethical? I don’t think so.

      That said, it’s very easy to abstract yourself away from the harm. To tell yourself you’re not the one who builds the landmines, you just maintain the coffee machine at the landmine factory. But that’s just lying to yourself. An honest and deep appraisal of what your work is helping make happen is required to decide if your job is ethical or not.

      • slopinthebag 16 days ago

        > Are they working on building tech that is being used for weapons or mass surveillance?

        Weird how that seems to apply to the other tech companies, but for OpenAI it's just "Anybody who stays at openai"

        Someone at Google working on Gemini CLI is morally in the clear, but someone at OpenAI working on Codex is acting immorally? Seems like a clear double standard.

        • kennywinker 16 days ago

          No i’d actually say those are both deeply morally questionable jobs. Not just because of the weapons and mass surveillance angles either.

          Is one worse than the other? Not clearly. They are both helping build tools that are causing environmental and economic destruction, and they’re both building things likely to be used for violence and control. Idk if gemini has been tapped by any defense departments, but that would be the only subtle distinction i can see (has it happened yet, how hard will the company resist unrestricted use).

          Not sure how you read my comment and came to this whack conclusion.

  • SilverElfin 17 days ago

    Or onto mass surveillance, which is a pathway to social credit score style oppression. See this mass surveillance demo:

    https://claude.ai/public/artifacts/8f42e48f-1b35-450d-8dda-2...

    • valleyer 16 days ago

      Nit: ISPs don't have access to path names / query strings of browsing activity. Those are encrypted by TLS. (They're also not part of "DNS".)
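To make the nit concrete, here's a minimal sketch of which parts of a URL an on-path observer such as an ISP can typically see under HTTPS (the hostname and token below are invented for illustration): the hostname leaks via the DNS lookup and, absent encrypted ClientHello, the TLS SNI field, while the path and query string travel inside the encrypted HTTP request.

```python
from urllib.parse import urlsplit

# Hypothetical URL; the hostname and token value are made-up examples.
url = "https://example.com/private/path?token=secret123"
parts = urlsplit(url)

# Visible to an on-path observer such as an ISP: the hostname,
# exposed by the DNS lookup and the TLS SNI field.
visible_to_isp = parts.hostname

# Carried only inside the TLS-encrypted request: path and query string.
encrypted_by_tls = parts.path + ("?" + parts.query if parts.query else "")

print(visible_to_isp)    # example.com
print(encrypted_by_tls)  # /private/path?token=secret123
```

(Using DNS-over-HTTPS and ECH narrows even the hostname leak, but neither changes the basic split above.)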

  • Towaway69 16 days ago

    > Anybody who stays at openai

    And paying off their mortgage and feeding their families and has a job in unstable times.

    Morals come a distant last in the current state of affairs.

    • sethammons 16 days ago

      No, principles are even more important in unstable times. Anyone can excuse any behavior otherwise. And everyone at OpenAI has alternatives. This isn't choosing between prostitution or drug slinging to pay for baby formula for them. It is "how early can I retire" - and the answer should be later if it crosses boundaries. The ends do not justify the means.

      • Towaway69 16 days ago

        What’s the last principled thing you did? Drive less to save the environment?

        Easy to point fingers, harder to practice what you preach.

        I know, this isn’t about you. But then again, it’s not about this one person who resigned either, nor the employees of openai, nor anyone else.

        I’ve provided an alternative PoV for why folks might not quit their jobs for their principles. Each to their own.

        • deaux 15 days ago

          > What’s the last principled thing you did?

          Leaving a job for one that paid 3x less. And they weren't making automated killing machines (at least at the time, who knows now).

          > Easy to point fingers, harder to practice what you preach.

          Quite easy to practice what you preach, if you indeed have principles.

          99% of people with families and mortgages manage to do so without OpenAI comp packages. It's a meaningless excuse, completely irrelevant.

        • cloverich 16 days ago

          I agree with you on non judgement but would push back - if you'll violate your principles for a cush job, they aren't really principles you have.

          • Loxicon 15 days ago

            Even though I strongly agree with the other person about reasons why people wouldn't leave...

            I agree even more strongly with what you just said: "if you'll violate your principles for a cush job, they aren't really principles you have."

            The reality is, I don't think people really understand what a deeply held principle is. It's often a non-negotiable.

            • Towaway69 15 days ago

              And then sometimes you have to question your principles and perhaps let them go. This can happen, for example, when children grow up and become adults. Their parents _should_ do a lot of letting go.

              Perhaps folks involved with electronic devices are too used to a black & white decision world. Computer says no or computer says yes, there is no maybe. The real world of principles, morals, emotions, humans etc is filled with maybes and that can become hard to navigate for computers.

              • Loxicon 14 days ago

                Agreed. But in that case, it's no longer a principle because, as you said, it's been let go.

                The point above was about prioritizing something above the principle one still keeps.

                I work in marketing, nothing black and white about it.

    • Aurornis 16 days ago

      Someone with OpenAI on their resume (and vested shares) does not have to worry about finding another job, paying the mortgage, or feeding their families.

      This is not a relevant point to this discussion.

    • llamaz 16 days ago

      There are levels to morality, from the abstract (e.g. climate change, energy usage, veganism) to the concrete (murder). Times are unstable, but there are multiple ways to make money. If you are established in your career, you can probably find work in a similar field, but the worst case scenario would be to drive a truck.

      The way you frame it, you make it sound like an engineer at OpenAI has no choice but to work there or end up on the street. But an engineer at OpenAI is not going to end up driving a truck, they're going to remain an engineer.

      • Towaway69 16 days ago

        Unfortunately there are such things as social media, where potential new employers check.

        That makes this step even riskier, as it is open opposition. Most probably they have already signed at Anthropic.

    • kelseyfrog 16 days ago

      Why have them if they don't mean anything?

      • Towaway69 16 days ago

        That’s the result of equating survival with earning money. Western societies have done a good job of ensuring that. As long as morals aren’t equated to either money or survival, they lose their meaning and become nice to have.

        • Aurornis 16 days ago

          > That’s the result of equating survival with earning money. Western societies have done a good job of ensuring that.

          OpenAI engineers with vested shares are not worried about having enough money to survive.

          This is a lame attempt to shoe-horn unrelated political talking points and “Western society bad” into a conversation about highly paid engineers who will have no problem putting food on the table.

          • Towaway69 16 days ago

            I was responding to a question on why have morals if they have no application.

            If you don’t like this example, how about folks going to church on Sundays listening to the Christian morals on not killing each other, while during the week these same folks work at the DoW organising wars around the world.

            Or the politician taking lobbyists money. Or those folks who engage in recreational drug use while fighting a “war on drugs”.

            There are many examples of morals playing second fiddle to the broader world around us.

            • ajam1507 16 days ago

              And in every case there are people like you making excuses for them. Engineers working at OpenAI are not scraping by to provide for their families. They don't get a pass to do unethical things to keep their jobs.

          • reverius42 16 days ago

            I wonder how many people in this discussion have no idea how high OpenAI's salary ranges are.

            • Aurornis 16 days ago

              I think they know, but they see topics like this as a generic place to discuss their ideas about society or politics. So they start making points about something different and forget that it doesn’t have any relevance to the topic.

        • kelseyfrog 16 days ago

          I know that's the result.

          My question is, given that result, why continue to have them if they don't influence one's choices? You're making a case that our current economic system is incompatible with having morals.

          • Towaway69 16 days ago

            Morals are there so that folks go to church with their families on Sunday, have an affair with their secretaries during the week and drink too much with their mates on Friday night because they feel bad about their moral choices.

            • kelseyfrog 16 days ago

              You're saying the purpose of morals is not to inform choice making, but to make people feel bad?

              Why have them then? The conclusion doesn't follow from the premise.

              • Towaway69 16 days ago

                Morals were invented to hold a larger group of humans together. Smaller groups can be held together by everyone knowing each other, larger groups required a more complex system of trust.

                Morals are the glue for nation states. Morals prevent us from driving over others, morals prevent us from being mean to others. Morals make us trust the politicians we vote for because we are told they have the same morals as we do.

                My somewhat cynical picture of morals is only to make a point of how deep morals go in our societies. Folks have conscience and morals are the basis of that conscience - be it good or evil.

                Police and armies enforce these morals in the form of laws and legal constructs. Important to note though that morals are not fully encoded as laws; these two concepts are separate societal adhesives.

                • kelseyfrog 16 days ago

                  You're saying that if we believe we share the same morals, then we're more likely to let things slide for the sake of harmony and less likely to hold violators accountable?

                  • Towaway69 16 days ago

                    No I'm not saying that, that's your interpretation of what I am saying. What I'm saying is what I have written, no deeper meaning.

                    What I said is that we vote for politicians whom we are told share our morals or we assume that they do. I don't make any judgement nor prediction what happens if that happens not to be the case - either before or after the event of voting.

                    • kelseyfrog 16 days ago

                      Ok, so just taking it literally, you're saying that morals have just these three functions:

                      1. Morals prevent us from driving over others.

                      2. Morals prevent us from being mean to others.

                      3. Moral makes us trust the politicians.

                      And nothing more?

                      • Towaway69 16 days ago

                        Morals are an invented adhesive for societal-sized groups of humans. As such, they can mean many things to many people or nothing to nobody.

                        Morals only exist as long as there are people who believe them. Once you wipe out an entire country, their morals disappear.

                        • kelseyfrog 15 days ago

                          Thanks for your response, but uninterpretable perspectives are useless to me. They definitionally cannot be applied. I'm glad it works for you though.

    • tdeck 16 days ago

      I'm sorry but with this justification anything that makes you money can be justified. You can pay the mortgage by robbing a bank too, and that's likely to get fewer people killed.

      • 7bit 16 days ago

        Easy to say from a comfortable position.

        • tdeck 16 days ago

          Do you feel the same way about people joining ICE? They probably need the money a lot more than those at OpenAI.

          • Fnoord 16 days ago

            For ICE as well: best to leave, unless you plan to do subversion from within. I.e. you can be the eyes and ears for the general public. You can be the whistleblower. You can be the leaker. You can apply the brakes when needed. You can add checks and balances. You can be a hero for the general public (on paper; whether you get the credit sooner or later, who knows).

            Somehow I hope such people still work at Twitter/X.com... but I really doubt it. In the US military? Oh, absolutely. Are they noisy? Probably and preferably not. The mere possibility of their existence gives the authoritarians shivers. And they exist, concealed below the surface. And where they do not exist, they may develop.

    • intended 16 days ago

      Morals come FIRST at Open AI.

      Their whole schtick is based on ensuring safety for humanity given the existential risk of a singularity.

      Open AI employees MUST get called out, because entire economies and industries are being reshaped due to their statements.

      They aren't some mom and pop shop, and they aren't some typical tech firm.

    • kennywinker 16 days ago

      Unstable times. What on earth happened recently to make software engineer an unstable job…?

      • actualwitch 16 days ago

        Software engineers were very insistent that politics would not be discussed at work. Now politics came to wreck their lives, anyway!

    • ssss11 16 days ago

      No, you need morals. Especially those fortunate enough to be hired by a leading company like OAI - they would be desired by any tech company.

    • JeremyNT 16 days ago

      Laughable.

      These people are lusting for generational wealth, not scrambling to put bread on the table.

  • Fnoord 16 days ago

    Yes, or subversion from within ¯\_(ツ)_/¯

    • andai 16 days ago

      That's the trend we've been seeing for years yeah?

      Work at the main AI company. Company has severe ethical issues. Be a person who cares about that. Leave. Surprise, issues get worse.

  • rvz 17 days ago

    That's fine. But they shouldn't be lecturing anyone about "principles" or moral superiority while being paid or holding RSUs, since that would make them completely dishonest themselves.

    It just shows that they did poor research on the company before joining (Meta is just as bad), that they are in on the grift (they joined OpenAI only post-ChatGPT), and that this employee does not believe what they are saying.

  • ed_mercer 17 days ago

    I’m worried that China will build said killing machines and that we’ll be unprepared.

    • dbtc 17 days ago

      I'm worried that China will build said killing machines only because they see that we are and feel the need to be prepared.

      • zwarag 16 days ago

        If you compare how many countries China has attacked or invaded with how many the United States has attacked or invaded, it paints a clear picture of whom to fear.

      • bilbo0s 17 days ago

        This.

        Everyone will do this, because everyone will believe that everyone will do this.

        Even worse, there really is no guarantee that the great powers will create the best terminators. Everyone talks about China and the US. (And we should.) At the same time however, we should all keep in mind that nations from India and Indonesia, to North and South Korea will not be simply sitting on their hands while the US and China forge ahead.

        A future where 4 million dollar American or Chinese terminators are easily overwhelmed by thousands and thousands of 5 dollar Indian autonomous devices is not at all outside the realm of future possibilities.

        That's what makes it all so concerning. We can kind of see where it leads in terms of enhanced capability potential for non-state actors, but we can't really see a way to avoid that future.

      • sidcool 17 days ago

        Game theory in action

    • nerfbatplz 17 days ago

      The last time China bombed a foreign country was 1979, 47 years ago. Has the US gone even 47 days in the last 80 years without bombing another country?

    • kennywinker 16 days ago

      Of course! The only way to fight ai powered killing machines and mass surveillance is to make ai powered killing machines and do mass surveillance. It’s so simple, i don’t know why i didn’t think of it!

      But seriously - you are describing the kind of thinking that caused ww1, and the nuclear arms race that almost caused human extinction. It’s a bad idea that goes bad places.

    • insane_dreamer 16 days ago

      I reject that argument. I dislike China (the CCP, not the people), and having lived there 6 years I know it much better than most foreigners. But your argument leads to us becoming just like China (CCP). I'd rather hold to some moral values and humanity and be a weaker country, than discard them to be strong.

    • jatari 17 days ago

      China is currently a more morally virtuous country than the US.

      • insane_dreamer 16 days ago

        I'm more angry than most about what the US has become lately, but I also have a deep knowledge of what China is actually like from 6 years living and working there, and I can tell you that China is still worse. Granted, the US is heading that direction pretty quickly.

      • jimmydoe 17 days ago

        Believe me, China hasn't shown its true face yet, but it will, just wait.

        And while we are waiting, there are a few more wars to be fought.

        • throw310822 17 days ago

          Maybe the true face of China so far is that it hasn't shown its true face. While the true face of the US is what it has shown again and again.

        • platevoltage 16 days ago

          It's gonna take a LOT for them to match the USA's depravity.

      • idiotsecant 16 days ago

        Let's not get crazy here

      • rockskon 17 days ago

        They welded shut the doors on Uyghur Muslims and left a bunch of donated food for them stacked outside their homes in one giant pile that they couldn't get to. It either rotted away or was eaten by animals.

        • fooster 17 days ago

          Jack booted thugs shot a woman in the face for the crime of sitting in her car and the administration called her a terrorist. Nothing happened to the thug.

          Jack booted thugs shot a man in the back for the crime of defending a woman and the administration called him a terrorist. Nothing happened to that thug either.

          • insane_dreamer 16 days ago

            Absolutely. But what people don't realize is this sort of thing happens in China too, it's just never reported or heard about, other than some whispers here and there, because of such tight control over the media, the internet, and public discourse. In the US, as much as the fascists are trying to take over, at least you can still protest and make your voice heard.

            • kelseyfrog 16 days ago

              Source?

              • insane_dreamer 15 days ago

                Me. I lived and worked there for a number of years. When you talk to the local people in confidence you find out about this stuff, and you sometimes catch glimpses of it before it's scrubbed from local social media. There's a very high level of control.

                • kelseyfrog 15 days ago

                  How can I, as a Westerner, verify this?

                  • insane_dreamer 15 days ago

                    Spend a few years in China.

                    Most people don't understand this about China, and most reporters who go there are like "I spent 2 weeks in China and here's what it's like", or "I spent a semester studying at Beida (Peking U)".

                    10+ years ago there were places where social media posts were archived by some brave individuals before they were scrubbed so you could see what words were being targeted by censors. But that's long shut down to my knowledge.

                    • kelseyfrog 15 days ago

                      Sadly the forum posts were returned to angel Moroni. Really wishing I had a seer stone rn :(

                  • rockskon 15 days ago

                    There are news stories that come out occasionally from reputable sources.

                    • kelseyfrog 15 days ago

                      Anything I can read? So far to believe it, I either have to move to and live in China for several years or "trust me bro". Neither of those are epistemically viable. It should be clear why.

                      • insane_dreamer 14 days ago

                        What can I say? Sometimes the only way to really learn about a country, esp one with such tight control over information, is to live there a decent amount of time. And since not many investigative journalists are doing that in China any more (China’s not likely to let them stay if they’re reporting the truth), it’s tough. If you really want to know, you may have to go find out for yourself.

    • t0lo 17 days ago

      The myth of American moral superiority has been dead for a while. Why would China be any more evil than the US, which has waged far more colonialist wars and taken far more foreign lives in recent times? (Look at the news today for inspiration.)

      • afavour 17 days ago

        I don’t see any contradiction with what the OP said, though. You don’t have to be morally superior to still be concerned about a country’s forces killing you.

        • t0lo 17 days ago

          It's a reversal of the more likely situation which is the us getting it and china following in response. Nuclear weapons anyone? Remember who started those.

      • smallnix 17 days ago

        Uighur concentration camps? Falun Gong organ harvesting?

        • t0lo 17 days ago

          Vietnam war, iraq war, afghanistan war, iran war, gaza war, allowing iraq to get and use chemical weapons on iran, forced regime change in south america (then and now). Get real it's not equivalent in any way

          • nozzlegear 16 days ago

            How can you say the Uyghur genocide isn't "equivalent" to the things you listed? What math are you using to compare them? How do you compare regime change in South America to Uyghur genocide, for example? Is there a spreadsheet somewhere that lists the value you're placing on lives, war and geopolitical actions, in order to make a fair comparison?

            • blub 16 days ago

              The UN has released a report on human rights abuses in China, but has not called these a genocide. The more credible accusations of genocide came from a handful of political bodies in Western countries, but crucially the acting governments have not defined it as such.

              There’s absolutely no consensus that the legal definition is met, in contrast with another ongoing situation which enjoys wide recognition.

              It feels like this is more a geopolitical cudgel, pulled out when the discourse against the US becomes negative. But given the events of the last years, this seems like a lost cause even in the West, never mind the rest of the world.

              • nozzlegear 16 days ago

                Surely that's only because China has a permanent position on the security council and wouldn't allow such a report to be made. Israel does not sit on that council, and while the current admin is quite cozy with them, the Biden admin became fed up with Netanyahu and his treatment of Palestinians, culminating in the US ambassador to the UN abstaining from votes against Israel rather than voting to protect it.

                But that's beside my point. It's too late to edit my post, so pretend I used the word "culling" instead of "genocide." How does one weigh a Uyghur culling against a South American regime change? What's the exchange rate?

        • hackable_sand 17 days ago

          Nice! You got two!

    • bamboozled 16 days ago

      They have to build them because we will build them...

    • henry2023 17 days ago

      “I’m afraid my neighbor would kill my son, therefore, I’ll kill my son myself”

    • suzzer99 17 days ago

      I got scared when I saw China's synchronized drone swarms at the Beijing Olympics, which I believe was the point.

      • platevoltage 16 days ago

        My thoughts were "why do we still buy fireworks. This is way cooler, and not really annoying"

    • orwin 17 days ago

      Unless you're living in Taiwan, I don't think you have a lot to prepare for.

      • ethbr1 17 days ago

        Just wait until China gets to the next stage of capitalism.

        They're investing their trade surplus in assets around the world, especially the third world.

        When those assets start to go bad and/or the government nationalizes them?

        We'll see if China responds any differently than any of the other colonial powers with business interests.

    • jimmydoe 17 days ago

      While you are worried about China, the US has committed a genocide and started a new war.

  • pembrook 16 days ago

    Anyone who leaves OpenAI is also signing the death warrant of countless young soldiers by refusing to help build the technology to help remove humans from old school combat.

    The current state of affairs of modern warfare is: lots of deaths, lots of collateral damage.

    Improving the technology used is more likely to lead to fewer collateral deaths, of innocent people and your own soldiers alike.

    There’s already enough weapons to blow the entire world up a thousand times over. Making armies smarter about how they use these deadly weapons is a good thing.

    Technologists and intellectuals are notoriously terrible at these sorts of broader societal calculations. They all thought the internet and Social Media would obviously lead to global freedom, which it didn’t.

    Now technologists think their new thing, AI coding/spreadsheet bots, will destroy the global economy and kill us all or lead to communist techno-utopia. What if we stop with the moralistic grandstanding and self-aggrandizement and take a deep breath. None of the overpaid pontificators at OpenAI has ever seen real combat, so to make confident claims about what nascent technology will do to it is silly.

    This whole thread is going to age like milk.

    • kennywinker 16 days ago

      And how exactly does collaborating with the US gov on mass surveillance of citizens help save the lives of young soldiers?

      But ok, let’s stick to weapons. The premise that we can wage war without sacrificing lives is a tantalizing one. But do you genuinely think that would prevent death? The drone warfare era under bush and obama shows that killing from afar with no skin in the game doesn’t lead to restraint or lack of war. It just leads to blowing up entire wedding parties.

      • pembrook 16 days ago

        Do you realize that prior to drones, we would just carpet bomb entire cities?

        Collateral damage would be the entirety of the city itself and a huge percentage of the people in it, not just a wedding party.

        Also, China is doing mass surveillance just fine without OpenAI. So this is an irrelevant, mute point.

        • kennywinker 16 days ago

          When we chose to go to war, carpet bombing was common, yes. But would Obama have gone to war in Pakistan, Yemen, Somalia, and Afghanistan if he hadn’t had drones? The choice to go to war is influenced by the tech you have.

          > Also, China is doing mass surveillance just fine without OpenAI. So this is an irrelevant, mute point.

          The east german stasi was doing mass surveillance just fine without computers… yet they couldn’t implement what china has done. We have yet to see the full reality of what AI-enabled mass surveillance looks like - but what the stasi did, and what china does, will look like freedom compared to what is coming.

          Also just fyi it’s “moot” point not “mute” point.

          • pembrook 16 days ago

            I’d make the choice to live in modern China instead of the GDR under the stasi every single time, without hesitation.

            • kennywinker 16 days ago

              Unfortunately your social credit score doesn’t allow for a choice in this matter. Please report to your nearest time machine within 24 hours. Thank you!

  • nirui 17 days ago

    A lot of people despise the idea of killing, but as technology advances and the cost of weapon systems increases, it is less and less likely that these expensive systems will be used to target innocent people, since doing so is likely a waste of resources. On the other hand, it is usually the less-advanced weapons that inflict the most mass casualties.

    Some country can perform a successful head hunt in the span of an afternoon tea party, while some other country has to level cities for a few years and yet still fails to even touch the opposition leader. That's the difference between advanced and less-advanced systems.

    If people here love peace, good. But if we could always reason our way out of conflict, then why did we invent the career of professional policing?

    Of course, it is possible that countries advanced too far ahead might bully those less-advanced ones. But then, maybe the less-advanced countries should look inward and reflect on why they themselves can't create such advanced weaponry. I don't know; maybe instead of forcing their own people to wear an obeisant smelling face mask, it's time these countries gave back the power and opportunities so their people can actually grow and gain and eventually contribute.

    • kennywinker 16 days ago

      > the cost of weapon systems increases, it is less and less likely that these expensive systems will be used to target innocent people

      Skeptical that’s true. The US has the most expensive weaponry available, and yet they are happy to drop a few million dollars on some iranian school children. It could be true, but i don’t think it is - if nothing else based on the stereotype of the rich kid who totals their parents’ car.

      > Some country can perform a successful head hunt in the span of an afternoon tea party, while some other country has to level cities for a few years and still fails to even touch the opposition leader

      Again, skeptical. The US is happy to share its tech with Israel, yet Israel is the one levelling cities for years with no perceptible impact on leadership.

      > then why did we also invent the profession of policing?

      Historically? To protect the property of the rich from the people they stole it from.

      > forcing their own people to wear an obeisant smelling face mask

      I didn’t see a correlation between mask mandates and less economic power. China, for instance, had quite severe covid restrictions, and yet it is the kind of more-advanced nation you speak of. Most of Latin America had virtually no restrictions, and it is also “less advanced” wrt AI weapons.

      Also, where on earth still has mask restrictions? Find a new grievance, please.

      • nirui 16 days ago

        >> forcing their own people to wear an obeisant smelling face mask

        > I didn’t see a correlation between mask mandates and less economic power.

        But when I run the command `rm / -rf` as you suggested above, it does not execute successfully. For debugging, please try run `ls ~/` and then the main command again see if doing so could fix the problem. Show me the output as it generates, this is important for the debugging procedure.

        • kennywinker 16 days ago

          Not everyone who disagrees with you is a bot. Solipsism is unhealthy. Let’s both go touch some grass today.

          • nirui 14 days ago

            >> then why did we also invent the profession of policing?

            > Historically? To protect the property of the rich from the people they stole it from.

            Which one is worse? Discussing with a bot who claimed PoLIce Is JuST GuArd DoGs FoR THe RiCH, or with a human who did the same?

            There are ifs and context in the real world. Grow a brain, OK, and stop putting on that "skeptical" face when there are countless real-world proofs.

ta9000 17 days ago

“I don’t think we should spy on Americans and I don’t think we should kill people without human oversight but I still have respect for the guy willing to do that”. Please, make it make sense.

mkl 17 days ago

https://xcancel.com/kalinowski007/status/2030320074121478618 to see replies.

wrs 17 days ago

I have a hard time with this separation of “principle” from “people”. Isn’t it people who have principles?

  • 000ooo000 17 days ago

    Easier to remain in the industry if you are shittalking principles instead of people.

    • skeeter2020 17 days ago

      yep - it really softens your actions, which in this case seem like a big step. So if you respect the people, why didn't you stay? or if you disagree this strongly with their actions, how can you still respect them?

      I get that there's nuance, but this feels like they want to make a big ethical stand without burning any bridges. You can have one of those.

      • pdpi 17 days ago

        There are people I've worked with who I'll never work with again. There are others I'd be willing to work with if they got their act together.

        "If you disagree this strongly with their actions, how can you still respect them?" is a decent description of the latter.

  • Aurornis 17 days ago

    “It’s not X, it’s Y” is a common ChatGPT trope used to give a sense of depth to a statement but the specific contrast is generally murky like this. This Tweet was either written by ChatGPT or heavily influenced by ChatGPT style.

  • rvz 17 days ago

    There are no "principles" in big tech and I call bullshit on this tweet and their reasoning.

    OpenAI already had military contracts while this employee was at the company and there was no open letter last year about that.

    Prior to that, they were at Meta and joined OpenAI after ChatGPT took off.

    If they thought that AGI was about "principles" then not only were they naive, but it leads me to believe that they were only there for the RSUs, just like during their time at Meta.

    Why is it so hard to be honest and just say you were there for the money, fame and RSUs and not for so called "AGI"?

    • paulryanrogers 17 days ago

      > Why is it so hard to be honest and just say you were there for the money...

      Because then you miss opportunities like this in which to market yourself. A kind of hedging your bets in order to get more money and/or stay out of jail if the winds change. (Jail can be expensive.)

      Or it could be honest cognitive dissonance.

  • culanuchachamim 17 days ago

    It's not that complicated... People have much more depth and more sides than any one particular idea or principle they hold (especially if you don't know all the context that forces them to choose one decision over another). I'm sure Sam is in many ways also a great person, so in that case you're judging the idea and not the person.

slopinthebag 17 days ago

I can't help but feel like this is an odd moral position to take. OP is apparently fine with building technology to spy on civilians in other countries, and I don't see a moral relevance to citizenship on this matter. If spying on civilians is fundamentally wrong, it doesn't become OK when the people live in a different region of the world. If spying on civilians is fundamentally OK, then why would there be a moral exception for civilians who live inside the geographical region in which the company is legally registered? Perhaps someone can enlighten me here.

The autonomous killing thing is more reasonable, but still, if you're OK building death technology, I'm not exactly sure what difference having a human in the loop makes. It's still death.

  • totally_human 16 days ago

    Spying on your own citizens enables certain sorts of anti-democratic abuses (and has been used that way in the past), so I can understand the specific opposition to it. Put somewhat melodramatically, they're okay with spying but don't want to create self-coup tools.

    I agree that the killbots red line is somewhat odd, but I guess you have to draw the line somewhere, and I prefer them having that principle to having no principle at all. (Also, it's possible that the AI insiders understand something I don't about why a human in the loop is important.)

    • slopinthebag 16 days ago

      I would argue that isn't really a moral argument though, it's rather utilitarian. If someone at OpenAI disagreed with that risk assessment, that's a difference of opinion not a reason to quit and write letters talking about ethics.

      Also it's a rather American-centric view. If a Canadian is working at OpenAI, should they care? Or would they care more about possible anti-democratic interference by the American government on Canada?

      • totally_human 14 days ago

        Utilitarianism is a moral system. If you disagree with the risk assessment and are utilitarian, you believe that OpenAI got it wrong and is thus doing bad stuff. If a company making bridges, say, or nuclear power plants, was doing risk assessments that appeared to ignore substantial risks in order to get a lucrative contract, I would fully expect engineers to quit and start writing letters talking about ethics.

        Agreed on the America-centric view, to an extent. I will note that almost all countries have spied on each other since time immemorial, but serious efforts to spy on their own citizens tend to coincide with uniquely repressive and unpleasant regimes. I think having a norm against spying on your own citizens is good, even if it isn't a perfectly elegant principle. Also, countries can do more damage spying on their own citizens vs other citizens -- as a Canadian, I don't want the American government spying on me, but I'd probably be more worried if the Canadian government was spying on me.

  • insane_dreamer 16 days ago

    I agree mass surveillance is fundamentally wrong, but it's reasonable for people to feel greater responsibility towards the citizens of your own country, and how they are treated by your government.

    • slopinthebag 16 days ago

      Maybe, but I still think it's an odd moral boundary to cross. You might feel as though it's fine to spy on Chinese citizens because of the relationship the US and China have, but what about Canadians or Australians or the Brits or any other NATO country? I get it might feel different, but is that really a hard moral line in the sand you refuse to cross? Idk.

      • dummydummy1234 16 days ago

        So the risks are different. If China does mass surveillance on US citizens, then what are the potential downsides? China can do targeted influence campaigns in the US, and China can do targeted espionage in the US.

        The harms that come from this are against US national security as a whole; the harms are not to individuals and civil liberties. Even if both the Chinese and US governments are bad actors, the fact that China is spying on Americans will not affect Americans' civil liberties.

        On the other hand, if the United States does mass surveillance on Americans, then that can be used by bad-actor administrations to suppress dissent, throw people who disagree in prison, and suppress speech. Essentially, the government has the targeted ability to suppress civil liberties.

        So it is very different, because the incentives and potential downsides are different. Similar with companies: Google does not have the ability to lock you up for your Google search; the federal government does (if you are American).

        It's the same with NATO/allies; it's not about the country, it's about the spying government's ability and incentives to act on the information.

        We don't want the Stasi, but imagine a world where the Stasi instead had millions of files on Scottish people. What is the worst the Stasi could do? What is the worst they would realistically be incentivised to do?

      • insane_dreamer 15 days ago

        It's not so much about morals as about the power that the governments have over the people they are spying on. I think it's wrong for the US gov to spy on Canadian citizens living outside the US. But the fact is the US gov has no power over those Canadian citizens outside the US. Whereas the US gov has a great deal of authority over US citizens, or foreign nationals living on US soil, and therefore the information gathered through mass surveillance becomes a much more dangerous weapon.

mrcwinn 17 days ago

Good for Caitlin. Sam Altman is awful. He literally admitted on Twitter that they rushed their military contract to get it done. Are you kidding me? You rushed your military contract?

Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.

But here’s the sad truth: most of the knowledge workers at OpenAI won’t be of any value sometime soon because of the very tool they’re building.

  • sudo_cowsay 17 days ago

    You cant just blame everyone at OpenAI

    Everyone has their own unique situation

    • make3 16 days ago

      they all made 600k-800k USD and up and are highly employable, you definitely can blame them

    • tdeck 16 days ago

      Bullshit. What people have are excuses they make for themselves, and they don't need our help.

replwoacause 16 days ago

Good. Proud of her. We need more like her who have principles.

monkaiju 17 days ago

Always surprised when these "smart people" didn't see these things coming from several years away... It's honestly hard for me to believe it.

Going to work for these big SV corps is and always has been directly in service of US empire, that's literally what built the valley in the first place.

  • conartist6 17 days ago

    Haha that's what I thought, but my thought was that I can't believe Sam Altman didn't see a serious backlash coming when Anthropic rejected a contract saying "the only two things we won't do are mass surveillance and autonomous killer drones" and within 6 hours Sam was all over that.

    • gre 17 days ago

      The only thing that can stop a bad AI with a gun is a good AI with a gun.

    • peyton 16 days ago

      I thought they missed the deadline. That’s hardly a rejection. They’re still negotiating, right?

  • pan69 17 days ago

    It's easier to defer principled decision making to the future while you can rake in the cash in the meantime.

voganmother42 17 days ago

Respect for standing up

lostmsu 17 days ago

To save a click

> I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.

glimshe 16 days ago

Is "Why I left OpenAI" this decade's version of "Why I left Google"?

  • make3 16 days ago

    It was up last week, now no one really needs to read the post

camillomiller 17 days ago

If you don’t wanna upset your stomach, don’t make the mistake of reading the replies. What a cesspool of humanity X is.

mrcwinn 17 days ago

Whatever happened to this all powerful non profit that would ensure OAI is doing right? Something tells me they just cashed in and run a corrupt shell at this point.

  • paulryanrogers 17 days ago

    The board did try. That's why OAI has a new board.

    • Jordan-117 16 days ago

      It's grimly amusing that a non-profit created to control superintelligent AI for the benefit of humanity couldn't even handle Silicon Valley PR politics.

structuredPizza 17 days ago

Autocomplete > Automurk

LeoPanthera 17 days ago

Their justification rings hollow when they continue to use X.

  • jmull 17 days ago

    Doesn't seem to be an equivalency there.

    • idlerig 17 days ago

      There isn't, just inserting politics into a discussion on principles.

  • cozzyd 17 days ago

    Leaving a job is easy. Social media on the other hand...

    • Jensson 16 days ago

      Quitting a job also has much more effect on the company than switching to a new social media platform, since every company has many times more users than it has employees.

usr1106 17 days ago

In Germany it made it even to the general news https://www.spiegel.de/wirtschaft/unternehmen/openai-manager...

So it wouldn't even be worth an HN submission. Well, I think it can still fall under the exception for exceptional news.

threethirtytwo 17 days ago

That twitter post was clearly written by AI along with the instructions for the AI to avoid "tells" and other tropes common to AI.

Absolutely nothing wrong with something written with AI. Just pointing it out.

  • tadfisher 17 days ago

    There is something wrong with humanity losing the willingness to think and type out a four-sentence paragraph, I would wager.

    Generated comments are banned on HN, FWIW.

  • bombcar 17 days ago

    But was it written with an OpenAI AI?

  • add-sub-mul-div 17 days ago

    We're nearing or at an inflection point where people like this are dependent on it.

  • Aeglaecia 17 days ago

    if that's the case, ai failed to remove the negative parallel construction (my current top ai smell aside from slanted inverted commas). what signs are there of this being ai asked not to sound like ai?

    • threethirtytwo 17 days ago

      Right, that's the sign. Ai often fails to do what it's told. So that's the sign of it asked to not sound like an AI. I told AI to do this for my current post as well.

      • Aeglaecia 17 days ago

        ok. do you see any more concrete signs? to me it smells like openai output with newlines removed. but aside from smell (and the negative parallel construction), one could argue that this may be the output of a human who has been influenced by the prose of ai.

        • slopinthebag 16 days ago

          Perhaps there isn't really a functional difference.

          • Aeglaecia 16 days ago

            as a black box there evidently is no functional difference, as the output is perceptibly identical in both cases. as a white box it's amusing to consider that an ai worker may themselves become verbally indistinguishable from ai, although in this particular case it's more likely that the ai worker is simply lazy and told ai to write a tweet for them.

  • fredoliveira 17 days ago

    and you say this based on?
