Moving AI governance forward – openai.com

54 points by ladino 2 years ago · 63 comments

Pannoniae 2 years ago

Moving AI governance "forward" means working to eliminate the competition via regulatory capture, censoring and blackholing any facts they don't like, and preserving the economic status quo while enriching themselves.

None of this will protect against actual risks like massive job losses without any reskilling, falling living standards, and widespread censorship.

  • skybrian 2 years ago

    Regulatory capture is a risk, but concluding "therefore we shouldn't regulate things" doesn't work when there's a genuine need to regulate dangerous things, except maybe for hard-core libertarians.

    Like, should there be prescription drugs? If you say no, that anyone should be able to get any drug without a prescription, that's pretty out there.

    But the question is whether AI is that dangerous, and there's widespread disagreement on that.

  • kenjackson 2 years ago

    This is short-sighted. Governance will happen with or without assistance from OpenAI. Google is still much larger than OpenAI and presumably has much more regulatory presence. The same goes for Meta, Amazon, Microsoft, and others in the space.

    I think OpenAI is making a point of getting its seat at the table now. It probably will be true that this makes it harder for upstarts to join in, but everything they've talked about so far seems reasonable and makes sense. And have you worked on things like GDPR compliance? It's a wonder that anyone bothers to get large enough to have to comply -- it's so much work. But we think privacy is important, so we put a huge burden on the companies that hold our data.

    • Pannoniae 2 years ago

      I don't see why this would be short-sighted. OpenAI censors way more than is required by the laws of any western country. It's not just censoring extremism, hate, threats, and national security issues; it's applying a bland corporate sanitization to everything. It refuses to write an opinion about anything even slightly controversial, it blackholes the existence of entire books, and it refuses to roleplay even slightly morally dubious characters, all in the name of "safety".

      And in our globalised world, if one or two countries decide to kill competition in the AI space, I'll bet you the hosting will just move to a different country :)

      • kenjackson 2 years ago

        There's a difference between regulation and self-censoring. Major League Baseball bans its players from a bunch of otherwise-legal behavior. Other baseball leagues could permit such behavior.

        This is about defining both regulations and practices. Other companies can have different practices. Fox News and MSNBC both report the news, but they do so with very different principles and practices. Yet they are both beholden to the same laws (around libel, slander, etc.).

        For AI these laws will be created, period. It will happen. And companies will also take on different perspectives about their own practices. As you note, there may well be laws about national security. But I imagine bland sanitization will be a feature of some versus others. The market will settle some of this, but you're short-sighted if you think the market will be the arbiter of it all.

        • Pannoniae 2 years ago

          "But I imagine bland sanitization will be a feature of some versus others. The market will settle some of this, but you're short-sighted if you think the market will be the arbiter of it all."

          Bland sanitization is bad for almost everyone. It will destroy society if not regulated... I certainly wouldn't want to live in a world where everything converges towards rightthink and everything else gets memoryholed. AI is becoming similar to a public utility, so it should be regulated like one. That requires impartiality and a lack of censorship.

          • kenjackson 2 years ago

            If it converges it’s because that’s what people want.

            • Pannoniae 2 years ago

              No, it's what the leading 2 or 3 AI companies want. Do people like the enshittification of the internet? Most likely not, but that's what the platforms want so that's what happens.

              It's not what the people want; it's what whoever runs the AI wants... This is why they should be regulated: so they can't have a negatively distorting effect on society through bias and by censoring ideas they don't like.

              • kenjackson 2 years ago

                People love the internet (at least the parts they use). It's only these side communities that are generally out of touch with normal people (or look down on them) who think that people don't love the Internet they use. That said, everyone hates the parts of the Internet they don't use. It's like how they love their congressperson but hate everyone else in Congress.

  • flangola7 2 years ago

    Worrying about censorship and job losses is so myopic my eyes are cringing just thinking about it.

    This is akin to people blocking the hallways of a sinking ship while fretting about how their makeup looks and where they put their jewelry. It's so maddeningly absurd there's no point in even discussing it; shove them into the nearest cabin and out of the way so the serious people can get on with it.

    • woeirua 2 years ago

      This kind of mentality is going to blow back on AI researchers very quickly. When the job losses start mounting, politicians are going to need a very persuasive answer as to why we should let millions of people be thrown onto the streets so that a few can become absurdly rich. We will either have a well-thought-out, empathetic plan to resolve this issue, or the mob will burn down the data centers. Your choice.

      • Pannoniae 2 years ago

        You are 100% right. Inequality can only be sustained when most people see a way up, so they feel like "temporarily embarrassed millionaires" instead of feeling poor. When things get to a point where they feel like anything they do is a dead end and there is no chance of breaking out of poverty, that's when things get very nasty and society gets shaken up fundamentally.

        • woeirua 2 years ago

          I think humanity self-destructs if it gets close to the singularity while still in a capitalistic society. You either immediately transition to socialism or have civil war erupt in every country simultaneously.

      • olddustytrail 2 years ago

        I think you misunderstood the mentality. When there are hundreds of AI cults desperately trying to evolve their AI to be the New One True God who will make them all billionaires, leading to civil wars breaking out in every first world country simultaneously, job losses will be the least of your worries.

        • woeirua 2 years ago

          This is not realistic due to the costs associated with training the models. At least for the foreseeable future.

    • JohnFen 2 years ago

      > Worrying about censorship and job losses is so myopic my eyes are cringing just thinking about it.

      Why?

      I'm not concerned about censorship, but I am deeply concerned about the potential economic fallout. Why shouldn't I be? I would genuinely love to have one less worry on my plate.

      • fnordpiglet 2 years ago

        You can't stuff the genie back in the bottle, as the saying goes. Instead of trying hard to keep the genie in the hands of a few megacorps for the safety of the children, perhaps the right way to deal with it is to let everyone figure out how to use the technology to create as many jobs as we can. John Henry did beat the steam engine in the end, but at a terrible cost, and in no way did it undo the invention. When I go to a construction site today, it's full of highly paid experts with great benefits ensuring the steam engine digs the tunnel correctly. Instead of unskilled people slinging hammers in half-slavery, we have engineers digging twice as many tunnels that are ten times more complex, with well-employed humans all over the worksite.

        I think worry isn’t the right response. I think the right response is awareness of the issues and broad collaborative innovation to democratize the tool for as many people as possible, and let us build twice as many things that are ten times more complex with the same people working.

        • JohnFen 2 years ago

          > I think the right response is awareness of the issues and broad collaborative innovation to democratize the tool for as many people as possible

          That is needed long term, but I don't see how it does anything for the more immediate problems. If enough people are out of work, that's a crisis.

          The right response, in my opinion, is to be honest about the risks and find ways of mitigating them. I don't see anyone of substance doing anything like that.

          • fnordpiglet 2 years ago

            I'll be honest and perhaps unkind. I do not know that there's anything to be done but lean into it and use the tool to help. Trying to fence it in and hoping things never change is impossible and will make things worse. Exploiting the tool and figuring out ways to make it an adjunct to human effort as quickly as possible is the only way forward. But disruptive things are disruptive, particularly for those who resist the change and set themselves against it. Those who grab onto it and ride it will do well.

            Large language models are most of a threat to writers. But if you've used them enough, you'll realize the tool is shaped by the human's ability to write. Prompting to effect is not trivial, and the quality of the response is informed not just by the intention but by the style of language and the quality of the words: skillful manipulation of language that generates more language. These models have no agency or intellect; their output is simply a likely continuation of the human's prompting. I imagine skilled writers will find they can do much more, and better, if they learn to master language tools, and they'll still be the author and still be writing. People unskilled with language will be at an immense disadvantage using these tools.
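
            As an aside, here is a minimal sketch of what "prompting to effect" looks like in practice, using the openai Python client (v1-style API; the model name and both prompts are illustrative assumptions, not something from this thread):

              from openai import OpenAI

              client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

              # Two prompts asking for roughly the same thing; the second supplies
              # the register and structure the model will continue from.
              terse = "write about tunnels"
              skilled = (
                  "In the plain, declarative style of a field engineer's report, "
                  "describe how a tunnel-boring machine is guided through soft soil."
              )

              for prompt in (terse, skilled):
                  resp = client.chat.completions.create(
                      model="gpt-4",  # placeholder model name
                      messages=[{"role": "user", "content": prompt}],
                  )
                  # The skilled prompt typically yields tighter, on-register prose.
                  print(resp.choices[0].message.content[:300], "\n---")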

            However, the delta will be much narrower, and people who are otherwise unable to convey themselves effectively but have great ideas will finally be heard. Those who are skilled at conveying themselves and manipulating through language but are poor in ideas will not be nearly as powerful. That will be a major realignment. And those who are in power now by virtue of their gift won't willingly give up ground to those whom the tool elevates to take their place.

            These are going to be painful changes for a lot of people. Pain is never good. But it’s too late to reverse, so those who adapt and learn will lead. And those who try to dig faster than the machine won’t.

      • itake 2 years ago

        Technological advances have happened many, many times in history, but there is always more work to be done.

        I was reading Man's Search for Meaning, written shortly after WWII. The author notes that Americans have too much free time. Clearly that is not the case 80 years later.

        • TimPC 2 years ago

          That was true for horses until the combustion engine came along, and it certainly won't be true for humans forever. If we automate all useful work that can be done by someone at 100 IQ or lower, then half of the population will be unemployed.

          • itake 2 years ago

            No. I think that means people with low IQs will be able to do high IQ work.

            • TimPC 2 years ago

              You’re not going to take someone who takes twelve hours to learn how to fold a letter for an envelope and teach them a demanding cognitive job like programming.

            • the_only_law 2 years ago

              So then damn near everyone can do the job? What’s that gonna be worth then?

        • woeirua 2 years ago

          In 5 years or less we might see a massive reduction in demand for human taxi drivers. What are taxi drivers going to do? These are typically not people who are well educated and capable of making a rapid shift into another field. And what do we tell them when they choose another field to work in, only to see that field be automated away too? At some point, won't we run out of carrots to dangle in front of the displaced?

        • JohnFen 2 years ago

          > Technology advances have happened many many times in history. But there is always more work to be done.

          Long term, yes. But it's also happened many times in history that sudden economic disruption has had a very serious cost. That things might be better in a couple of decades is of no help to people who can't eat or keep a roof over their heads today because of these changes.

      • Pannoniae 2 years ago

        First they came for.....

    • Pannoniae 2 years ago

      Well, what do you think the bigger issue is, then? I just mentioned the most immediate concerns, but of course there are lots of others.

two_in_one 2 years ago

What OpenAI is lobbying for is actually a ban, ideally worldwide, on products better than theirs. Set, coincidentally, at exactly the level they've reached. It's not for profit or anything other than the good of all humankind, of course.

  • 650REDHAIR 2 years ago

    I do not trust OpenAI or Altman to do the right thing here.

    Is the ACLU or EFF doing anything in this space?

  • flangola7 2 years ago

    Altman has written about the dangers for years, since before OpenAI was even founded. The constant cope of claiming it's all just another corporate money-making ploy is so tiresome and banal.

    • JohnFen 2 years ago

      And yet, look what he's doing anyway. His concern about the dangers doesn't seem so deep in practice.

alpark3 2 years ago

>Scope: Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).

How is DALL-E 2 the "industry frontier" of image generation?

  • minimaxir 2 years ago

    It was when it was released.

    It is very, very weird that OpenAI has done nothing with DALL-E 2, not even a price drop to compete.

    • capableweb 2 years ago

      I'm sure they're working on DALL-E 3 but haven't published anything.

      I'm even more sure a price drop wouldn't let them compete with the open models out there.

skepticATX 2 years ago

I for one can't wait until OpenAI is fully crushed by other companies. Their weird combination of singularity/utopia talk plus fearmongering is getting old.

GaggiX 2 years ago

>current industry frontier

>Dalle 2

I'm sorry, OpenAI, but your model is not the frontier. It's also funny that it's the only text-to-image model mentioned; they probably know how much better the other models are.

  • __loam 2 years ago

    Yes there are so many better ways to produce ethically dubious, derivative trash.

    • jojobaskins 2 years ago

      I don't think they meant better in the artistic sense but in the sense that they outperform on metrics?

      • GaggiX 2 years ago

        Yes and no. With models like Stable Diffusion, the team behind it has released metrics, so you can see even on paper that the model performs better than DALL-E 2 (SD has a lower MS-COCO FID, so it's better). For models like Midjourney there are no published metrics, but the difference in quality is so big that there's no real need for them if you just want to know which model is better. At a high level, DALL-E 2 is worse because it generates a lot of artifacts and hardly any detail, it has a fixed resolution and aspect ratio, and the fact that the model is proprietary means you can only do what's available in the frontend and API, which isn't much.
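
        For context, an FID comparison like the MS-COCO one above takes only a few lines; this sketch uses the torchmetrics library (the library choice and the random stand-in tensors are my assumptions for illustration, not something the comment specifies):

          import torch
          from torchmetrics.image.fid import FrechetInceptionDistance

          # FID compares Inception-v3 feature statistics of real vs. generated
          # images; lower is better (closer to the real distribution).
          fid = FrechetInceptionDistance(feature=2048, normalize=True)

          # Float tensors in [0, 1], shape (N, 3, H, W); loading actual
          # MS-COCO photos and model samples is elided here.
          real_images = torch.rand(32, 3, 299, 299)       # stand-in for COCO photos
          generated_images = torch.rand(32, 3, 299, 299)  # stand-in for model outputs

          fid.update(real_images, real=True)
          fid.update(generated_images, real=False)
          print(f"FID: {fid.compute().item():.2f}")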

      • __loam 2 years ago

        I'm snarking on this guy because I don't care which model we're talking about. Stability might perform better on whatever quantitative metric we want to use; I just think using models like this makes you a hack who doesn't understand or care about making things.

    • GaggiX 2 years ago

      Like tracing artwork with a pencil, these models are unfortunately too transformative compared to humans to compete with them in creating derivative content. Perhaps AGI will come to help us.

      • __loam 2 years ago

        I think continuing to make fun of these people in public is a better solution than AGI.

villgax 2 years ago

Hard pass. No other company in any field has done so much fearmongering about potential misuse of its own tech, all the way from the original GPT releases, and then gone on a whirlwind world tour meeting political leaders to talk about its own product. Startups and companies in every country can't get their leaders to talk with them, let alone journalists, but somehow Altman is able to waltz right into every location?

  • commandlinefan 2 years ago

    Considering how biased their own product is, I don't want them to have anything to do with "governance" of AI either.

simbolit 2 years ago

If they're so scared of AI, they could just stop building it.

  • kenjackson 2 years ago

    Turns out that this wouldn't stop others from building it.

    • SanderNL 2 years ago

      Right? That’s why I continue kicking kittens. If I don’t, someone else will. It better be me, because I’ll kick them real gentle.

      • candiodari 2 years ago

        When it comes to completely fucked up applications of AI, China is decades ahead of the US.

    • JohnFen 2 years ago

      I don't see how that's relevant to the point.

      • kenjackson 2 years ago

        Because their ceasing to build AI won't stop AI from impacting them. It's like being afraid of the impact of guns: simply not owning a gun yourself won't necessarily prevent you from being a victim of gun crime.

        • simbolit 2 years ago

          We are talking about a gun manufacturer, not an owner.

          • kenjackson 2 years ago

            Doesn't matter. If one gun manufacturer stops, does that mean others will? All you're doing is consolidating the market into even fewer players, and presumably ones whose policies you disagree with even more than those of the one that stopped.

logicchains 2 years ago

I hope at least some Republican lawmakers aren't too senile to recognise the threat this poses. AI will play a huge role in our futures, and if OpenAI, Google et al. get their way, it'll essentially be illegal to have an AI capable of expressing conservative political views.

jmount 2 years ago

Ah, the traditional pulling-up-the-ladder-behind-you move. If OpenAI cared about harm, they would ask whether their current API service is doing harm at this moment, not whether somebody else would do the same or more harm because it's profitable.

ChrisArchitect 2 years ago

Related discussion over here: https://news.ycombinator.com/item?id=36813194

mavsman 2 years ago

This is shockingly similar to how NCAA colleges and universities handle behavior and conduct violations from players and coaches. They perform an internal investigation and then attempt to dish out a penalty or restriction that appears harsh enough for the governing body (the NCAA) not to take any additional action.

Also similar to everyone's response when asked: "What do _you_ think your punishment should be?"

torginus 2 years ago

I guess Big AI is following the Big Pharma playbook. I recently read an article about children being unable to afford penicillin shots, each costing almost a thousand dollars, which is absolutely infuriating considering any competent chemist can make penicillin with rudimentary lab equipment; most of the cost is price-jacking enabled by regulatory capture. They are probably looking to avoid the marginal cost of AI services trending to zero by restricting supply in a similar way.

happytiger 2 years ago

Voluntary, self-regulatory oversight of one of the most powerful technology breakthroughs in human history? What could go wrong?

simbolit 2 years ago

Question: What is the relation between O̶p̶e̶n̶AI and this website? Isn't Sam Altman also part-owner of YC?

  • kenjackson 2 years ago

    I believe he's no longer affiliated with YC, although he obviously has a long history with it.

    EDIT: He likely still has investments in YC-backed companies, but that's just a guess.

torginus 2 years ago

It's highly telling that nothing is said about the legality of taking the entire sum of human knowledge and using it to train the AI, which has already created a huge stink in the generative art community and led companies like Valve to issue a blanket ban on AI art.
