Establishment of the U.S. Artificial Intelligence Safety Institute

commerce.gov

71 points by frisco 2 years ago · 90 comments

dang 2 years ago

Recent and related:

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence - https://news.ycombinator.com/item?id=38067314 - Oct 2023 (334 comments)

qualifiedai 2 years ago

Let's shoot US innovation and leadership in the foot by establishing random limits on foundation model research.

According to the EO's guidelines on compute, something like GPT-4 probably falls under the reporting requirements. And GPU compute capabilities have grown roughly 1000x in the last 10 years; what will things look like even 2 or 5 years from now?

Edit: yes, regulations are necessary but we should regulate applications of AI, not fundamental research in it.
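
(For a sense of scale, here is a rough back-of-the-envelope sketch, assuming the common 6 · N · D training-FLOP rule of thumb and publicly rumored GPT-4 scale figures — estimates, not official numbers — measured against the EO's 10^26-operation reporting threshold:)

  # Back-of-the-envelope training-compute estimate vs. the EO's
  # reporting threshold. Parameter/token counts below are public
  # rumors/estimates, not official figures.

  EO_REPORTING_THRESHOLD = 1e26  # operations, per the Oct 2023 EO

  def training_flops(n_params, n_tokens):
      # Common ~6 * N * D rule of thumb for transformer training.
      return 6.0 * n_params * n_tokens

  # Two rumored GPT-4 scenarios (illustrative, unverified):
  estimates = {
      "dense-equivalent (1.8T params)": training_flops(1.8e12, 1.3e13),
      "active MoE params (280B)": training_flops(2.8e11, 1.3e13),
  }
  for label, flops in estimates.items():
      side = "over" if flops > EO_REPORTING_THRESHOLD else "under"
      print(f"{label}: {flops:.1e} FLOPs ({side} 1e26)")

Either estimate lands within an order of magnitude of the threshold, so at the growth rates described above, models a generation or two out would cross it comfortably.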

  • swatcoder 2 years ago

    Mature corporations need liability protection in order to operate. As "AI" tools become widespread, they're going to want and then require assurance that liability for using those tools falls on somebody else.

    A healthy regulatory body provides for that by setting standards and holding the relatively few vendors liable for conformance rather than the countless users.

    It does interfere with innovation for those vendors doing foundational research, but it enables richly funded innovation in applications. We seem to be at a point where lots of people want to start working on applications using current or near-term technology. Failing to provide them the liability protections they need is what will stifle practical, commercial innovation, and it would leave AI applications in the hands of the few specialist technology companies that are confident in their models and wealthy enough to absorb any liability issues that arise.

    • jjk166 2 years ago

      I don't want the companies recklessly operating AIs protected. The capabilities of AI aren't dangerous, only their applications, and those who want to commercialize AI should have to demonstrate that they are using them responsibly. If they're not ready to do that yet, then the field is not mature enough yet to warrant fostering commercialization.

  • stonogo 2 years ago

    Name one limitation imposed by this EO or this agency. The word 'limit' doesn't even appear in the article. The limitations recommended in the EO mostly focus on government use of the technology.

    As for reporting minimums, the ones in the EO are explicitly temporary. Quoting directly: "...shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements..." "Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements..."

    So, my question is: why are you ignoring the actual things happening in favor of complaining about phantoms?

  • az09mugen 2 years ago

    You write as if there is literally zero risk. Are you aware there are already malicious uses possible with AI? Edit: Totally agree with qualifiedai's edit.

    • qualifiedai 2 years ago

      The risk of AI is much smaller than the risk of computer graphics or social media.

      My point is that applications of AI must be regulated, not fundamental research.

    • andy_xor_andrew 2 years ago

      > Are you aware there are already malicious uses possible with AI?

      Alright, you hooked me in. What are they?

      • ethanbond 2 years ago
        • rockemsockem 2 years ago

          That is not what is being discussed. Those things are already very illegal and we don't need new and novel ways to address them.

          • ethanbond 2 years ago

            That's exactly what's being discussed?

            A dramatic reduction in cost and increase in effectiveness of some undesirable behavior is exactly when you should look for new ways to address it. The goal of making things illegal is to prevent their occurrence, and if they get suddenly much cheaper and more effective, then your prior methods of deterring them will no longer work.

            • kanzure 2 years ago

              > The goal of making things illegal is to prevent their occurrence.

              Making drugs illegal didn't stop people from using drugs. Only a person can stop themselves from doing something; that's not something a law does.

              • ethanbond 2 years ago

                I didn't say all laws are 100% effective, or even greater than 0% effective. I stated why we have laws at all. Pretty wild logic you've got here. Let's try this one:

                > Making murder illegal didn't stop people from murdering. Only a person can stop themselves from doing something; that's not something a law does.

                Should we not have rape, murder, arson, or fraud laws?

              • pixl97 2 years ago

                You're confusing a bad law with a law you don't like.

                It turns out drug laws for adults are bad because a huge portion of the population uses drugs anyway. That said, very few would agree that we should start letting kids do drugs.

                Nuance is important, and many people don't seem to grasp that distinction on things that hit close to home for them.

      • az09mugen 2 years ago

        For example, some black hats have trained LLMs for pentesting, making it easier to find vulnerabilities. Those can be used either to improve your defenses or to attack entities.

        AIs like Copilot et al. are trained on poorly written code with bad security practices (there is a lot more of it than you think), and hence reproduce those bad practices in the code they generate.

        Because AI is also fallible, there is the spread of even more misinformation than we already have, and the retrieval of credentials through prompt hacking, because people push their credentials.

        There is also the misuse of AI-generated deepfakes: for example, a Spanish girl was blackmailed with alleged naked pictures of her, and this could be used for far worse.

        And I haven't even scratched the copyright/artistic side of AI.

        The risk is not AI per se, but what people can do with it. Not everything is rosy. But there are also good things about AI, I agree.

        I think there is a need for some form of regulation one way or another, the sooner the better. I don't expect regulation to restrain creativity, but to help prevent bad stuff from happening.

    • omginternets 2 years ago

      You need to argue two things:

      1. There are risks specific to AI or specifically aggravated by AI (easy)

      2. Federal regulation of AI safety will reduce those risks (good luck)

      When articulating your arguments for point 2, I would recommend addressing the thorny issue of proliferation.

      • az09mugen 2 years ago

        It's not my job, nor do I have the imagination or the knowledge, to argue point 2.

        But don't you agree at least some legal questions should be asked about this overhype of AI? Because I don't see any so far.

        Edit: this is the kind of legal question I was talking about; I just learned of it now: https://news.ycombinator.com/item?id=38102760

        • omginternets 2 years ago

          >But don't you agree at least some legal questions should be asked about this overhype of AI?

          I have trouble answering that question as you've asked it. It seems like we agree on several things, namely:

          1. that any technology is subject to worst-case analysis; and,

          2. that it is appropriate in principle for law to govern the use of technology.

          Here's what I'm having trouble unpacking in your question:

          1. What are the exact legal questions you think should be asked, and aren't? (N.B. Your link is paywalled, and doesn't seem to refer to a specific legal question)

          2. What is it about AI exactly that you think is overhyped, and that you seem to think I disagree with?

          I don't have a lot of context to go on, so some of my questions may also contain unwarranted assumptions. I hope you'll point them out :)

          1. Have you thought about the difficulties involved in legislating around AI? Specifically, I've found it very difficult to articulate what is and isn't appropriate use of AI with any real precision. Let me give an example. I think we can all agree that "nudifying" photographs of minors is at least in poor taste, if not outright dangerous, and that it is fair game to make this particular usage of technology illegal. However, where do you stand on the idea that regulators should disallow the "nudification" use altogether? I can think of several legitimate (if a bit niche) uses, ranging from the creation of medical diagrams and teaching materials to filming love scenes in mainstream cinema with clothes on and removing the clothes in post-processing. Do you think it's fair game to disallow these uses? If so, should this be absolute liability or should there be a notion of intent? If you think, as I do, that the technical capability should be unrestricted except insofar as it is employed to illegal ends, then we don't need any new laws. We simply apply the laws against, say, involuntary pornography and sexual exploitation of minors, and the problem is solved from a legal perspective; it is now a job for the executive branch.

          2. I would appreciate it if you could speak to the risk of misclassification. Many of the proposed regulations involve training AI systems to monitor other AI systems (or themselves, as in the case of prompt engineering). What happens when the black box makes mistakes? Do we accept that a small number of innocent people will be labeled X by AI? How should the law take this possibility into account? Again, do we accept that legitimate uses are de facto crippled or entirely disabled? That's one outcome I would very much like to avoid.

          3. On a macro-scale, how do we deal with the fact that other (perhaps less scrupulous) nations will have access to unrestricted AI?

          Point 3 is particularly troubling from a regulation perspective, because software's penchant for proliferation is astronomically greater than that of, say, nuclear weapons. This feels like the 90s crypto export controls all over again, which were minimally a gigantic waste of resources and maximally a crippling economic vulnerability.

          P.S.: My friend, it is exactly your job to argue your case when speaking about public issues. The term for this is "civic duty".

    • cscurmudgeon 2 years ago

      You write like there are 0 other countries.

    • jqpabc123 2 years ago

      Are you aware there are malicious uses already possible without AI too?

  • boringalterego 2 years ago

    Be glad they didn't put it into DOE. It would be NRC 2.0.

  • Strilanc 2 years ago

    You're worried about winning the race. I'm worried the prize is an accidental intelligence explosion that kills everyone. A government slamming on the brakes would be the most encouraging thing I've heard all year.

    • qualifiedai 2 years ago

      "Accidental intelligence explosion" - you have to provide a reasonable argument that this can happen. AIs we have now or currently under development are still just tools in the sense that all agency and consciousness comes solely from human operators. Of course, human operators can be malicious, which is why we should regulate applications of AI, not fundamental research in it.

      • Strilanc 2 years ago

        I'm sure you're familiar with the arguments. It's the explicit goal of several AI companies to make something more capable than us at engineering. Once you have that, there's the possibility of very rapid self-reinforcing improvement. If you lose control of something like that, it's game over.

        GPT-4 may not generate world-class code, but it does so at a scale and speed unmatched by humanity. AlphaZero took a week to go from nothing to better than any human in history at Go.

  • __loam 2 years ago

    US leadership was literally asking for this.

    • qualifiedai 2 years ago

      US leadership seems clueless and they just fell for regulatory capture.

      Regulate AI applications, not fundamental research in it.

      • __loam 2 years ago

        I mostly agree, but it is rich considering that datasets like LAION were built for research yet are now the bedrock of billion-dollar companies.

  • therobot24 2 years ago

    Read the https://www.stateof.ai/ reports or even the Stanford AI Index (https://aiindex.stanford.edu/) reports from the last few years. There's plenty of reason to try to create some limits on AI. It's very likely the proposed limits won't be "random" as you say.

  • vsareto 2 years ago

    Better than companies doing their own thing and not having any restrictions or oversight at all. Remember that those companies will get rid of some portion of their workers as soon as they think an AI can do most of that work. Heck, they did layoffs and started hiring again without any advancements.

    You can't trust companies to self-regulate.

    • gustavus 2 years ago

      I'm sure China and Russia and Iran and all those other nations will 100% see the value of artificially restricting their AI efforts as well because of the risk of harm, and will be socially responsible global citizens who won't exploit this edge for their own geopolitical agenda.

      • cscurmudgeon 2 years ago

        And when these countries overtake the US, the same people who brought AI regulations to the US will cry foul when we regulate AI products from overseas.

      • vsareto 2 years ago

        I doubt any military or strategically important applications will be regulated

        • sterlind 2 years ago

          That sounds like the worst of all worlds. Killer robots and childproofed Alexas and nothing in between.

          • naveen99 2 years ago

            They'll probably use AI to build an alternative to Microsoft Windows and Active Directory to start with. Interestingly, Microsoft makes just 2% of its revenue from China, despite an 85% market share for personal-computer operating systems there. But I am sure China would also prefer not to deal with Windows telemetry.

      • jprete 2 years ago

        China’s government is extremely interested in restricting their AI efforts because they don’t want it to contradict the Communist party.

        • __loam 2 years ago

          Yeah people bringing up China as if it's some bastion of freedom in comparison to us are hilarious.

          • gustavus 2 years ago

            A Soviet man and an American man were arguing about which country was more free.

            The American man said, "I am so free I could walk up to the White House right now and scream 'I hate Ronald Reagan, he is an incompetent buffoon' without getting arrested."

            The Russian man responded, "That is not any more free than me. I too could walk up to the Kremlin and scream 'I hate Ronald Reagan, he is an incompetent buffoon' without getting in any trouble."

          • cscurmudgeon 2 years ago

            No one did that. That is a straw man. People are less free but govt and institutions aligned with govt are more free.

        • qualifiedai 2 years ago

          But they are doing it in a way that does not hinder their fundamentals: they are enforcing it at the alignment level (see Baichuan2-chat) and the application level.

          Unlike Biden's silly EO which puts restrictions on foundation model compute levels.

        • startupsfail 2 years ago

          The party is under the authority of its ‘forever’ president Xi Jinping, a good friend of another duly elected president, Putin.

          And the interest is: “whatever is on the mind of an aging, non-elected dictator and his favorites”.

      • petre 2 years ago

        If you think the US is going to restrict its AI efforts, you are mistaken. This is just a big-tech lobbying circus. They'll happily sell Uncle Sam robotic dogs with RPGs and targeting computers, autonomous drone swarms, and other AI-enabled hardware to deploy across all theatres of operations.

        The Chinese are busy studying “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era", the Russians are busy getting wasted in Ukraine and the Iranians are busy checking if women are properly wearing their hijabs and smuggling weapons into Gaza.

brotchie 2 years ago

Insane to me that, given a multi-year lead in tech, capability, and talent, the USA is shooting itself in the foot re: innovation around AI.

Talk about snatching defeat from the jaws of victory... damn

  • digging 2 years ago

    "Insane" is par for the course when talking about existential threats. I expect you don't believe there are existential threats from AI capabilities research, so anyone planning around them will look insane. To me, it's insane to say the threats can't exist.

    (I'm not endorsing this regulation. It's not at all clear that any regulation could be helpful. As you say, these regulations aren't going to slow non-US research efforts.)

WestCoastJustin 2 years ago

A great way to understand how all this works is watching All-In Summit: Bill Gurley presents 2,851 Miles[1]. Basically, regulate your competition into the ground.

[1] https://www.youtube.com/watch?v=F9cO3-MLHOM

  • AlexandrB 2 years ago

    A great way to understand how all this works is listening to Behind the Bastards: The Deadliest Workplace Disaster in U.S. History[1]. Basically, don't regulate anything until a bunch of poor people die.

    [1] https://www.iheart.com/podcast/105-behind-the-bastards-29236...

    Edit: To elaborate, it's pretty easy to cherry-pick cases of either over- or under-regulation and use them to "prove" either side of the argument. There's nothing in the Bill Gurley talk that provides any insight into whether AI should be regulated or not, because it doesn't directly engage with issues around AI specifically. Instead, it just says: "tech regulation bad".

    • cscurmudgeon 2 years ago

      Ok, can we restrict all trade and place sanctions against nations that don’t restrict AI like we do?

andrewmutz 2 years ago

> Specifically, USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.

I have been afraid of over-regulation of AI but standards and testing environments don't sound so bad.

It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.

  • _jal 2 years ago

    > It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.

    Give them a minute, an agency needs to exist before it can be captured. There hasn't been time yet for a single revolving-door hire.

MeImCounting 2 years ago

Regulatory capture in action right before our eyes. Fears of Skynet are going to lead us to a cyberpunk dystopia where only large corporations have any access to powerful AI. What a bizarre time to be alive

bediger4000 2 years ago

William Gibson had the "Turing heat" in his seminal cyberpunk novel Neuromancer. Here's the real-life beginning of just such an organization.

p0w3n3d 2 years ago

  Thou shalt not make a machine in the likeness of a human mind
I guess we're heading for spice, then.

mortallywounded 2 years ago

I'm unsure what limits will do. Selling weapons and explosives is regulated, but it doesn't stop the government from doing it. So by limiting it, we're only limiting the people?

robbywashere_ 2 years ago

Cool. Great job guys. Now do one for CONSUMER DATA PROTECTION, RIGHTS AND PRIVACY. I WILL EVEN LET YOU COME UP WITH A FUNNY LITTLE 3 LETTER AGENCY NAME FOR IT. I DO NOT CARE.

  • terminous 2 years ago

    Since 1914, there has been a US law on the books that empowers the current Federal Trade Commission (FTC) to broadly enforce against unfair or deceptive business practices:

    "(a) prevent unfair methods of competition and unfair or deceptive acts or practices in or affecting commerce;

    (b) seek monetary redress and other relief for conduct injurious to consumers;

    (c) prescribe rules defining with specificity acts or practices that are unfair or deceptive, and establishing requirements designed to prevent such acts or practices;

    (d) gather and compile information and conduct investigations relating to the organization, business, practices, and management of entities engaged in commerce; and

    (e) make reports and legislative recommendations to Congress and the public. "

    [1] https://www.ftc.gov/legal-library/browse/statutes/federal-tr...

  • smcin 2 years ago

    Data Intelligence Agency?

    Netizen Safety Agency?

    Citizens(/Consumers) Browsing in Privacy?

    to repurpose a couple.

facu17y 2 years ago

"Despite the increasing complexity and capabilities of machine learning models, they still lack what is commonly understood as "agency." They don't have desires, intentions, or the ability to form goals. They operate under a fixed set of rules or algorithms and don't "want" anything.

Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.

Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.

In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."

The above is from the horse's mouth (ChatGPT-4).

My commentary:

We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system comprising roughly 10K neurons (vs. ~100B in humans) and nothing resembling a brain. We have not yet been able to replicate the agency present in even a simple nervous system.

I would say even an amoeba has more agency than a $1B+ OpenAI model, since the amoeba can feed itself and multiply far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.

What is my point?

We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on things, maintain a healthy amount of concern, and make sure we're on top of it, but we are clearly jumping the gun, since AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to their lack of agency (as unitary agents and as a collective).

  • rockemsockem 2 years ago

    Is there a point buried in all that? You seem to be implying that there shouldn't be any regulatory body to address self-improving AI until it already exists? I don't think the government moves quickly enough for that to be okay.

    • robbywashere_ 2 years ago

      No, we/they should be focusing on other actual things that are happening here and now. Not science fiction.

lewhoo 2 years ago

Most of us care whether the drugs we're taking are properly tested and won't have adverse side effects, or at least that the adverse side effects are known so the risk/reward can be calculated. Most of us care whether the cars we drive are safe for us and don't have any hidden flaws that may fatally emerge. The same goes for food and drink, I assume. Actually, it's probably easier to find areas with beneficial regulations than areas with functionally no regulation at all. Why is it that in this case people are willing to abandon caution and just dive in without looking?

codingdave 2 years ago

I must be missing something. I don't see anything in the linked press release that supports the specific commentary here about what the government intends to do; all it seems to say is that they plan to create standards and provide testing environments. I'm sure there is more to it, I just didn't see where any of those facts were posted.

So I'm assuming some of you have seen more details - can someone share where they can be found?

  • terminous 2 years ago

    No, you're not missing anything. HNers and techies in general lean toward the right-libertarian corner, and with that comes a common belief that when the government gets involved in something, it will mess it up.

    It is against HN rules to call out a commenter for not having read the article, and early comments set the tone of the discussion. By the time a post hits the front page, the top-voted comments often include hot takes from someone who just saw the title and wrote a comment about whatever they imagined the article to be.

lordleft 2 years ago

Alarming how many people think that the development of AI should have…no government oversight? Are none of you familiar with history?

  • jadamson 2 years ago

    Useless comment without explaining what particular events in history you think are relevant. Are you familiar with how to make an argument? Read any books lately?

  • paganel 2 years ago

    One of the problems is that the (very big) companies that will benefit most from this type of measure function de facto as an extension of the government (both in the US and in the EU) when it comes to employing AI in the field. So getting back at those (very big) companies with "this use of AI is against the rules!" won't have any discernible effect, because it would be like telling the government that it is breaking its own rules, i.e. futile.

  • jjk166 2 years ago

    Why, exactly, should it have government oversight? The overwhelming majority of research, especially in computer science, has no government oversight.

  • omginternets 2 years ago

    Are you?

  • riku_iki 2 years ago

    I think most disastrous situations (wars, genocides) in human history were led by governments applying their power.

paganel 2 years ago

For those in the know, is this a bipartisan position? Any chance of seeing rules like this one "over-ruled" (don't know the exact technical term) in case of different politicians coming to power in the US?

  • barryrandall 2 years ago

    It's executive action, and can be changed on a whim (provided appropriate processes are followed).

    Legislative action would theoretically be best, but our current Congress couldn't produce a better bill than a wet Speak & Spell.

cmxch 2 years ago

That (and the rest of the regulatory package) looks like a framework to handicap AI technology when existing laws can handle the existing problems.

It can only help existing companies to stifle competition and guarantee revenue.

boredumb 2 years ago

Just the people I wanted to regulate cutting-edge niche technology.

tap-snap-or-nap 2 years ago

I believe this could be about denying technological advantages to competitors and to potential threats to their control of the markets.

  • NotSammyHagar 2 years ago

    Often such practices are about that: I already got mine, block the next company. But there's also a certain potential danger. Lower-end book-cover artists lost their jobs this year with the various picture-generation software coming out. What job category is next? It will happen. I expect my job as a software engineer to require more and more use of "programmer aids", which are just LLM code-writing tools. If I don't learn how to use them, I'll be less effective as a programmer, and at some point I'd be less employable.

vbi8iBEX 2 years ago

What a fucking joke. I am voting libertarian, if I bother to vote at all. They're against AI regulation.
