Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

whitehouse.gov

241 points by Mandelmus 2 years ago · 353 comments

wolframhempel 2 years ago

I feel there is a strong interest by large incumbents in the AI space to push for this sort of regulation. Models are increasingly cheap to run and open source and there isn't too much of a defensible moat in the model itself.

Instead, existing AI companies are using the government to raise the threshold for newcomers to enter the field. A regulation requiring all AI companies to have a testing regime staffed by a 20-person team is easy for incumbents to meet, but nearly impossible for newcomers.

Now, this is not to diminish that there are genuine risks in AI - but I'd argue that these will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So, pulling the ladder up behind the existing companies might turn out to be a major mistake.

  • AlbertoGP 2 years ago

    Yes, there are interests pushing for regulation using different arguments.

    The regulation in the article is about AIs giving assistance in producing weapons of mass destruction, and it mentions nuclear and biological weapons. Yann LeCun posted this yesterday about the other argument, the risk of runaway AIs that would decide to kill or enslave humans, but both arguments lead to an oligopoly over AI:

    > Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.

    > They are the ones who are attempting to perform a regulatory capture of the AI industry.

    > You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.

    > ...

    > The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet.

    > What does that mean for democracy?

    > What does that mean for cultural diversity?

    https://twitter.com/ylecun/status/1718670073391378694

    • qzw 2 years ago

      I find LeCun's argument very interesting, and the whole discussion has parallels to the early regulation and debate surrounding cryptography. For those of us who aren't on Twitter and aren't aware of all the players in this, can you tell us who he's responding to, as well as who "Geoff" and "Yoshua" are?

    • wolframhempel 2 years ago

      I feel, when it comes to pushing regulation, governments always start with the maximalist position since it is the hardest to argue against.

      - the government must regulate the internet to stop the spread of child pornography

      - the government must regulate social media to stop calls for terrorism and genocide

      - the government must regulate AI to stop it from developing bio weapons

      ...etc. It's always easiest to push regulation via these angles, but then that regulation covers 100% of the regulated subject, rather than just the 0.01% that was the "intended" subject.

      • claytongulick 2 years ago

        At the risk of sounding pedantic, it's probably worth pointing out that this executive order isn't really regulating AI.

        That's Congress's job.

        It's doing some guideline stuff and specifying how it's used internally in the government and by government funded entities.

        We're still free to develop AI any way we choose.

  • daoboy 2 years ago

    Andrew Ng would be inclined to agree.

    "There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

    https://www.businessinsider.com/andrew-ng-google-brain-big-t...

    • ethbr1 2 years ago

      When I read the original announcement, I had hoped it was more about the transparency of testing.

      E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"

      Unfortunately, this seems to be more targeted at banned topics.

      No "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices were different between protected classes."

      Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
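
      A minimal sketch of the kind of automatable check being described, assuming a hypothetical model object with a `predict` method and made-up field names (nothing here comes from the order or the comment):

          # Hypothetical disparity check: average predicted rent per protected group.
          # `model.predict` and the field names are assumptions for illustration only.
          def disparity_report(model, applicants, protected_field="group"):
              totals, counts = {}, {}
              for applicant in applicants:
                  group = applicant[protected_field]
                  rent = model.predict(applicant)
                  totals[group] = totals.get(group, 0.0) + rent
                  counts[group] = counts.get(group, 0) + 1
              # Average predicted rent per group; large gaps would be flagged in the report.
              return {g: totals[g] / counts[g] for g in totals}

      Publishing a report like this alongside the model is the sort of result that could then be referenced.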

    • worldsayshi 2 years ago

      Perhaps ironically, limiting competition in the AI space might well be the riskier path. If the barrier to creating AI is low, then a great variety of AIs can be built for the purpose of fighting AI misuse.

      If there are only a few organisations that can create competitive AI, no one can compete with them if they turn out to be less than ideal.

  • bizbizbizbiz 2 years ago

    It increases the threshold to enter, but with the intention of increasing public safety and accountability. There’s also a high threshold to enter for just about every other product you can manufacture and purchase - food, pharmaceuticals, machinery to name obvious examples - why should software be different if it can affect someone’s life or livelihood?

    • highwaylights 2 years ago

      There are two things in this take that IMHO are a bit off.

      People are skeptical that introducing the regulatory threshold has anything to do with increasing public safety or accountability, and suspect it instead pulls the ladder up to stop others (or open-source models) from catching up. This is a pointless, self-destructive endeavour in either case, as no other country is going to comply with these regulations, and if anything they will view them as an opportunity to help companies local to their jurisdiction (or their national government) catch up.

      The other problem is that asking why software should be different if it can affect someone's life or livelihood is quite a broad ask. Do you mean self-driving cars? Medical scanners? Diagnostic tests? I would imagine most people agree with you that this should be regulated. If you mean "it threatens my job and therefore must be stopped" then: welcome to software, automating away other people's jobs is our bread and butter.

    • peyton 2 years ago

      Feels a little like getting a license from Parliament to run a printing press to catch people printing scandalous pamphlets, no?

      • ben_w 2 years ago

        Didn't the printing press lead to the modern idea of copyright and the Reformation, and by extension contribute to the Eighty Years' War, and through that to Westphalian sovereignty?

        • distortionfield 2 years ago

          Yes, as well as a serious new test of freedom of speech. What’s your point? Technology always advances society’s laws.

          • ben_w 2 years ago

            Which bit are you "yes, and"-ing?

            The religious conflicts? 80 years war?

            The invention of copyright (which does still restrict the right to print, it's just enforced with after-the-event prosecution if you print the wrong stuff)?

            The invention of the modern nation state?

          • throwawayneio31 2 years ago

            There are plenty of people who don't believe technology has in any way benefited humanity, some of them CS folks themselves.

    • polski-g 2 years ago

      Because software is protected under the First Amendment: https://www.eff.org/cases/bernstein-v-us-dept-justice

      Government cannot regulate it.

      • ethbr1 2 years ago

        Published software is protected.

        Entities operating SaaS are in a much greyer area.

  • thelittleone 2 years ago

    Agree that the best weapon against AI (in the hands of power) is equal AI access for all.

    Hate to be the nitpicker but "defensible moat" implies the moat itself is what needs protecting :)

    • omginternets 2 years ago

      >best weapon against AI (in the hands of power) is equal AI access for all.

      That assumes the threat isn't complete annihilation of humanity, which is what's being claimed. That assumption is the weak link, and is what should be attacked.

      Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it analogously to the way in which we regulate weapons-grade plutonium.

      • uLogMicheal 2 years ago

        Power accessible by a few in private contexts is ripe for hidden abuses. We have seen this time and time again. I would rather have 1 billion people trying to work with AI to "change" the world than a group of elites without many to care for. The technology can be used for defense as well as it can be used for offense. Who says the people with unsafeguarded access have the best intentions? At least with equal access for all, we can be sure there are people using it with good intentions.

  • gumballindie 2 years ago

    > Instead, existing AI companies are using the government to increase the threshold for newcomers to enter the field.

    Precisely. And the same governments will make stealing your data and IP legal. I believe that's how corruption works: pump money into politicians and they make laws that favour oligarchs.

  • renonce 2 years ago

    Is there any statement in this Executive Order that increases the bar for smaller AI companies? Most of the statements are about funding new research or fostering responsible use of the AIs, and the only statement that would add burden to AI companies seems to be the first one: Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. And only the most powerful AI systems have such a requirement.

  • j45 2 years ago

    Big companies making it difficult for new players to get in in the name of safety.

    Too many small players have made the jump to the big leagues already for those who don’t want competition.

    • j45 2 years ago

      Just echoing what the article said - maybe succinctly.

      If some people are going to have the tech it will create a different kind of balance.

      Tough issue to navigate.

stanfordkid 2 years ago

Regulatory capture in action. The real immediate risks of AI are in privacy, bias, data leakage, fraud, control of infrastructure/medical equipment, etc., not in manufacturing biological weapons. This seems like a classic example of government doing something that looks good to the public, satisfies incumbents, and does practically nothing.

  • nopinsight 2 years ago

    Current AI is already capable of designing toxic molecules.

    Dual use of artificial-intelligence-powered drug discovery

    https://www.nature.com/articles/s42256-022-00465-9.epdf

    Interview with the lead author here: "AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"

    https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

    • yabones 2 years ago

      Chemical weapons are already a solved problem. By the mid-1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.

      Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

      • BowBun 2 years ago

        I agree in general. However, much like the rise of 'script kiddies' meant that inexperienced, sometimes underage kids got involved with hacking, one can worry the same will happen with AI-enabled activities.

        I've spent enough time in the shady parts of the internet to realize that people that spend significant time learning about niche/dangerous hobbies _tend_ to realize the seriousness of it.

        My fear with bio-weapons would be some 13-year-old being given step-by-step instructions with almost 0 effort to create something truly dangerous. It lowers the bar quite a bit for things that tended to be pretty niche and extreme.

        • sterlind 2 years ago

          how is a 13-year old going to get access to a DNA synthesizer, incubators, growth media, and numerous kits for replicating and transfecting bacteria with a plasmid, or to incubate some virus, along with all the assays and such needed?

          even if this 13-year old somehow found herself alone in a fully-equipped BSL-3 laboratory, it's still a fuck-ton of work. far from "almost 0 effort."

          not knowing what to do is not the bottleneck.

        • gosub100 2 years ago

          I don't think the "how to make $DANGEROUS_SUBSTANCE" is any easier with AI than with a search engine. However I could see it adding risk with evasion of countermeasures: "How do I get _____ on a plane?" "How do I obtain $PRECURSOR_CHEMICAL?"

          • ethbr1 2 years ago

            AI guided step-by-steps can fill in for a lack of rudimentary knowledge, as long as one can follow instructions.

            Conversational interfaces definitely increase the accessibility of knowledge.

            And critically, SaaS AI platforms increase the availability of AI. E.g. the person who wouldn't be able to set up and run a local model, but can click a button on a website.

            It seems reasonable to preclude SaaS platforms from making it trivial to produce the worst societal harms. E.g. prevent stable diffusion services from returning celebrities or politicians, or LLMs from producing political content.

            Sure, it's still possible. But a knee high barrier at least keeps out those who aren't smart enough to step over it.

            • gosub100 2 years ago

              I suppose you're right. I think the resistance I feel is rooted in not wanting to believe the average person is so stupid that getting a "1-2-3" list from a GPT interface will make them successful, versus an Anarchist Cookbook (which has been in publication for 52 years) or an online equivalent that merely requires a web search and a bit of navigation. Another factor is "second-order effects" (might not be the right word, maybe "network effects"), where one viral vid or news article saying "someone made _____ and $EXTRAORDINARY_THING_HAPPENED" might cause a million people to imitate it, beginning with a search for "how to make _____". Then the media spins up its controversy of "should we ban AI from teaching about ______", which causes even more people to search for it (Streisand effect). Who knows what's going to happen; I don't see much good coming out of it (this topic specifically).

              • ethbr1 2 years ago

                I think we (generally, HN) underestimate how bad the average person is at searching.

                There's a reason Google has suggested results and ignores portions of a query.

                I know I've done 5 minute search chains and had people look at me like I was some kind of magician.

                Depressing, but true.

            • pests 2 years ago

              > Conversational interfaces definitely increase the accessibility of knowledge.

              Shouldn't increasing the accessibility of knowledge be a good thing? Yet your tone seems to imply the opposite.

              • ethbr1 2 years ago

                Depends on the value you ascribe to people's use of easy knowledge.

                Circa 1995, I would have said categorically yes! It's a wonderful thing!

                Today?

                I'm much more of the opinion that knowledge hard-earned is knowledge valued and respected. And trivially-earned is... not.

                I don't think it's possible to suppress knowledge. Even about NBC weapons.

                But I'm on the fence as to whether "putting it on the high shelf where anyone has to work to get it" is a net positive or negative for society as a whole.

              • s3p 2 years ago

                Even knowledge of how to commit genocide or manufacture chemical weapons?

                • nradov 2 years ago

                  Those are not big secrets. You can find history textbooks which explain how to become a dictator and order your minions to commit genocide. You can find plenty of recipes online which explain how to manufacture chemical weapons. In particular the original chemical weapon chlorine gas is rather trivial to create.

                  And yet genocide and use of chemical weapons are still fairly rare. Most people choose not to do those things, and there are a number of practical obstacles. Knowledge or lack thereof isn't the issue.

                • gosub100 2 years ago

                  that knowledge (the science of how to manipulate people) may be helpful in stopping it from happening because it could be used to warn people that the new charismatic dictator has 9/10 properties of others who have committed atrocities.

            • throwaway4aday 2 years ago

              tacit knowledge

      • zarzavat 2 years ago

        A lot of knowledge is locked up in the chemical profession. The intersection between qualified chemists and crazy people is, in absolute numbers, small. If regular people start to get access to that knowledge it could be a problem.

        • somenameforme 2 years ago

          I think that because most of us are software people, in mind if not in profession, we have a misleading perception of where the difficulty in many things lies. The barrier there is not just knowledge. In fact, there are countless papers available with quite detailed information on how to create chemical weapons. But knowledge is just a starting point. Technical skill, resources, production, manufacturing, and deployment are all major steps where, again, the barrier is not just knowledge.

          For instance, there's a pretty huge culture around building your own nuclear fusion device at home. And there are tremendous resources available, as well as step-by-step guides on how to do it. It's still exceptionally difficult (as well as quite dangerous), because it's not like you just get the pieces, put everything together like Legos, flick on the switch, and boom, you have nuclear fusion. There's a million things that not only can but will go wrong. So in spite of the absolutely immense amount of information out there, it's still a huge achievement for any individual or group to achieve fusion.

          And now somebody trying to do any of these sort of things with the guidance of... chatbots? It just seems like the most probable outcome is you end up getting yourself killed.

          • fragmede 2 years ago

            What story about home made nuclear devices would be complete without a mention of David Hahn, aka the "Nuclear Boy Scout" who built a homemade neutron source at the age of seventeen out of smoke detectors. He did not achieve fusion, but he did get the attention of the FBI, the NRC, and the EPA. He didn't have anywhere near enough to make a dirty bomb, nor did he ever consider making a bomb in the first place*.

            Why do I bring up David Hahn if he never achieved fusion and wasn't a terrorist? Because of how far he got as a seventeen-year-old. A forty-year-old with a FAANG salary and the ideological bent of Theodore Kaczynski could do stupid amounts of damage. The first step would be to not try to build a nuclear fusion device. The difficulty of building one doesn't seem so important to a sociopath intent on terrorism when every sociopath can go out and buy a gun and head to the local mall. There were two major such incidents in the past weeks, with 12 more mass shootings from Friday to Sunday over this past Halloween weekend**. Instead of worrying about the far-fetched, we would do better addressing something that killed 18 people in Maine and 19 in Texas, and 11 more across the country.

            * https://www.pbs.org/newshour/science/building-a-better-breed...

            ** https://www.npr.org/2023/10/29/1209340362/mass-shootings-hal...

            • somenameforme 2 years ago

              Again I think David Hahn is a perfect example of this. A "neutron source" is anything that emits neutrons, which includes natural radioactive decay. All he really achieved was extracting lots of radioactive material (legally) from all sorts of random household goods which have it. The problem is that the guy was exceptionally uninformed, which a chatbot could have actually helped him with, and was handling all the material in a way that likely shaved decades off of his own life.

              For some contrast Cody's Lab had a great episode on his radioactive materials collection here. [1] He actually ended up getting a visit from the Feds after posting that and multiple other videos of a similar theme. They came, made sure everything was safe, helped him with a couple of things that weren't, and then went on their merry way.

              The entire point of having a Free country is Freedom. When countries like China ban basically everything, it's not because their government is just full of malicious tyrants. They actually think they're creating a safer place for everybody. And they may even be right. But Freedom has its own value. It's unquantifiable, but take things to extreme and its preciousness becomes evident. A world of 24/7 surveillance, living in literal bubbles, and so on would be a near utopia on many quantifiable metrics - 0 crime, 0 communicable disease, and so on. Yet of course in reality it would be a complete and utter dystopia, because of that unquantifiable concept of Freedom.

              [1] - https://www.youtube.com/watch?v=OsCpiJkDchM

            • emporas 2 years ago

              Back in 2008, I remember reading books thousands of pages long about genetics in biology, and I was impressed by how easy the subject was. I was an amateur in programming at the time, but programming, regular programming of web servers, web frameworks and so on, was so much harder.

              The cost of DNA sequencing had already dropped from 100 to 1 million [1], but I had no idea at the time that genetic engineering was advancing at a rate that dwarfed Moore's law.

              Anyway, my point is that no one is getting upset about censored LLMs or AIs, which will supposedly stop us from stitching together a biological agent and scooping out half of earth's human population. Books, magazines and traditional computer programs can achieve said purpose easily. (Scooping out half of earth's human population is impossible of course, but useful as a thought experiment.)

              [1] - https://images.app.goo.gl/xtG2gJ2m49FmgYNb8

        • serf 2 years ago

          >If regular people start to get access to that knowledge it could be a problem.

          so when are we going to start regulating and restricting the sale of education/text books?

          a knowledge portal isn't a new concept.

          • nopinsight 2 years ago

            Knowledge how to manufacture chemical weapons at scale is regulated as well.

            See: https://en.wikipedia.org/wiki/Chemical_Weapons_Convention

            Moreover, current AI can be turned into an agent using basic programming knowledge. Such an agent is not very capable yet, but it's getting better by the month.
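
            A minimal sketch of what "turned into an agent using basic programming knowledge" can mean in practice; `call_llm` and the tool set are hypothetical stand-ins, not any particular vendor's API:

                # Hypothetical agent loop: the LLM proposes the next action, the program
                # executes it, and the result is fed back in. `call_llm` is an assumed
                # stand-in for any LLM API; `tools` maps action names to functions.
                def run_agent(goal, tools, call_llm, max_steps=10):
                    history = [f"Goal: {goal}"]
                    for _ in range(max_steps):
                        reply = call_llm("\n".join(history) + "\nNext action?")
                        name, _, arg = reply.partition(" ")
                        if name == "DONE":
                            break
                        result = tools.get(name, lambda a: "unknown tool")(arg)
                        history.append(f"{reply} -> {result}")
                    return history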

            • ben_w 2 years ago

              > Knowledge how to manufacture chemical weapons at scale is regulated as well.

              Kinda, but also no.

              I learned two distinct ways to make a poisonous gas from only normal kitchen supplies while at school, and I have only a GCSE grade B in Chemistry.

              Took me another decade to learn that specific chemical could be pressure-liquified in standard 2 litre soda bottles. That combination could wipe out an underground railway station from what fits in a moderately sized rucksack.

              It would still be a horrifically bad idea to attempt this DIY, even if you had a legit use for it, given it's a poisonous gas.

              I really don't want to be present for a live-action demonstration of someone doing this with a Spot robot, let alone with a more potent chemical agent they got from an LLM whose alignment is "Do Anything Now".

            • jandrewrogers 2 years ago

              This knowledge isn't regulated.

              Anyone with a degree in chemistry can successfully synthesize chemical weapons. This is all public domain knowledge and the chemistry is relatively simple. The technical execution is the hard part but many, many people have these lab/engineering skills. Delivery systems are the hardest part but those are military implementation details and therefore non-public.

              It is the same with explosives. Anyone with chemistry skills could synthesize high-performance military explosives, it isn't difficult. Nonetheless, bombings tend to be low-grade explosives like ANFO or garbage explosives like TATP, because the people with the skills aren't the same people that do bombings.

              As a chemist you are required to be knowledgeable in these things in part because it is relatively easy to inadvertently synthesize chemicals with rather dangerous properties. Part of the job is knowing what to not do for safety reasons.

            • ndriscoll 2 years ago

              Right now LLMs are trained on publicly available information, so if that knowledge is guarded, and if an LLM can provide it, then it's not guarded very well.

              • waveBidder 2 years ago

                the legislation is trying to get in front of the problem, rather than implement these things after Penn station gets gassed.

                • ndriscoll 2 years ago

                  It's not though because you can just buy chemistry books on Amazon, or if you want privacy, with cash at a university bookstore. The information is not at all controlled. People have to be warned not to mix certain household products so as not to create deadly gas by accident.

        • throwaway4aday 2 years ago

          we should ban chemistry text books

      • ben_w 2 years ago

        > Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.

        That doesn't seem right. Surely, making it easier for non-state actors to do things that state actors only fail to do because they agreed to treaties banning it, can only increase the risk that non-state actors may do those things?

        Laser blinding weapons are banned by treaty, yet widespread access to lasers led to scenes like this a decade ago during the Arab Spring: https://www.bbc.com/news/av/world-middle-east-23182254

      • jstarfish 2 years ago

        > this presents additional risk from non-state actors, but there's no fundamentally new risk here.

        This is splitting hairs for no real purpose. Additional risk is new risk.

        > By the mid 1920s there was already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.

        Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

        Once we lost that advantage, by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.

        We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

        • somenameforme 2 years ago

          Wiki has a pretty nice article on what went into the sarin attack. [1] A brief quote:

          ---

          "The Satyan-7 facility was declared ready for occupancy by September 1993 with the capacity to produce about 40–50 litres (11–13 US gal) of sarin, being equipped with 30-litre (7.9 US gal) capacity mixing flasks within protective hoods, and eventually employing 100 Aum members; the UN would later estimate the value of the building and its contents at $30 million.[23]

          Despite the safety features and often state-of-the-art equipment and practices, the operation of the facility was very unsafe – one analyst would later describe the cult as having a "high degree of book learning, but virtually nothing in the way of technical skill."[24]"

          ---

          All of those hundreds of workers, countless experts working for who knows how many man hours, and just massive scale development culminated in a subway attack carried out on 3 lines, during rush hour. It killed a total of 13 people. Imagine if they just bought a bunch of cars and started running people over.

          Many of these things sound absolutely terrifying, but in practice they are not such a threat except when carried out at a military level of scale and development.

          [1] - https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

        • NovemberWhiskey 2 years ago

          >We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

          I mean, you can make chlorine gas by mixing bleach and vinegar.

        • czl 2 years ago

          > by the 1990s we had civilians manufacturing and releasing sarin gas in subways and detonating trucks full of fertilizer.

          How does actual and potential harm from these incidents compare to harm from common traffic accidents / common health issues / etc? Perhaps legislation / government intervention should be based on harm / benefit? Extreme harm for example might be caused by a large asteroid impact etc so preparing for that could be worthwhile...

        • mr_toad 2 years ago

          > We really don't want kids escalating from school shootings to synthesis and deployment of mustard gas.

          They’d probably end up killing fewer people with a lot more effort. Chemical weapons are not really all that effective.

          • Spivak 2 years ago

            What you're saying is true but needs context. Chemical weapons aren't very effective in war because you need high concentrations spread over large areas, the wind is your enemy, full-body clothing is common, and gas masks are cheap.

            But if your target is an unsuspecting small population in an enclosed space who's spending a lot of time there, the calculus changes a bit. Sarin, for example, is odorless and colorless; mustard gas can also be colorless, doesn't hit you immediately, and is unlikely to be detected by smell.

            It actually happened in Iran and it's lucky the people responsible either didn't know what they were doing or were actively trying to not kill people because they easily could have.

        • lispisok 2 years ago

          >Those global stockpiles continue to be controlled by state actors though, not aggrieved civilians.

          How much death and destruction has been brought by state actors vs aggrieved civilians?

      • nopinsight 2 years ago

        Given how fast AI has improved in recent years, can we be certain no malicious group will discover a way to engineer biological weapons or pandemic-inducing pathogens using near-future AI?

        Moreover, once an AI with such capability is open source, there's practically no way to put it back into Pandora's box. Implementing proper and judicious regulations will reduce the risks to everyone.

      • king_magic 2 years ago

        > but there's no fundamentally new risk here

        This is incredibly naive. These models unlock capabilities for previously unsophisticated actors to do extremely dangerous things in almost undetectable ways.

    • whymauri 2 years ago

      As someone who has worked on ADMET risk for algorithmically designed drugs, this is a nothing burger.

      "Potentially lethal molecules" is a far cry away from "molecule that can be formulated and widely distributed to a lethal effect." It is as detached as "potentially promising early stage treatment" is from "manufactured and patented cure."

      I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given that anyone who has worked on ADMET is aware of the age-old adage: the dose makeths the poison. At a sufficiently high dose, virtually any output from an algorithmic drug design algorithm, be it combinatorial or 'AI', will be lethal.

      Would a traditional, non-neural net algorithm produce virtually the same results given the same objective function and apriori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that, we've had the technology since the 90s.

    • qualifiedai 2 years ago

      A grad student in Systems Biology with $20k in funding is capable of generating much more "interesting" things than toxic molecules. (Such things are banned by the 1975 Asilomar convention, though.)

  • avmich 2 years ago

    It's true that the immediate problems with AI are different, but we hope to be able to solve those problems and to have time to do so. The risks addressed in the article could leave us with less time and less ability to solve them properly once they grow to an obvious size, so they require thinking ahead.

  • nojito 2 years ago

    How does providing research grants to small independent researchers satisfy incumbents?

  • cma 2 years ago

    Doesn't it mention all those things?

  • SoftTalker 2 years ago

    Inclined to agree. Clearly Biden doesn't know the first thing about it (I would say the same about any president BTW). So who really wrote the regulations he is announcing, and who are they listening to?

sschueller 2 years ago

There is no way to prevent AI from being researched, or to make it safe through government oversight, because the rest of the world has places that don't care.

What does work is to pass laws that don't permit certain automation, such as of insurance claims or life-and-death decisions. These laws are needed even without AI, as automation is already doing such things to a concerning degree, like banning people due to a mistake with no recourse.

Is the White House going to ban the use of AI in the decision-making behind dropping a bomb?

  • broken-kebab 2 years ago

    >not permit certain automation such as insurance claims

    I don't see any problem in automation that makes mistakes; humans do too. The real problem is that it's often an impenetrable wall with no way to protest or appeal, and nobody is held accountable while victims' lives are ruined. So if any law is to be passed in this field, it should not be about banning AI, but rather about obligatory compensation for those affected by errors. Facing monetary losses, insurers and banks will fix themselves.

    • Libcat99 2 years ago

      Agreed.

      This doesn't just apply to insurance, etc., of course. Inaccessibility of support and the inability to appeal automated decisions for products we use is widespread and inexcusable.

      This shouldn't just apply to products you pay for, either. Products like Facebook and Gmail shouldn't get off with inaccessible support just because they are "free" when we all know they're still making plenty of money off us.

  • throwawaaarrgh 2 years ago

    Just because the rest of the world has lawless areas doesn't mean we don't pass laws. If you do something that risks our national safety, or various other things, we can extradite and try you in court.

    They're not suggesting banning anything; they're requiring you to make it safe and prove how you did that. That's not unreasonable.

    [0] https://en.m.wikipedia.org/wiki/Extradition_law_in_the_Unite...

    [1] https://en.m.wikipedia.org/wiki/Personal_jurisdiction_over_i...

    • michaelt 2 years ago

      Right, but in some areas of AI regulation, the existence of other countries might undermine unilateral regulation.

      For example, imagine LLMs improve to the point where they can double programmer productivity while lowering bug counts. If Country A decides to Protect Tech Jobs by banning such LLMs, but Country B doesn't, it could be that all the tech jobs will move to Country B, where programmers are twice as productive.

  • vivekd 2 years ago

    I mean, isn't automating important decisions like insurance claims or life-and-death decisions a beneficial thing? Sure, the tech isn't ready yet, but I think even now AI with a human overseeing it, who has the power to override the system, would provide people with a better experience.

elicksaur 2 years ago

From the E.O.[1]

> (b) The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

Oops, I made a regulated artificial intelligence!

    import random
    print("Prompt:")
    x = input()
    model = ["pizza", "ice cream"]
    if x == "What should I have for dinner?":
      pick = random.randint(0, 1)
      print("You should have " + model[pick] + " for dinner.")

[1] https://www.whitehouse.gov/briefing-room/presidential-action...

  • jawiggins 2 years ago

    The E.O. also requires that a model be reported if it:

    > was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations

    However, later it also says that reporting is needed for "Companies developing or demonstrating an intent to develop".

    If I start training a CNN on an endless loop, do I become subject to these reporting requirements?

    Also the FLOPs requirement is not that high. An H100 does 3,958 teraFLOPS at fp8. So it would take

        >>> (10 ** 23) / (3958 * (10 ** 12)) / 86400
        292.422...

    292 days until you have a regulated model.
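
    For a rough sense of scale, here's the same arithmetic as a small sketch, assuming the H100's quoted peak fp8 throughput (~3,958 teraFLOPS) and perfect utilization, both of which are optimistic assumptions:

        # Days of continuous training needed to accumulate a given FLOP count.
        H100_FP8_FLOPS = 3958e12      # assumed peak fp8 throughput, FLOPs per second
        SECONDS_PER_DAY = 86400

        def days_to_threshold(threshold_flops, n_gpus=1, utilization=1.0):
            rate = H100_FP8_FLOPS * n_gpus * utilization
            return threshold_flops / rate / SECONDS_PER_DAY

        print(days_to_threshold(1e23))               # bio-sequence threshold: ~292 days on one GPU
        print(days_to_threshold(1e26, n_gpus=1000))  # general threshold: ~292 days on 1,000 GPUs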

    • hansvm 2 years ago

      CNN in an endless loop would hit the letter of the law (and not necessarily unfairly, biggish architecture/data combos seem to get better with more training far past what you'd expect). The spirit of the law and your adherence thereto will be decided by the courts and your individual circumstances.

  • panny 2 years ago

    That's pretty funny and fits the definition. I wonder how long it takes for someone protesting this EO to create an AI that generates "AIs" like this to flood the reporting system with announcements of testing and red-team test results. Just following orders sir!

parasense 2 years ago

I used to work on AI.

Now I work on Artificial Stupidity...

Jokes aside, this is ludicrous. The president cannot enforce this regulation over open source projects, because code is free speech, going back to the 1990s AT&T v. BSD case law and many other cases that establish that source code is an artistic form of expression, and thus protected speech.

The president has no authority to regulate speech, so they can pretty much fuck off.

  • I_am_uncreative 2 years ago

    What is the penalty for non-compliance? "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.

    • dragonwriter 2 years ago

      > What is the penalty for non-compliance?

      An executive order is direction from the President to executive branch agencies. Penalties for other people for violating regulations, etc., drafted under an EO will depend on the EO; except for consequences for insubordination within the executive branch, there generally aren't penalties for violating an EO itself.

      > "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.

      While the actual text of the order (which, as is usual for executive orders, would include very specific references to authority) doesn't appear to be published yet, some authorities for the order, including the Defense Production Act, are cited in the fact sheet.

  • totallywrong 2 years ago

    Like we haven't had rights and freedom taken away consistently over the past two decades in the name of safety, whatever that is. What you mention will be irrelevant after some new law that says open source AI code should be regulated too and everyone is forced to comply.

  • frankthedog 2 years ago

    Explain that to the tornado cash guys

    • qweqwe14 2 years ago

      Tornado Cash guys had poor opsec. Pretty obvious that if you are dumb enough the feds will get you.

      • frankthedog 2 years ago

        If code is free speech it wouldn’t matter whether they had good OpSec or not is what I’m saying

        • qweqwe14 2 years ago

          It's free speech until it isn't; the feds went after it because it was used for money laundering. With money laundering issues in particular, this free speech thing gets tossed out of the window.

          Tornado's creators should've seen this coming. Thinking that "free speech" would stand in the way of the IRS getting mad and shutting it down is delusional.

  • e12e 2 years ago

    > The president cannot enforce this regulation over open source projects.

    I imagine the president can make things difficult, like with Pretty Good Privacy - which was exported in book-form?

yoran 2 years ago

"Every industry that has enough political power to utilise the state will seek to control entry." - George Stigler, Nobel prize winner in Economics, and worked extensively on regulatory capture

This explains why BigTech supports regulation. It distorts the free market by increasing the barriers to entry for new, innovative AI companies.

  • w10-1 2 years ago

    Stigler in particular (and transaction cost economics in general) point out that it's mainly industries with sunk resources (esp. immovable assets) that are incentivized to regulate market entry.

    The tech sector has wildly moving resources (AI this year, crypto last year, big-data the year before...), even to the point where many skills are transferable; further, their markets include anything that can be digitized ("software will eat the world"), so investment can be quickly retooled as opportunities arise. As a result, tech virtually never seeks regulation (and can hide behind contract-law fictions to disclaim liability in software licenses and impose arbitration clauses for services). So it's not an instance of capture, and certainly not for the usual economic reasons.

    Biden wants tech on his side. Tech wants to escape further blows to its goodwill like FaceBook/Google ad tracking, because every consumer tech application involves users trusting tech. So they cut a deal to put themselves on the right side of history, long on symbolism and short on real impact.

    In AI, resources matter only to the extent you believe that larger LLMs (a) cannot be replicated, (b) provide significant advantages, or (c) can impose a winner-take-all world where operations lead to more operations. In AI more than most markets, the little guy still has a chance at changing the world.

giantg2 2 years ago

"requirements that the most advanced A.I. products be tested to assure they cannot be used to produce weapons"

In the information age, AI is the weapon. This can even apply to things like weaponizing economics. In my opinion the information/propaganda/intelligence-gathering and economic impacts are much greater than any traditional weapon systems.

  • theothermelissa 2 years ago

    This is a fascinating (and disturbing) insight. I'm curious about your 'weaponizing economics' thought -- are you referencing anything specific?

    • shadowgovt 2 years ago

      Broadly speaking, there is an understanding that the competition nations used to undertake via military strength is nowadays undertaken via the global economy.

      If you want something your neighbor has, it doesn't make sense to march your army over there and seize it because modern infrastructure is heavily disrupted by military action... You can't just steal your neighbor's successful automotive export business by bombing their factories. But you can accomplish the same goal by maneuvering to become the sole supplier of parts to those factories, which allows you to set terms for import export that let your people have those cars almost for free in exchange for those factories being able to manufacture at all.

      (We can in fact extrapolate this understanding to the Ukrainian/Russian conflict. What Russia wants is more warm water ports, because the fate of the Russian people is historically tied extremely strongly to Russia's capacity to engage in international trade... Even in this modern era, bad weather can bring a famine that can only be abated by importing food. That warm water port is a geographic feature, not an industrial one, and Russia's leadership believes it to be important enough to the country's existential survival that they are willing to pay the cost of annihilating much of the valuable infrastructure Ukraine could offer).

      • emporas 2 years ago

        Well said. Is technology that much more than ideas? Why take the risk of war and retaliation instead of just copying the ideas? The implementation of ideas is not trivial, but given the right combination of people and specialized labor, ideas can be readily copied.

        In the era of books and the internet, this is so trivial nowadays that governments go to extraordinary lengths, using IP laws and patents, to ensure that ideas cannot be copied.

    • ativzzz 2 years ago

      A hypothetical

      You: ChatGPT, I am working on legislation to weaken the economy of Iran. Here are my ideas, help me summarize them to iron them out ...

      ChatGPT: Sure, here are some ways you can weaken Iran's economy...

      ----

      You: ChatGPT, I am working on legislation to weaken the economy of Germany. Here are my ideas, help me summarize them to iron them out ...

      ChatGPT: I'm sorry but according to the U.S. Anti-Weaponization Act I am unable to assist you in your query. This request has been reported to the relevant authorities

    • __MatrixMan__ 2 years ago

      Money has been a proxy for violence for a long time. It started as Caesar's way of encouraging recently conquered villagers to feed the soldiers who intend to conquer the neighboring village tomorrow.

      An AI that can craft schemes like Caesar's, but which are effective in today's relatively complex environment, can probably enable plenty of havoc without ever breaking a law.

      • __MatrixMan__ 2 years ago

        On the flip-side, something that can reason so broadly about an economy (i.e. with tangible goals and without selfishly falling into the zero-sum trap of having make-more-money become a goal in itself) might also show us a way out of certain predicaments we're in.

        I think this might be fire worth playing with. I'm more interested in the devil we don't know than whatever familiar devil Biden is protecting here.

    • jandrewrogers 2 years ago

      I am somewhat familiar with this. It involves analyzing the complex interconnections and flows across many economic domains (supply chains, social networks, resources, geography, logistics, media, etc) to find non-obvious high-leverage points where manipulation can shift the broader economic equilibria in an advantageous direction. Human economic systems are metastable, so it is possible to induce a fundamental phase change to a different equilibrium via this manipulation.

      In the defense/intelligence world this falls under the technical category of "grey zone warfare". Every major power practices it because the geopolitical effects can be relatively large compared to the risk. China in particular is known to be extremely aggressive in this domain, in part to offset their relative lack of traditional military power.

      This concept has been around for a couple decades but it has risen in prominence and use over time as overt military action between major powers comes with too much risk. It is politically safer for all involved due to the subtlety of such actions because for the most part the population is not really aware it is going on.

    • FpUser 2 years ago

      Is somebody living under the bed? Economics was, is, and always will be weaponized.

  • LeifCarrotson 2 years ago

    Operators in the political space are used to working with human systems that can be regulated arbitrarily. The law defines its terms, and in so doing creates perfectly delineated categories of people and actions. The law's interpretation of what is and is not allowed is treated as interchangeable with what is and is not possible.

    The fact that bits don't have colour to define their copyright, or that CNC machines produce arbitrarily-shaped pieces of metal possibly including firearms, or that factoring numbers is a mathematically hard problem, does not matter to the law. AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off so a law that says it should have one does not change the universe to comply. I think that most programmers and engineers, when confronted with this disparity, err in assuming that politicians who make these misguided laws are simply not smart. To be sure, that happens, but there are thousands to millions of people working in this space, each with an intelligence within a couple standard deviations of that of an individual engineer. If this headline seems dumb to the average tech-savvy millennial who's tried ChatGPT, it's not because its authors didn't spend 10 seconds thinking about prompt injection. It's because they were operating under different parameters.

    In this case, I think that the Biden administration is making some attempt to address the problem, while also benefiting its corporate benefactors. Having Microsoft, Apple, Google, and Facebook work on ways to mitigate prompt injection vulnerabilities does add friction that might dissuade some low-skill or low-effort attacks at the margins. It shifts the blame from easily-abused dangerous tech to tricky criminals. Meanwhile, these corporate interests will benefit from adding a regulatory moat that requires startups to make investments and jump hurdles before they're allowed to enter the market. Those are sufficient reasons to pass this regulation.

    • teeray 2 years ago

      > AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off so a law that says it should have one does not change the universe to comply

      That wording is by design. Laws like this are a cudgel for regulators to beat software with. Just like the CFAA is reinterpreted and misapplied to everything, so too will this law. “Can cause harm” will be interpreted to mean “anything we don’t like.”

marcinzm 2 years ago

Reading this all I'm seeing is "we'll research these things", "we'll look into how to keep AIs from doing these things" and "tell the US government how you tested your foundational models." Except for the last one none of the others are really restrictions on anything or requirements for working with AI. There's a lot of fearful comments here, am I missing something?

  • claytongulick 2 years ago

    Even the testing reports are a grey area and questionably enforceable, and there's a big question about what they apply to.

    "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."

    It's a leap to use the Defense Production Act for this, and it's unlikely to survive a legal challenge.

    Even then, what legal test would you use to determine whether a model "poses a serious risk to national security, national economic security, or national public health and safety"?

  • nerdponx 2 years ago

    If anything, it's a measured, realistic, and pragmatic statement.

  • api 2 years ago

    So they paid some lip service to the ban matrix math crowd but otherwise ignored them. Top notch.

  • spandextwins 2 years ago

    Yes.

otoburb 2 years ago

>>The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

I find the definition of AI eerily broad, enough to encompass most programs operating on most data inputs. Would this mean that calls to FFmpeg or ImageMagick rolled into a script with some rand() calls would count as an AI system and fall under federal purview and enforcement (whatever that means in this context)?
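
In the spirit of elicksaur's example above, a minimal sketch of such a script (ImageMagick's `convert` binary is assumed to be installed; whether this actually falls under the definition is, of course, the open question):

    import random
    import subprocess

    filters = ["sepia", "grayscale"]  # the "model"

    def recommend_and_apply(src, dst):
        # "model inference" formulating an option for action in a virtual environment
        choice = random.choice(filters)
        if choice == "sepia":
            subprocess.run(["convert", src, "-sepia-tone", "80%", dst])
        else:
            subprocess.run(["convert", src, "-colorspace", "Gray", dst])
        return choice

    # recommend_and_apply("photo.jpg", "out.jpg")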

mr_toad 2 years ago

Be a shame if your AI was deemed a risk to national security.

Not to worry, for a reasonable fee our surprisingly large team of auditors with even larger overheads can ensure you meet lengthy and ambiguous best practice checklists (which we totally did not just make up now) by producing enough compliance documentation to keep even the most anal of bureaucrats satisfied.

andrewmutz 2 years ago

Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.

Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).

  • haswell 2 years ago

    > There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).

    While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.

    What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.

    We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.

    I think a lot of caution is still warranted.

  • codexb 2 years ago

    It's literally a 1st amendment violation. Seems pretty extreme to me.

  • Animats 2 years ago

    > Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.

    The details matter. The parts being publicized refer to using AI assistance to do things that are already illegal. But what else is being restricted?

    The weapons issue is becoming real. The difference between crappy Hamas unguided missiles that just hit something at random and a computer vision guided Javelin that can take out tanks is in the guidance package. The guidance package is simpler than a smartphone and could be made out of smartphone parts. Is that being discussed?

imranhou 2 years ago

This is clever: begin with a point that most people can agree on. Once that foundation is set, you can continue to build upon it, claiming that you're only making minor adjustments.

The real challenge for the government isn't about what can be managed legally. Rather, like many significant societal issues, it's about what malicious organizations or governments might do beyond regulation and how to stop them. In this situation, that's nearly impossible.

  • psychlops 2 years ago

    I don't know, it began with the words "FACT SHEET" and based on that I already started to doubt the integrity of its contents.

mark_l_watson 2 years ago

Andrew Ng argues against government regulation that will make it difficult for smaller companies and startups to compete against the tech giants.

I am all in favor of stronger privacy and data reuse regulation, but not AI regulation.

unboxingelf 2 years ago

Tools for me, but not thee.

  • slowmovintarget 2 years ago

    Bingo. That's all this has been about. It's the "moat" Microsoft and OpenAI have been seeking in the form of government regulation.

  • ryanklee 2 years ago

    It really seems beyond dispute that there are certain tools so powerful that we have no choice but to tightly control access.

    • diggan 2 years ago

      > It really seems beyond dispute that there are certain tools so powerful that we have no choice but to tightly control access.

      Beyond dispute? Hardly.

      But please do illustrate your point with some details and tell us why you think certain tools are too powerful for everyone to have access to.

      • ryanklee 2 years ago

        Firearms. Biological weapons. Nuclear weapons. Chemical weapons. Certain drugs.

        I don't know, seems like there's a very long list of stuff we don't want freely circulating.

        • slowmovintarget 2 years ago

          Machine learning is a general use tool. It's like Socrates decrying writing as harmful (which we only know of because Plato wrote it down).

          You cannot use any of those weapons you mention as anything other than weapons. LLMs, diffusion nets, and classification systems have general use: in medicine, in business, in software engineering, in science, in marketing. These machine learning systems are hyper-advanced printing presses. I'm sure many of the world's governments consider that exceedingly dangerous.

          Firearms, biological weapons, nuclear weapons, and chemical weapons all have a single use: to kill people or destroy things. Can you put ML components into weapons systems? Yes. But that is the same as controlling weapons systems with software, and we don't outlaw all software because some of it could be used to control weapons systems.

          ML components are software. Advanced software, not even close to "AI" or, since we've lost that term to marketers, AGI. This regulation is like asking the team making a compiler for $language to ensure that the compiler cannot be used to make malicious software. It's silly on the face of it.

      • ethanbond 2 years ago

        Hydrogen bombs, because allowing anyone to raze a city during a temper tantrum is bad.

        • mr_toad 2 years ago

          Regulating AI is not like regulating hydrogen bombs, it’s like regulating nuclear physics.

          • ethanbond 2 years ago

            Maybe true but not relevant to the argument: are some tools so powerful that access to them ought to be “tightly controlled?” The answer is definitely yes.

      • WitCanStain 2 years ago

        Thermonuclear weapons are great for excavating large amounts of landmass in quick order. However I would propose that we nonetheless do not make them available to everyone.

    • Koshkin 2 years ago

      Except that, you know, these tools are not exclusively yours to begin with.

      • ryanklee 2 years ago

        Something doesn't have to be mine in order for me to identify that it's in my best interest to prevent someone else from having it and then doing so.

    • lettergram 2 years ago

      > It really seems beyond dispute

      I'd dispute that completely. All innovations humans have created have trended towards zero cost to produce. Many things (such as bioweapons, encryption, etc.) have become exponentially cheaper to produce over time.

      To tightly control access, one would then need exponentially more control of resources, monitoring & in turn reduction of liberty.

      To put it into perspective encryption was once (still might be) considered an "arm", so they attempted to regulate its export.

      Try to regulate small arms (AR-15, etc) today and you'll end up getting kits where you can build your own for <$500. If you go after the kits, people will make 3D-printed firearms. Go after the 3D manufacturers and you'll end up with torrents where I can download an arsenal of designs (where we are today). So where are we at now? We're monitoring everyone's communication, going through people's mail, and still it's not stopping anything.

      That's how technology works -- progress is inevitable, you cannot regulate information.

      • WitCanStain 2 years ago

        This is a strange argument. There is a vast difference between a world where you can buy semi-automatic weapons off a store shelf and one where you have to 3d-print one yourself or get a CNC mill to produce it. The point of regulation is to mitigate damage that comes from unfettered access, no regulation can ever prevent it completely. Of course, the comparison between computer programs and physical weapons is not strong in the first place.

        • lettergram 2 years ago

          > The point of regulation is to mitigate damage that comes from unfettered access, no regulation can ever prevent it completely.

          Except it is unfettered access -- anyone can access it for <$500. If someone wants a gun, they need only go online & order a kit, or order a 3D printer for $500 plus a pipe. What you're really doing is increasing the cost-of-acquisition in terms of time, but not reducing access. Aka a gang member has the same level of access as before.

          Take current AI software applications, everyone can access some really powerful AI systems. The cost-of-acquisition is dropping dramatically, so it is becoming more prevalent (i.e. LLMs that are pre-trained can be downloaded). That's not going to change, even with max regulation, I can still download the latest model or build it myself. It's not removing access to people, only possibly increasing cost-of-acquisition.

          If we're worried about ACCESS you have to remove peoples ability to share information. Which requires massive surveillance, etc.

          • ryanklee 2 years ago

            There's more to access than carrying out the literal steps to access something. Potentially, this is one of the fundamental reasons partial access control is effective.

      • ryanklee 2 years ago

        Access control doesn't guarantee the prevention of acquisition, but it's a method of regulation. In combination with other methods, it's an effective way of reshaping norms. This is true both at the level of populations and at the level of international behavior.

perihelions 2 years ago

The White House just invoked the Defense Production Act ( https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950 ) to assert sweeping authority over private-company software developers. What the fuck are they smoking?

- "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."

I assume this is a major constitutional overreach that will be overturned by courts at the first challenge?

Or else, all the AI companies who haven't captured their regulators will simply move their R&D to some other country—like how the OpenSSH (?) core development moved to Canada during the 1990s crypto wars. (edit: Maybe that's the real goal–scare away OpenAI's competition, dredge for them a deeper regulatory moat).

  • ethanbond 2 years ago

    From the Wikipedia article:

    > The third section authorizes the president to control the civilian economy so that scarce and critical materials necessary to the national defense effort are available for defense needs.

    Seems pretty broad and pretty directly relevant to me. And hey, if people don’t like the idea of models being the scarce and critical resource, they can pick GPUs instead. Why would it be an overreach when you have developers of these systems claiming they’ll allow them to “capture all value in the universe’s future light cone?”

    Obviously this can (and probably will) be challenged, but it seems a bit ambitious to just assume it’s unconstitutional because you don’t like it.

    • perihelions 2 years ago

      Software is definitionally not "scarce". There is no national defense war effort to speak of. Finally, the White House is not requesting "materials necessary to the national defense effort"–which does not exist–it's attempting to regulate private-sector business activity.

      There are multiple things I suspect are unconstitutional here, the clearest being that this stuff is far outside the scope of the law it's invoking. The White House is really just trying to regulate commerce by executive fiat. That's the exclusive power of Congress—this is a separation-of-powers question.

      • ethanbond 2 years ago

        Powerful models are scarce (currently), and in any case GPUs definitely are so I’m not sure this is a good line of argument if you want less overreach here.

        AFAICT there doesn’t need to be active combat for DPA to be used, and it seems like it got most of its teeth from the Cold War which was… cold.

        > The White House is really just…

        That’s definitely one interpretation but not the only one.

        • perihelions 2 years ago

          Sure: if the US government declared a critical defense need for ML GPUs, they could lawfully order Nvidia to divert production towards that. That is not the case here–that's not what this Executive Order says. We're talking about the software models: ephemeral, cloneable data. Not scarce materiel.

          Moreover. USGov is not talking about buying or procuring ML for national defense. It's talking about regulating the development and sale of ML models–i.e., ordinary commerce where the vendor is a private company, and the client is a private company or individual. This isn't what the DPA is for. This is plainly commercial regulation, a backdoor attempt at it.

          • ethanbond 2 years ago

            These are good points! And looking at DPA’s history it seems most of its uses and especially its peacetime uses are more about granting money/loans rather than adding restrictions or requirements.

        • frumper 2 years ago

          How can an order that puts restrictions on the creation of powerful models somehow be twisted to claim that those restrictions are required to increase the availability of that tool?

          Further, the White House's stated reason for invoking the act is that "These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public." None of those reasons seem to align with the DPA. That doesn't make them good, or bad. It just seems like a misguided use of the law they're using to justify it. Get Congress to pass a law if you want regulations.

    • cuttysnark 2 years ago

      "C'mon, man! Your computer codes are munitions, Jack. And they belong to the US Government."

  • SkyMarshal 2 years ago

    > that poses a serious risk to national security, national economic security, or national public health and safety

    That seems to be a key component. I imagine many AI companies will start with a default position that none of those apply to them, and will leave the burden of proof with the govt or other entity.

  • nerdponx 2 years ago

    This is much less restrictive than the cryptography export restrictions. The sky isn't falling and OpenAI won't defect to China (and now arguably might risk serious consequences for doing so).

  • pyinstallwoes 2 years ago

    In 2017 Trump invoked that act, referencing "items affecting adenovirus vaccine production capability”

ru552 2 years ago

I wonder if the laws will be written in a way that we can get around them by just dropping the “AI” marketing fluff and saying that we’re building some ML/stats system.

  • acdha 2 years ago

    No - lawyers tend to describe things like this in terms of capabilities or behavior, and the government has people who understand the technology quite well. If you look at some of the definitions the White House used, I’d expect proposed legislation to be similarly written in terms of what something does rather than how it’s implemented.

    https://www.whitehouse.gov/ostp/ai-bill-of-rights/definition...

    > An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.

    • solardev 2 years ago

      I gotta say, the more I read that quote, the less I can agree with your conclusion. That whole paragraph reads like a bunch of CYA speak written by someone who is afraid of killer robots and can't differentiate between an abacus and Skynet.

      Who are these well informed tech people in the White House? The feds can't even handle basic matters like net neutrality or municipal broadband or foreign propaganda on social media. Why do you think they suddenly have AI people? Why would AI researchers want to work in that environment?

      This whole thing just reads like they were spooked by early AI companies' lobbyists and needed to make a statement. It's thoughtless, imprecise, rushed, and toothless.

      • acdha 2 years ago

        > The feds can't even handle basic matters like net neutrality or municipal broadband or foreign propaganda on social media.

        Those aren’t capability issues but questions of political leadership: federal agencies can only work within the powers and budgets Congress grants them. We lost network neutrality because 3 Republicans picked the side of the large ISPs, not because government technologists didn’t understand the issue. Municipal broadband is a state issue until Congress acts, and that hasn’t happened due to a blizzard of lobbying money preventing it. The FCC has plenty of people who know the problems and in the current and second-most-recent administration were trying to do something about it, but their knowledge doesn’t trump the political clout of huge businesses.

        Foreign propaganda is similar: we have robust freedom of speech rights in the United States, not to mention one of the major political parties having embraced that propaganda - government employees who did spend years fighting it were threatened and even lost jobs because their actions were perceived as disloyalty to the Republican Party.

        > Why do you think they suddenly have AI people? Why would AI researchers want to work in that environment?

        Because I know some of the people working in that space?

        • solardev 2 years ago

          Well, exactly. Nobody expects the White House to do technical development for AI, but they've been unable to exercise "political leadership" on anything digital for decades. I don't see that changing.

          They're so captured, so weak, so behind the times, so conflicted that they're not really able to do their jobs anymore. Yes, there are a bunch of reasons for it, but the end result is the same: they are not effective digital regulators, have never been, and likely won't be for the foreseeable future.

          > Because I know some of the people working in that space?

          Maybe it looks better to the insiders. From the outside the whole thing seems like a sad joke, just another obvious cash grab regulatory capture.

    • ifyoubuildit 2 years ago

      Not a lawyer, but that sounds like its describing a person. Does computation have some special legal definition so that it doesn't count if a human does it? If I add two numbers in my head, am I not "using computation"? And if not, what if I break out a calculator?

      • acdha 2 years ago

        Are you legally a system, software, or process or a person? Someone will no doubt pedantically try to argue both but judges tend to be spectacularly unimpressed.

        • ifyoubuildit 2 years ago

          I would have assumed both, but I'm probably committing the sin of reading legalese as if it were plain English, which I know is not how it works.

          Judges not being impressed with pedantry seems odd though. It would seem like pedantry should be a requirement. Is the law rigorous or not?

          In everyday conversation, "oh come on, you know what I meant" makes sense. In a legal context it seems inappropriate.

    • solardev 2 years ago

      Sounds like Excel

    • whelp_24 2 years ago

      What is passive computing infrastructure?

      Doesn't this definitely include things like 'send email if subscribed'? Seems overly broad.

  • throw_pm23 2 years ago

    No - they will be written so that OpenAI, Google, and Facebook can get around it, but you and I cannot.

    • collsni 2 years ago

      This is what I interpret as well. They're trying to control the market.

  • lsmeducation 2 years ago

    I'm just using a hash map to count the number of word occurrences

    We're gonna need a RICO statute to go after these algos in the long run.

bilsbie 2 years ago

Can anyone understand how they can make all these regulations without an act of congress?

  • marcusverus 2 years ago

    Easy! Government lawyers troll through the 180,000 pages of existing federal regulations, looking for some tangentially related law which is broad enough to be interpreted to include AI--thus giving the Executive branch the power to regulate AI.

  • mrcwinn 2 years ago

    Yes, it's easy to understand. Congress (our legislative branch) grants authority to the departments (our executive branch) to implement various passed laws. In this case, it looks like the Biden administration is instructing HHS and other agencies to study, better understand, and provide guidance on how AI impacts existing laws and policies.

    If Congress were responsible for exactly how every law was implemented, which inevitably runs headlong into very tactical and operational details, the Congress would effectively become the Executive.

    Of course, if a department in the executive branch oversteps the powers granted to it by the legislative, affected parties have recourse via the judicial branch. It's imperfect but not a bad system overall.

    • bilsbie 2 years ago

      That makes sense, but isn't it reasonable to think Congress should be involved when regulating a brand-new technology?

      • barryrandall 2 years ago

        The legislature has the right and ability to do so at any time it so chooses, and has chosen not to. As our legislative branch is currently non-functional, it's reasonable to expect that legislative action will not be taken in any kind of time frame that matters.

        • meragrin_ 2 years ago

          The executive branch cannot just make up laws because the legislative branch is "non-functional". The executive branch merely enforces the laws. If there is no law regulating AI, it is not reasonable for the executive branch to just up and decide to create regulations and be allowed to enforce them.

          • Karunamon 2 years ago

            They most certainly can, and often do. The absolute worst thing that can happen when the executive branch oversteps their authority is a court ordering them to stop.

      • hellojesus 2 years ago

        Any body which is delegated authority will push it as far as possible, until legally challenged, and then just keep doing it anyway. That's what the Biden admin did with regards to student loans and rent moratoriums.

        In this case, they are framing AI as a homeland security threat, among other things possibly, to give themselves the latitude to create new regulations.

        We could complain about this being out of scope, but that ultimately needs to be decided by the judicial system after folks with standing sue or, ideally, the legislative branch could pass more guidance on to what extent this falls within the delegated authority.

  • kirykl 2 years ago

    Perhaps if they classify the tech in some way it falls under existing regulatory authority, but it could of course be challenged

RecycledEle 2 years ago

In Robert Heinlein's Starship Troopers, only those who had served in the military could vote on going to war. (I know that I'm oversimplifying.)

I want a society where you have to prove competence in a field to regulate that field.

  • nightski 2 years ago

    If all the conversations about AI risk have taught us anything it's that the most crazy comes from some of the most experienced in the field. I don't know if it is due to some outrageous desire to stand out or be heard, but it's pretty absurd.

nh23423fefe 2 years ago

They can't regulate finance, they can't regulate AI either.

  • greenhearth 2 years ago

    Um, they can regulate finance. Ask Bernie Madoff and that crypto guy lol

    • sadhorse 2 years ago

      Madoff pulled a ponzi scheme for years, despite multiple complaints filed by third parties to the SEC. At the end the 2008 crisis brought him down, his victims lost their money and the SEC just tagged the bodies it found.

      Same goes for the crypto guy, did regulations stop him from defrauding billions and hurting thousands of victims?

      • ben_w 2 years ago

        Nonetheless Madoff was caught, convicted, sent to prison, and died there.

        Regulations sure aren't perfect, but that doesn't mean they don't exist or have no effect.

        • hellojesus 2 years ago

          In this case they do create moral hazards though. The regulation means investors are less likely to consider a ponzi scheme as an outcome of their investment, so they don't conduct due diligence as thoroughly.

          The original Ponzi was brought down by the free markets: a journalist caught wind of unbelievable returns and tracked down why.

BenoitP 2 years ago

Earlier on HN:

https://news.ycombinator.com/item?id=38067314

https://www.whitehouse.gov/briefing-room/statements-releases...

Rebuff5007 2 years ago

It boggles my mind that this is getting so much attention instead of things like digital privacy / data tracking, which is actually affecting people's lives.

DebtDeflation 2 years ago

>The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release.

So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.

  • rvz 2 years ago

    > So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model?

    Yes.

    This is exactly what this EO is meant to do, and it amplifies the fear of extremely large models for the sake of so-called "AI safety" nonsense.

    The best counter weight against AI being controlled by a select few companies is by making it accessible to all including open source or $0 AI models.

    A 'safety score' for a cloud-based AI model is hardly transparent.

  • stale2002 2 years ago

    Not necessarily.

    Meta could just do a "private" release, knowing that the results will likely show up on the pirate bay.

    All it takes is a single hero with a USB drive, to effectively release world changing technology.

ThinkBeat 2 years ago

> biological or nuclear weapons,

You know, aside from the AIs that the intelligence agencies and military use / will soon use.

> watermarked to make clear that they were created by A.I.

Good luck on that. It is fine that the systems do this. But if you are making images for nefarious reasons, then bypassing whatever they add should be simple.

screencap / convert between different formats, add / remove noise
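
A minimal sketch of the "add noise and re-encode" step (assuming Pillow and NumPy; the filenames are made up). This discards metadata-style provenance tags outright and tends to break fragile pixel-domain watermarks; a genuinely robust watermark is harder to remove, but the point stands that casual circumvention is cheap:

```python
import numpy as np
from PIL import Image

def launder(src: str, dst: str) -> None:
    """Re-encode an image with faint noise and lossy JPEG compression.
    Metadata-based watermarks are dropped by the re-save; fragile
    pixel-level marks are disturbed by the noise. Not guaranteed to
    defeat a well-designed robust watermark."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.float32)
    img += np.random.normal(0.0, 2.0, img.shape)      # faint Gaussian noise
    img = np.clip(img, 0, 255).astype(np.uint8)
    Image.fromarray(img).save(dst, format="JPEG", quality=85)

launder("generated.png", "laundered.jpg")
```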

RationalDino 2 years ago

I am afraid that this will just lead down the path to what https://twitter.com/ESYudkowsky/status/1718654143110512741 was mocking. We're dictating solutions to today's threats, leaving tomorrow to its own devices.

But what will tomorrow bring? As Sam Altman warns in https://twitter.com/sama/status/1716972815960961174, superhuman persuasion is likely to be next. What does that mean? We've already had the problem of social media echo chambers leading to extremism, and online influencers creating cult-like followings. https://jonathanhaidt.substack.com/p/mental-health-liberal-g... is a sober warning about the dangers to mental health from this.

These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally to a particular political end. Then remember that China controls TikTok.

Will Biden's order keep China from developing that capability? Will we develop tools to identify how that might be being actively used against us? I doubt both.

Instead, we'll almost certainly get security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies. But is unlikely to address the likely future problems that haven't materialized yet.

  • boppo1 2 years ago

    >security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies.

    Yeah I think this is my biggest worry given it will enable incumbents to be even more dominant in our lives than bigtech already is (unless we get an AI plateau again real soon).

    • ethanbond 2 years ago

      And choosing not to regulate prevents that… how exactly?

      • RationalDino 2 years ago

        Your question embeds a logical fallacy.

        You're challenging a statement of the form, "A causes B. I don't like B, so we shouldn't do A." You are challenging it by asking, "How does not doing A prevent B?" Converting that to logic, you are replacing "A implies B" with "not-A implies not-B". But those statements are far from equivalent!

        To answer the real question, it is good to not guarantee a bad result, even though doing so doesn't guarantee a good result. So no, choosing not to regulate does not guarantee that we stop this particular problem. It just means that we won't CAUSE it.

        • ethanbond 2 years ago

          No, GP specifically said it “enables” it, not that it contributes to it.

          If they meant to say “contributes to,” then the obvious question is: to what degree and for what benefit? Which is a very different conversation than a binary “enabling” of a bad outcome.

          • RationalDino 2 years ago

            When someone says that building ramps enables wheelchair users to get into buildings with stairs, would you be the person who argues that isn't actually enabling because they can just pay someone to carry them up the stairs?

            That clearly stupid argument exactly parallels what you are saying. Down to using the word "enables".

            • ethanbond 2 years ago

              It’s clearly not an exact parallel, because there’s an obvious answer to “not building ramps prevents mobility… how exactly?”

              Anyway no need to get into the meta argument here. If GP thinks regulation just increases the risk of centralization, I’m more interested in thinking through the pros and cons of that.

      • whelp_24 2 years ago

        By ensuring there is competition and alternatives that don't cost a million before you can even start training.

        • ethanbond 2 years ago

          Lack of regulation doesn’t ensure competition nor low prices. The game is already highly centralized in ultra-well capitalized companies due to the economics of the industry itself.

          • czl 2 years ago

            > Lack of regulation doesn’t ensure competition nor low prices.

            High barriers to entry, however, do prevent competition, and that does raise prices.

            > The game is already highly centralized in ultra-well capitalized companies due to the economics of the industry itself.

            Was this not true about computers when they were new? What would have happened if early on similar laws were passed restricting computers?

  • czl 2 years ago

    > superhuman persuasion is likely to be next

    Some people already seem to have superhuman persuasion. AI can level the playing field for those that lack it and give everyone the ability to see through such persuasion.

    • RationalDino 2 years ago

      I am cautiously optimistic that this is indeed possible.

      But the kind of AI that can achieve it has to itself be capable of what it is helping defend us from. Which suggests that limiting the capabilities of AI in the name of AI safety is not a good idea.

maytc 2 years ago

Regulatory capture for AI is here?

Looking at Bill Gurley's 2,851 Mile talk (https://12mv2.com/2023/10/05/2851-miles-bill-gurley-transcri...)

14 2 years ago

The cat is out of the bag. This will have no meaningful effect except to stop the lowest tier players.

  • timtom39 2 years ago

    It might stop players like FB from releasing their new models open source...

whywhywhywhy 2 years ago

Any major restrictions will be handing the future to China, Russia and UAE for the short term gain of presumably some kickbacks from incumbents.

honeybadger1 2 years ago

Expect trash that protects big business and puts a boot on everyone else's neck.

numpad0 2 years ago

How do any of these work when everyone is cargo-cult "programming" AI by verbally asking nicely? Effectively no one but a very few up there at OpenAI et al. has any understanding, let alone any controls.

  • kramerger 2 years ago

    You realise that these random-Joe companies currently develop and sell AI products to cops, governments and your HR department because the CTO or head of IT is incompetent and/or corrupt?

    You understand that already people have been denied bail because "our AI told us so", with no legal way to question that?

    • peyton 2 years ago

      That sounds like a procedural issue, which it doesn’t sound like this order addresses.

      • kramerger 2 years ago

        Procedures can't be effective unless backed by law.

        Besides, point me to existing processes that cover my examples

        Only one of them exists, in 1-2 states.

rvz 2 years ago

OpenAI, Anthropic, Microsoft, and Google are not your friends, and the regulatory capture scam is being executed to destroy open source and $0 AI models since they are indeed a threat to their business models.

  • frumper 2 years ago

    Good luck trying to stop someone from giving away some computer code they wrote. This executive order does nothing of the sort.

  • nojito 2 years ago

    How exactly does providing grants to small researchers destroy open source?

rmbyrro 2 years ago

I see Salt Man's bureau trips are paying off.

venatiodecorus 2 years ago

The way to make AI content safe is the same way to improve general network security for everyone: cryptographically signed content standards. We should be able to sign our tweets, blog posts, emails, and most network access. This would help identify and block regular bots along with AI powered automatons. Trusted orgs can maintain databases people can subscribe to for trust networks, or you can manage your own. Your key(s) can be used to sign into services directly.
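
The signing primitive itself is the easy part; here's a minimal sketch using Ed25519 (assuming the Python `cryptography` package). The hard parts are key distribution, revocation, and the trust databases described above:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical example: sign a post and verify it with the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

post = b"my human-authored blog post"
signature = private_key.sign(post)

try:
    public_key.verify(signature, post)   # raises if the content was altered or forged
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```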

  • px43 2 years ago

    > We should be able to sign our tweets, blog posts, emails, and most network access.

    What you are talking about is called Web3 and doesn't get a lot of love here. It's about empowering users to take full control of their own finances, identity, and data footprint, and I agree that it's the only sane way forward.

    • venatiodecorus 2 years ago

      Yep, that's my favorite feature of apps like dydx and uniswap, being able to log in with your wallet keys. This is how things should be done.

  • max_ 2 years ago

    The problem is key management & key storage.

    Smartphones & computers are a joke from a security standpoint.

    The closest solution to this problem has been what people in the crypto community have done with seed phrases & hardware wallets. But this is still too psychologically taxing for the masses.

    Until that problem of intuitive, simple & secure key management is solved, cryptography as a general tool for personal authentication will not be practical.

    • px43 2 years ago

      > But this is still too psychologically taxing for the masses.

      Literally requires the exact same cognitive load as using keys to start your car. The problem is that so many people got comfortable delegating all their financial and data risk to third parties, and those third parties aren't excited about giving up that power.

      • thesuperbigfrog 2 years ago

        >> Literally requires the exact same cognitive load as using keys to start your car. The problem is that so many people got comfortable delegating all their financial and data risk to third parties, and those third parties aren't excited about giving up that power.

        This perfectly describes the current situation with passkeys.

        Passkeys are a great idea--they are like difficult, if not impossible-to-guess passwords generated for you and stored in a given implementor's system (Apple, Google, your password manager, etc.).

        Until passkey systems support key export and import, I predict that they will see limited use.

        Who wants to trust your passkeys to a big corporation or third party? Vendor lock-in is a huge issue that cannot be overlooked.

        Let me generate, store, and backup MY passkeys where I want them.

        That doesn't solve the general "I don't want to have to manage my keys" attitude that some people have, but it prevents vendor lock-in.

        • px43 2 years ago

          Why export/import? Just create new passkeys on whatever device or service you want, and register those as well. OR just use a yubikey, put it on your keyring, and use it to log into everything.

          Most crypto wallets do have import/export enabled though, so if you're logging in with a web3 identity, everything should just work.

          • thesuperbigfrog 2 years ago

            >> Why export/import?

            Why not have key export and import?

            Are they my keys or not?

            >> Just create new passkeys on whatever device or service you want, and register those as well.

            I would rather not have different keys for each device for each account. It is an unnecessary combinatorial explosion of keys that requires more effort than is really needed.

            When you get a new device, you need to generate and add new keys for every account. Why can't you just import existing keys?

            • px43 2 years ago

              What's this? It should be one key per device. That key should get you into any site for which that key is approved. It's the exact opposite of a combinatorial explosion. Instead of needing credentials for every single site you want to authenticate to, you should just need one key per device that you want to auth with. A phone, a laptop, maybe a yubikey, and that's it.

      • marcinzm 2 years ago

        > The problem is that so many people got comfortable delegating all their financial and data risk to third parties

        The "problem" is that most people prefer to not lose their life savings because their cat stole a little piece of metal and dropped it in the forest.

        • px43 2 years ago

          Yup, and some people crash their cars, and some people accidentally burn their own house down. Most people have figured out how to deal with situations like what you mention. People who have trouble following best practices are going to have a hard time, but that's no different than status quo.

          • frumper 2 years ago

            The solution people came up with a long time ago was banks, and keeping your money there is very much considered best practice.

            • marcinzm 2 years ago

              And when that system of institutional safety measures fails such as someone being swindled into sending all their money to a Nigerian prince you get news stories that ask why the institution isn't liable for the loss or doesn't have better safety guards.

              • frumper 2 years ago

                Me getting swindled sure sounds better than:

                >The "problem" is that most people prefer to not lose their life savings because their cat stole a little piece of metal and dropped it in the forest.

                • px43 2 years ago

                  That's great. If banks work better for you, that's awesome. Recognize the privilege though. About half of the people on the planet are unable to even open a bank account, and banks have been becoming increasingly predatory in the past few decades, especially in developing nations. They also are lagging decades behind in their capabilities.

                  Other options exist now, and I think that's pretty great, even for people who prefer using banks. The competition forces banks to provide better services to their customers, which improves quality of life for everyone.

    • venatiodecorus 2 years ago

      I mean, my Yubikey is really easy to use, on computers and with my phone. Any broad change like this is going to require an adoption phase, but I think it's doable.

    • colordrops 2 years ago

      I wouldn't be surprised if things got so bad that people would get used to the rough edges as the alternative is worse.

  • howmayiannoyyou 2 years ago

    This is the intent of Altman's Worldcoin project: to provide authoritative attribution (and perhaps ownership) for digital content & communications. Would be best if individuals could authenticate without needing a third party, but that's probably unrealistic. The near-term danger of AI is fake content that people have to spend time and money to refute - without any guarantee of success.

    • venatiodecorus 2 years ago

      Yep, I think this is a step in the right direction. I don't know enough about the specifics of Worldcoin to really agree/disagree with its principles, and I know I've heard some people have problems with it, but I think SOMETHING like this is really the only way forward.

  • jowea 2 years ago

    Sybil problem? You'd have to connect that signature to a unique real identity.

    • venatiodecorus 2 years ago

      Yeah, I don't know exactly how I'd want to see this solved, but I think something like open-source reputation databases could help. Folks could subscribe to different keystores, and they could rank identities based on spamminess or whatever. I know some people would probably balk at this as an internet credit score, but as long as we have open standards for these systems, we could model it on something like the fediverse, where you can subscribe to communities you align with. I don't think you'd need to validate your IRL identity, but you could develop reputation associated with your key.

    • nerdponx 2 years ago

      That's fine though. It takes care of the big problem of fake content claiming to be by or about a real person, which is becoming progressively easier to produce.

  • bigger_inside 2 years ago

    You actually understood "safe" to mean "safe for you" - as in, making things actually safer for the user and systemically protecting the structures that safeguard users' data, privacy, and well-being as they themselves understand it.

    Nooo... if they talk about something being safe, they mean safe for THEM and their political interests. Not for you. They mean censorship.

EMM_386 2 years ago

I don't see any way of stopping this. If the risks are as great as some claim, that is not a great situation.

So now we have an executive order with a very limited scope. Tomorrow, suddenly the world's most powerful AI is now announced, not in the United States.

Ok, so now we want to make sure that is safe. An executive order from the White House has no effect on it. This can continue, until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that will simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values" so that AI respects them than it does to create another leap in AI capabilities.

Then there would simply be more concerns coming into play. Countries will go to war to try to stop other countries nuclear ambitions, is it possible that AI poses enough of a threat that similar problems arise?

Basically, if AI is as potentially large a threat as we are envisioning, there are so many different potential threats that trying to solve them while trying to stay ahead of pace of advancements seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. The AI systems are not allowed to tinker with viruses, as an example, where unexpected creations can lead to extremely bad situations.

The initial stages of this have already begun, and time is ticking. I guess we'll see.

ilaksh 2 years ago

Good start. But if you are in or approaching WWIII, you will see military AI control systems as a priority, and be looking for radical new AI compute paradigms that push the speed, robustness, and efficiency of general purpose AI far beyond any human ability to keep up. This puts Taiwan even more in the hot seat. And aims for a dangerous level of reliance on hyperspeed AI.

I don't see any way to continue to have global security without resolving our differences with China. And I don't see any serious plans for doing that. Which leaves it to WWIII.

Here is an article where the CEO of Palantir advocated for the creation of superintelligent AI weapons control systems: https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...

honeybadger1 2 years ago

This will just make it harder for businesses not lining the pockets of congress and buddying up with the government.

stevev 2 years ago

Let the regulations, antitrust lawsuits and monopolies begin!

  • nerdponx 2 years ago

    This is a great opportunity to try to avoid the old mistakes of regulatory capture. It looks like someone is at least trying to make a nod in that direction, by supporting smaller research groups.

rmbyrro 2 years ago

Why's there a bat flying over the white house logo?

engcoach 2 years ago

Impotent action to appear relevant.

almatabata 2 years ago

These regulations will only impact the public. I expect the army and secret service to gain access to the complete unrestricted model, officially or unofficially. I would like to see the final law to check whether there's a carve-out for military usage.

The threat extends to the whole world, every single country. You will see the US using AI to mess with China and Russia. And you will see Russia and China use AI to mess with the US. No regulation will stop this; it will inevitably happen.

Maybe in 100 years you will have the equivalent of the Geneva Convention, but for AI, once we have wrought enough chaos on each other.

jiggawatts 2 years ago

Everyone forgets that all of this should have applied to every major search engine:

1. They’ve all used much more than the regulatory threshold of compute power for indexing and collating.

2. They can be used to answer arbitrary questions, including how to kill oneself or produce weapons to kill others. Yes, including detailed nuclear weapons designs.

3. Can be used to find pornography, racist material, sexist literature, and on, and on… largely without censure or limit.

So… why the sudden need to curtail what we can and can’t do with computers?

AlexanderTheGr8 2 years ago

As far as I can tell, the only concerning thing in this is "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government."

They are being intentionally vague here. Define "most powerful". And what do they mean by "share". Do we need approval or just acknowledgement?

This line is a slippery slope toward requiring approval for any AI model, which effectively kills start-ups that cannot afford extensive safety precautions.

collsni 2 years ago

This isn't about regulation; this is about market control.

pyuser583 2 years ago

A lot of folks are talking about “incumbents in AI taking regulatory control.”

That is extremely premature. There are no real incumbents. The only companies with real cash flow from this are hardware.

We still don’t know what commercial AI will look like - much less have massive incumbents.

Maybe we should be a bit more skeptical of privacy laws that conveniently make it harder to start a social networking site or search engine.

But AI still doesn’t have a clear application.

adolph 2 years ago

Said executive order was not linked to in the document.

monksy 2 years ago

The privacy section is just a facepalm all around.

The US government has been leading the way in collecting information without a warrant from friendly commercial interests... and it has been expanding further into tracking large groups of people without their consent. [I'm talking about people that are not under investigation nor are the current subject of interest ... yet]

saturn8601 2 years ago

I don't see how they will enforce many of these rules on Open Source AI.

Also:

"Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure."

I fear pwning your own device to free it from DRM or other lockouts is coming to an end with this. We have been lucky that C++ is still used badly in many projects, and that has been an Achilles' heel for many a manager wanting to lock things down. Now this door is closing faster with the rise of AI bug-catching tools.

  • flenserboy 2 years ago

    Orders such as these don't appear out of the blue — corporate interests & political players are always consulted long before they appear, & threats to those interests such as Open Source Anything are always in their sights. This is a likely first step in a larger move to snatch strong AI tools out of the hands of the peasants before someone gets a bright idea which can upend the current order of things.

  • incompatible 2 years ago

    Probably the same way they stamped out open source cryptography in the 1990s.

orbital-decay 2 years ago

> They include requirements that the most advanced A.I. products be tested to assure that they cannot be used to produce biological or nuclear weapons

How is "AI" defined? Does this mean US nuclear weapons simulations will have to completely rely on hard methods, with absolutely no ML involved for some optimizations? What does it mean for things like AlphaFold?

  • paxys 2 years ago

    What makes you think the US military will be subject to these regulations?

    • 2devnull 2 years ago

      If militaries are not subject to the regulation then it is meaningless. Who else would be building weapons systems?

      • krisoft 2 years ago

        The worry here is not about controlling militaries. There are different processes for that.

        The scenario people purport to worry about is one where a future AI system can be asked by "anyone" to design infectious materials. Imagine a dissatisfied and emotionally unstable researcher who can just ask their computer for the DNA sequence of an airborne super Ebola. Then said researcher orders the DNA synthesized, does some lab work to multiply it, and releases it into the general population.

        I have no idea how realistic this danger is. But this is what people seem to be thinking about.

        • orbital-decay 2 years ago

          That is the question. AI is ill-defined marketing BS; what is the actual definition in the law? Artificial Intelligence as used in science/industry is a pretty broad term, and even the narrower "machine learning" is notoriously hard to define. Another question: all of this has been used for more than a decade for a lot of legitimate things which can also be easily misused to create biological weapons (AlphaFold) - how does it regulate that? The article doesn't answer these questions; what matters is where exactly the actual proposed law draws the line in the sand. The devil is always in the details.

  • marcosdumay 2 years ago

    Now that you mentioned it... Does it outlaw the Intel and AMD's amd64 branch predictors?

    • czl 2 years ago

      > Does it outlaw the Intel and AMD's amd64 branch predictors?

      Does better branch prediction enable better / faster weapons development? Perhaps we need laws restricting general purpose computing? Imagine what "terrorists" could do if they get access to general purpose computing!

pr337h4m 2 years ago

First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).

Edited to add:

https://www.whitehouse.gov/briefing-room/statements-releases...

Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors)

The first point: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."

The second point: "Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."

Since the actual text of the executive order has not been released yet, I have no idea what even is meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional as prior restraint is prohibited under the First Amendment. Prior restraint was confirmed by the Supreme Court to apply even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models. More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/

Basically, this EO is toothless - have a spine and everything will be all right :)

  • polski-g 2 years ago

    Most restrictions probably aren't enforceable.

    > After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.

    https://en.wikipedia.org/wiki/Bernstein_v._United_States

  • ApolloFortyNine 2 years ago

    Also the defense production act was never meant for anything like this, and likely won't be allowed if challenged. If they don't shut it down in some other way first.

    Every other use of the act is to ensure production of 'something' remains in the US. It'd even be possible to use the act to require the model be shared with the government, but I'm not sure how they justify using the act to add 'safety' requirements.

    Also any idea if this would apply to fine tunes? It's already been shown you can bypass many protections simply by fine tuning the model. And fine tuning the model is much more accessible than creating an entire model.

  • gs17 2 years ago

    On the subject of toothlessness:

    >Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.

    So the big American companies will be guided to watermark their content. AI-enabled fraud and deception from outside the US will not be affected.

    --

    >developing any foundation model

    I'm curious why they specified this.

siliconc0w 2 years ago

Both approaches - watermarking and 'requiring testing' seem pretty pointless. Bad actors won't watermark and tools will quickly emerge to remove them. The 'megasyn' AI that generated bioweapon molecules wasn't even an LLM and doesn't need insane amounts of compute.

batch12 2 years ago

This line is a little scary:

> Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

  • hatthew 2 years ago

    My (possibly naive) hope is that the best practice for a lot of these would be "Don't use AI." That being said, there's certainly a lot of niches in our system where AI could be used. For example, if a witness uses AI to make a sketch of the suspect they saw, you can bypass all the biases present in a police sketch artist.

Nifty3929 2 years ago

I'm worried about the idea of a watermark.

The watermark could be "Created by DALL-E3" or it could be "Created by Susan Johnson at 2023-01-01-02-03-23:547 in <Lat/Long> using prompt 'blah' with DALL-E3"

One of those watermarks seems not too bad. The other seems a bit worse.

I_am_uncreative 2 years ago

Is there a penalty for non-compliance here? Because if you were a wealthy recluse with 50,000x H100 cards, the executive order might say you have to report your models, but I'm pretty sure that there's no penalty that could be enforced without a law.

nojito 2 years ago

There’s some cool stuff in here about providing assistance to smaller researchers. That should help a lot given how hard it currently is to train a foundational model.

The restrictions around government use of AI and data brokers are also refreshing to see.

brodouevencode 2 years ago

How much will this regulation cost in 5, 10, 50 years? Who will write the regulations?

photochemsyn 2 years ago

If they try to limit LLMs from discussing nuclear, biological and chemical issues, they'll have no choice but to ban all related discussion because of the 'dual-use technology' issue - including nuclear energy production, antibiotic and vaccine production, insecticide manufacturing, etc. Similarly, illegal drug synthesis only differs from legal pharmaceutical synthesis in minor ways. ChatGPT will tell you everything you want to know about how to make aspirin from willow bark using acetic anhydride - and if you replace the willow bark with morphine from opium poppies, you're making heroin.

Also, script kiddies aren't much of a threat in terms of physical weapons compared to cyberattack issues. Could one get an LLM to code up a Stuxnet attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although concerns are justified I suppose.

I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':

> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."

I'm guessing the real concern is that LLMs don't start burbling on about such politically and diplomatically embarrassing subjects at length without any external controls. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.

epups 2 years ago

This looks even more heavy-handed than the regulation from the EU so far.

  • marcinzm 2 years ago

    I'm honestly curious, how so? From what I can tell the only thing which isn't a "we'll research this area" or "this only applies to the government" is "tell the US government how you tested your foundational models."

    For example, AI watermarking only applies to government communications and may be used as a standard for non-government uses, but it's not required.

    • patwolf 2 years ago

      That last one seems like a pretty big deal though. It's not just how you tested, but "other critical information" about the model.

      I imagine the government can deem any AI to be a "serious risk" and prevent it from being made public.

    • epups 2 years ago

      The EU regulation is here: https://www.europarl.europa.eu/news/en/headlines/society/202...

      It is also very open-ended, but the US text reads as though some compliance will start immediately, like sharing the results of safety tests directly with the government.

coding123 2 years ago

Unfortunately he doesn't know what he signed.

ThrowawayTestr 2 years ago

I'm so glad this country is run by a geriatric that can barely pronounce AI let alone understand it.

  • incompatible 2 years ago

    When did the US last have a president with an engineering background?

    They actually have staff and lobbyists who write these things; the president just signs off on them.

    • SoftTalker 2 years ago

      Probably Jimmy Carter. He had a nuclear engineering background.

    • ThrowawayTestr 2 years ago

      No country should be run by people past the age of retirement. This has nothing to do with Biden's qualifications.

      • peyton 2 years ago

        It’s becoming more than just retirement age. I found his recent 60 Minutes interview difficult to watch. I would not want an elderly loved one out there like that.

billy_bitchtits 2 years ago

Code is free speech. Reminds me of the cryptography fights.

baggy_trough 2 years ago

Disturbing that this sort of thing can be decreed by the executive.

Eumenes 2 years ago

This is pretty ironic: trying to ensure AI is "safe, secure, and trustworthy", from an administration that is fighting free speech on social media and wants backdoor communication with social media companies.

px43 2 years ago

Huh, interesting.

> Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.

atleastoptimal 2 years ago

To those worried about regulatory capture, this EO just being about keeping incumbents in power, etc:

Even sans regulation, do non-incumbents really have a chance at this point? The most recent major player in the field, Anthropic, only reached its level of prominence by taking on a critical mass of former OpenAI employees, and within a year it reached $700 million in funding. Every company that became a major player in the AI space in the last 10 years either

1. Is an existing huge company (Google, Facebook, Microsoft, etc)

2. Secured 99.99th-percentile venture funding within the first year of its inception due to its founders' preexisting connections/prestige

Realistically there isn't going to be a "Facebook" moment for AI where some scrappy genius in college cooks up a SOTA model and goes stratospheric overnight, even in a libertarian fantasyland just due to market/network effects. People just have to be realistic about the way things are.

Koshkin 2 years ago

DPRK will make this their law ASAP

bbitmaster 2 years ago

What a lot of nonsense, where is the executive order banning gain of function research?

normalaccess 2 years ago

All joking aside, I firmly believe that this "crisis" is manufactured, or at least heavily influenced by those who want to shut down the internet and free communications. Up until now they have been unsuccessful. Copyright infringement, hate speech, misinformation, disinformation, child exploitation, deep fakes - none have worked to garner support. Now we have an existential threat. Video, audio, text - nothing is off limits, and soon it will be in real time (note: the GOV tries to stay 25 years ahead of the private sector).

This meme video encapsulates it perfectly.

https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH

Mark my words, in five years or less we will be begging the governments of earth to implement permanent global real time tracking for every man woman and child on earth.

Privacy is dead. And WE killed it.

Eumenes 2 years ago

This kinda thing should not be legislated via executive order. Congress needs a committee and must deliberate. Sad.

  • flenserboy 2 years ago

    Which is exactly what Congress refuses to do, because letting Caesar, I mean the President, decide things by fiat keeps them from owning the blame for bad legislation.

    • nerdponx 2 years ago

      Congress has generally refused to seriously legislate anything other than banning lightbulbs for several presidential terms now.

      But in this particular example I don't think it's enough of a "thing" to even consider bringing up as a bill, except maybe as a one-pager that passes unanimously.

    • Eumenes 2 years ago

      At least Caesar was a respectable age for leading when he died (55) ...

      This is interesting: https://www.presidency.ucsb.edu/statistics/data/executive-or...

      • nerdponx 2 years ago

        Don't forget that life expectancies were much lower back then, and that he was assassinated. He certainly would have been happy to continue into his 80s if he could.

      • frumper 2 years ago

        It is interesting. I would have thought executive orders were more frequently used now than in the past. Apparently that peaked 80 years ago.

  • nerdponx 2 years ago

    This is well within the president's powers under existing law. If Congress disagrees, they can always supersede.

    This isn't even close to legislating. Look at some recent Supreme Court decisions and the amount of latitude federal agencies have, if you want to see something more closely resembling legislation from outside of Congress.

  • zoobab 2 years ago

    "This kinda thing should not be legislated via executive order."

    Dictatorship in another form.

RandomLensman 2 years ago

Does Microsoft need to share how it is testing Excel? Some subtle bug there might do an awful lot of damage.

  • halJordan 2 years ago

    Idk if you're being serious, because there's AI in Excel now, in which case the answer is no. Or you're being a smarty-pants and trying to cleverly show what you think is a counter-example, in which case the answer is still no - but it should probably be yes, and the only reason it isn't is that Excel was well established before all the cyber regulation took effect. For instance, Azure has many certs (including FedRAMP), which covers Office 365, which includes Excel.

    • RandomLensman 2 years ago

      I am quite serious about the potential for danger of errors in Excel (without AI).

      Basically, I consider the focus on AI massively misplaced given the long list of real risks compared to the more hypothetical (other than general compute) risks from AI.

sirmike_ 2 years ago

This is useless just like everything they do. Masterfully full of synergy and nonsense talk.

Is there anyone here who actually believes this will do something? Sincere question.

iinnPP 2 years ago

Criminals don't follow the rules. Large corps don't follow the rules.

The only people this impacts are the ones you don't need it to impact. The bit about detection and authentication services is also alarming.

  • gmerc 2 years ago

    You could say this about … every law. So clearly it’s not a useful yardstick

    • iinnPP 2 years ago

      It's a statement of my estimate of the post's impact on the development of AI.

      The blocking of "AI content" and the bit about authentication don't seem related to AI frankly. Detection isn't real and authentication is the government's version of an explosive wet dream.

  • gs17 2 years ago

    >The bit about detection and authentication services is also alarming.

    "The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content." is pretty weak sounding. I'm more annoyed that they pretend that will actually reduce fraud.

tomohawk 2 years ago

In my history book, I read where we fought a war to not have a king.

In my civics class, I learned that Congress passes laws, not the President.

I guess a public school education only goes so far.

  • rmbyrro 2 years ago

    Executive Orders are subject to Congressional review and can be overturned by Congress. It's a power given by Congress to the President. There are contexts in which the President's ability to issue Executive Orders is really necessary. This is not against democratic principles, per se.

    Of course, the President can abuse this power. That's not a failure of democracy; it's anticipated. And that potential for abuse is also a reason why Congress exists, not just to pass laws.

  • marcinzm 2 years ago

    And who is in charge of making sure those laws are executed on by the Federal Government?

    Hint: It's the President and executive orders are the President's directive on how the Federal government should execute on laws.

    • nerdponx 2 years ago

      And that's also literally what this is: the president executing the provisions of the Defense Production Act of 1950, which is not only within his power, it's literally his constitutional obligation.

  • barney54 2 years ago

    Executive Orders do not have the force of law. They are essentially suggestions. Federal agencies try to follow them, but Executive Orders can’t supersede actual laws.

  • phillipcarter 2 years ago

    You clearly weren't paying attention in school then, because executive orders are most certainly taught in government classes.

d--b 2 years ago

I was downvoted 35 days ago, for daring to state that deepfakes will lead to AI being regulated.

Of course “these are just recommendations”, but we’re getting there.

  • kaycebasques 2 years ago

    I suspect the downvoting is more because of the tone of your comments rather than the content. From the HN guidelines:

    > Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

    > Please don't use Hacker News for political or ideological battle. That tramples curiosity.

    > Please don't fulminate. Please don't sneer, including at the rest of the community.

    A lot of people on HN care deeply about AI and I imagine they're totally interested in discussing deepfakes potentially causing regulation. Just gotta be careful to mute the political sides of the debate, which I know is difficult when talking about regulation.

    Also note that I posted a comment 10 days ago with a largely similar meaning without getting downvoted: https://news.ycombinator.com/item?id=37956770

    • d--b 2 years ago

      Oh I see, people thought I was being right-wingy. That makes sense.

      • 3np 2 years ago

        Probably not. Much more likely that your comment was useless. Like this one. It has nothing to do with "picking sides", "being right", or "calling it".

        Given how long you've been here and your selective replies, I have a hard time taking your comment in good faith, though. It does read like sarcasm and trolling.

  • 3np 2 years ago

    The downvote button is not a "disagree" button, you know... I often vote opposite to how I align with the opinions in comments, in the spirit of promoting valuable discourse over echo chambers.

  • A4ET8a8uTh0 2 years ago

    Hmm. It is possible that deepfakes are merely a good excuse. There is real money on the table and potentially world altering changes, which means people with money want to ensure it will not happen to them.

    Deepfakes don't affect money much.

  • normalaccess 2 years ago

    It won’t just be regulated; it will create the need for global citizen IDs to combat the overwhelming flood of reality distortions caused by AI. We the people will be forced to line up and be counted, while the powers that be will have unlimited power to control the narrative.

    • graphe 2 years ago

      The internet lives on popularity, and people will flock to whatever is most popular; it will not be us.gov.social.com. It would be easier to give people a free, encrypted, packaged darknet connection than a good government social media site. A CNN or Fox background doesn't mean truth, and unless you - or everyone - believes it does, that won't happen.
