
The EU AI act is coming, this time for real probably

wolfhf.medium.com

127 points by FionnMc 3 years ago · 297 comments


wongarsu 3 years ago

In general I find the EU AI Act quite reasonable. There are a couple of AI uses that will be prohibited (most are mentioned in the article; mostly manipulating people or dystopian government stuff). But imho the meat of the act is in categorizing many use cases as "High Risk", which comes with a list of requirements: putting some thought into risks and into your training data, understanding biases, having documentation and logging, options for manual intervention, etc. Most of it wouldn't be out of place in a best-practices guide for deploying AI models (even though obviously most of it is extremely high-level)
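The "High Risk" requirements listed above (logging, manual intervention) map onto fairly ordinary engineering practices. Here is a minimal toy sketch of what they might look like in code; the `AuditedModel` name and the log format are hypothetical illustrations, not anything prescribed by the Act itself:

```python
# Hypothetical sketch of two "High Risk" best practices mentioned above:
# decision logging and an option for manual intervention.
import json
import time

class AuditedModel:
    """Wraps any callable model with an audit log and a human-override hook."""

    def __init__(self, model, log_path="decisions.log"):
        self.model = model
        self.log_path = log_path

    def predict(self, features, human_override=None):
        # Manual intervention: a supplied human decision wins over the model.
        decision = human_override if human_override is not None else self.model(features)
        # Audit logging: record input, output, and whether a human intervened.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({
                "ts": time.time(),
                "input": features,
                "decision": decision,
                "overridden": human_override is not None,
            }) + "\n")
        return decision

# Usage with a trivial stand-in "model"
audited = AuditedModel(lambda f: sum(f) > 1)
audited.predict([0.4, 0.9])                        # model decides
audited.predict([0.4, 0.9], human_override=False)  # human overrides
```

The point is only that the act's requirements, at this level, look like routine engineering hygiene rather than an exotic compliance regime.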

  • konschubert 3 years ago

    The effect is going to be that all the innovation happens elsewhere, big AI companies will develop outside Europe.

    And Europeans will only get to benefit as consumers, not as makers of AI.

    And the EU will take that as a reason to regulate even harder and then also somehow end up giving a bunch of money to its telcos.

    EDIT:

    My point is that regulation doesn’t just prevent “bad products”. The compliance overhead also prevents good products.

    That being said, I agree that there are certain decisions and actions that always should require a human appeals process

    • wokwokwok 3 years ago

      ?

      Do you think there’s some kind of karmic point scoring system that you win by being the first one to build horrible unethical dystopian AI models?

      Or are you just complaining you won’t be able to horrifically exploit people using them to make lots of money?

      …because, sure, red tape is bad, but is it ultimately that terrible not being allowed to be a super villain?

      “But America has super villains! …! We’re being left behind…”

      Ok… um. Yeah… I’m ok with that. Please don’t try to be a super villain.

      These laws don’t seem nearly as outrageous as you make them out to be.

      • ithkuil 3 years ago

        That's a position I do share, but let me play the devil's advocate for a moment:

        What if an AI research company doesn't want to be a supervillain or do bad things, but they (or their investors) decide it's safer to just do the research in a jurisdiction where they face less risk of regulatory burden, especially during the early exploratory stage? Surely they'll care about abiding by the rules of one of the world's major markets, but they'll do so after they have a mature product.

        At least that's how I interpreted GP's comment. The EU will eventually get the product that conforms to the safety specifications we all agree it should, but it will be a foreign product.

        • AndrewKemendo 3 years ago

          Flashback to 1880 Oklahoma, US:

          "What if an oil research company doesn't want to be a supervillain or do bad things but they (or their investors) decide it's safer to just do the research in a jurisdiction where they face less risk of being subjected to regulatory burden especially during the early exploratory stage."

          Edit: In case anyone is confused: post-1880, land in modern-day Oklahoma was considered "unassigned" by the genocidal government, which led to a "Land Run" where vigilantes, crooks, scammers, and generally unsavory people flooded the area, creating "Boomers" and "Sooners" who were given license to steal land from native peoples. There were no government regulations, and they just started drilling for oil basically immediately.

          • ithkuil 3 years ago

            I'm sorry but despite the edit I still fail to get the point you're making.

            • AndrewKemendo 3 years ago

              The idea of searching for places to do product development that have fewer structural protections for residents is unconscionable.

              This kind of "externality washing" is talked about as though it were some kind of amoral or practical business matter separate from ethical decisions, rather than an explicit attempt to avoid responsibility for the community that you are moving into.

              That is to say, organizations look for economic areas that do not have power structures that will scrutinize, push back, or otherwise enforce community social standards on said organizations. Capital will always find a downtrodden population to exploit for its own development BECAUSE they do not have other protections.

      • dragonelite 3 years ago

        Well you need patents and those patents will be created and registered in the US or China this way. Kind of like how EU car manufacturers have been sleeping on the EV movement. Now they have to source their parts from Chinese manufacturers.

        So wealth is leaking (patent costs etc.) out of Europe, and the continent will have less wealth in total over time. Especially now that the cheap Russian energy they had access to is going to their manufacturing rival China, and it will also boost Indian manufacturing development.

        Brussels is doing everything in its power to deindustrialise and make Europe less competitive on the global market.

        • nozzlegear 3 years ago

          Wouldn’t those patents be for things the EU sees as unethical anyway, and therefore something they shouldn’t build? Or do you mean the rest of the world might use these unethical, super-villainy means to discover and patent totally normal things that they might want to use in the EU as well?

          > Kind of like how EU car manufacturers have been sleeping on the EV movement. Now they have to source their parts from Chinese manufacturers.

          Do we know that’s actually because they weren’t able to innovate on their own due to Brussels regulation, or some other factor?

          • hef19898 3 years ago

            > Kind of like how EU car manufacturers have been sleeping on the EV movement. Now they have to source their parts from Chinese manufacturers.

            That's a meme and not true at all. Well, of course a lot of European car makers source from China. As does everyone, everywhere else.

          • konschubert 3 years ago

            The point is that regulation doesn’t just kill unethical startups, it kills all startups.

            • awuji 3 years ago

              Why would it kill all startups?

              The amount of AI startups that are being restricted by this are minimal. Hugging Face isn't going to have to shut down because of this. University research into the best ways to stabilize GAN training isn't going to grind to a standstill. Companies developing weeding robots aren't going to have to turn to symbolic AI for image recognition. Scientists are still going to be able to model landslide risk caused by receding glaciers.

              Maybe only 1% of AI startups will be negatively affected by this to the point they would be better off leaving Europe.

              I am not in an EU country and I would never, ever vote to join the EU, but this is one of the better pieces of EU legislation. Saying this would "kill all startups" is like saying GDPR will kill all internet business in Europe. It's both alarmist and false.

        • mitchdoogle 3 years ago

          I don't understand what you're arguing for. You want the EU to throw ethics out the window to encourage business growth?

          • oh_sigh 3 years ago

            No one is saying throw ethics out; they just see a different risk/reward balance than the EU regulatory bodies do.

      • andrewmutz 3 years ago

        I think he is saying that laws have unintended consequences, and a law with good intentions (preventing bad AI outcomes) can have the unintended effect of discouraging entrepreneurs from choosing Europe to build a new company.

        I know plenty of people who avoid the healthcare sector because of HIPAA and avoid handling credit card data because of PCI. PCI and HIPAA are well-intentioned but can scare off innovators.

        • alasdair_ 3 years ago

          PCI isn’t a law. It’s a standard that the free market decided upon by itself. You’re making the exact opposite point than the one you are intending to make.

          • andrewmutz 3 years ago

            I don't think I am. The point is that well-intentioned rules can have the unintended consequence of stifling innovation.

      • AndrewKemendo 3 years ago

        >Or are you just complaining you won’t be able to horrifically exploit people using them to make lots of money?

        Having watched this new brand of "AI" capitalists behave, that's exactly what they want to do with it

    • Taywee 3 years ago

      Yeah, like with food safety regulations, other places still won't abide by them. But "other places will still allow abuse of their citizens" is not a great argument for us to do so as well.

      Safety regulations exist in every major industry, and sometimes they slow some progress for the sake of individuals. This isn't a bad thing.

      • spacebanana7 3 years ago

        In the food industry there (generally) isn't a big prize or defensive moat for being the first to develop a new product at scale, especially at the national level.

        Many capital intensive internationally traded industries are different. Once overseas competitors have been through a few innovation cycles and sunk tens of billions of dollars into product development it's basically impossible to catch up. Even with government support.

        TSMC is the canonical example, but similar principles apply with Amazon and other big tech companies with high infra spending. More subtly, industries with high marketing costs are very hard to penetrate at scale once an incumbent has reached a certain depth. See enterprise software with MSFT or high fashion with LVMH.

        As well as losing the opportunity for economic development in a new industry, dependence on overseas imports for critical industries creates geopolitical headaches.

        European governments recognised this risk in aerospace a few decades ago, and the Airbus project saved Europe and the world from a Boeing dominated international passenger jet market.

      • endisneigh 3 years ago

        I don't know why people compare software regulation to the regulation of physical things that do obvious harm like food regulations.

        • braveyellowtoad 3 years ago

          Could you please clarify: is your position that software is not capable of doing harm?

          • endisneigh 3 years ago

            Yes, most software is harmless. There are some scenarios where it could do harm, but ultimately said harm was the product of human usage of the software, not software inherently. There are a few examples such as medical device software and things like train software where honest mistakes can result in human harm. However even with those the issue was that humans made the mistakes.

            Software development as a discipline certainly needs more rigor like the other engineering disciplines. However the AI act is premature since AI ultimately is an implementation detail for a variety of use-cases, that should be regulated independently, not AI. Regulating AI in general is like regulating electrons.

            If the EU has an issue with AI potentially resulting in more misinformation, then regulate the types of sites (that is, social media) that would be the vector for such spread.

            I'm open to counterexamples outside those classes of examples.

            • DicIfTEx 3 years ago

              I don't think you need counterexamples, because your base argument is faulty. Food full of formaldehyde only causes harm as the product of human usage, i.e. someone eating it; regulation aims to make it impossible for such a situation to arise. For a less extreme but no less real example, just look at the regulation of raw milk in much of the world.

              Elsewhere in the thread you have touched on firearm regulation; it's worth noting that amongst states with the capacity to do so, it is really only the US that abdicates its responsibility to regulate firearms, with predictably tragic consequences (though, as with everything in the US, that varies state to state etc.)

              • endisneigh 3 years ago

                Your example is flawed because the harm is obvious and palpable. Not to mention that food in most countries must disclose the ingredients used. Take alcohol, which is known to be poisonous. Do you support its banning: yes or no?

                https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3959903/

                Food should be regulated because, as you mentioned, it is consumed and is a vector for obvious harm. In addition, because food is composed of chemicals, it is difficult to ascertain the quality of food once prepared without consuming it.

                Banning AI is like banning books. Useless. Information will spread either way. It'll just do so elsewhere.

                • Taywee 3 years ago

                  Who banned AI? This discussion is about regulation of AI to reduce its capability for harm, not to ban it outright.

            • Taywee 3 years ago

              Replace "software" with "guns" in your post and it's the same effect.

              • endisneigh 3 years ago

                Sure. There are people who think guns are harmless and do not need to be regulated. There are those who do.

                At the end of the day guns do not shoot people on their own, though.

                • Taywee 3 years ago

                  Yes, but gun regulations don't operate under the assumption that guns shoot people on their own. They operate under the understanding that people who do damage can do much more damage with guns than without.

                  Pretending that any of the regulation of guns or AI is done with the understanding that they are anything other than powerful tools that can be used by humans to do profound damage is a strawman argument. We all understand that guns don't shoot people on their own. Don't oppose regulation by characterizing the opposing viewpoint as something it isn't.

                  • endisneigh 3 years ago

                    I'm well aware of the distinction. I'm simply saying that it is a fact that guns do not shoot people on their own. Some countries regulate differently, obviously. You also seem to think governments, which are made up of the same humans that can do "profound damage", are infallible.

                    At the end of the day you either democratize the access, or you don't. Those who want to break the law will break it either way. All you are going to do is punish those who follow the law by not giving them access.

                    For what it's worth - I couldn't care less about guns and wouldn't mind their banning. However I trust my government. If you do not, I would not want guns to be banned. But that's the thing - a government I don't trust would not want citizens to have guns to begin with, and thus the dilemma.

                    • Taywee 3 years ago

                      I don't think governments are infallible, and I didn't argue as such. I don't agree that regulations are inherently bad. If you want to race to the bottom, I can easily pull out some extreme scenario of giving AK-47's to every citizen upon graduating Kindergarten, but it would be a stupid argument based on nothing you've actually said, other than taking your point to the most extreme extent. Please give me the courtesy of not assuming I hold the most unreasonable extension of my argument as well. It only sabotages the conversation.

                      People driving drunk do profound damage as well. Cars don't kill people on their own. Would you agree with making drunk driving legal again?

                      When a tool allows a person to do a lot of damage, it should be regulated to prevent bystanders from taking the brunt of other peoples' bad decisions. A race to the bottom doesn't really help anybody.

                      If you're going to argue that every tool that allows people to do harm should be not regulated at all because the tool isn't anthropomorphized, I'm not sure how we can have a discussion at all about it.

                      • endisneigh 3 years ago

                        > People driving drunk do profound damage as well. Cars don't kill people on their own. Would you agree with making drunk driving legal again?

                        No, because drunk driving inhibits your ability to operate a vehicle lawfully.

                        A better example would be using your phone while driving. Should this be illegal? It's well documented that using your phone, even hands-free, increases car accidents. Should phones be designed to automatically shut off while in a vehicle?

                        Your AK-47 example is also just silly. Please use better examples to make your point.

                        • Taywee 3 years ago

                          It is still a regulation, and supports the argument that a tool that can be used to do damage should have rules to reduce the risk and severity of damage where reasonable.

                          There are already laws governing phone use while driving in many places. In my experience, people using phones while driving can be extremely dangerous, and I've often wished that people couldn't do so.

                          My AK-47 example was intentionally silly, and was framed as something silly and extreme, as a direct comparison to you accusing me of believing that government was infallible. I'm not sure why you're pointing out that something I explicitly pointed out as extreme and unreasonable is extreme and unreasonable. That was the entire point: that we will get nowhere by attacking straw men.

                  • csomar 3 years ago

                    Given that Switzerland allows guns, the idea that guns will make things worse is wrong. People don't need guns to do significant damage. Because guns are harder to get in Europe, nut-jobs used trucks.

                    The solution is to educate people to do less harm to themselves and others, and to understand how to handle these things.

                    But educating people is expensive and difficult. It also sometimes backfires by creating a population that's much harder to persuade. So yeah, let's regulate these idiots to death...

                    • Taywee 3 years ago

                      I don't disagree with you, and I find Switzerland a really interesting example. Building good education, good culture, and good social values is often a much healthier outcome than regulation. In my experience, trying to build effective, reproducible education is not just expensive and difficult, but nebulous as well. A lot of American attempts to better their education systems have been expensive for questionable benefit.

                      I'd much rather have a good culture than good regulations, if I had the choice. I think most people would, but there's no sure path to get there. Switzerland has some magic sauce that other countries would be hard pressed to replicate at scale.

        • renjimen 3 years ago

          But software can do obvious harm. This is surely one of the most important lessons of the internet era.

          • AbrahamParangi 3 years ago

            Yeah it really can’t though. If there’s mercury in the tuna that’s obviously and unequivocally bad. If there’s funny memes supporting the other political party that’s supposed to be equivalent?

            • forgetfulness 3 years ago

              It may be less acute than mercury poisoning, yes, but social media and the search engine-optimized web encouraging polarization and radicalization in adults, and exacerbating mental illness in children, have been pretty harmful outcomes, not to speak of them being leveraged by state actors with that intention.

            • renjimen 3 years ago

              We use software to make all kinds of harm preventing and harm causing decisions. Military, medical, social, policing, financial. Pretty much all aspects of life

            • Taywee 3 years ago

              This comment is so disingenuous it's ridiculous. It's like opposing gun regulation with some contrived scenario about water pistols and Nerf guns.

              • AbrahamParangi 3 years ago

                It is enough to recognize that, at minimum, people disagree about which software causes harm, whereas people overwhelmingly recognize that lead in food is bad.

                Given that reasonable people disagree, it's not even close to comparable.

                • renjimen 3 years ago

                  Well yeah. Lead in food is very specific. Software is very broad. Nobody said all software was bad.

                  • AbrahamParangi 3 years ago

                    What is an example of a specific, non-controversial harm that software causes?

                    • Taywee 3 years ago

                      The way social media acts as an echo chamber and may intensify extreme viewpoints and isolation from fellow citizens isn't extremely controversial. It has also been leveraged as a tool for good, helping prevent many people from being effectively suppressed, but it's hard to deny the harm that comes from engagement-driven advertisement algorithms systemically stoking people's anger and sense of personal victimization.

                      The trends of polarization and politically extreme positions in America in particular over the past 30 years are profound and striking.

                    • renjimen 3 years ago

                      Many military applications, obviously. Most of that is intentional harm. There’s a lot of unintentional harm in software too, such as biases in models for medical insurance companies, policing and the justice system.

        • hef19898 3 years ago

          Considering how much software is actually export-controlled under, e.g., ITAR (coming from the US, by the way), what is your point?

    • gumballindie 3 years ago

      I am pretty sure that regardless of where you are, if you sell a product in the EU that violates EU law, you will be fined. Furthermore, you may be banned from selling in the EU altogether. Meaning legit AI companies in the EU will flourish while AI companies not following the rules will miss out on the richest market.

      So in effect the EU is protecting its citizens' work from theft and companies from unethical competition.

      • konschubert 3 years ago

        Big companies can afford regulations. They will do just fine writing all the compliance documents.

        But small companies can't afford the compliance overhead.

        And big companies can only develop out of small companies.

        So they will develop elsewhere.

        • gumballindie 3 years ago

          They said the same about GDPR and it's working fine. If a company crosses me, I just request data removal and call it a day. It's all good. I do hate GDPR popups tho.

          • anonylizard 3 years ago

            Yeah, and now where is the privacy-respecting European tech giant? Europe has fallen farther and farther behind in tech every single year, and is now completely, crushingly behind in AI. Like the only (continental) European AI company I know of is DeepL, and DeepL will soon be in DeepSh-t if GPT-4 costs go down.

            • gumballindie 3 years ago

              The privacy-respecting European tech giant is the citizen. Also, there is significant contribution to AI coming from the old world; some of the most brilliant minds are from here. And what's the point in having gazillionaire corporations with offices surrounded by tents and poverty? Where's the progress in that?

              • hef19898 3 years ago

                People seem to forget dystopian cyberpunk was a parody and a warning, not a playbook or something desirable. But as long as people can pretend to be part of the new, neo-feudalist tech overlord class, they don't seem to care. Until the inevitable, Zorg-like, FAANG / MAGMA layoffs happen and kick them out the door, that is.

          • jraph 3 years ago

            > I do hate GDPR popups tho

            They really are "let's annoy people so they hate GDPR" popups, not so much "GDPR popups".

            As a storekeeper, you would not ask "can I track you?" of random people physically entering your store.

          • endisneigh 3 years ago

            by what metric is it working fine?

    • PoignardAzur 3 years ago

      > The effect is going to be that all the innovation happens elsewhere, big AI companies will develop outside Europe.

      Why though?

      Like, most of the requirements listed in the AI act seem extremely reasonable and low-overhead, if not seriously meek. Given the immense budgets AI labs can call upon, why would "having to fill a form that says you made sure the AI wasn't racist" be a disqualifying factor?

      • konschubert 3 years ago

        Let’s see how much effort the compliance is. Just having to know about it is a burden for a small startup.

        • hef19898 3 years ago

          It probably is, judging by how many of those small, VC-backed start-ups complain about, I don't know, the burden of properly declaring taxes, paying social security for their properly registered employees, and so on.

          • robertlagrant 3 years ago

            How many do complain about that?

            • hef19898 3 years ago

              In Germany? A lot. Apparently an RSU or ESOP program is too hard to set up, and paying higher base salaries is out of the question, so they complain about their inability to let their "workers" participate in the company's "success", mostly as a diversion from the fact that they just don't want to. Or taxes, something supposedly impossible for a new, academic / SW-engineer-led start-up to handle in Germany. It is so bad, if you believe these people, that for that reason alone companies move abroad. All BS of course, given how many small businesses without the resources of a VC-backed start-up actually manage to do so day in, day out. The list goes on...

              • robertlagrant 3 years ago

                Okay, but how many is it? How many VC-backed ones complain vs small businesses that don't? How do we control for the small businesses that just have no voice and toil away quietly?

                • hef19898 3 years ago

                  If you want total freedom, unhinged and unlimited capitalism, not a single thread of law applying to the pursuit of wealth, well, bad luck, because you don't even get that in the US. Admittedly, the US is scarily close.

                  • robertlagrant 3 years ago

                    You seem to be replying to a totally different point. Did you read my question? If so, is there a reason you're not answering it?

        • renjimen 3 years ago

          Then it should be factored into the profitability of starting in the AI sector. And just like for tax, there will be new external audit services that lower the costs of these overheads. In a way, these regulations create whole new kinds of industries

          • konschubert 3 years ago

            These kinds of industries add no wealth. They reduce the productivity of the economy.

            • renjimen 3 years ago

              Productivity is a narrow metric for the success of society. Without these kinds of regulatory non-productive industries you would likely have fewer and worse social services, such as roads and the police. You would also be exposed to more risks in your daily life, such as pollutants and poisons. In the long term, these industries can allow for safer and more efficient societies.

    • aqme28 3 years ago

      Would you say the same things about FDA regulations?

      As a programmer who used to work on medical devices, these sound pretty similar to the FDA regulations we dealt with. The goal isn't to stifle innovation, it's to keep scammers and crooks out of the market and make things better for everyone. The EU doesn't want another Thalidomide, and the US shouldn't want another Theranos.

      • konschubert 3 years ago

        I am not calling to abolish all regulation.

        But please consider this:

        How many people’s lives could have been saved if FDA regulations didn’t make medical approvals take 10 years? How many life-saving medications could have been invented if the cost of approval didn’t limit innovation to the biggest companies?

        • Taywee 3 years ago

          It's as unanswerable a question as "how many lives would have been lost by dangerous medications with long-term side effects that didn't become apparent in the term of testing?"

          We can't really know whether it was a net good or a net bad because we can't take a look at the reality that didn't happen. What we do know is that there were deadly medications on the market before the FDA. There were medications available over the counter that were outright poisonous. Arsenic and mercury were in common medicines.

          It is a balance. You can sacrifice lives through inaction, or sacrifice lives in the way of progress. Lack of regulation is bad, but so is overregulation.

          • zeroonetwothree 3 years ago

            Actually, there is a huge amount of research into this. The wide consensus seems to be that the FDA is far too conservative, and that overall, more lax approvals would have saved many more lives on net.

            • aqme28 3 years ago

              Studies saying that the "FDA is too conservative" do not mean that the optimal situation would be no FDA at all.

              • konschubert 3 years ago

                I don’t think anyone in this thread here is saying that there should be no FDA regulation at all?

            • Taywee 3 years ago

              I would like to see this research.

          • konschubert 3 years ago

            This is a very balanced comment, strange that it is getting downvoted.

        • aqme28 3 years ago

          And how many have been saved because medical approvals are difficult?

          We don’t know! But to ask incredulously about only one side of these questions is disingenuous.

        • hef19898 3 years ago

          History has those answers, recent history I mean. E.g. Contergan.

          Medical regulations are quite literally written in blood. A really good example of Chesterton's fence.

          • konschubert 3 years ago

            Maybe we would have a cure for most cancer by now.

            You cannot see the damage done by regulation, but it exists.

            Regulation is always a trade-off.

            It’s not “us vs them”. It’s us vs us.

            • hef19898 3 years ago

              Jesus, no idea what to say here... Just one thing: mRNA vaccines came out of cancer treatment research. One of the most successful companies is actually German, and they are using the mountains of cash they made during Covid (partnering with Pfizer on Covid vaccines) to really push the cancer research.

              Unregulated medical research is bad, as in really, literal Nazi-doctor bad. Shocking that it still has to be pointed out...

              • zeroonetwothree 3 years ago

                This is a straw man; less regulation does not immediately mean Nazis. I mean, come on, be reasonable.

                • hef19898 3 years ago

                  A bunch of the EU rules around medical research and drug development are, in fact, based on experience with literal Nazi experimentation done by the likes of Dr. Mengele. For once not an internet trope, but rather historical fact.

      • hef19898 3 years ago

        Sometimes it feels like allowing scammers and crooks is kind of the goal, isn't it?

        • renjimen 3 years ago

          What? I don’t follow the logic. How could regulation in favour of protecting the rights of citizens over corporations be intentionally facilitating scammers?

          • hef19898 3 years ago

I meant that being able to scam people is an actual goal of some of the people opposing regulations like the discussed AI Act or GDPR. Or financial regulations: the whole crypto space, last week's over-hyped society-saving tech, is built on the assumption that financial regulations don't apply to crypto.

            I am all in favor of reasonable, individual rights protecting regulation. The EU AI act might just be one, as is GDPR.

    • AndyMcConachie 3 years ago

Fine. Promoting innovation as an ideology has become camouflage for mass exploitation. Most 'innovation' that happens in SV is either advertising nonsense or some new trick to avoid labor laws.

I've lived in both the USA and the Netherlands, and I now live in the Netherlands. I recently had to renew my Dutch driver's license, and the entire process was painless and quick. Compared to my experiences doing the same thing in both Virginia and California, it was a lovely breeze. That's progress. That's innovation.

      Don't even get me started on the byzantine nonsense that is the IRS versus the Belastingdienst. I'll take the Belastingdienst any day.

      Americans wouldn't know real innovation if a walkable neighborhood arrived on a highspeed train and began eliminating healthcare bureaucracy.

      • sofixa 3 years ago

        > Americans wouldn't know real innovation if a walkable neighborhood arrived on a highspeed train and began eliminating healthcare bureaucracy.

        Thanks for the laugh, beautifully put.

In general I agree with your point. Americans hyperfocus on certain types of innovation (reinventing this or that wheel, "revolutionising X", like every single bullshit "reinventing transportation" "like a train, but much worse" proposal out there), mistake everything else for lack of innovation, and totally miss the forest for the trees. New and different stuff is celebrated, even if it's an obvious disaster in the making for most involved (e.g. Uber destroying labour protections, Airbnb putting immense pressure on already stressed housing markets, idiots reinventing trains but worse with "pods").

Meanwhile American companies are drastically behind in many niches (airplanes, cars, trains, railroad companies, cosmetics, etc.), but you don't hear Americans complain about their lack of innovation. But when anything EU is mentioned, it's "innovation is dead, Deutsche Telekom is so old and just sucking Brussels money". When Intel gets subsidies to manage to catch up to the competition because they stagnated, it's strategic; but when EU funds go to anything, it's handouts propping up failing industries.

        • robertlagrant 3 years ago

          I think the big difference is the US is customer-focused, and the EU is employee-focused.

          To be stereotypical: in the US organisations exist to provide value people choose to pay for (e.g. electric cars; cheap rocketry; cheaper accommodation for travel; cheaper and much more convenient taxis); in the EU they exist to employ people, and hopefully they provide value, but if they don't provide much it's fine because it's pretty difficult to start a competitor anyway.

          • sofixa 3 years ago

            > I think the big difference is the US is customer-focused, and the EU is employee-focused

I think this is simply untrue. The US is profit-focused above all, customer experience be damned. If you need examples, check Boeing killing people to get a plane out as soon as possible with ridiculously bad engineering choices; railroad companies skimping on decades-old safety tech to keep profits as high as possible; healthcare being the quagmire that it is; massive companies posting record profits while laying off employees en masse; extremely customer-hostile practices like selling customer data everywhere; abusive pricing models on basic infrastructure like internet and mobile; etc. etc.

The only segment I can think of where you can legitimately say "customer-focused" is hospitality, and that's only because employees have to be: whether they get any money depends directly on customers being very happy. For me it goes to annoying extents, though.

            > in the EU they exist to employ people, and hopefully they provide value, but if they don't provide much it's fine because it's pretty difficult to start a competitor anyway.

            Also disagree. That's not how things work, even in flag airlines, let alone private companies. And no, it's not difficult to start a competitor outside of capital-intensive market segments, which is the same everywhere.

    • alasdair_ 3 years ago

      >The effect is going to be that all the innovation happens elsewhere, big AI companies will develop outside Europe.

      The EU has rules for aircraft and automobile manufacturing and yet Airbus and Ferrari seem to be able to continue to be world leaders in their fields.

      Moreover, there are very few restrictions on many industries in Somalia, yet for some reason it fails to be a hub of innovation.

      Regulations don’t immediately mean a flight of intellectual capability to less restrictive places.

      • csomar 3 years ago

        > Regulations don’t immediately mean a flight of intellectual capability to less restrictive places.

        This. Most people forget that the US is still a highly regulated place. And the tech hub (California/Bay Area) is a highly-taxed and highly-bureaucratic area too...

It's not deregulation or investment. Innovative economies are like emergent behavior: they spring out of chaos, and no one so far has any real understanding of how that happens.

      • konschubert 3 years ago

        I am not arguing for zero regulation.

        I’m saying you have to consider the price of regulation.

        Airbus is able to compete in the airliner market because it’s a huge incumbent.

Maybe this is the price we have to pay for safe planes. But there is almost zero competition in the airliner manufacturing sector because the regulation is keeping out any potential competition.

    • ElevenLathe 3 years ago

      > ...and then also somehow end up giving a bunch of money to its telcos.

      A funny but probably true prediction. Brussels loves handing money to PTTs the way the US Congress loves handing it out to Raytheon.

    • cschmid 3 years ago

      The only way to avoid falling under the provisions of the act is to not have any customers in the EU -- that's quite a big market to cut yourself out of, and definitely something a big company cannot afford to do.

  • ChuckNorris89 3 years ago

    The EU AI act is more for protecting people from being exploited by companies and institutions and having AI take the fall.

Basically, they're trying to ensure that you can't deny someone welfare, or an insurance claim, or close their bank account, with the justification that "computer says NO"[1]. For every such decision, a human should be at the wheel who can provide a legally fitting justification and be responsible for it.

    [1] https://youtu.be/x0YGZPycMEU?t=8

    • btbuildem 3 years ago

      That seems like a very reasonable stance. AI is a tool like any other, the entities with agency that use it are still people.

  • logicchains 3 years ago

>New transparency and risk assessment requirements for providers of (generative) foundation models like GPT

    Anyone following the space would be aware that there's currently a huge Cambrian explosion of innovation in foundation models going on. Something like this, requiring you to go through a bunch of potentially expensive bureaucracy would completely kill this growth in the EU, to the benefit of the larger players.

hackeraccount 3 years ago

The people writing these regulations don't know what the problems of the future are. The problems of the present are tricky because every problem has a constituency (if it didn't, it wouldn't be a problem).

The EU is trying to get around this by acting quickly before constituencies can develop. This will make no one happy - it's like my wife telling me that a car's going to hit me if I walk out the front door so she's locking me inside. All I'll know about is that I can't go for a walk. I might hear that the neighbor got hit by a car but I'll also hear that they get to go for walks so to the degree the strategy is successful it's also to the degree that it fails.

  • fstokesman 3 years ago

    > ...remote biometric identification systems in publicly accessible spaces, biometric categorisation systems (e.g. categorizing by gender, race, ethnicity, citizenship status, religion, political orientation) and the use of AI for predictive policing.

    > AI systems which can influence voters in political campaigns and by use of suggestion systems on very large platforms...

> New transparency and risk assessment requirements for providers of (generative) foundation models like GPT.

    > Clarified exemptions for research.

Putting these kinds of restrictions in place is absolutely a good thing. While they might not get everything right, this is a step in the right direction. Our laws and understanding as a society have been lagging behind technological development for decades now. That fact has enabled a large amount of exploitation to take place, which has (in the last decade especially) had a large hand in massively undermining our democracies.

    • logicchains 3 years ago

>New transparency and risk assessment requirements for providers of (generative) foundation models like GPT

This is absurd. For a relatively small sum in the grand scheme of things, I could rent a few A100s, download a free dataset and train a model like LLaMA 30B, which is comparable to GPT-3 (and indeed there already are such efforts popping up). Such a law could potentially make it illegal to upload such a thing if you live in the EU without going through a potentially expensive and bureaucratic process. It will completely stifle AI development, the same way requiring people to go through a bunch of paperwork to upload a new library would stifle web development.

  • smarx007 3 years ago

    Did you actually read the proposed text of the AI act? I certainly want more oversight over a medical startup or airplane manufacturer doing AI in the essential system components than otherwise. I think adding a high-risk category is a brilliant move.

    • jiggywiggy 3 years ago

The problem with this type of regulation is that you need to add a huge amount of vagueness to the law to make it future-proof, leading to huge amounts of uncertainty for companies, unnecessary red tape and higher legal fees.

      As well as giving the power to a judge to punish companies if the public sentiment goes negative.

      • sambeau 3 years ago

Laws can be iterated on. Uncertainty, red tape and legal fees all seem like worthwhile brakes on a potentially dangerous industry. Better to be safe than sorry.

        • mschuster91 3 years ago

          > Laws can be iterated on.

For Americans, this concept seems to be almost alien, given the (at least from a European POV) more or less constant gridlock between House, Senate, Presidency and whatever the 50 states make of that regarding enforcement.

          Or, to put it differently, they prefer the Wild West and barely self-regulated markets because they have completely lost any trust in government to create and modernize laws - a viewpoint that does make sense given the ridiculous age of key players in Congress. Feinstein is 90 years old, both likely Presidential candidates are over 75, Senators' median age is 65. How can anyone expect these people to even understand modern issues?!

          I think this is also the cause why so many American companies failed or have massive difficulties entering the European market. They simply cannot think that other countries have governments that actually govern and regulatory agencies that don't take it well if foreign companies try to buy their way out of trouble.

          • letmevoteplease 3 years ago

            The design of the American system was always based on a distrust for power. The gridlock is the point. Not my favorite person but Scalia put it well:

            https://www.youtube.com/watch?v=Ggz_gd--UO0

            • mschuster91 3 years ago

              Eventually though it leads to a situation where no one has any trust left in government, which is extremely dangerous from a democracy perspective - it breeds resentment, splintering/secession and people taking the law into their own hands (or to put it bluntly, shooting at everything they deem a threat - including children playing hide-and-seek [1]).

              That's also the reason why there are so many doomsday preppers in the USA vs. everywhere else on the planet that isn't an active warzone. These people simply don't trust the government to keep them alive in a time of crisis.

              [1] https://eu.usatoday.com/story/news/nation/2023/05/09/louisia...

              • logicchains 3 years ago

                > extremely dangerous from a democracy perspective

                How is it extremely dangerous? Over a hundred million people died in the 20th century from trusting their government too much, nothing compares to that.

                • mschuster91 3 years ago

                  Simple: if enough people do not trust the government and do not go to vote, the government loses its democratic legitimacy - and fringe extremists gain ever more power. What happened to the Republican Party should ring all alarm bells - the Bush era was bad enough, but look just how far the moderates have eroded from the party since these times. The fact that the current top runner for the GOP Presidency nomination in 2024 will be a man twice impeached and convicted of (for now at least) sexual assault or that a complete fraud (George Santos, if that even is his legal real name?!) could gain a seat in Congress is worrying - where have all the people gone that would have said "no, we want someone who can at least behave themselves in a somewhat decent manner worthy of the office"?

                  The alternative can be seen in France: many have voted Macron purely because he was (and is) better than le Pen and the other parties have all but eroded - and now the country is embroiled in riots because, surprise, the population didn't vote for this shit of a pension reform: they voted to simply not have a fascist in office.

                  > Over a hundred million people died in the 20th century from trusting their government too much, nothing compares to that.

Hitler's rise to power was precisely the other way around: mainly due to the exploding inflation after WW1, an economy hampered by reparations, and the subsequent loss of trust in democracy and the government. The people flocked to Hitler because he ran on a platform of scapegoating: blaming the "rich Jewish elites" and claiming that their extinction would save the people.

                  The most troubling thing for me is just how many parallels the rise of Hitler has with our current economic situation. Rampant inflation and explosion of costs of living, government budgets strained by the combined cost of massive economic crises (2008ff financial crisis, euro crisis, migration crisis, COVID, Russian invasion), external enemies to rally the people behind (China), a loss of trust in democracy accompanied by a world-wide rise of charismatic strongmen (Trump, Putin, Erdogan, Bolsonaro, Xi, Salvini/Meloni), lies and propaganda running unchecked, open violence in the streets... history is repeating itself, right as the last survivors of the 1933-1945 era have died - and those few that are still alive have kept sounding the alarm for years now without being heard.

              • zeroonetwothree 3 years ago

The US has more of a culture of self-sufficiency. This isn't necessarily a bad thing. It also has higher incomes that allow for this type of frivolous spending.

                • mschuster91 3 years ago

                  > It also has higher incomes that allow for such types of frivolous spending

                  We have the same net income as the Americans in Europe (ludicrous tech salaries aside), we simply pay collectively with our taxes for stuff that Americans have to pay for on their own, first and foremost healthcare and retirement.

          • jiggywiggy 3 years ago

            Yeah changing and updating laws sucks everywhere. I'm sure some places are worse or better. But most democracies are set up on purpose to frustrate the process.

        • jiggywiggy 3 years ago

It's hard to find the right line with "better safe than sorry."

          On the surface level you are right.

But look at the cost of medicine development: it has risen exponentially to billions over the last decades, stifling innovation. And although it may have gotten a bit safer, it didn't get exponentially safer.

        • dantheman 3 years ago

          How's the cookie banner going?

      • smarx007 3 years ago

TÜV certification is the red tape I want, just like how in North America people respect appliances with the UL marking.

        But regarding the uncertainty, I agree with you. And I guess, it's inevitable given the pace of change and innovation. I am expecting more ISO standards to be created and updated in response to the AI Act, which will limit the uncertainty to some extent.

    • eimrine 3 years ago

      > Did you actually read the proposed text of the AI act?

      Are these 140 pages [1] the proposed text?

      [1] https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COM...

  • PurpleRamen 3 years ago

Laws can change, but if hard damage is done, there is usually nothing you can do about it.

AI at the moment is moving fast and unreasonably, and we already have the first waves of victims. Putting a lid on it and slowing it down seems reasonable, even if it won't be a perfect solution. The interesting part is that this is pretty similar to the situation in 2020, when the pandemic started. Nobody knew exactly what was coming or how to navigate it, but everyone tried their best to survive what everyone saw unfolding on a global scale.

    • SiempreViernes 3 years ago

      The "self driving" cars have already killed several people, and nobody can object those aren't sold as "modern AI systems" by the companies that produce them.

      I guess you meant to say something like "our first wave" rather than "no first wave"?

    • kumi111 3 years ago

Honestly, as long as it doesn't have victims, it's just a good thing. As for the pandemic, it wasn't uncertain; it was a clear threat, driven by survival.

  • carschno 3 years ago

    > every problem [in the present] has a constituency

    The internet as a way of distribution is not exactly new, plenty of legislations on various levels have been dealing with that fact for a while, for instance GDPR. "The people writing these regulations" cannot predict the future indeed, but that is obvious. It should not serve as an excuse for doing nothing, though.

endisneigh 3 years ago

The acceleration of the divergence of the EU from the United States with respect to cutting-edge software will be a sight to behold. One is neutering and the other is cultivating.

  • lm28469 3 years ago

    > One is neutering and the other is cultivating

    Now the question is: what are they neutering/cultivating.

    It might very well end in a "why did we start putting lead in gas again?" / "we can spray kids with DDT clouds right?" x100 moment

    • bionhoward 3 years ago

      You could make the converse statement about thalidomide, which USA avoided while others suffered severe problems. Regulation isn’t a silver bullet, and it’s particularly challenging to regulate AI because bits flow faster than atoms

      • lillecarl 3 years ago

There's also the opioid epidemic and the amphetamine epidemic. They're pretty hard to justify, in my opinion. Not that other countries don't have issues, but it's less common in democracies that these things are pushed onto people by the govt through lobbying.

        • danlugo92 3 years ago

I live in a city with de facto legal drugs (including up to cocaine, crack and meth, but admittedly no opioids except hydrocodone) and there's no societal collapse nor any epidemic to be seen.

          The problem is not drugs.

        • dantheman 3 years ago

The opioid and amphetamine problems are due to drug prohibition in the US. It would have been better not to prohibit drugs. We are finally starting to see the rollback for marijuana. The amount of damage to civil liberties and lives lost to the drug war should be exhibit A in any argument about regulation.

    • zeroonetwothree 3 years ago

      DDT wasn’t banned for harming humans. Actually banning it was probably bad for humans overall because of the rise in malaria (but we opted to make that trade off for environmental reasons). You should probably pick a better example.

      • lm28469 3 years ago

        > DDT is an endocrine disruptor.

        > DDT is classified as "moderately toxic" by the U.S. National Toxicology Program (NTP) and "moderately hazardous" by WHO

        > "research has shown that exposure to DDT at amounts that would be needed in malaria control might cause preterm birth and early weaning ... toxicological evidence shows endocrine-disrupting properties; human data also indicate possible disruption in semen quality, menstruation, gestational length, and duration of lactation"

        https://en.wikipedia.org/wiki/DDT#Human_health

  • ChuckNorris89 3 years ago

Because much of the US's so-called cutting-edge SW has mostly been privacy-invasive data-harvesting empires designed to serve ads, like Google and Meta; or regulation-dodging apps like Uber and AirBnB; or monopolies engaging in anti-competitive and anti-consumer behavior, like Amazon, Microsoft and Apple. By that logic TikTok is also some fancy innovation. None of the FAANGMAULs have their hands clean. Same with scummy companies like Cambridge Analytica.

Honest SW innovations that don't aid in monetizing your eyeball time to make you buy shit, or manipulating your emotions into making you feel insecure, or swaying your vote, don't make nearly as much money as what big tech does.

Just as we stopped food manufacturers from putting arsenic and lead in our food, the auto industry from making only fuel-inefficient, air-poisoning shitboxes, and big tobacco, who killed more people than the Nazis, the tech industry also needs regulations to protect consumers. Honestly, it's long overdue, and yet people on HN don't want this because they feel it will stop them from becoming the next Zuckerberg.

    • andrepd 3 years ago

      This hits the nail on the head. We have talent and we have tools, but the current market incentives lead to these being used to develop things that are often actively negative for people. We need democratic regulation to make it so that the incentives favour meaningful innovation, not just mindless data-harvesting mental health-destroying apps.

    • zeroonetwothree 3 years ago

      Why is it that every time someone makes a poorly reasoned argument steeped in emotion there are Nazis involved?

    • criley2 3 years ago

      You can extrapolate this opinion all the way to "the only moral human existence is de-industrialization and return to hunter/gatherer lifestyle, and reducing humanity from billions to a million or less".

      You drew the line at software and automobiles, but it's not hard (and not a slippery slope) to authentically draw the line much further. All of civilization is bad for the planet and for us as well. Bread was a mistake.

      • zirgs 3 years ago

        Surveillance capitalism isn't necessary for society to function. I can live just fine without google, fb and tiktok serving me ads. I can store my files in the cloud just fine without microsoft and google scanning them. Navigation apps work just fine without uploading location history to google. A lot of those so-called "innovations" aren't necessary.

  • ClumsyPilot 3 years ago

    it's not neutering, it's pest control

    • neximo64 3 years ago

      I'm assuming you're typing this with the comfort of some kind of blink or webkit based web browser, made by said pests

      • ClumsyPilot 3 years ago

        Never accused browser developers of being pests.

But I do have some candidates, say gig-economy delivery apps that bypass the minimum wage and put drivers under such time pressure that they are always panicking. Besides peeing in bottles, one of them, on a motorbike, hit my friend and nearly put him in the hospital.

Many of the Deliveroo folks in London are riding illegal ebikes with 1000W motors at 45 km/h.

Most of them are self-done conversions of a $150 bicycle that is not even roadworthy, with old squeaky rim brakes and no mirrors, horn or anything.

If I know this, Deliveroo knows this, and I am confident that they are knowingly taking advantage of the situation. If the riders were fully employed, Deliveroo would have to provide them with vehicles, something roadworthy.

        For example a Riese & Muller ebike is actually a roadworthy light moped with mirrors, ABS, etc.

Why is my safety sacrificed to save Deliveroo like 2 grand in vehicle costs? Are lives so cheap?

      • aqme28 3 years ago

        What do blink or webkit have to do with their comment? Regulations like this are designed to weed out scammers and charlatans and make the market better for everyone, hence "pest control." Think Theranos and the FDA.

      • kolinko 3 years ago

        We would all be using Internet Explorer now, if not for the US Government. So using browsers as an argument against regulations is a bad analogy.

        • neximo64 3 years ago

          Still made by a pest.

          The original comment was about how Europe isn't able to nurture technology.

          • ClumsyPilot 3 years ago

            Anyone heard of ARM and ASML? Skype, Spotify? Just because we don't do software giants does not mean we can't do technology.

Maybe the US economy would be healthier if you did not allow like 3 companies to buy up all the competition.

            • neximo64 3 years ago

Of those, one is software, as you mention; one is not an EU company; and one was founded 40 years ago, before tech regulation was a thing.

              If you did a like for like equivalent for American technology it can't even compare. You could find small countries in Asia that have done more.

What the author of the original comment is saying is that that's not a surprise.

Surely you can't be serious with the EU all but lecturing the world on how AI should work, using Spotify? Or even counting ASML and ARM? On what basis?

              • ClumsyPilot 3 years ago

Would you accept your own argument in relation to China? Since they dominate production of hardware, should we adopt their top-down approach to industrial policy and exploitative labour laws?

                • neximo64 3 years ago

Well no, but I would argue that we don't have to, since we (collectively, as the West) have our own software and hardware. When we look at what is sold into Western markets, it is not being designed and made in China; it is being contracted to be built there.

                  I would also argue that the labour exploitation is being done by American/European companies.

The point I'm trying to make about the EU is that it has no such industry but wants to extraterritorially rule other countries, with no industry knowledge or stakeholders, other than being a market to sell to. Where the companies in question exist, mostly America, it's because their government doesn't hamper development and lets companies thrive in a way that is reasonable, balanced and fair.

The EU version is quite overburdensome: the company won't easily exist in the first place, and that has no impact on the labour practices of outsourced companies anyway. There are plenty of European companies that manufacture in China and exploit labour, which is a separate argument from the ability of companies to be nurtured into existence.

      • krono 3 years ago

        What we call pests often fulfil such important roles in nature that we'd be hopelessly effed without them. They only become a problem when they decide to come and live in our homes, eat from our food, and lay eggs in our beds.

        So since we're stuck with them, the best thing we can do is to try and prevent this problematic behaviour from taking place. If that somehow fails things are going to get nasty for a while.

      • aerhardt 3 years ago

        I’m assuming you’ve benefitted to some extent from GDPR making it harder for companies to laugh their way to the bank with your data, even if you don’t live in the EU.

        And to be clear, I pretty much agree with your sentiment; the EU is overzealous and the US is light-years ahead in digital tech - and not only or mainly due to regulation. But let’s not pretend the US culture of “just do it and inshallah” is all rainbows and sunshine. With AI in particular there are existential or quasi-existential risks.

        • endisneigh 3 years ago

          > With AI in particular there are quasi-existential risks

Lol, there really are not. It's hilarious to see the upper echelons of many governments unironically believing this. They should definitely keep an eye on the development, but as of yet it is not even close to an existential risk.

          • aerhardt 3 years ago

Geoffrey Hinton, Stuart Russell and other AI leaders are warning us that there are existential risks, if only due to the inevitable creation of autonomous murder machines by the military (Hinton and Russell are both worried about things beyond that, by the way). I'll take their word for it, and not only on the basis of their authority: they have very well-put arguments.

            OpenAI is also playing it safe to some extent, maybe to please the general public and serve the org’s own interests, or maybe they are genuinely concerned about real safety threats, who knows. At any rate their actions don’t align with your discourse.

            • endisneigh 3 years ago

              > At any rate their actions don’t align with your discourse.

Sure they do. You can still use GPT-4 now. If GPT-4 were actually an existential risk, they would have shut it down.

    • tenpies 3 years ago

      > pest control

      I believe the Chinese Communists tried a similar thing[1]. I know you're just being poetic, but let's hope history doesn't repeat.

      ---

      [1] https://en.wikipedia.org/wiki/Four_Pests_campaign

  • simion314 3 years ago

AFAIK in the USA the AI stuff is still in limbo; Microsoft and OpenAI will have to defend themselves in court, and the result could be worse than what the EU is preparing, depending on your alignment.

I prefer that this shit is clarified so everyone knows what is legal and what is not, rather than having only the big corporations do the illegal stuff and apologize later.

    • hhh 3 years ago

      It’s not illegal though. That’s the point, is it not?

      • simion314 3 years ago

It is undecided/unclear what is legal or not; I am not an American, so I am not the one who should explain this.

But Microsoft trained their coding AI on GPL software but not on proprietary software, implying that they can risk screwing Open Source but not the paying GitHub customers. Or I'm not sure why they did not use their own internal proprietary code.

        OpenAI trained on the entire internet and on books, ignoring any license or copyright concerns. Stability.ai also trained on a lot of stuff. Adobe trained only on stuff they have licenses for.

        So as we can see, until a US judge decides how to interpret the existing laws, things are uncertain: some companies take big risks, others take fewer, others none.

        But some other judge in a superior court may decide something else, so IMO the USA also needs to make a decision and not let FUD (deserved or not) about AI spread.

        I've seen that many do not know the issue ChatGPT had in Italy was about a data leak. There are laws on what you do when this happens, and probably similar laws exist in the USA; maybe in the USA it is harder to start an investigation, or maybe there was some lobbying involved.

  • oblio 3 years ago

    The jury's still out on whether software is an unalloyed good.

    • brookst 3 years ago

      Straw man. Nobody would say that.

      Is the jury still out on whether software does more good than harm?

      • oblio 3 years ago

        Some categories, for sure.

        We have no idea about the impact of AI.

        It's almost like the EU is regulating stuff that's really scary.

        • brookst 3 years ago

          Absolutely. The EU's regulatory posture is to prohibit things that could cause harm in the future. It's very consumer-first, risk-avoidant, and a fair way to do things.

          For someone who's more willing to accept risk and who prefers regulating bad outcomes rather than just scary things that could possibly have bad outcomes but which nobody can predict, it looks over-protective. But I fully support the EU's right to take a wait-and-see approach and gradually accept AI if it goes well for the rest of the world.

          Europe hasn't been at the forefront of any tech or cultural change for centuries, and seems to be getting along just fine.

          • oblio 3 years ago

            > Europe hasn't been at the forefront of any tech or cultural change for centuries, and seems to be getting along just fine.

            "Centuries" is mega hyperbole when you look at physics, chemistry, etc, etc up until at least 1930.

      • chii 3 years ago

        it depends on what people attribute as harm.

        I think the only harm an AI can do is the kind that comes from errors (human or otherwise), such as the Boeing 737 MAX, where people misuse software.

        The type of 'harm' that some people attribute to AI such as loss of economic utility, to me, is not harm. And legislating that away is dangerous.

  • aqme28 3 years ago

    What about these regulations is neutering? They sound pretty reasonable I thought.

  • AndrewKemendo 3 years ago

    One actively encourages corporations to use the population as test subjects for any psychopathic idea that gets enough money backing it

    The other is Europe

  • stefan_ 3 years ago

    Yes, that's exactly how it turned out with China, who have been doing this much longer...

  • can16358p 3 years ago

    Same for GDPR.

    Not a surprise all big tech is in US. If EU wants to play this card, US will get even stronger at keeping tech talent.

    (I reside in neither the US nor the EU)

    • hef19898 3 years ago

      Reducing everything, technology, economy and society, to software tech is such a pseudo-SV bubble thing, it is getting boring by now.

    • icepat 3 years ago

      > US will get even stronger at keeping tech talent

      Not necessarily the case. There's many reasons someone would leave the US (or North America in general) for the EU/EEA. The US is becoming less attractive to some given the current political weirdness. There's plenty of interesting work to be done in the EU and EEA that's not "hindered" by GDPR, and many developers who relocate to the EEA actually _like_ GDPR, and regulation aimed towards protecting individual rights. That, at least, is the case for me. Having had the opportunity to relocate anywhere, I ultimately picked the EEA.

    • smarx007 3 years ago

      I didn't see any effect from GDPR on tech talent migration. Any source on this?!

      • icepat 3 years ago

        This, to me, sounds like personal opinion. Many developers in the EEA, while agreeing that it causes extra work, like the spirit of GDPR. The attitude (a generalisation) towards the American way of working with private data, is generally that it's unethical. It's not hard to find developers who are happy to, and want to, work within the framework.

      • robswc 3 years ago

        He probably means it's just another factor in why there are very few startups/innovative companies in Europe.

        I did consulting, and for a fair number of projects we had to block all European traffic while GDPR compliance was sorted out. There was almost never anything we actually had to do or change, but the threat was looming and it was easier to just not take the risk.

        • smarx007 3 years ago

          That's a totally legit way of "complying" with GDPR. An unacceptable way to do this is to present a consent screen that restricts access if consent is denied but allows access otherwise (looking at you, Washington Post, but at least they provide an option to pay for a privacy-respecting experience). Cf §43.2 https://gdpr-info.eu/recitals/no-43/

  • suction 3 years ago

    As an EU citizen, I can't even begin to explain to you how willing I am to trade possibly not having cutting-edge software for the higher living standards and cultural superiority of the EU.

gumballindie 3 years ago

> While the AI Act’s references to copyright issues in generative AI are still very vague and only stress how much of a grey area it is, requiring providers of large models to be more transparent about their sources seems not a bad thing as such. As many aspects of the act it will be seen how this works out in practice.

If my license prohibits use of my work for AI training, or requires that any modified code include my license or credits, or I lack a license, or my blog doesn't give you permission to train against my content, then you shouldn't use it. Google tried hijacking content with AMP, and AI is no different. If you violate my terms then I want to be able to submit evidence - or suspicion - to a government agency that audits you or fines you to oblivion. Ideally you'd have to pay damages equal to the number of people you may have sold my content to, in full or in part.

This would lead to a win-win setup. Artists, developers, writers, lawyers and so on would get compensation for training content - one-time or ongoing - leading to higher-quality models, job growth and a superior AI product overall.

AI is by and large a net positive but needs to be done right.

  • andybak 3 years ago

    This feels like (yet another) extension of copyright. Whilst I'm not sure I completely disagree with you, I want people to acknowledge that copyright is not the natural state of the universe. Prior to (I think) 1790 there was no copyright, and human beings managed minor things like, you know, the Renaissance and stuff like that.

    Copyright was invented and enforced and the results have been a mixed bag. It seems to suffer from a ratchet effect where the law only ever increases the scope to which copyright applies and never decreases it.

    However intuitive your sense of your moral rights are, it's about the net benefit to society and we should be very careful what we wish for.

    • tpxl 3 years ago

      If creating LLMs based on copyrighted data is found to be legal, all that will do is allow giant companies to sell copyrighted work without crediting the original authors, while leaving everyone else in the dirt.

      • andybak 3 years ago

        > all that will do is allow giant companies to sell copyrighted work without crediting the original authors, while leaving everyone else in the dirt.

        I'm not sure I follow. But even accepting your premise - I'm not sure how it will favour giant companies over anyone else. The models are already in the wild and anyone can use them. In some ways - large companies are less likely to do anything that might open them up to legal risks or PR downsides.

        Maybe this is more of a Napster moment than it is a big tech powergrab?

        • tpxl 3 years ago

          GPT is owned by Microsoft, LLaMA by Facebook and Bard by Google. If you trained a model on Google's public properties and started distributing it (or its output) for money, you'd be sued into oblivion real quick.

          • andybak 3 years ago

            My point was that the models exist, people are fine tuning them and/or releasing open clones. There are models of comparable power to the state of the art without any controlling interest from a big tech company.

            The Google memo covered this in detail and it was what makes me want to question the "AI is owned by big tech" angle.

      • zeroonetwothree 3 years ago

        Won’t it actually harm big companies that own the majority of IP currently? It will empower individuals to benefit from creative works more cheaply.

        • tpxl 3 years ago

          The vast majority of IP is not owned by big companies. For every big picture movie there are hundreds of indie movies that will only get a few views.

          • andybak 3 years ago

            You're measuring by len(all_media)

            Surely a more useful metric is sum(all_media.value)

            • tpxl 3 years ago

              Price/profit != value. Sure, Hollywood movies bring in a ton of money, but I get way more value from daily indie youtubers than a blockbuster released once a month.

              • andybak 3 years ago

                Agreed but also Sturgeon's Law applies.

                So maybe the correct answer is somewhere in-between

    • gumballindie 3 years ago

      > Prior to (I think) 1790 there was no copyright and human beings managed minor things like, you know, the renaissance and stuff like that.

      I'm curious whether the introduction of copyright is what led to an explosion of products and innovation - suddenly people were given an incentive to monetize their ideas. I doubt the Renaissance happened due to a lack of copyright; I think it's more due to social, political and health circumstances than to the lack of protection of one's work. We, in Europe, suffered from disease, famine and war, to the point where we reached the conclusion that enough is enough - we need rules to the game.

      • zeroonetwothree 3 years ago

        There doesn’t seem to be evidence that copyright increases innovation. Indeed in some areas with no IP protection we actually see more innovation (example: fashion)

    • nickfromseattle 3 years ago

      > it's about the net benefit to society and we should be very careful what we wish for.

      Seems like we have a classic trolley problem.

      On one track, compensating copyright holders is required for LLMs, and it's going to be very expensive to acquire all of this copyrighted info, meaning only the biggest companies can afford to do it.

      On the other track, compensating copyright holders is not required, LLMs (led by big tech) capture most of the economic value from every incremental piece of content created by humans in perpetuity, consolidating wealth in the hands of a few shareholders and insiders.

      Neither seem ideal.

      • gumballindie 3 years ago

        > On one track, compensating copyright holders is required for LLMs, and it's going to be very expensive to acquire all of this copyrighted info, meaning only the biggest companies can afford to do it.

        There is also the third track which is that most abundant code is open source or unlicensed content (which is protected in the US afaik). If corporations can't monetize on it, we win, because models either need to be open source or we need payment for training.

      • andybak 3 years ago

        I'm not sure it's certain yet whether AI is going to lead to more consolidation or actually have the opposite effect.

        Whilst history tends to make me suspect the former, the recent leaked Google memo gave me pause for thought. AI is already out there and can already be trained on consumer hardware. It's ever so slightly possible that big tech won't be able to hoard the benefits this time.

      • welshwelsh 3 years ago

        I'd choose the second track without hesitation.

        Shareholders can consolidate all the wealth they want, as long as they deliver the goods: LLMs that are trained on all of humanity's creative output.

      • zirgs 3 years ago

        Open source models are possible if we pick the second option. Lots of innovation in the AI scene is happening thanks to open source models being available to the general public.

  • zirgs 3 years ago

    You can't forbid others from learning from your work. If you want to prevent it then don't post it publicly.

    • gumballindie 3 years ago

      Sorry, AI is not "others", it's software. This argument of yours amounts to "if you don't want to get mugged, don't go outside".

      • zirgs 3 years ago

        If someone wants to learn stuff from your work - they will and there's nothing that you can do about it. I can train a LoRA on someone's artstyle in a few hours on my PC. You can rent a GPU and do it under an hour. It takes more time to curate, process and caption images than to actually train the AI. It's that easy. So yeah - the cat is out of the bag and you will have to adapt.
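
        For context on why that's so cheap: LoRA fine-tuning doesn't retrain the full weight matrices, it learns a small low-rank update on top of frozen weights. A minimal numpy sketch of the idea (dimensions and names here are illustrative, not from any real model):

        ```python
        import numpy as np

        # LoRA (Low-Rank Adaptation) in one line of math: instead of
        # updating a full weight matrix W, learn two small matrices A and B
        # whose product is a low-rank correction: W' = W + B @ A.
        d_out, d_in, rank = 64, 64, 4

        rng = np.random.default_rng(0)
        W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights

        # Trainable adapter; B starts at zero so the model is
        # unchanged before any fine-tuning happens.
        A = rng.normal(size=(rank, d_in)) * 0.01
        B = np.zeros((d_out, rank))

        def forward(x, W, A, B):
            # Adapted layer: base output plus the low-rank correction.
            return W @ x + B @ (A @ x)

        x = rng.normal(size=(d_in,))
        assert np.allclose(forward(x, W, A, B), W @ x)  # identical at init

        # The adapter has far fewer trainable parameters than the full matrix.
        full_params = d_out * d_in            # 4096
        lora_params = rank * (d_in + d_out)   # 512
        print(lora_params, full_params)
        ```

        With rank 4 the adapter here is 512 parameters versus 4096 for the full matrix, and the gap grows with model size - which is why a style LoRA can be trained on a single consumer GPU in hours.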

        • gumballindie 3 years ago

          I can download a whole set of movies even though the piracy cat's been out of the bag for a while. But if I monetised them I'd be in a hell of a lot of trouble. My debate points are not about stopping AI; they are about how we shape its use as a tool.

          • zirgs 3 years ago

            The difference is that AI training is not illegal and styles aren't copyrightable. So you can already make stuff in the style of some other artist and then sell it.

  • endisneigh 3 years ago

    > This would lead to a win win setup. Artists, developers, writers, lawyers and so on would need compensation for training content - one time or ongoing - leading to higher quality models, job growth and a superior ai product over all.

    I have been told information should be free, though.

    • gumballindie 3 years ago

      According to terms and conditions. IP is not information, it's work. Pay for it or don't use it. I decide the terms of my own output - not you, and certainly not a corporation that resells it and threatens me with unemployment.

      • andybak 3 years ago

        > I decide the terms of my own output not you, and certainly not a corporation that resells it and threatens me with unemployment.

        Actually - the government decides the terms of your output by passing laws. Your legal right to your content is that which the law allows. If copyright legislation was revoked tomorrow you'd be howling into the void.

        • gumballindie 3 years ago

          What kind of tyranny would allow a handful of corporations to grab my work for free and resell it while making me unemployed? As a citizen I want my work protected, and I equally want corporations that benefit from it to compensate me if I so wish.

          Anti-copyright is a bit like communism. What's the plan? That we all live in one happy commune while the politburo owns everything and we starve in the name of glorious progress? We tried it before and it didn't work.

          • andybak 3 years ago

            I'm not anti-copyright as much as "anti-assuming-more-copyright-is-an-unalloyed-good".

            > What kind of tyranny would allow a handful of corporations to grab my work for free and resell it while making me unemployed?

            Again - a presumption that AI will be owned by big tech - it's already out there and runs on your home PC. And it can't be taken back.

      • endisneigh 3 years ago

        I don't disagree with you, but many people on HN think digital information should be free, just sayin'

NickHoff 3 years ago

AI is moving too fast to regulate. A lot has happened in the field just during the time that these "last minute amendments" have been discussed. It's more likely that the EU will end up with laws that are obsolete by the time they're implemented, take forever to revise or repeal, and just sit there constraining innovation for no good reason. For example, if an EU-based startup wants to build a generative AI system to make interior design renders to show off furniture for a magazine (or whatever), how much time and legal expense will the "transparency and risk assessment requirement" add?

To be fair some of this sounds like a reasonable idea, like prohibiting "remote biometric identification systems in publicly accessible spaces". The issue is that this law would only prohibit using AI to do that. Let's outlaw the things we really don't want (like algorithmic voter influence) in a technology-agnostic way and then let AI flourish.

guy98238710 3 years ago

The problem is still the same as it was with the cookie law: Should we regulate capability or intent? Regulation by capability is easy to specify in the law, but it cripples useful technology as much as predatory technology and thus ends up being just a legal manifestation of technophobia, hindering overall progress. Regulation by intent/purpose or actual harm (as opposed to potential harm) is a much better option in my opinion.

The other troubling aspect of this is that it's not going to be proportional, but then hard-edged law is a broader problem.

decide1000 3 years ago

I always feel that the EU protects the people, while others seem to care about profits only.

  • lopis 3 years ago

    Just read through the comments on this thread. Most people only care about whether the EU "falls behind" the US, no matter the societal cost.

    • nologic01 3 years ago

      There is no reason that the EU should "fall behind". The danger with digital tech has almost nothing to do with the capabilities and features of the technology (well, if it's a nuclear bomb, its features are very specific) and everything to do with who controls it and why.

      The implication that there is only one way to organize control of AI tech is patently false. We already know that you can have at least i) a state controlled system (China) and ii) an oligopolistic corporate controlled system (US). There are many other plausible ways and the EU must simply create the conditions for these to develop.

      Legislation that has real teeth and vision can do this. It is possible even within existing corporate and monetary systems. Even the (controversial) Friedman doctrine that a corporation's sole aim is seeking profit holds that corporations must operate within the moral and legal sandbox that society creates.

  • zeroonetwothree 3 years ago

    As a general rule no one in the real world is “good” or “evil”. Everyone is acting for their own quasi-selfish and quasi-ethical reasons. Governments aren’t exclusively the “good guys” looking out to “protect people” any more than they are exclusively the “bad guys” exploiting people for corrupt gains. Corporations aren’t exclusively the “bad guys” seeking profit regardless of the consequences any more than they are exclusively the “good guys” looking to expand consumer welfare with innovation. These are just stories people in power tell to manipulate you. The truth is always in the middle.

  • Lionga 3 years ago

    That is exactly what they want you to feel about them.

gambiting 3 years ago

EU AI Act, not EU Act

duringmath 3 years ago

Funny how all these EU rules come to the detriment of US companies.

All they do is keep piling on regs, and they're doing it for emerging tech with exactly zero proof of harm.

timwaagh 3 years ago

I don't know why Bard isn't coming to the EU. Is it this or something else? Google hasn't explained it. I would hope that any AI act will allow reasonable players like OpenAI, Microsoft and Google to continue their good work without running into too much red tape. If it's because of the AI Act, the act might be too strict, as the EU risks being disadvantaged by not having access to the same tools as 180 other countries. AI will ultimately prove too useful to ban.

  • robswc 3 years ago

    >Is it this or something else

    I ofc can't speak for Google, but I know that for the work I did it was much easier to simply block the EU until compliance was sorted out. It was almost always fine and we didn't have to change anything, but it wasn't worth the risk.

    I'd be shocked if it never made it to Europe.

    • timwaagh 3 years ago

      Google has now said they will do it but indeed they want to sort out compliance first.

  • input_sh 3 years ago

    Under what criteria did you reach the conclusion that OpenAI, Microsoft and Google are reasonable players in all of this? Is it purely based on their name recognition? Is that not antithetical to the idea of a competitive market?

    Also, there aren't 180 countries in the world that are outside of EU.

    • timwaagh 3 years ago

      I read that 180 countries get access. From what I have seen of the list, the term "countries and territories" might be more accurate.

      I think they're reasonable players. They are either well established or have good-ish intents. I don't think they ever did anything worse than bundling Internet Explorer with Windows.

    • dragonwriter 3 years ago

      > Also, there aren’t 180 countries in the world that are outside of EU.

      If by countries you mean “UN member or non-member observer states”, correct.

      But that’s pretty much the most conservative count. (And leaves 168 non-EU countries.)

      • input_sh 3 years ago

        But Bard is not available in way more "traditional" countries than just the EU.

        https://support.google.com/bard/answer/13575153?hl=en

        Canada, Brazil, WB6 (Bosnia, Serbia, Montenegro, Albania, Macedonia, Kosovo), Switzerland, Iceland, Norway, Moldova, Ukraine, the ones under US sanctions (Syria, North Korea, Iran, Russia, Belarus, China)... plus 27 EU countries. That's 46 right there, and I probably missed a few.

        But hey, gotta get those numbers up, so it's available to all of literally zero people living in Heard Island and McDonald Islands or South Georgia and the South Sandwich Islands.

jcarrano 3 years ago

The problem with this is that it is too easy to encompass lots of current uses of data analytics, and even primitive AI, in its definition. Also, does simple matrix scoring qualify as AI?

> AI systems which can influence voters in political campaigns and by use of suggestion systems on very large platforms

I read that as "targeted advertisement is banned for political campaigns".

> biometric categorisation systems

This brings back the debates on "biased" AI, where people seem to forget that machine learning works on the basis of bias, and then go and propose introducing more bias to counteract it.

My guess is that we will reach a state where anyone using ML/AI for anything having to do with people will be exposed to a fine, but the EU will apply the rules at its own discretion against the companies that it does not like.

Mortiffer 3 years ago

Funny how on the same day we see Google announcing that Bard will not expand into Europe.

https://news.ycombinator.com/item?id=35914705

aqme28 3 years ago

I'm seeing lots of comments from fellow software engineers that seem to boil down to "all software regulation is bad" and I'd like to know where that's coming from, or if it's just a minority of the more libertarian-minded among us.

Regulation can of course be a problem, but it can also do a lot of good to protect consumers. Do people feel this way about the FDA, the FAA, etc?

  • robswc 3 years ago

    >Do people feel this way about the FDA, the FAA, etc?

    To a degree sometimes. There is no benefit to regulators to allow for more risk, because they will be blamed if something goes wrong. There's hardly anything to gain (as a regulator) to take risk.

    • aqme28 3 years ago

      Completely agree, but that’s not the same as saying that these organizations do not provide benefit

  • jkeisling 3 years ago

    Yes, as a matter of fact, I do. If you look at even the US regulatory environment, for example: FDA approval for new drugs takes decades and billions, and enough said on America's "War on Drugs". The NRC has practically strangled nuclear power in its cradle, and environmental review adds years and millions to practically all new infrastructure. Health insurance regulation and financial regulation have enshrined a few oligopolistic players, with nearly impossible regulatory burdens for any new entrants. It's not merely that any one regulatory agency has run amok. All such agencies are fundamentally incentivized to increase their power and reduce risk by adding regulation in a one-way ratchet, while they have very little incentive to allow innovation except sustained external pressure, which new industries rarely can sustain. Large corporations rarely counterbalance this regulatory creep: they welcome regulation, since they can capture agencies to their will through lobbying and the "revolving-door" and afford a legal staff, while startups and individuals still face impossible regulatory burdens. Don't even get started on copyright. Regulation in America is fundamentally broken, and while it admittedly does manage to "protect" consumers from some of the worst potential abuses, it does so by choking innovation and essentially freezing us in the 1970s.

    Why should we believe that AI regulation would be any different? Whatever agency is given power won't just stop at reasonable guidelines. They will likely be pressured by big players like Microsoft to choke off open source, by copyright giants like the RIAA and Disney to stop generation, and by every imaginable constituency to protect their jobs from change. Most importantly, individual open-source development will become prohibitive, and AI will be locked behind corporate APIs. In fact, if you look at sites like LessWrong, AI doomers are openly welcoming regulation precisely because they know an FDA-like agency will stop progress dead in its tracks. Make no mistake, AI regulation will hand enormous power to governments and corporations while denying it to the individual, whatever the good intentions were at the beginning. It is a devil's bargain, and if we as an industry are to take it in the name of "safety", we should at least do it with eyes open, instead of pretending that all that is being asked for is "guardrails" and "common sense".

    • aqme28 3 years ago

      Maybe we disagree on a fundamental level then. Even though the FDA, NRC, FAA, etc overregulate and stifle somewhat, I think they are a net good and that a regulation-free environment would not be a better situation.

golemotron 3 years ago

I'm not sure the EU is powerful enough to have their way on AI, USB-C, GDPR and a handful of other things. With a population aging faster than the US's, it seems like they are spiraling into irrelevance. It could become a market served only by corporations willing to prop up old tech.

  • scrollaway 3 years ago

    “The eu is spiralling into irrelevance” is up there with “the internet is a fad” and “ai is useless” in the list of absurd sentences to say out loud so nobody takes you seriously ever again.

cmilton 3 years ago

Why do we assume evil? How do our unfounded decisions today affect the possibilities of progress? How easy will it be to repeal these laws when we find the harm they may cause? I would prefer to wait and see.

  • ttjjtt 3 years ago

    Because the results of the last few decades of tech optimism have been so utterly dismal? And all the other results of the neoliberal deregulation dream, which most of us are either quietly panicked or despondent about.

    • cmilton 3 years ago

      >Because the results of the last few decades of tech optimism have been so utterly dismal?

      Can you expound upon this a bit? In my view, we have witnessed astonishing progress in tech over the last few decades.

      >neo lib deregulation dream

      What is your dream?

      • ttjjtt 3 years ago

        I meant to specify social results. Innovation is valuable only in so far as it raises the well-being of society as a whole. All the critical measures of this are going in the wrong direction.

        Technological advancement might be astonishingly fast but that gives no indication of its value.

        10-15 years ago I knew a good few tech optimists, including myself, and would regularly encounter more. Now I don't know one, and it's odd to encounter one. The mood has shifted. VC-fuelled platform capitalism has now unfolded.

        From earlier today: https://news.ycombinator.com/item?id=35901537

        Everyone’s deepest dream is an ever kinder society where everyone can thrive. It just gets distorted by short term incentives of power. Hence regulation.

        • cmilton 3 years ago

          I believe that tech has helped solve food shortages across the world in recent decades. Tech has cured disease. Even recently it helped to develop a vaccine at record speed. Tech has provided internet connectivity for the most remote parts of the world.

          That gives me optimism.

  • soulofmischief 3 years ago

    In the current climate, and generally, you should distrust your government on a systematic level, and take pride in doing so. They certainly distrust you.

    • renjimen 3 years ago

      That’s a wildly sweeping statement. I am assuming you don’t live in a functioning social democracy. You should try it some time, it might restore your faith in the ability of centralised power to improve the lives of its citizens

    • cmilton 3 years ago

      That sentiment doesn't really bother me. Most of the "current climate" exists within these screens.

      Just so we are clear, I am against this premature neutering of technology.

      • DangitBobby 3 years ago

        Most of the law restricts or prohibits mundane abuses of ML that have already happened, and calls for people developing and deploying AI to attest they are doing their due diligence.

lionkor 3 years ago

Is this not an announcement of an announcement?

  • FionnMcOP 3 years ago

    The act isn't ratified, and probably won't be for some time. The attached link explains that yesterday the act passed the first set of votes in the EU, which formally starts it on the path to being law. The article also describes some of the recent changes to the wording of the law.

polski-g 3 years ago

Thank god for the First Amendment: https://www.eff.org/deeplinks/2015/04/remembering-case-estab...

dhfbshfbu4u3 3 years ago

They could've released this at any point over the last few years, then worked together with industry to refine the conditions. Instead, they waited too long and now it's going to be a regulatory mess. The big winners here will be the lawyers.

otikik 3 years ago

I hope they put something about user interface there.

The “EU IA UI” act.

melvinmelih 3 years ago

There’s a reason why Google didn’t release Bard to EU yesterday, expect that to happen more and more as they fall further and further behind.

  • Renaud 3 years ago

    It’s ok.

    Either Google manages to follow these principles, which to me look aligned with their announced intent for more ethical AI use, or they won't, and that means their product won't follow the basic ethical guidelines proposed by the EU - in which case I'd rather it not be available in the EU (and I don't understand how anyone would defend a company not ready to address these ethical concerns).

    • lexandstuff 3 years ago

      It's just a large language model trained on textual content. It's not going to hurt you.

      Do you have to wait until every product you use is bureaucratically vetted?

    • renjimen 3 years ago

      Exactly. If they’re not willing to understand and vet their own tech in the face of basic regulation, then no thank you. Sad the US population will once again be guinea pigs for these disruptive technologies

nologic01 3 years ago

What the EU is doing is not working.

The legislative effort is real, well intentioned, and groundbreaking. It reflects democratically the wishes of half-a-billion people that include some of the fairest and most sensible countries on the planet. So ignore the hallucinating tech bros. EU legislation is certainly in the direction of what a good digital society looks like.

But there is no response on the ground. From the tech makers. Remember the timeless Buckminster Fuller quote: you need to make a new model that makes the old model obsolete. Why aren't there any actors taking the clear legislative signals to heart and creating the new digital model?

Here are some hypotheses / flow chart:

* there are no such actors. too much focus on luxury goods, not much on tech. mass migration of talent to the US has created a desert.

* there are actors but they don't get funded to act. If that is the case, we need to ask: there are gazillions of euros rotting, so why is this not happening? Possible causes:

  * financiers don't actually buy the legislative agenda. they think it will be watered down / defanged by the FAANGs

  * financiers belong to a group that doesn't actually *like* this legislative agenda (they bank on surveillance capitalism and the like for guaranteed returns)

Whatever the fundamental challenges and first causes, unless there is a bottom-up buildup of alternative approaches, the top-down agenda will eventually fail.

  • smarx007 3 years ago

    Well, GDPR seems to be working all right. Care to share any facts in support of your statements? The AI Act's required audits by TÜV and the like (somewhat similar to UL in North America) are going to have a pretty straightforward "response on the ground".

    • nologic01 3 years ago

      You seem to imply that enforcing legislation means it's "working". This is obviously a pre-requisite; legislation that is not enforced is a joke.

      But as clearly stated in my comment, this is fighting the old model, not building a new one.

      • smarx007 3 years ago

        To me, a privacy regulation that focuses not on merely collecting and protecting private data (HIPAA and friends), but instead on purposes of its use is very much a new model. For example, Amazon doesn't need consent to process and store your home address, but only as long as they use it to deliver goods you order and nothing else.

        I think it's more of a problem with companies who are fighting to keep the old model (thus, asking for consent where it's not needed just in case they want to do something privacy-disrespectful with it in the future – you can totally deny consent and keep using the service) and consumers not understanding this (and thus, not voting with their feet). But I see GDPR enforcement every day (I donate to https://noyb.eu/en and recommend you to do the same if possible), people fighting non-free consent etc. – I expect in a few more years the landscape will be better understood.

roenxi 3 years ago

In theory I probably do agree with the bill as presented in the article, but there are two reservations that should be brought up.

Firstly, the EU has a history of crippling their native tech industry to the general detriment of the EU. It is quite possible that this will end up being one more nail in the coffin.

Secondly, they're fighting economics on this one. AI seems to be cheap and accessible. In practice, controlling the things this article discusses is probably going to be impossible. It is like banning communism or Nazism - we'd do it if it were possible. It isn't. Attempts to ban the idea just make the whole situation worse. It is well nigh impossible to ban modes of thinking, and AI appears to be one of those.

  • wnkrshm 3 years ago

    The EU is more in line with China on this one, I'm all for letting other societies try this cultural nuclear weapon on themselves first. See what sticks.

    I feel that's the way things have been going culturally/economically in France/Germany: Look at the world (usually the US but now also China more), see what is useful/competitive, adopt it in moderation. So this stance is extremely in line with the usual approach, in my opinion.

    • bitL 3 years ago

      France's industrial might is going down quickly, and Germany without cheap resources is losing against China, which copied most of its tech. Moreover, China is already outcompeting the US in AI, albeit focusing on population control models. The EU will become an open-air museum for rich travellers.

      • ldhough 3 years ago

        > China is outcompeting the US in ... population control models.

        I feel 100% OK with being "outcompeted" in this space.

        • bitL 3 years ago

          That's fine, but the point is that EU will be super vulnerable to any AI because they didn't invest in it nor created an environment hospitable for it.

  • Silverback_VII 3 years ago

    >the EU has a history of crippling their native tech industry to the general detriment of the EU. It is quite possible that this will end up being one more nail in the coffin.

    The primary factor is brain drain. As many experts in geopolitics suggest, the European Union, in collaboration with Russia, could potentially rival, if not surpass, the United States in terms of power. However, it appears they are content to be pawns on the larger chessboard.

    • pelorat 3 years ago

      Russia has absolutely nothing to offer the EU, unlike Ukraine.

    • bjornsing 3 years ago

      EU and Putin together would be unstoppable…?

      • wongarsu 3 years ago

        The EU is happy to deal with Erdogan. Well, maybe not happy, but willing. The only thing that puts Putin into a category of his own is the Ukraine war. Before that happened, the strategy was to pull Russia closer to the EU and maybe make them a bit more moderate over the decades. Compared to anything that happened before and after, the USSR with its hostility towards the rest of Europe was an outlier, just an outlier that Putin seems to be fond of. Now it looks like we have to wait for Putin to leave his position to get back to good relations.

        • csomar 3 years ago

          What you got wrong is putting all the blame on Putin. The US has made it clear that it is not accepting that Europe gets its gas from Russia regardless whether Putin is friendly, not in power, or Russia is a democracy. Not everyone agrees with the US stance (Germany went with the pipeline, France/Macron tried to avert the war). It seems that US intelligence has prevailed, however, to the advantage of the US (or their plans at least).

          Europe got badly screwed. Now Putin is an enemy forever. And he is not going down since all of Russia's powerful people are on a watch list. They have nowhere to go, and so they might as well stick around to the bitter end.

        • hef19898 3 years ago

          The US as well; Turkey is a NATO country, after all. Heck, Israel and Turkey are in bed with the Saudis to some extent. If anything, Erdogan is willing to deal with Putin (explicitly referring to the people, not the countries, here). None of that has anything to do with AI though, does it?

      • Silverback_VII 3 years ago

        Yes, indeed, that's also why the United States takes all measures to make it nearly impossible.

  • TheFattestNinja 3 years ago

    (Many EU countries have laws that ban Nazism and its derivatives, btw. They are not working flawlessly everywhere, but they are there and have "some" effect.)

    • roenxi 3 years ago

      And yet they have about the same amount of trouble with Nazis as ... more or less every other country in the world. Very mysterious.

  • umanwizard 3 years ago

    Nazism is in fact banned in many European countries.

    • galleywest200 3 years ago

      That may be true, but those countries still see the "parades" by these groups. It's "banned" but not gone.

      • eimrine 3 years ago

        Are you a Russian propagandist? They are probably the only actor nowadays who calls groups of European people Nazis with an emphasis on parades. Why not name a specific group that fits what you are claiming, so we can see your point without having to ask for extra clarification?

        • logicchains 3 years ago

          It's Russian propaganda to point out that the Azov brigade march with nazi flags and some even took selfies with photos of Hitler?

          • eimrine 3 years ago

            How am I supposed to answer about some event without any google-friendly markers for it, such as a date?

    • dandellion 3 years ago

      Usually what they have are bans to the display of nazi symbolism, or holocaust denial and things like that. Banning something intangible like an ideology is not very feasible, it's usually more about minimising expressions and influence.

r2vcap 3 years ago

I don't care what EU countries do to curb their productivity. (I am a Korean national.) However, I am worried that another barrier is being built when considering the case of GDPR. And I hope they don't intervene in other countries as if their thoughts are universal.

  • renjimen 3 years ago

    The intention is to protect the rights and wellbeing of citizens. It’s short-termism to allow unfettered productivity gains at the expense of the populace.

    • zeroonetwothree 3 years ago

      And who exactly do productivity gains benefit?

      • dorchadas 3 years ago

        Not the populace, that's for damn sure. The uber-wealthy who capture even more of the wealth and then use whatever legal tricks they can to avoid paying taxes on it, all while weakening the power of the other classes.

sgt101 3 years ago

Missing :

- statutory watermarking of output

- disclosure of training data

- under age & vulnerability limitations

- limitation of indiscriminate publication

m00dy 3 years ago

Invoker here,

welcome to my network, it is decentralised and permissionless.

lvl102 3 years ago

I quite frankly don’t care what the EU does. They’ve not been innovating at all there. They took a bunch of Chinese money and now they’re doing everything they can to harm US tech firms…while the US is de facto defending them with its own tax money in Ukraine.

I cannot wait for EUZ to fail.

  • hef19898 3 years ago

    If those poor, under-privileged US tech firms just needed a helping hand in defending themselves against those evil, democratic governments... Communism it is! /s

    On a more serious note, you are aware of the fact that a lot of the material and money Ukraine is getting comes from European NATO countries and the EU? It is almost like a, what's the word again, alliance supporting Ukraine instead of a single country.

    • lvl102 3 years ago

      Is it an alliance when the EU, led by Germany, funded the Putin regime for the past two decades?

      Interesting.

      By the way, more than 85% of the funding is coming from the US.

      My point really is that even if EU regulations against US big techs are well-intentioned, they’re effectively doing the work for China. I don’t understand what the Chinese have done for the EU. US, on the other hand, have spilled blood to defend the continent.

      • hef19898 3 years ago

        Numbers for those 85% please, and include material deliveries as well, ok?

        By now, the meme of Germany funding Putin by virtue of buying gas from Russia is as stale and old as the "they lied about masks" one. Old enough to just not address anymore.

iinnPP 3 years ago

We have a unique problem here that cannot be solved with legislation. The world's governments will never agree to it and will use it for power. The citizenry will never agree to it because the governments won't.

Furthermore, we almost unanimously agree that murdering others (especially children) is bad. Murder happens all the time.

Eventually, someone is going to release a self-growth-AI on the world.

That event is what we need to prepare for. Everything else is not worth the effort. More so given the time we have to deal with this issue.

bitL 3 years ago

EU is going to legislate itself to irrelevance, missing another tech train that could propel its future. AI ethics won't make money.
