European Union AI Act

goodcomputer.substack.com

38 points by AdilZtn 2 years ago · 32 comments

Rinzler89 2 years ago

It's sad that the commenters here bashing this haven't understood the regulation, or haven't even read it, and are just slamming it because in their minds EU REGULATION=BAD, AI=GOOD INNOVATION.

EU AI regulation isn't there to stop AI innovation, it's only there to restrict where and when AI can be used on decisions that affect people. For example, you can't deny someone healthcare, a bank account, a rental, unemployment payments, or a job, just because "computer says NO"[1].

I don't understand how people can be against this kind of regulation, especially knowing how biased and discriminatory AI can be made to be while also being a convenient scapegoat for poor policies implemented by lazy people in charge: "you see your honor it wasn't our policies and implementation that were discriminatory and ruined lives, it was the AI's fault, not ours".

[1] https://www.youtube.com/watch?v=x0YGZPycMEU

  • Yizahi 2 years ago

    People who support the most egregious overreach by corpos don't think that they themselves will ever be harmed by such a corpo. But just in case they ever become a multitrillionaire genius innovator, they proactively don't want some pesky laws restricting their potential future mansions and giant yachts.

    • fakedang 2 years ago

      This right here describes the sentiment of a lot of folks on this site. I surmise many of the folks here bashing the regulations wouldn't be too happy if, say, an AI HR system decided that their performance was below par, and then their job search later got affected by some AI recruiter that eliminates candidates based on resume keywords (well, it's already happening today).
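
      And the "AI" doing the eliminating is often barely more than keyword matching. A toy sketch of the kind of filter I mean (Python; the keyword list and threshold are made up):

          def screen_resume(resume_text, required_keywords, min_hits=3):
              """Naive keyword screener: auto-reject unless enough buzzwords appear."""
              text = resume_text.lower()
              hits = sum(1 for kw in required_keywords if kw.lower() in text)
              return hits >= min_hits  # True = pass to a human, False = auto-reject

          keywords = ["kubernetes", "terraform", "python", "agile", "ci/cd"]
          # A strong candidate gets auto-rejected for using the "wrong" words:
          print(screen_resume("Decade of Python and Terraform at scale", keywords))  # False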

  • sschueller 2 years ago

    People also don't realize that this can happen without AI (insurance companies data mining, for example), so such a law is a good thing.

    • input_sh 2 years ago

      Data mining was already addressed. There's a copyright exception that says you can data mine publicly accessible stuff, as well as outlining who can do that and for what purposes (mostly scientific).

      The 2019 Directive on Copyright in the Digital Single Market, articles 3 and 4. The two regulations kinda complement each other.

      • ideashower 2 years ago

        By chance, is there a list anywhere of the various regulations the EU has implemented to regulate tech companies? It's really phenomenal work, and I'd love to read about the arc of it all.

  • yinser 2 years ago

    Most of the comments here recognize the risks of vague and broad language versus writing targeted legislation for current problems and updating as things progress.

    • Yizahi 2 years ago

      So vague and broad private data scraping and private data selling is fine, but vague and broad laws restricting said activities are not?

      PS: by private I mean licensed by any license except for "free for all" and/or completely private.

    • margalabargala 2 years ago

      Commenting after you, I don't see any critical comments that criticize vague language, and certainly none that provide examples. There seem to be two sorts of comments here:

      1) commenters who read the article and are generally in favor, seeing it as neither vague nor broad, and instead celebrating it as targeted legislation for current problems that can be updated.

      2) commenters who did not read the article, and are having exactly the knee-jerk reaction the person you replied to is describing.

      Here are some examples of the second sort of comment:

      > EU legislators are totally detached from reality, it can be seen that they do not understand what is the matter with AI, for them it is just "another IT tool" that can be "regulated". As always: US innovates, EU regulates.

      > EU tech legislation is comical at this point. A bunch of rules that almost nobody follows and at best they fine FAANG companies a few hours of revenue.

      Note how neither actually mentions anything substantial beyond the headline.

ianbicking 2 years ago

Until reading this article I hadn't realized that emotion detection is banned (edit: but confirmed only in workplaces and educational institutions)

I've had it on my list to try integrating Hume.ai (https://www.hume.ai/) into a prototype educational environment I've been playing with. The entirety of their product is emotion detection, so this must be concerning for them.

My own desire is to experiment with something that is entirely complementary to the learner, not coercive, guided by the learner and not providing any external assessment. In this context I feel some ethical confidence in using a wide array of inputs, including emotional assessment. But obviously I see how this could also be misused, or even how what I am experimenting with could be redirected in small ways to break ethical boundaries.

While Hume is a separate stack dedicated to emotional perception, this technology is also embedded elsewhere. GPT's vision capabilities are already pretty good at interpreting expressions. If LLMs grow audio abilities, they might be even better at emotion perception. I don't think you can really separate audio input from emotional perception, and it's not clear whether those emotional markers are intentional or unintentional cues.
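
For a sense of how accessible this already is, reading an expression out of a single frame is a few lines against any vision-capable model (a rough sketch with OpenAI's Python SDK; the model name, prompt, and image URL are just placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ask a vision-capable model to describe the expression in one video frame.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the facial expression and apparent emotion."},
                {"type": "image_url", "image_url": {"url": "https://example.com/frame.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)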

  • soco 2 years ago

    Here's a beginner question: what's the big difference between emotion perception and sentiment analysis (which is offered everywhere)? Sentiment goes only plus/minus and emotion produces multidimensional charts?

    • Ukv 2 years ago

      Emotion recognition is based on biometric data like facial expression, whereas sentiment analysis would typically be about text:

      > The notion of emotion recognition system for the purpose of this regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
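
      For contrast, plain text sentiment analysis looks like this, with no biometric input anywhere (a rough sketch using Hugging Face's transformers pipeline; the default model it downloads is just illustrative):

          from transformers import pipeline

          # Text-only sentiment analysis: the input is a string, not biometric data,
          # so it falls outside the act's "emotion recognition" definition.
          classifier = pipeline("sentiment-analysis")
          print(classifier("The new policy is a disaster."))
          # e.g. [{'label': 'NEGATIVE', 'score': 0.999...}]

      An emotion recognition system in the act's sense would instead take face images, voice audio, or similar biometric signals as input.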

    • ianbicking 2 years ago

      I thought I'd look it up to be sure, finding it in the official PDF [1]: "[Prohibited:] AI systems inferring emotions in workplaces or educational institutions, except for medical or safety reasons."

      Elsewhere it specifically describes "emotion recognition" as "limited risk" (calling for transparency), and elsewhere kind of implies it's "high risk" (as being part of the "annex"), though maybe it's just calling out the use of emotion recognition in those high-risk areas (e.g., credit scoring).

      But it doesn't seem to actually define "emotion recognition." (Though someone else says it involves biometric data, which seems in line with everything else in the regulation.)

      All that said, it seems like under the law you could actually build emotion recognition systems, even for education; it's just that educational institutions and workplaces couldn't use them. (Though that's a pretty big blocker for an educational tool!)

      [1] https://www.europarl.europa.eu/topics/en/article/20230601STO...

      • candiodari 2 years ago

        So there's effectively a blanket exception for the organisations you really don't want to be doing this, i.e. the police, the institution itself, and the government? (Incidentally, the government is your health insurance company in most of Europe.)

        I keep coming to the same problem with these regulations. I am much less afraid of Amazon/Google/... figuring out something about me and using it to sell me stuff than I am afraid of the police doing the same, and arresting me or otherwise having a huge negative impact on my life. Knowing the police, they'll probably not even do AI monitoring correctly, and of course, won't be responsible for the damage they cause.

        Frankly, that Amazon and Google figure out stuff I might want to buy might actually be a positive. Maybe. Sometimes. If they become better at weeding out scams, that is.

        • ianbicking 2 years ago

          While there's a hard block on using emotion recognition in workplaces and educational institutions, the other cases you mention fall under "High Risk", so they are very much included in the regulation.

          There are several exceptions for different kinds of law enforcement, for things like imminent danger, or applying biometric filters when searching for someone who fits a description. How much you can squeeze in under those exceptions depends on how bold the police are. Probably a lot, but it's not written that way.

janalsncm 2 years ago

> everything is now AI, even things that are very clearly not AI

Links to a 2019 article. It would probably be good to get some more recent numbers. I think even a ChatGPT wrapper “uses” AI although they did not develop it and have no moat.
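
For what it's worth, here's roughly how thin a "wrapper" can be and still count as using AI (a sketch with OpenAI's Python SDK; the model name and prompts are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def my_ai_product(user_input: str) -> str:
        """The entire 'product': one hardcoded prompt around someone else's model."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a helpful writing assistant."},
                {"role": "user", "content": user_input},
            ],
        )
        return response.choices[0].message.content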

yinser 2 years ago

Marc Andreessen said that industries that stand to gain from AI may be shielded from it by existing licensing and regulations, e.g. education, law, medicine. This AI act adds a whole other layer of shielding.

hipadev23 2 years ago

EU tech legislation is comical at this point. A bunch of rules that almost nobody follows and at best they fine FAANG companies a few hours of revenue.

  • n2d4 2 years ago

    That is evidently false? The EU got Apple to switch to USB-C, Google to open up Android's web search, Facebook to stop processing EU customers' data in the US, etc. Whenever I'm in the US and sign up for a service, I immediately get hit with weekly spam mail and "newsletters". And the only consistent way to terminate my account easily and completely delete my PII is by using an EU VPN.

  • eigenket 2 years ago

    There's some reasonably sensible stuff here. I am strongly in favor of things like banning AI-based profiling of people by law enforcement, prediction of criminal offences, and social scoring systems.

    Can you explain why you think this is comical?

  • goldfeld 2 years ago

    I think they have it much better than over at faangland

    • evantbyrne 2 years ago

      Who is "they"? EU tech companies and their workers definitely don't have it better than American ones

      • goldfeld 2 years ago

        Tech companies are not the first concern of civilization, at least insofar as I know of no philosophical strain establishing them as kings and overlords.

        • evantbyrne 2 years ago

          We can speak past each other cryptically if you prefer that, but I don't think it's constructive. I believe the necessary protections generally already exist through existing legislation, and the AI part doesn't need to be specified, but I'm open to changing my mind with new information. Just like cookie banners won't stop the CCP from slurping up TikTok comms, I don't see how this legislation will stop adversaries from weaponizing AI. Meanwhile, the definition of AI is going to be vague enough to cause significant hurdles for companies looking to operate in the EU, which is already struggling to keep up with the US economy.

  • huqedato 2 years ago

    EU legislators are totally detached from reality, it can be seen that they do not understand what is the matter with AI, for them it is just "another IT tool" that can be "regulated". As always: US innovates, EU regulates.

    • michaelmrose 2 years ago

      The benefits of these innovations generally don't accrue at all to the bottom half, and only minimally to the next quartile. As someone not in the top quartile, I would, given the chance, much prefer to be an EU citizen, both in general and with respect to this regulation.

idle_zealot 2 years ago

I'm tentatively a fan of the high-risk portion of this legislation, but am disappointed that the EU seems to be taking a "training on copyrighted data is a copyright violation" stance. This basically kills open models. Only the biggest companies will be able to strike licensing deals on the scale necessary to produce a model familiar with modern human culture. Any model trained only on public domain data will have surprising knowledge gaps, like a person who has never read a book or watched a movie, only reviews.

  • Ukv 2 years ago

    > disappointed that the EU seems to be taking a "training on copyrighted data is a copyright violation" stance

    On reading the text, I'm not convinced that they actually are. Copyright of the training data is only mentioned once in the act that I can find, here:

    > Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purposes of text and data mining, under certain conditions.

    Initially "Any use of copyright protected content requires the authorization of the rightholder concerned" sounds like a strong anti-scraping stance, but then the "unless relevant copyright exceptions and limitations apply" makes it nothing more than a restatement of how copyright works in general. The question is whether any exceptions/limitations do apply, and the fact that they immediately point to the DSM directive's copyright exception for text and data mining implies they see it as sufficient for machine learning datasets.

    The "certain conditions" essentially just means following robots.txt if it's for commercial purposes, which all scrapers I'm aware of already do regardless.
