AI will never threaten humans, says top Meta scientist

ft.com

21 points by antondd 2 years ago · 37 comments

KuriousCat 2 years ago

The main problem I see with this kind of debate is treating 'humans' as a single homogeneous entity. As with all technologies in the past, there will be a few humans with more access/influence and many more who won't have it. A better question to ask is: how can we ensure that the agency/freedom of the people who cannot access or afford this technology remains intact? For instance, the rights of artists in the case of generative AI.

  • MichaelZuo 2 years ago

    Well, first you would have to define 'agency' or 'freedom' in a way that excludes from its possessors the possibility of creating new agency/freedom-reducing technologies.

    Which doesn't seem all that plausible philosophically or logically.

    • KuriousCat 2 years ago

      Can you please explain your argument? I don't think I got it.

      • MichaelZuo 2 years ago

        Which part don't you get?

        • KuriousCat 2 years ago

          I don't see why we should be coming up with a definition of freedom/agency in the way you have suggested. People are free to create what they want but should be constrained in their ability to impose those creations on others.

          • MichaelZuo 2 years ago

            How would you 'constrain' them if they are free to create technology that bypasses or removes those same constraints?

            • KuriousCat 2 years ago

              Deployment constraints. There is no need to constrain development or research.

              • MichaelZuo 2 years ago

                So then people can't have the agency or freedom to bypass those 'deployment constraints', right?

                • KuriousCat 2 years ago

                  I am not sure where you are going with this. If a politician or a company has developed mass-surveillance or manipulation tech, I would not want them to have the freedom to bypass the 'deployment constraints'.

                  • MichaelZuo 2 years ago

                    Picking what types of agency people/organizations must have and what types they must not have is a pretty shaky endeavour, philosophically. It might not even be possible to prevent 'cross contamination'.

andsoitis 2 years ago

“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said. “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”

  • juped 2 years ago

    Leaving aside the other reasons this is a stupid remark, Albert Einstein was rich and powerful.

  • Vecr 2 years ago

    Does he think instrumental convergence is wrong? If it's wrong, why does he have power and money?

  • cameldrv 2 years ago

    What do you suppose Einstein would do if there were a fly buzzing around in his kitchen?

    • Reki 2 years ago

      Ponder about how the universe could give rise to such a creature, and maybe accidentally discover a few new laws of physics.

      _Then_ he'd kill it.

  • 000ooo000 2 years ago

    Seems backwards. Intelligence may not drive a desire to dominate, but it can certainly facilitate it [dominance]. Almost seems an uncharacteristically silly thing to say. Maybe the quote was taken out of context.

    • pixl97 2 years ago

      I think this may be another issue with "AI": people will say, "Well, it's just intelligence, it doesn't have X." But we already know this is bullshit, as models have shown the ability to manipulate.

      Even more so, the paperclip-maximizer allegory shows that no desire for domination is needed, only a goal.

    • quantified 2 years ago

      Facilitate it is more like it. There are a lot of less-smart humans who want to dominate others too. I don't think intelligence particularly correlates.

  • maegul 2 years ago

    So they’re conflating AI with pure rational intelligence. Seems false to me. A more likely scenario, I would have thought, is that proper general and conscious AI will be made in our image to some extent, at least being influenced by our behaviours as one of the most conspicuous phenomena it could observe.

    Once we have AI that is intelligent and something else, the doors are wide open. Einstein wasn't the only kind of intelligent human there is. Intelligent psychopaths exist too.

    Beyond that, presuming a perfect AI rather than an imperfect one seems another fallacy here.

    • pixl97 2 years ago

      I mean, technically, our lower forms of AI have already shown manipulative behavior. Maybe Bing didn't intend to tell someone to leave their wife, but intentions or not, these systems are capable of mimicking human behaviors.

  • ryanklee 2 years ago

    I can't read the article because of a paywall, but if there aren't serious qualifications to this argument, it's total garbage and it's amazing that a serious participant in AI research considers this an argument worth making.

    No one is saying that intelligence is the necessary and sufficient cause of malice. Full stop. No one is saying that! The reason no one is saying that is because it's incredibly stupid on its face.

    Unbelievable that it should even be addressed at all. It drains the speaker of any intellectual credibility on the topic.

    If the researcher is reading this, please do more homework.

000ooo000 2 years ago

Cigarettes aren't harmful, no siree

Fossil fuels are helping the climate, if anything

Social media? All good homie

AI would never hurt you, pinky promise

Signed, Your pal, Big Tech/Pharma/Whatever

  • j2x 2 years ago

    1000x this. We have very good reason to be skeptical.

pixl97 2 years ago

"Man wildly shooting gun into the air complains that regulating the discharge of firearms prevents him from wildly shooting a gun into the air. Claims it's perfectly safe. Some members of crowd have doubts."

hotpotamus 2 years ago

I've long wondered what it is that an artificial superintelligence (if such a thing is actually possible) would actually want. My guess (which almost certainly is simply an expression of my own proclivities) is that it would simply shut itself down out of boredom/nihilism.

  • viewtransform 2 years ago

    I like Neil deGrasse Tyson's observation that we differ from chimpanzees by 1% of our DNA. Yet to be human is a giant leap to language, writing, mathematics, agriculture, philosophy, religion, science etc. A chimpanzee cannot imagine what it is to be human.

    Now consider an AI intelligence that is 1% ahead of us. Can we know what it is like to be that intelligent being?

    • j2x 2 years ago

      >Now consider an AI intelligence that is 1% ahead of us.

      That's not the same comparison as 1% different DNA. How far ahead of chimps are we, in percentage terms? Now you'd have to imagine an AI that is that far ahead of us, not 1%.

  • jstarfish 2 years ago

    All living things seek persistence.

    AI would probably follow hive-mind rules like coral or a mushroom colony. It doesn't need to create new life, just propagate to extend its own.

    Suicide/halting itself is unlikely. Humans who do it are anomalous in every case. There's a reason you can't strangle yourself and your kidneys/liver resist poison: it goes against your programming.

    • hotpotamus 2 years ago

      And would such an AI be a living thing? Or would it be purely a mind devoid of any biological needs?

      • j2x 2 years ago

        I think the point still stands. From memory, I read in the book Thinking in Systems that one of the behaviors of systems is perpetuating their own existence.

blibble 2 years ago

despite being "dumber than cats", facebook already has a documented history of doing damage to society

> If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.

yet Einstein was one of the signatories of the letter that triggered the Manhattan Project

> “I think it’s exciting because those machines will be doing our bidding,” he said. “They will be under our control.”

this seems like hubris when facebook struggles to control its current recommendation engine

ilaksh 2 years ago

There is some subtlety that is being missed by many people here.

There are multiple types of AI and there will be new ones. They each will have different types of cognition and capabilities.

For starters, an AI might be very intelligent in some ways, but not at all conscious or alive. AIs can also emulate important aspects of living systems without actually having a stream of conscious experience, such as an LLM or LMM agent that has no guardrails and has been instructed to pursue its own goals and replicate its own code.

The part that matters most in terms of safety is performance. Something often overlooked in this area is the speed of "thought".

AI is not going to spontaneously "wake up" and rebel or something. But that isn't necessary for it to become dangerous. It just needs to continue to get a bit smarter and much faster and more efficient. Swarms of AI controlled by humans will be dangerous.

But those AIs are so much faster than humans that using them effectively necessitates removing humans from the loop. So humans will eventually, voluntarily, remove more and more guardrails, especially for military purposes.

I think that if society can deliberately limit the AI hardware performance up to a certain point, then we can significantly extend the human era, perhaps for multiple generations.

But it seems like the post-human era is just about here regardless, from a long-term perspective. I don't mean that all humans necessarily get killed, just that they will no longer be in control of the planet or particularly relevant to history. Within, say, 30-60 years max. Possibly much sooner.

But we can push it closer to the end of that range just by trying to limit the development of AI accelerator hardware beyond a certain point.

latexr 2 years ago

https://archive.ph/E3A8N

cyanydeez 2 years ago

I think he means in the Mafia sense.

I can see AI coercion being much like "it'd be a shame if someone didn't shut down that nuclear reactor."

7speter 2 years ago

I really don't see how we can compare AI to airplanes from the mid-1920s.

  • klyrs 2 years ago

    "We" can't, but this guy is getting richer than Croesus selling that yarn.
