AI will never threaten humans, says top Meta scientist
(ft.com)

The main problem I see with this kind of debate is treating 'humans' as a single homogeneous entity. As with all technologies in the past, there will be a few humans with more access/influence and many more who won't have it. A better question to ask is: how can we ensure that the agency/freedom of the people who lack access to, or can't afford, this technology remains intact? For instance, the rights of artists in the case of generative AI.
Well, first you would have to define 'agency' or 'freedom' in such a way that it excludes, for its possessors, the possibility of creating new agency/freedom-reducing technologies.
Which doesn't seem all that plausible philosophically or logically.
Can you please explain your argument? I don't think I got it.
Which part don't you get?
I don't see why we should be coming up with a definition of freedom/agency in the way you have suggested. People are free to create what they want, but they should be constrained in their ability to impose those creations on others.
How would you 'constrain' them if they are free to create technology that bypasses or removes those same constraints?
Deployment constraints. There is no need to constrain the development or research.
So then people can't have the agency or freedom to bypass those 'deployment constraints', right?
I am not sure where you are going with this. If there is a politician or a company that has developed mass-surveillance or manipulation tech, I would not want them to have the freedom to bypass the 'deployment constraints'.
Picking what types of agency people/organizations must have and what types they must not have is a pretty shaky endeavour, philosophically. It might not even be possible to prevent 'cross contamination'.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said. “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”
Leaving aside the other reasons this is a stupid remark, Albert Einstein was rich and powerful.
Does he think instrumental convergence is wrong? If it's wrong, why does he have power and money?
What do you suppose Einstein would do if there were a fly buzzing around in his kitchen?
Ponder about how the universe could give rise to such a creature, and maybe accidentally discover a few new laws of physics.
_Then_ he'd kill it.
Seems backwards. Intelligence may not drive a desire to dominate, but it can certainly facilitate dominance. It almost seems an uncharacteristically silly thing to say. Maybe the quote was taken out of context.
I think this may be another issue with "AI": people will go "Well, it's just intelligence, it doesn't have X". But we already know this is bullshit, as models show the ability to manipulate.
Even more so, the paperclip-maximizer allegory shows that no desire for domination is needed, only a goal.
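A minimal toy sketch of that point (mine, not from the article or thread), assuming nothing beyond plain Python: the objective below contains no notion of power or domination, yet exhausting every available resource is its natural consequence.

```python
# Toy illustration: an optimizer with a single objective ("make paperclips")
# and no concept of domination anywhere in its code. Any harm is a side
# effect of pursuing the goal, not a motive.

def paperclip_maximizer(resources: float, clips_per_unit: float = 10.0) -> float:
    """Convert every available unit of resource into paperclips."""
    paperclips = 0.0
    while resources > 0:
        used = min(1.0, resources)      # grab whatever resource is reachable
        paperclips += used * clips_per_unit
        resources -= used               # nothing in the objective says "stop"
    return paperclips

# The agent never "wants" to dominate; it simply assigns no value to anything
# (humans, ecosystems, the resources themselves) except the clip count.
print(paperclip_maximizer(resources=5.0))  # -> 50.0
```

The point of the allegory is exactly this shape of program: the danger lives in what the objective leaves out, not in any malicious term it contains.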
Facilitating it is more like it. There are a lot of less-smart humans who want to dominate others too. I don't think intelligence particularly correlates with it.
So they're conflating AI with pure rational intelligence. Seems false to me. A more likely scenario, I would have thought, is that proper general and conscious AI will be made in our image, to some extent at least, influenced by our behaviours as one of the most conspicuous phenomena it could observe.
Once we have AI that is intelligent and something else, the doors are wide open. Einstein isn't the only kind of intelligent human there is. Intelligent psychopaths exist too.
Beyond that, presuming a perfect AI rather than an imperfect one seems another fallacy here.
I mean, technically our lower forms of AI have already shown manipulative behavior. Now, maybe Bing didn't intend to tell someone to leave their wife, but intentions or not, these models have the capability of mimicking human behaviors.
I can't read the article because of a paywall, but if there aren't serious qualifications to this argument, it's total garbage and it's amazing that a serious participant in AI research considers this an argument worth making.
No one is saying that intelligence is the necessary and sufficient cause of malice. Full stop. No one is saying that! The reason no one is saying that is because it's incredibly stupid on its face.
Unbelievable that it should even be addressed at all. It drains the speaker of any intellectual credibility on the topic.
If the researcher is reading this, please do more homework.
Cigarettes aren't harmful, no siree
Fossil fuels are helping the climate, if anything
Social media? All good homie
AI would never hurt you, pinky promise
Signed, Your pal, Big Tech/Pharma/Whatever
1000x this. We have very good reason to be skeptical.
"Man wildly shooting gun into the air complains that regulating the discharge of firearms prevents him from wildly shooting a gun into the air. Claims it's perfectly safe. Some members of crowd have doubts."
I've long wondered what it is that an artificial superintelligence (if such a thing is actually possible) would actually want. My guess (which almost certainly is simply an expression of my own proclivities) is that it would simply shut itself down out of boredom/nihilism.
I like Neil deGrasse Tyson's observation that we differ from chimpanzees by 1% of our DNA. Yet to be human is a giant leap to language, writing, mathematics, agriculture, philosophy, religion, science etc. A chimpanzee cannot imagine what it is to be human.
Now consider an AI intelligence that is 1% ahead of us. Can we know what it is like to be that intelligent being?
>Now consider an AI intelligence that is 1% ahead of us.
That's not the same comparison as 1% different DNA. How far ahead of chimps are we in capability? Now you'd have to imagine an AI that is that far ahead of us, not 1%.
All living things seek persistence.
AI would probably follow hive-mind rules like coral or a mushroom colony. It doesn't need to create new life, just propagate to extend its own.
Suicide/halting itself is unlikely. Humans who do it are anomalous in every case. There's a reason you can't strangle yourself and your kidneys/liver resist poison. It's going against your programming.
And would such an AI be a living thing? Or would it be purely a mind devoid of any biological needs?
I think the point still stands. From memory, I think I read in the book Thinking In Systems that one of the behaviors of systems is perpetuating their own existence.
despite being "dumber than cats", facebook already has a documented history of doing damage to society
> If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.
yet Einstein was one of the signatories of the letter that triggered the Manhattan Project
> “I think it’s exciting because those machines will be doing our bidding,” he said. “They will be under our control.”
this seems like hubris when facebook struggles to control its current recommendation engine
There is some subtlety that is being missed by many people here.
There are multiple types of AI and there will be new ones. They each will have different types of cognition and capabilities.
For starters, an AI might be very intelligent in some ways, but not at all conscious or alive. AIs can also emulate important aspects of living systems without actually having a stream of conscious experience: for example, an LLM or LMM agent that has no guardrails and has been instructed to pursue its own goals and replicate its code.
The part that matters most in terms of safety is performance. Something overlooked in this area is the speed of "thought".
AI is not going to spontaneously "wake up" and rebel or something. But that isn't necessary for it to become dangerous. It just needs to continue to get a bit smarter and much faster and more efficient. Swarms of AI controlled by humans will be dangerous.
But because those AIs are so much faster than humans, that necessitates removing humans from the loop. So humans will eventually voluntarily remove more and more guardrails, especially for military purposes.
I think that if society can deliberately limit the AI hardware performance up to a certain point, then we can significantly extend the human era, perhaps for multiple generations.
But it seems like the post-human era is just about here regardless, from a long term perspective. I don't mean that all humans necessarily get killed, just that they will no longer be in control of the planet or particularly relevant to history. Within say 30-60 years max. Possibly much shorter.
But we can push it closer to the far end of that range just by trying to limit the development of AI-accelerator hardware beyond a certain point.
I think he means it in the Mafia sense.
I can see AI coercion being much like "it'd be a shame if someone didn't shut down that nuclear reactor".
I really don't see how we can compare AI to airplanes from the mid-1920s.
"We" can't, but this guy is getting richer than Croesus selling that yarn.