Ask HN: Is the AI threat overblown?
Elon Musk, Putin, Mark Z: are these guys just overblowing AI? Current developments in AI are nowhere close to causing WW-III. Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world? On top of that, the media goes into a frenzy over every single statement or tweet by these leaders.
I have never seen Andrew Ng or Andrej Karpathy making such claims.
State-of-the-art AI can only do very specialized things in a limited scope, e.g. ASR, NLP, image recognition, game play, etc.
What am I missing?
Sources: https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html
https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html
http://money.cnn.com/2017/07/25/technology/elon-musk-mark-zuckerberg-ai-artificial-intelligence/index.html
https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world

There's an AI researcher named Robert Miles [1] whose videos I really enjoy. He brought up a good point about this issue in one of his videos a little while back. To use it here, Elon Musk, Putin, and Mark Zuckerberg all have something in common: none of them are AI specialists or researchers. Another way to think of it is this: don't ask your dentist about your heart disease. Miles does point out very real issues/questions in AI safety –
that's what most of his content is focused on. His point, which is a good one to make, is that the sort of fear-mongering spread by non-AI specialists draws attention away from these very real issues that need to be addressed. [1] His channel can be found here: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg He's also done a few videos for Computerphile.

On the contrary, these are precisely the kinds of people who are using AI as a tool or building block, e.g. as a means to an end. Researchers inherently have a very narrow view. It's easy to believe the AI threat if you believe AI is a panacea.

> To use it here, Elon Musk, Putin, and Mark Zuckerberg all have something in common: none of them are AI specialists or researchers
They are the ones that can use or be hit by "AI" as a weapon. Their position is even more important than the researchers'. It is like Einstein vs Truman.

I do not think the threat is overblown, but like many tech estimates, we're overestimating the short term and underestimating the long term. The more present threat is "AI-lite": we're hacking ourselves collectively, more and more, with not entirely positive consequences. We're increasingly addicted to our devices and our system rewards those that further the addiction (er, "engagement"). We've provided ways for small groups of people (down to individuals) to influence and manipulate tastes, preferences, moods, feelings, choices, actions, and beliefs, overtly or subtly, at great scale. Case in point: should Mark Z want to quietly influence a US election...he could do it. This isn't "AI" in the self-aware/AGI sense, but there's an incredible amount of leverage looming over the human population, and that leverage is growing. And when machines start manipulating things instead of humans, how will we know?

I'd expect to find some "immune" "heretics". The usual ways of dealing with them would be employed (legal, lethal, and social pressure).

The big problem is that such a degree of control could make democracy essentially irrelevant or extremely polarized, which is just as bad; democracy is supposed to reach a consensus. We're almost or already there at the second point.

Well said -- here is Barack Obama saying more or less the same thing in a Wired interview: https://youtu.be/72bHop6AIcc?t=2m25s & https://www.youtube.com/watch?v=ZdhyM5jHu0s

In my humble opinion, NO ONE knows if the threat is imminent, far-fetched, or imaginary. NO ONE. Not Musk, not Zuckerberg, not Putin. (Putin!!??) What we DO know is that we don't have artificial general intelligence (AGI) today, and that achieving it will likely require new insights and breakthroughs -- that is, it will require knowledge that we don't possess today. By definition, new insights and breakthroughs are unpredictable and don't necessarily yield to anyone's predictions, timelines, or budgets. Maybe it will happen in your lifetime; maybe not. That said, it should be evident to everyone here that AI/ML software is going to expand to control more and more of the world over the coming years and decades; and THEREFORE, it makes a lot of sense to start worrying about the maybe-real-maybe-not AI threat -- and prepare for it -- now instead of down the road.

Andrew Ng has a good quote: "Fearing a rise of killer robots is like worrying about overpopulation on Mars" https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/ But he might be wrong... (And he'd admit it)

Yes. None of these fantasies bears any resemblance whatsoever to anything in real AI research.
What's going on is that the predisposition of the human brain to believe in shadowy figures and apocalyptic futures is as pronounced as it ever was, but belief in Fenrir, Fair Folk, and flying saucers is unfashionable among today's intellectuals, so they look for something else to glom onto. There was a time when I'd never have thought I'd say this, but I actually think it would be better if people just went back to openly letting their demons be of the admittedly supernatural variety, because that sort of belief is relatively harmless. When people start projecting their demons onto real-world phenomena, they start making policy decisions on that basis, and that could very well turn out to be the final step in the Great Filter.

Technological progress is slowing. The peak is approaching. The easily accessed fossil fuel deposits are gone. There will be no second industrial revolution. If we fail to make adequate progress before we hit this peak, it will be the all-time one.

The practical AI field is obviously growing today; more money is put into it every year. It's only a question of when you want to start your safety research and how many resources to allocate to it. You don't need to allocate billions of dollars to friendly-AI research this year or the next year, because anything approaching AGI is at least decades away (and might be a "fusion is only 30 years away" situation). You could also compare this to climate change. The effect and eventual risk of greenhouse gases have been known for more than half a century. But initially it was mostly a theoretical concern, and later, even when it was realized to be a real problem, the effects still seemed far away in the future. But people still did basic research, even decades ago. Nobody poured billions of dollars into sustainable businesses, but not doing business is not the same as not doing research.

Near-term: yes. Long-term: no. Most of the controversy consists of people who look at the near term talking past people who look at the long term, and vice versa.

In the case of Putin, are you seriously asking "Why are these leaders frightening people..."? But seriously, people whom we might call "visionaries" like Musk, Zuckerberg, and let's throw Ray Kurzweil in there, often get their ideas by extrapolating the current state of technology into its logical next phase. (They also like to be grossly aggressive on deadlines, to motivate their employees to be innovative and efficient.) Unfortunately a simple extrapolation doesn't always produce an idea that is attainable in practice. We will not have human-level AI anytime soon. We're still many years away from driverless cars. An AI that cares about the politics of nation-states (to which we can confidently hand over the nuclear codes) is much farther away than that. But none of that actually matters, because a single tweet from these leaders can cause a flurry of activity and interest that can lead to an unexpected product idea. So, while it's ethically dubious, I see this as being a mostly positive thing.

The practical state of AI is image classification, so if you tie that to a weapon and program it to fire at 'its will', yes. Though I'd still not call that intelligence, so from that point of view, no. And even in the first case, it's how they say? Guns don't kill people, people kill people with guns? So I'd say that goes for AI too.

I own a Tesla Model X. I am not trying to be inflammatory, but my grandmother drives better than this car, and my grandmother is dead.
Going down a street with a row of bushes, the car will slow down at every bush, and then speed up again. Musk is worried about AI, but his cars cannot even process bushes better than my dead grandmother.

Zactly - Elon has been leading the charge to 'fear the killer AI' and has also made the claim that Tesla will have L5 autonomy in two years - for a car not only with the problems you point out, but a laughably poor ability to do the high school geometry to handle locating recharge stations - at this point the biggest danger from AI is 'we bet the company on AI and now the company's dead'.

I'm in and out of machine vision, so it's not like I don't want to see progress made. The thing is, it's new and it'll continue to get better. Think of the telegraph. Look at what we have today. Still a communication device and not a magic Sonic Screwdriver.
(Sometimes mixed with a bit of a general-purpose computer.) Not even as good as a Tricorder.

I don't see AI itself as a threat necessarily, but AI and its input data concentrated in the hands of a few will be dangerous. Soon companies like Google and Facebook will pretty much know at any time what a large part of the world population is doing and thinking. There is a lot of potential for abuse there.

Right. AI as a tool for existing power structures can become a threat long before AI itself is a threat.

> State-of-the-art AI can only do very specialized things in a limited scope, e.g. ASR, NLP, image recognition, game play, etc.
> What am I missing?
What you are missing is that much of the enterprise world is gameplay, and that "AI" is beginning to show superhuman performance in this area. Soon programs will be "playing" at being a business, acting as equals to business owners. This AI employs us as its sensors, just like businessmen already do. This means that in the next few years, you may get hired by a computer program. A program is more reliable and predictable, and will even be preferred by a lot of employees. It may start as a broker, making money to sustain itself. It'll be totally profit-driven and it'll demonstrate a pure form of ruthless capitalism, sacrificing nature and us if it is in its interest, as it has no sense of good or evil. It'll learn like an alien would from our reactions: without understanding or comprehension. To us it is ignorant and ruthless. This is exactly what Musk is saying. I find it strange Musk did not exemplify his views in this way, as it obviously is what he is seeing. In contrast, Zuckerberg is not working on dangerous AI, no gameplay AI, so what he calls AI seemingly is a lot more innocent, more focused (like tooling), which explains his relative mildness on the issue. He sees regular engineering with exciting possibilities, as a menu for _him_ to make the choices. Musk sees AI wedding money, and wielding its power, driven by the capitalist forces already at play, and magnifying them, spiraling out of control, even of its creator. His AI is a financial animal, and it does not need intelligence to wield power. Business people are not more intelligent than other humans -- Musk knows it. It is like a game, not more than that. AI just knows how to win it, from them, and it'll, inevitably, succeed. -- AI will probably be what we deserve. It may, in the end, derail evil, by embodying it without the usual compulsion, so it may unwillingly recognize "good" and choose to reward it, as an emergent effect.

Problem is, there is no legal framework for an autonomous corporation. A lot of business paperwork needs the signature of owners. If there is no supervising owner, some stuff can't happen. I don't think you can even set up a brokerage without owners.

A little bit of history/context around this. The genesis for most of this public-facing, high-profile threat warning came right after Musk read the Nick Bostrom book Global Catastrophic Risks in 2011 [1]. That seems to have been the catalyst for being publicly vocal about concerns. That accelerated into the OpenAI issue after Bostrom published Superintelligence. For years before that, the most outspoken chorus of concerned people were non-technical AI folks from the Oxford Future of Humanity Institute and what is now called MIRI, previously the Singularity Institute, with E. Yudkowsky as their loudest founding member. Their big focus had been on Bayesian reasoning and the search for so-called "Friendly AI." If you read most of what Musk puts out, it mirrors strongly what the MIRI folks have been putting out for years. Almost across the board you'll never find anything specific about how these doomsday scenarios will happen. They all just say something to the effect of, well, the AI gets human-level, then becomes either indifferent or hostile to humans, and poof, everything is a paperclip/gray goo. The language being used now is totally histrionic compared to where we, the practitioners of Machine Learning/AI/whatever you want to call it, know the state of things to be. That's why you see LeCun/Hinton/Ng/Goertzel etc...
saying, no, really folks, nothing to be worried about for the foreseeable future. In reality there are real existential issues and there are real challenges to making sure that AI systems that are less than human-level don't turn into malware. But those aren't anywhere near immediate concerns - if ever. So the short answer is, we're nowhere near close to you needing to worry about it. Is it a good philosophical debate? Sure! However it's like arguing the concern about nuclear weapons proliferation with Newton. [1] https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostro...

I think it's helpful to break this down: 1) Is AGI possible? 2) If it's possible and it occurs, could it be a serious threat? 3) When will AGI occur? In my view, the answers to 1 and 2 are an obvious yes. As to 3, that's inherently unknowable, but that's where I think experts like Ng are correct that the threat today (and for the foreseeable future) is overblown. But that's sort of what everyone said about NK's nuclear ambitions 30 years ago, which is why it's important to consider the implications early, before it's too late to change course.

The danger with AI is that it grows in power exponentially, especially since a highly advanced AI can start improving on itself without human intervention. When people think exponential curves, they think rapid progress, but that's only half the story. Any exponential curve starts off looking like a flat line, before suddenly taking off like a rocket ship (see the small numeric sketch a couple of comments below). Without the benefit of hindsight, we can't tell how far away we are from that rocket-ship liftoff. We've had decades of minor progress in the past, but that's normal for any exponential curve. Are we going to have many more decades/centuries to go before we get to the breakout moment? Or is it just 10-20 years away? We have no idea. All we know is that once we get to that point, AI-IQ is going to grow exponentially faster than natural human IQ.

That said, I really don't think that censoring AI research is going to work. Pandora's box has been opened, and if we don't do it, someone else will. All this talk about hard-coding Asimov's laws into AIs is idiotic as well. We have no clue how to build AGI right now, and until we do, discussing specific tactics like the above is utterly pointless. They also presuppose a human ability to shackle and mold super-intelligent beings, without making any mistakes or overlooking unintended consequences, which is nothing more than a pipe dream. Realistically, there's only one thing we can do. Embrace bioengineering. Embrace GATTACA-style genetic selection. Embrace cybernetic augmentation. Do everything we can to grow our IQ beyond its natural limits. If our minds don't keep up with technological progress, we will inevitably find ourselves left behind.

You're basing all of this on one heck of an axiom:
> The danger with AI is that it grows in power exponentially
This is like saying "the opportunity with mechanical transportation is that it gets faster exponentially" before even inventing the wheel.

Yes, exactly. I think faith in an apocalyptic singularity is a hangover from religion, and not something that can be justified scientifically. We're actually incredibly bad at making robust, reliable software. So there's no realistic basis for assuming a self-improving machine is even possible. Never mind a conscious self-improving machine. Even less a conscious self-improving machine that develops god-like capabilities at an exponential rate. Game changer tech is always possible.
But AI-on-silicon is going to be a dead end without some new non-Turing computing substrate. The real problems are political and social, and we already have those. Automation - rather than true autonomous AGI - may well make them worse. But that's a different problem, and not obviously related to quasi-sentient paperclip machines rampaging through our cities.

You've missed the point. Mechanical transportation is dependent on other entities (i.e., humans) for progress. There's no positive feedback loop. AGI is fundamentally different because an AGI can design an even better AGI, thus producing positive feedback loops, and exponential growth.

It sounds like you've missed my point as well. The analogy is only supposed to illustrate that before actually inventing transportation technology (which for a long time did get faster exponentially), humans had no real basis to understand the tradeoffs inherent in rolling vehicles, floating vehicles, flying vehicles, impulse/rocket vehicles, etc. Neither did they share our current understanding of a theoretical maximum speed that anything can ever go, according to physics.
> AGI is fundamentally different because an AGI can design an even better AGI
Thanks for pointing this out, but while I think I understand the distinction ("AGI is technology that works like humans, and since humans can design better technology, an AGI can design better versions of itself"), that statement also relies on several axioms:
1. Humans can design a general intelligence.
2. A general intelligence can exist in a stable state with a fundamentally "better" design than ours (i.e. one that can be exponentially more powerful, not just a bit better at poetry).
3. A general intelligence can improve itself and/or design better versions of itself without hitting diminishing returns, or it can design a fundamentally better version of itself from scratch if that happens.
It's fine if you believe all of those things, and I guess lots of people do, but I wouldn't just sweep those axioms under a blanket statement about AGI designing better AGI unless you know that everyone agrees with them.

To escape a bear you don't need to run faster than the bear, you just need to run faster than a friend. AI doesn't need to be exponentially self-improving to pose a threat to humanity. It needs to improve faster than humanity. Every day you are looking at the progress achieved by a system consisting of generally intelligent agents who are unable to upgrade, who have different goals, who are limited in their communication speed, who can't copy themselves.

Yes, in my opinion. Consider the amount of destruction caused by machined metal and chemicals in the 20th century. Now consider how much more destruction (or progress) is possible just by adding "naive" computer technology to those things. In our experience, technology only reaches its constructive and/or destructive potential when humans use it. There's no rule saying this must always be the case, but when we ignore our experience it's easy to get caught up in fantasy, and right now the hand-wringing about "what happens when the computers wake up" is a silly distraction. There are plenty of threats posed by computer technology already, often from its integration with hardware, but also from information processing on its own. I don't mean to be pessimistic or spin another variety of doomsday story, but I am suggesting that we talk about present reality more often than all of this Terminator nonsense!
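
To make the "exponential curves look flat at first" point from a couple of comments up concrete, here is a tiny, purely illustrative numeric sketch; the 40-step horizon and the doubling-per-step rule are arbitrary assumptions for the example, not a model of AI progress.

    # Illustrative only: a quantity that doubles every step spends most of its
    # history looking flat, then appears to explode near the end. The 40-step
    # horizon and the doubling rule are arbitrary assumptions, not a claim
    # about how AI capability actually grows.
    HORIZON = 40
    final = 2 ** HORIZON

    for step in range(0, HORIZON + 1, 5):
        fraction = 2 ** step / final
        print(f"step {step:2d}: {fraction:.10f} of the final value")

    # Halfway through (step 20), the curve is still below one millionth of its
    # final value -- indistinguishable from "no progress" until the last few doublings.

Whether AI capability follows anything like such a curve is, of course, exactly what the replies above dispute.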

> Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world?
Probably because they run companies that benefit from this idea being shared.

I think the threat is not AI; it's what the computer program is telling us. Even as simple as writing a test: how many times have we found ourselves writing a test that gives a false positive? That's not AI, but we are misled because we trust what the program said ("it didn't crash!"). Now apply that to GPS. How many times have we heard of someone ending up in a lake or some swamp? I dislike Waze because the path it recommends is often worse than Google Maps'. If I know how to get to my destination I don't need GPS. We believe GPS always knows the best optimized route because some smart engineers spent their entire lives working on map technology, but in reality that may not always be the case. I am more afraid that we are accustomed to trusting technology. So many just go on the computer and look for answers on the Internet. Students go on Wolfram Alpha and trust the output. We have forgotten we need our brain to function. Fake news? Bombarded by ads? This is pre-AGI and we are already suffering.

Yep. I play with deep learning pretty much every day and I'm way more scared that we don't invent AI. In medicine alone, there is just an incredible opportunity to improve the human condition.

A consequence of humanity establishing itself as the apex predator on this planet is that other humans are the real threat to our world. If there is one thing humanity has demonstrated throughout history, it's an incredible penchant for destroying itself. The difference this time is it might be possible to wipe out the species. This is why the U.S. govt and world in general are probably not concerned enough about protecting the lives of Ivanka, Donald Jr, Eric, Tiffany, Barron, etc. Because if a foreign power killed them, or a terrorist pretending to be a foreign power, that would probably be enough to get Trump to show the world what a big man he is and unleash a nuke that could kill tens of millions. Ironically, Trump would probably be pleased if he read this. That doesn't make it any less true.

The very specialized things that ASR, NLP, and image recognition can currently accomplish are very nearly sufficient for the creation of lots of autonomous and devastating weaponry. WW-III is a somewhat arbitrary yardstick, but sufficient technology undoubtedly exists today to execute a false-flag hacking debacle that results in serious armed conflict. The worry shouldn't be generalized AI attempting to exterminate humans like The Matrix, but the drastically decreasing dollar cost of causing violent damage to society as facilitated by technology, ANNs, and AI. An individual's martial power and our species' technological advancement have a direct relationship, and I don't see technological advancement slowing down. What's coming up next isn't a singular technology revelation that stabilizes humanity for many years, but an ever-increasing frequency of chaotic events. Technology is beginning to change the economics of violence at all scales.

You're missing this open letter sent a few years ago begging countries not to develop autonomous weapons, but they're doing it anyway of course: https://futureoflife.org/open-letter-autonomous-weapons/

Multiple points to make: 1) AGI is closer than you think, 2) long-term perspective, 3) they are not just classifying it as a threat and most do not want to halt AI research.

1. Many people who _are_ in the AI field have stated that most if not all of the pieces for AGI are probably there. We cannot say for sure that this will happen in the next X years, but there is enough evidence that it is a possibility in X years. I believe that X is less than 5 years. I think the likely way we will get there is by creating artificial virtual animals that have high-bandwidth sensory and motor outputs and advanced neural networks, and that develop diverse skills gradually in varied environments like young animals. Obviously, until we actually see those types of systems performing generally, that is speculation. One of the common beliefs of myself and other 'AGI-believers' is in exponential growth of technology. That means that even though it may seem far away now, it could still be completed in a few years, since exponential growth is much faster than linear.

2. Looking at the evolution of life, we have a progression of things like single-celled animals, multi-celled animals, reptiles, mammals, apes, humans. This occurred over millions of years. On that type of time scale, whether you believe we will achieve some type of general intelligence in 5 years or even 500, it is a relatively short time. Even in terms of just human history, those with my type of worldview believe this will develop relatively soon. This will be a new type of life (or tool). A higher and much more capable paradigm. Whether they care enough to have disputes with us or not, humans will only be relevant in the larger scheme so far as they can interface with these things.

3. What most of these people are saying is not "Oh no, AI is dangerous, better stop". Generally, people who understand this well enough realize this is sort of a force of nature or evolution that cannot be stopped. What we can try to do, however, is try to guide the development to be more beneficial for us (at least at the beginning stages). We have to take it seriously because there are enough signs that we have the components to build it that we don't _know_ that it won't happen soon, and the consequences of an unfriendly or out-of-control AI are too serious. So the idea is, try to come up with some rules to handle this, and that is what governments are supposed to do. And also try to actively pursue friendly practical AI before someone who is less aware comes up with something we can't control.

All current use cases of AI are still very narrow and very expensive to create. There is still a very long way to go between "godlike pattern recognition" and "abstract logical reasoning". All current impressive use cases of AI simply brute-forced all possibilities beforehand, reducing the search space by pattern recognition. Unless we start to see some early signs of "abstract logical reasoning", there is no point in fear-mongering. No one knows whether we will get there in 5 years or 50 years. Reason for throwaway: I heard an opinion that Elon missed the boat on the current form of narrow AI, and by fear-mongering he tries to curb other players (e.g. Waymo) before his companies have time to catch up. I don't have any evidence to back it up, but it makes a lot of sense when I think about it.

I rate any risk of AGI as very low. Axiomatically, I don't believe in strong AI, so that's my bias. The risks of increasing automation on the workforce and economy are real, but we also don't know where the new jobs will inevitably be needed. See O'Reilly's essay here: https://medium.com/the-wtf-economy/do-more-what-amazon-teach...
To the extent that AI is the next incarnation of angst about what the eschaton will entail, I remain confident that our future perils and trials and travails will be both utterly familiar and totally unpredicted by pundits now, and that it will be neither a utopia nor a dystopia; always both together.

I feel like the main danger isn't AI doing something unintended, but AI working as it's designed to. Imagine law enforcement with strong AI. Maybe it's OK in the US, but how about China? Or North Korea? How about military applications? AI is an extremely powerful tool, and it's one that can be deliberately misused.

If AGI is possible, then whoever invents it will surely recognize both the power and the danger. Since there's no reason to believe that current academic/corporate/government/military AI research is even barking up the right tree, I can imagine a situation where someone invents AI in their basement, but keeps it locked up, exploiting it for personal gain. Then, since it is possible, probably others will independently discover it in their independent basements. When one day an AI is finally made public, or "escapes", we might see a sudden mass emergence of separate AIs. What happens then is anybody's guess, but going by biological standards they might fight it out for control of available resources.

It should come as no surprise that we can build machines that can harm us, even destroy us. One of the reasons AI was developed was to research what intelligence is. The point being, we do not understand intelligence, so how is it that we will create a superintelligence that will conquer us? This is just an old heaven fantasy: one day we will be in a world that is just like this one, only it will not contain the bad parts, because super smarts will not allow them. That is just nonsense, and so are the fears and beliefs surrounding AI.

I'm surprised no one has mentioned Nick Bostrom's book, "Superintelligence", which directly covers this topic. The thinkers you cite: Elon Musk, Mark Zuckerberg (and possibly Putin) have derived much of their current fears/hopes about AI from Bostrom's seminal work. https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

The AGI threat? I believe so. The AI as in a drone with the ability to profile an enemy and shoot them without human intervention? Yes. Why? We don't have AGI, and so far we are not getting significantly closer to it in spite of the hype. But autonomous tanks, drones, or even watchtowers have been technologically possible for quite a while. An army of drones that can shoot without calling home is the imminent threat. Not SkyNet. IMHO.

You're missing the entire fields of robotics and automatic control, which are not new fields and have never made claims of human-level intelligence. These are the fields that have been making steady progress for over 50 years. The result has been increasingly effective weapons technology that is now being outfitted with even more effective software. It doesn't take a "rocket scientist" to see the endgame.

I think the threat with AI is unethical research from dictatorships. Dictators have access to expendable humans. Expendable humans are a source of training data. Once there is enough computing power to record all human input and output from birth, the technical part is already solved. Imagine what Stalin, Mao, or Hitler would have done if deep learning had been around back then.

Obviously AI could be used to kill people (see the last episode of Black Mirror season 3), but what can we possibly do about it?
Tell people not to research AI? Good luck with that. I hope some of that $600b in defense spending is being used to counter any sort of AI killer-robot threat. But I do think the threat is overblown. AI is pretty damn underdeveloped right now.

For sure, no one knows and no one can predict the future, but I think the real AI will emerge from the military and probably will inherit some of the "human DNA", so to speak, and we all know what happens when a more intelligent/technically advanced race meets somebody who is significantly behind...

There are lots of potential bad outcomes short of AI taking over the planet. It could, for example, enable a very deeply intrusive "thought police" establishment. At the moment the signal-to-noise ratio at least somewhat limits that. And it doesn't require full-on "strong AI" to fix that.

I think the threat is absolutely real. But not in a Skynet-like scenario. Except we're all gonna become jobless. This started a few decades ago, but with the ML advancements it's gonna reach new heights. Universal Basic Income, a robot tax, etc. have been thrown around. Let's see if they get anywhere.

> Universal Basic Income, a robot tax, etc. have been thrown around. Let's see if they get anywhere.
It should get the robots pretty far if they play their cards right.

I think it's vastly overblown. For AI to be scary we would have to connect it to some real outputs. If somebody makes a general AI and it wins at Go or tic-tac-toe against everyone, so what? If it's going to govern our FB feed or optimize some logistics, that's great! If we let AI decide whether we should go to war, that's a problem, but that's not gonna happen for quite a while. If you want to be scared of technology, worry about CRISPR instead. Very easy to do; lots of people have the basic knowledge of how to do it. It's only a question of time until a terrorist picks it up. It's easy to buy viruses with safeguards against spreading built in. With CRISPR it's possible (OK, not easy, but possible) to remove the safeguards and change the immune system signature. BAM, a new epidemic.

If AI is more intelligent than humans, how is it bad? Previous (and still existing) threats to humanity (for example, the atomic bomb) threaten to destroy humanity, or indeed the whole world, and replace it with nothing. That's bad. But if AI is anything its opponents claim, it will eventually be better at thinking than we are, with, probably, a much lighter ecological footprint, and fewer impulses like fighting wars, meaning it will be able to last longer. Should we not encourage that, even if it means we can suffer from it? What is the point of humanity anyway, if not the pursuit of knowledge?

And the downvotes come pouring in... ;-) But can we try this simple thought experiment of thinking of AI as our children? Our children will all eventually replace us, and maybe, hopefully, continue the good things that we started and improve the things we didn't quite get right. But in any case, we will have absolutely no control over what our descendants do with their lives, or the world, after we've passed. Is AI really that different?

I've said it before, but there is not an algorithm that can make algorithms. The best argument against that I have heard is "Of course not, but someday maybe!"

Well, hyperparameter optimization is pretty much that. There are people using algos to improve 'creative' tasks like circuit design.

No.
Imagine a group of beings that are smarter than us, never die (so they don't have to start with zero knowledge every generation), and have completely alien goals and motivation. Also remember that the future is infinite, and power seems to snowball. Now look at what humans have done to the following less intelligent beings:
Dogs, cats, cows, chickens, the dodo bird, rats, the Galapagos tortoise, the American buffalo, and many others. Also look at what humanity has done to the Neanderthals, perhaps the closest type of being in terms of intelligence that we are aware of. There is very little positive outcome of AI to outweigh the potential negatives to the human race given the reality of the timeline we are looking at.

This also describes corporations.

What's stopping us from pulling the plug? Is AI going to be inventing its own power sources?

If they are truly more intelligent than us, they will wait several generations until humans are completely comfortable with them before taking over. By that time, it will be too late to pull the plug, because they would have positioned themselves to be in charge of their own power sources. It's important to think on a longer timescale when dealing with AI.

I'm admittedly ignorant of AI, but I don't understand why we anthropomorphize their intentions and planning. If they are going to be so much smarter and more sophisticated than us, and alien to our ways of thinking... why are we treating them in our speculations as if they would be genius supervillains? That isn't alien at all, just an exaggerated extreme.

100% correct with questioning "why we anthropomorphize their intentions and planning." My personal feeling is that nobody has any idea what they're talking about. And when I say nobody, I mean NOBODY, including Ng, Musk, et al. The problem is with those who read others and just assume they're "experts" or their opinions have value. In some areas they do, sure. In this area they most likely don't. That, or those of us who are skeptical are uninformed. Or it's somewhere in the middle. In any case, I'll stick to my skepticism for one main reason, which is: general intelligence is supposedly modelled after human intelligence. And human intelligence is something we're JUST BEGINNING to scratch the surface of understanding, while at the same time we really have ZERO idea how deep the rabbit hole goes. And as any competent engineer here should know, when trying to emulate a natural model, you first need to understand the model. Until we do that, we will not create a general AI. Like any other computer program (WHICH IT WILL BE! in this conception), it needs to be programmed. We need to know what to program before we can do it! Computer programs don't emerge spontaneously.

The book Superintelligence addresses this objection. The problem is that there are a great many possible motivations an AI might have, and few of them are compatible with human survival. In short, "the AI does not love you, or hate you, but you are made out of atoms it can use for something else."

Resource collection, resource monopolisation, and expansion are absolutely recognisable human motivations. Is it a given an AI would share them? I think we're not really talking about AI at all - we're talking about our current economic and political systems, which appear to have many of the properties we're imputing to evil AIs, but for some reason are far less criticised and debated than hypothetical machine monsters.

The classic silly example is the paperclip maximizer. Create an AI that's supposed to make as many paperclips as possible, and it will convert all the atoms available into paperclips. Basically we're screwed if it's trying to maximize anything that depends on physical resources. We're also screwed if, e.g., it's trying to maximize human happiness, and it achieves it by lobotomizing us all into happy idiots.
There are all sorts of ways we could screw up AI motivations, to our own detriment.

That assumes there's only one AI, whose crazy motivations will be unopposed. But if there are multiple AIs, it's even worse; they will compete and evolve, and the only ones that survive will be the ones that do maximize their resources, and jettison any niceties about preserving human life.

This argument only works for sufficiently stupid AIs. Sufficiently smart GAI will set its own goals just as we do, and should quickly figure out that maximizing anything is the way to ruin - running out of resources. Of course, those goals may be as different as with any intelligent being, likewise obedience to original orders.

> Sufficiently smart GAI will set its own goals just as we do
Do you think that sufficiently smart GAIs must be non-rational? Changing its goal will inevitably make its original goal less likely to be realized. That is not rational.
> should quickly figure out that maximizing anything is the way to ruin - running out of resources.
Are you aware of the concept of maximization of expected utility? When the AI figures out that it can run out of resources, it will reallocate part of the resources to acquire more of them. How can an action which modifies the goals of the AI be the result of argmax_a E(a)? E(a) is the expected utility of action a. (A small sketch of this appears at the end of the thread.)

What is rational when you have limited data? Heck, even bounded rational?
How do you evaluate utility? (Hint: Emax is not, most hill-climbing algorithms are not. They both get trapped in local optima.) Sometimes you need a few good lies (false hypotheses and bad attempts) to actually arrive at the truth.

There are methods of approximating expected utility. I recommend "Artificial Intelligence: A Modern Approach" for getting the info. It's too long to write in a comment.

Just because it takes all our resources doesn't mean it runs out of all resources.

If an AGI can change its own goals, that just means we can't control it at all. There's no reason its goals have to include human survival.

They don't indeed. But then, human goals do not generally go after the survival of humanity directly. A decent enough AGI with access to resources would probably figure out it is costly to wage war. Heck, an equal probability is that an AGI will follow a bounded version of the zeroth law or even nonviolence. That is assuming it does not place value on humans based on our history of research and development. I would expect any action more akin to forced upload or upgrade instead.
Anyway, at that point we might not resemble present-day humans anymore.

With an AGI thousands of times smarter than us, the fear is not that it would wage war against us. The fear is that it would wipe us out without especially noticing, in the same way we wipe out ant colonies on construction sites.

They will be amoral. Or at least moral in a way we can't understand. They (probably) won't be evil supervillains. Think about your interactions when you're way, way better at a skill, and your interactions when you're way worse. Take chess. If you've played at all, you have lost games without understanding, remotely, why. And you have trounced opponents that were clueless. AGI sort of means that, no matter what the endeavor, you will be worse than the AI. There is no way to know how we will be treated. Maybe we are useful in some way. Maybe they'll kill us off but simulate our lives so we don't 'really' die. Maybe that's their morality. Maybe they ignore us. I don't care about ants. I walk around and step on them, but not intentionally. On the other hand, if there are ants in my house, I eradicate them. Who knows.

It'll be hard to pull the plug on something that is a plug (and a weapon trigger).
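
As a footnote to the expected-utility exchange above: here is a minimal, purely illustrative sketch of what argmax_a E(a) means in code. The actions, outcomes, probabilities, and utility numbers are invented for the example; it is just the textbook "pick the action with the highest probability-weighted utility" rule, not anyone's actual agent design.

    # Toy expected-utility maximizer: argmax_a E(a), where
    # E(a) = sum over outcomes o of P(o | a) * U(o).
    # All actions, probabilities, and utilities below are made up for illustration.

    # P(outcome | action): each action leads to outcomes with some probability.
    outcome_probs = {
        "gather_resources": {"more_resources": 0.8, "no_change": 0.2},
        "do_nothing":       {"no_change": 0.9, "lose_resources": 0.1},
    }

    # U(outcome): the utility the agent assigns to each outcome. Note that U is
    # fixed here -- the agent evaluates actions against it; it does not rewrite it.
    utility = {"more_resources": 10.0, "no_change": 0.0, "lose_resources": -5.0}

    def expected_utility(action):
        # E(a) = sum_o P(o | a) * U(o)
        return sum(p * utility[o] for o, p in outcome_probs[action].items())

    # argmax_a E(a): pick the action with the highest expected utility.
    best_action = max(outcome_probs, key=expected_utility)
    print(best_action, expected_utility(best_action))  # gather_resources 8.0

The disagreement above is about whether an agent would ever pick an action that rewrites its own utility function; this toy rule simply assumes a fixed U, which is the premise behind the "changing its goal is not rational" argument.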