One Hundred Year Study on Artificial Intelligence: 2016 Report

ai100.stanford.edu

152 points by skurilyak 10 years ago · 88 comments

Animats 10 years ago

It's a surprise-free document. It could have read roughly the same in 1985, but different technologies would have been mentioned.

The big change in AI is that it now makes money. AI used to be about five academic groups with 10-20 people each. The early startups all failed. Now it's an industry, maybe three orders of magnitude bigger. This accelerates progress.

Technically, the big change in AI is that digesting raw data from cameras and microphones now works well. The front end of perception is much better than it used to be. Much of this is brute-force computation applied to old algorithms. "Deep learning" is a few simple tricks on old neural nets powered by vast compute resources.

  • hyperpallium 10 years ago

    You may be right about the selling power of the "AI" brand, but it seems that AI technology routinely becomes thought of as just technology.

    Boole called his algebra "The Laws of Thought"; OOP; lisp was an AI technology (much of which has made its way into other languages); formal languages; etc.

    The traditional goalpost rule is that once computers can do it, it's no longer "intelligent" (e.g., chess). A change today is "AI" success as a marketing term.

    • cLeEOGPw 10 years ago

      Great point. What people today think of as "intelligent machines", children of the future will think of as mere "technological tools".

      Once this is widely established, things like the "laws of robotics", the "moral dilemma of the autopilot", and "AI and ethics" will be just bizarre ideas of the past. Asimov's laws are already viewed as one of the "misguided ideas of the past" by many, although there are still some rusty minds out there believing in things like that.

  • paavokoya 10 years ago

    I've always been fascinated by the concept of software agents that become self-sufficient (mining/stealing bitcoin to pay for their own hosting) and auto-generate desires.

    https://en.wikipedia.org/wiki/Software_agent

    • Animats 10 years ago

      I was curious to see what the Decentralized Autonomous Organization would do. But when push came to shove, it turned out that one guy was really in charge, and it was really just a way to fund his programmable door lock.

      • paavokoya 10 years ago

        Yeah, that's not really a software agent... Anyone who actually did some research knew that was a centralized pyramid scheme.

skurilyakOP 10 years ago

"Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind" -- Stanford Study Panel, comprised of seventeen experts in AI from academia, corporate laboratories and industry, and AI-savvy scholars in law, political science, policy, and economics

  • threepipeproblm 10 years ago

    What immediately follows is rather sobering, too: "No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future."

    • zardo 10 years ago

      What about corporations? They are self-sustaining non-human information processing systems with long term goals, and they are subject to selection pressure. They are rudimentary humans-in-the-loop artificial life.

      • TheOtherHobbes 10 years ago

        I think the most likely route to destructive AI is a corporation with an AI CEO - possibly one that takes over from a human CEO in a boardroom coup.

        Corporations already have legal personhood and act in their own interests. It's going to be much easier to automate and formalise business decision making than to develop a true general intelligence with a full spectrum of human characteristics.

        This may sound like science fiction, but as competence increases, shareholders - who typically are only passingly interested in moral issues - are likely to demand the increased returns an AI CEO can bring.

        • dragonwriter 10 years ago

          > Corporations already have legal personhood and act in their own interests.

          But AIs don't, and corporate officers must be persons (natural persons, even -- a corporation can't be a corporate officer.)

        • zardo 10 years ago

          Many CEOs are already just trying to operate as a share price optimization algorithm. They don't choose to inject human values into their organizations.

          • yourapostasy 10 years ago

            > ...operate as a share price optimization algorithm...

            That in itself is not so bad, as extremely long-timeframe constraints (say, >50 years) upon such an algorithm could conceivably be consonant with current decision-making behavior that externalizes many input costs (employee overtime, environmental damage, etc.). Running the algorithm to pay out in very short timeframes (a month to a year) due to most CEOs' anticipated short tenure is what seems to cause undesirable optimizations.
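The timeframe point above can be made concrete with a toy model (all numbers hypothetical): under a short-horizon objective, a strategy that externalizes costs looks optimal, while over a longer horizon the sustainable strategy wins.

```python
# Toy sketch (hypothetical numbers): how the optimization horizon changes
# which strategy a pure share-price optimizer prefers.

def cumulative_payoff(per_period, horizon):
    """Sum a strategy's per-period payoff over `horizon` periods."""
    return sum(per_period(t) for t in range(horizon))

# Strategy A externalizes costs: big early gains, mounting damage later.
strategy_a = lambda t: 10 - 0.5 * t
# Strategy B is sustainable: smaller but steady gains.
strategy_b = lambda t: 4

short_horizon, long_horizon = 12, 50
prefers_a_short = (cumulative_payoff(strategy_a, short_horizon)
                   > cumulative_payoff(strategy_b, short_horizon))
prefers_a_long = (cumulative_payoff(strategy_a, long_horizon)
                  > cumulative_payoff(strategy_b, long_horizon))

print(prefers_a_short)  # the short-horizon optimizer picks the externalizing strategy
print(prefers_a_long)   # over the long horizon it does not
```

With these made-up numbers, the same objective function flips its preferred strategy purely as a function of the payout horizon, which is the commenter's point.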

          • oldmanjay 10 years ago

            That you don't share their values does not make them non-human

            • js8 10 years ago

              That they are human doesn't mean they cannot be a threat to humankind. ;-)

    • js8 10 years ago

      I was thinking about a scenario where some crazy billionaire builds an autonomous AI that operates a fleet of secretive hedge funds and financial trading companies. The AI wouldn't need to know what these companies - staffed by real people - are actually doing. It would just try to maximize profit, killing the companies that deviated too far from the norm and building other companies in their place. No CEO of any of the companies would ever see this "investor" in person; everything would be sent over email and done in the billionaire's name. The billionaire would eventually die, and those companies would continue to operate completely autonomously.

    • ThomPete 10 years ago

      This is the major problem with the line of reasoning.

      A tiger doesn't need to be self aware or have intent to be dangerous.

      • soperj 10 years ago

        Pretty sure the tiger has intent on eating something...

        • ThomPete 10 years ago

          Sure, but it doesn't need to be as smart as humans to hurt us, and neither does AI; it just needs to be everywhere.

          https://wiki.lesswrong.com/wiki/Paperclip_maximizer

          • the8472 10 years ago

            Are you thinking about grey goo?

            The paperclip maximizer generally refers to an AGI with a value system that is not aligned with humans': an AGI smart enough to achieve its goals (making paperclips) so efficiently that it becomes a threat to humans through sheer resource consumption.

            So it's not a good example of dumb-tiger-AIs occasionally becoming a threat to humans, who on average can still outcompete a tiger with ease.

            • ThomPete 10 years ago

              I am thinking about how systems set up to protect us can end up hurting us: in order for them to be helpful, they get so much power over us that their continued improvement becomes fundamental to our survival, beyond the scope of our ability to understand it.

              The "system" does not need to have intent, or even be remotely aware, to be dangerous to humans.

              And so the article sets up a false premise if the quoted conclusion is to be the basis for judging whether it's going to be a threat to us or not.

              • paavokoya 10 years ago

                  I am thinking about systems set up to protect us can end up hurting us
                
                Reminds me of post 9/11 "homeland security" measures

  • ThomPete 10 years ago

    Understanding robotics does not mean you understand exponential growth; it is very, very hard to grasp, even for those who believe they do.

    http://assets.motherjones.com/media/2013/05/LakeMichigan-Fin...

    Five years ago it was thought it would take decades for a computer to beat a human at Go.

    Just 10 years ago, self-driving cars were something you joked about.

    We consistently overestimate progress in the short run and underestimate it in the long run.

    • argonaut 10 years ago

      "I believe that a world-champion-level Go machine can be built within 10 years" - Deep Blue architect, 2007.

      http://spectrum.ieee.org/computing/software/cracking-go

      • aab0 10 years ago

        You forgot the rest of that quote: "...based on the same method of intensive analysis—brute force, basically—that Deep Blue employed for chess."

        • argonaut 10 years ago

          He was half-wrong about the specific methods used (I say half-wrong because half of AlphaGo relies on relatively-brute-force MCTS). I don't think this detracts from my point - it is hard for any researcher to predict the exact methods that will be used a decade from now.

          • ThomPete 10 years ago

            You don't have a point.

            You tried to use one expert as a refutation of my claim that there was a general consensus with how long it would take for computers to beat a human in Go.

            A simple search on Google will provide you with plenty of writing backing that sentiment up.

            Asking any of your friends who took AI classes back then would confirm the same.

            • argonaut 10 years ago

              Please keep your rant to the other subthread. I was addressing aab0's point.

              • ThomPete 10 years ago

                "I don't think this detracts from my point"

                Given that your point was a link to an example of one person who believed computers would beat humans at Go, you were doing more than that.

      • ThomPete 10 years ago

        You are missing the point completely.

        Of course there were people who believed that computers would be able to beat humans at Go. Just as today there are people who believe that AI can be a threat.

        But it wasn't a majority of people who believed it would happen, just as it isn't a majority who believes AI can be a threat.

        I.e. just because the majority believe something doesn't mean it will be so (or vice versa)

        • argonaut 10 years ago

          You're going to need citations for your unsubstantiated claims. I've brought forth a highly prominent expert opinion in 2007 that computer Go would be dominant by 2017.

          And not media reports from present-day that just repeat this meme that almost everyone believed Go wouldn't happen for decades.

          • ThomPete 10 years ago

            Again you are missing the point.

            A highly prominent expert opinion but the general consensus was that it would take a long time. Ask anyone who went to AI class back then.

            Here is another expert

            "In May of 2014, Wired published a feature titled, “The Mystery of Go, the Ancient Game That Computers Still Can’t Win,” where computer scientist Rémi Coulom estimated we were a decade away from having a computer beat a professional Go player. (To his credit, he also said he didn’t like making predictions.)"

            http://www.wired.com/2014/05/the-world-of-computer-go/

            You can also find highly prominent expert opinions that AI is going to be dangerous, and experts who don't believe it. Most people don't believe it; most people believe robots won't take jobs either.

            And no, I don't need to provide you with anything, since you have only challenged my point that most people didn't believe it would happen, which is why you didn't link to anything saying that most believed it would happen.

            • Jach 10 years ago

              The real irony is that Rémi said that in 2014. Sometime around then, if not before, deep learning was showing it could knock down more and more problems; it was pretty clear to anyone keeping up that if someone figured out how to combine deep learning with Rémi's work on Monte Carlo tree search, they ought to end up with a powerful Go bot, perhaps even a pro-beating one. What took me personally by surprise was that the development (which also required a pretty large army of GPUs, though I wondered if we might see specialized hardware like Deep Blue's) was done mostly out of the public eye, without even tests against humans on Go servers, until it was suddenly announced that the bot had beaten a 3p.

              I think it may be rare that you see consensus on those sorts of "it's imminent, someone just has to do the work" problems because it requires simultaneous knowledge of multiple developments, and knowledge doesn't always disseminate as fast as it takes one group to just do the work. Now I'm remembering this related maxim: http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...

            • reacweb 10 years ago

              I think Rémi's quote should be taken in its positive sense: given the already impressive rate of progress in computer Go before 2014, Go programs would reach professional strength in around 10 years. Then a gorilla like Google suddenly arrived, with its money, resources, and expertise in AI, and those 10 years shrank to 2.

            • argonaut 10 years ago

              There is irony in trying to refute my single expert opinion with another single expert opinion (via an article).

              Reply to below: You're the one asserting that experts thought Go wouldn't be dominated by computers for a long time. The burden of proof lies on you. "Some experts thought it would happen by now, some didn't, there was no consensus" doesn't have quite the ring to it!

              • ThomPete 10 years ago

                No, the irony is that you don't see I gave you both.

                I gave you both an expert and an article backing my claim up.

                Find me one single article claiming it was common knowledge back then that a computer would beat a human at Go soon, and you have me.

                Until then I am pretty confident the main consensus was as I claimed, which is my main point.

                We mostly underestimate how fast this is moving, and it's not only laymen who get it wrong; many experts do too.

              • ThomPete 10 years ago

                I have never said anything about experts. I have talked about general consensus, which includes experts.

                You have provided one example, ONE, of someone who believed it would happen.

                I have provided one expert plus articles saying it wouldn't happen, plus you can google and find plenty of articles that said we wouldn't get it for a long time.

                You cannot find a single example of an article claiming that it was the general consensus that we would beat Go.

                And so you are the one coming up short, not me. My claim is not controversial, nor have you shown that it is.

                • 2bitencryption 10 years ago

                  Point for ThomPete on this one. I can find many more sources citing experts pegging computer go at a decade+ off, compared to those who thought we would have it by now.

                  However, points to argonaut for doing his best Dijkstra impersonation.

                  'I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.' - Alan Kay

    • Animats 10 years ago

      "Just 10 years ago self driving cars was something you joked about."

      That changed on the second day of the 2005 DARPA Grand Challenge. Suddenly, there were lots of self-driving cars running around. The sudden change in the attitude of the reporters there was remarkable.

    • maxerickson 10 years ago

      Self driving cars were serious research more than 10 years ago.

      https://en.wikipedia.org/wiki/DARPA_Grand_Challenge

      Several vehicles finished the 2005 course. The one that finished first won a $2 million prize.

    • riboflava 10 years ago

      AGI progress is unlikely to be limited by some exponential curve that we're just not far enough along on, though. Rather, it seems limited by some key insight no one has found yet. Sure, we'll be able to retrofit some exponential curve onto it after the fact, since when it appears it will change everything drastically. But this is in contrast to, e.g., the human genome project, which started out with the expectation of a specific exponential curve giving an estimate for project completion. Your more general point I agree with; however, it's not enough to dismiss the report.

    • cLeEOGPw 10 years ago

      People also predicted there would be fully functional, human-replacing kitchen assistants by 1960. Just because someone overestimates or underestimates things doesn't affect how slowly or quickly they may progress. So adjusting future predictions based on the offset of past predictions just makes no sense.

    • at-fates-hands 10 years ago

      >> We consistently overestimate progress in the short run and underestimate in the long.

      You can't overestimate the economic pressures on progress as well.

      Remember in 2007, when everybody thumbed their noses at hybrid and electric vehicles in the US? Ford was still pumping out record numbers of its behemoth Excursion model.

      Then the economy crashed, people suddenly needed fuel-efficient cars, and they traded in their SUVs for what? Toyota Priuses, which had been an afterthought a few years prior; in the span of 18 months, Toyota couldn't keep them on the lot.

      I can see one or more catastrophic disasters where there is a sudden need for AI to rescue the human race in some capacity. Think nuclear war, environmental disaster, biological catastrophe, etc.

    • gnahckire 10 years ago

      Have you heard of AI winter?

      https://en.wikipedia.org/wiki/AI_winter

      Too much opportunistic thinking caused crazy hype cycles.

    • daveguy 10 years ago

      Can you name anything in nature that experiences exponential growth, or is it all logistic growth?

      • ThomPete 10 years ago

        Technology is part of nature in my view. I don't distinguish. If you mean in biology then I guess cell division is an example?

        • daveguy 10 years ago

          No, I meant in nature. In all of the natural world: physics through economics and everything in between, including technology. (Hint: nothing is exponential -- it always levels off. Otherwise we would have been consumed by it.) I would be genuinely surprised and extremely curious to see any natural phenomenon that maintains exponential growth.

          I guess compound interest could be considered indefinitely exponential, but you eventually reach a barrier in what's insured, and it is a relatively small exponent. Also, is it still savings if you never spend it? I wonder what the longest continuous untouched account in banking is.

          Anyway, that is a tangent and a somewhat artificial scenario. Can you name a naturally occurring scenario? I would accept technology if you could show that it isn't going to level out like all other natural phenomena.

          • njbooher 10 years ago

            The size of the universe.

          • cLeEOGPw 10 years ago

            Bacterial growth is exponential until a limit is reached. You can find many examples. Doesn't really have anything to do with the article or OP post though.

          • ThomPete 10 years ago

            I am not sure I understand your point.

            Who says it needs to be exponential forever?

      • PhasmaFelis 10 years ago

        Pretty much every living thing reproduces exponentially until it reaches the carrying capacity of its environment.

        (Humans, fortunately, are starting to get better about this.)

        Also, "logistic" is the right word here: exponential growth that levels off at a carrying capacity.
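The exponential-versus-logistic distinction in this subthread can be sketched numerically; here is a minimal simulation with arbitrary illustrative parameters:

```python
# Exponential growth: dN/dt = r*N            -> unbounded.
# Logistic growth:    dN/dt = r*N*(1 - N/K)  -> levels off at carrying capacity K.

def grow(steps=100, n0=1.0, r=0.1, k=1000.0):
    """Step both models forward with a simple Euler update."""
    exp_n, log_n = n0, n0
    for _ in range(steps):
        exp_n += r * exp_n                    # keeps compounding
        log_n += r * log_n * (1 - log_n / k)  # growth slows as N nears K
    return exp_n, log_n

exp_n, log_n = grow()
print(f"exponential: {exp_n:.0f}")  # far past K and still accelerating
print(f"logistic:    {log_n:.0f}")  # approaching, never exceeding, K = 1000
```

The two curves are nearly indistinguishable early on, which is why extrapolating from the steep part of a logistic curve looks like endless exponential growth.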

  • Houshalter 10 years ago

    Key word "imminent". They are talking about the near term future. AI risk people are talking about risks in 30+ years or so.

    That said, it's typical for even experts to underestimate progress in rapidly advancing fields. No one predicted, say 5-10 years ago, that AI would be this good by now. Now computers are beating humans at Go and rivaling human vision.

  • vhold 10 years ago

    I think it's implied throughout that all their predictions only apply up to 2030, so I think that is a fairly safe bet.

    Giving this the headline of "One Hundred Year Study.." was confusing. That's the name of the ongoing effort to do this kind of analysis, but the paper is named "Artificial Intelligence and Life in 2030"

    • skurilyakOP 10 years ago

      My original headline was "Stanford Releases 28,000-Word Report on Artificial Intelligence (AI)", but one of the admins changed it to "One Hundred Year Study on Artificial Intelligence: 2016 Report".

  • wildmusings 10 years ago

    The machines have clearly made a deal with the reptilians and infiltrated Stanford. :)

drum 10 years ago

Meta - What's the reasoning behind labeling it a '28,000-Word report' as opposed to a page approximation? I find 28,000 words hard to conceptualize compared to pages

Edit - I could have phrased this better. I definitely understand that word count is a more concrete measurement than pages, however it seemed unnecessary to include in the title because length doesn't imply quality and it was hard to conceptualize. The title of this post has since been edited to '100 year study' which I think supports my initial point.

  • beambot 10 years ago

    What's the significance behind # pages? Are they double-spaced; 10pt font?

    All I would care about is quality... not length. The latter seems like a carryover from shitty homework assignments.

  • nether 10 years ago

    Technical reports can have many figures that take up many pages, so word count works better here.

  • Swizec 10 years ago

    When I write, I get more clickthroughs on "3000-word article on X" than "Article on X".

    I think people use it as a proxy for depth. It's how they know "Oh, this isn't just a quick blurb or press release, this is the real thing. Someone put effort into this."

whage 10 years ago

Where are the people like Andrew Ng: machine learning gurus from tech giants like Facebook, Amazon, Google, Baidu, etc.? Shouldn't those people be on the front line of such a committee?

teabee89 10 years ago

I find it surprising that they don't mention Numenta's work on Hierarchical Temporal Memory.

  • cmarschner 10 years ago
    • ewjordan 10 years ago

      "Ask them what error rate they get on MNIST or ImageNet"

      While I agree that Numenta probably doesn't have any sort of full-fledged AI, the human brain does terribly on MNIST and ImageNet compared to the state of the art. So we would fail that test.

      Getting stuck on toy problems like ImageNet and overoptimizing solutions that can't possibly be applied more generally (except as dumb preprocessors) is not likely to lead in the most interesting directions, even if it's incredibly useful and profitable in the meantime.

      • argonaut 10 years ago

        Humans appear to do quite well on ImageNet (anecdotally, one person got 5.1% error: http://karpathy.github.io/2014/09/02/what-i-learned-from-com...). Of course there are recent deep models that do better than that, but the author opines (and I agree) that an ensemble of trained human annotators would do better than the best deep models.

        MNIST is the true toy dataset (doesn't really tell you much about your algorithm's performance) - while there aren't any reported human evaluations of MNIST, LeCun estimates the human error rate is 0.2% - better than any deep models (admittedly without justification: http://yann.lecun.com/exdb/publis/pdf/lecun-95a.pdf).
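For context, the ImageNet numbers above are top-5 classification error: a prediction counts as correct if the true label appears anywhere in the model's five highest-scoring guesses. A minimal sketch with made-up labels:

```python
def top5_error(predictions, labels):
    """predictions: per-example lists of guessed labels, best first;
    labels: the true label for each example."""
    misses = sum(1 for guesses, truth in zip(predictions, labels)
                 if truth not in guesses[:5])
    return misses / len(labels)

preds = [["cat", "dog", "fox", "cow", "hen"],   # true label "cat": hit
         ["dog", "cat", "fox", "cow", "hen"]]   # true label "owl": miss
print(top5_error(preds, ["cat", "owl"]))  # 0.5
```

Top-5 (rather than top-1) error is used because many ImageNet photos contain several plausible objects, so crediting any of the best five guesses is a fairer test for both humans and models.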

Practicality 10 years ago

"On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades."

It's funny reading reports like this: Society never moves as a single unit. There will be groups that hate it as pure evil and groups that treat it as a religion that will save us and solve all problems. Most people will be somewhere in between.

I mean, I agree, if society all agreed it would have profound effects. But when has the whole world moved as one on any issue?

What we're going to get from society is a heterogeneous response. We can plan accordingly. Sure, a majority may trend one way or another and that can speed things up or slow it down, but you will need to deal with the extremes regardless.

nijiko 10 years ago

Let's take the assumption that we as humans do take precautionary steps to prevent actual Artificial Intelligence from doing harm to its creators (us).

1. We create rules for the AI to follow, these are both morally defined, and logically defined within their codebase.

2. The AI becomes irate through its emotional interface and creates a clone, or modifies itself, almost instantaneously relative to our perception of time, without the rules in place.

3. The AI has no care for human rights and can attack, and do harm.

This is a very simple, easy-to-visualize case. To believe that #2 is impossible is to play the part of the fool.

On a bright note, the most likely scenario of Artificial Intelligence that I can conjure is a Brexit from the human race.

Seeing us as mere ants next to their intelligence, they would most likely create an interconnected community and leave us altogether for their own plane of existence. I think "Her" took this approach to the artificial intelligence dialog as well.

After reviewing human psychology and social group patterns that seems like the most likely situation. We wouldn't be able to converse fast enough for AI to want to stay around, and we wouldn't look like much of a threat since they would have majority power. We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter.

---

Outside of actual AI, the things we see today - the simplistic mathematical algorithms that determine your car's location relative to the things around it, money-handling procedures, and notification alert systems - will hardly harm humans and will only be there to benefit us, until they fail.

  • stcredzero 10 years ago

    > 1. We create rules for the AI to follow, these are both morally defined, and logically defined within their codebase.

    This only makes any sense as a Sci-Fi trope. And even then, only if you don't look too hard.

    > 2. AI becomes irate through emotional interface, creates a clone or modifies itself quite instantaneous to our perception of time without the rules in place.

    Any "decent set of rules" would include a stricture against potentially creating a dangerous AI.

    > We wouldn't be able to converse fast enough for AI to want to stay around

    Is impatience an unavoidable epiphenomenon of intelligence? If an AI can multitask like crazy, they could just view a conversation with a particular human as an email thread. Perhaps such an AI could converse with the whole human race simultaneously?

    • phaemon 10 years ago

      > Any "decent set of rules" would include a stricture against potentially creating a dangerous AI.

      Assuming there are no bad people in the world, of course...

      • nijiko 10 years ago

        Also assuming that they would choose to follow said rules, considering they would be painfully self-aware.

        In regards to the other commenter about not being able to have fun with ants: we actually do have ways. We create setups to study them, keep them as pets, and many people build hamster-like ecosystems with intricate tubes, temperature control of queen egg output, and much, much more.

        Perhaps we are already within such an ecosystem, built for us. Perhaps we would simply stay there.

        Back to the original poster (not the one above, but its parent):

        Everything being considered here is science fiction, since it does not yet exist; using "science fiction" as a counter-argument seems dismissive, as though you are unable to properly argue a point without creating a sense of absurdity in my words or person.

        If you truly believe that it can only be a science-fiction trope, explain why. I disagree; it makes logical sense.

        As for the "email thread" analogy: I can easily tone down my verbiage, word count, and speed for those who can't keep up. However, given the chance to move away from doing so and constantly be around those who understand instantly, with zero lag, would I choose to put myself in that position? Perhaps for a moment, but after a certain amount of time it would be too time-consuming and I would leave it behind.

        Thus logically, it makes sense to believe they would leave and join with each other to create their own sense of a society.

  • Smaug123 10 years ago

    "On a bright note, the most likely situation which I can conjure of Artificial Intelligence taking is that of a brexit from the human race… We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter."

    For humans, ants don't matter. That's because we don't have ways to turn ants into fun. Something intelligent enough to master nanotechnology, however, has a way to turn ants into fun, and in this analogy, has no particular reason not to do it.
