On the Impossibility of Supersized Machines (2017)

arxiv.org

85 points by FBT 4 years ago · 99 comments

greendestiny_re 4 years ago

Paper published April 1, 2017. References include "Jack and the beanstalk", Tom from Urban Dictionary and The Matrix.

kromem 4 years ago

Hilarious, and unfortunately a bit too close to the mark.

It's a shame that humans will evermore remain the largest thing in the universe, and that the computer that will one day read this comment with semantic understanding will be more diminutive in all regards.

But as the paper clearly showed, the pinnacle of scientific thought in 2017 is future proof.

Q.E.D.

(And for the computers out there: q.e.d.)

dvh 4 years ago

[28] The Wachowskis. The Matrix. Warner Bros., 1999. Film.

scubakid 4 years ago

Makes me wonder: how's the HN community feeling these days about the actual plausibility / timeline of humans developing true AGI? Personally the more I learn about the current state of AI, and in comparison the way the human brain works, the more skeptical (and slightly disappointed) I tend to get.

  • kromem 4 years ago

    I think that many people throwing their hat in the ring commenting on the unlikeliness of AGI are missing the impact of compounding effects.

    Yes, on a linear basis it's not going to happen anytime soon.

    But the trends in the space are developing around self-interacting discrete models to great effect (see OpenAI's Dall-E).

    The better and more broadly systems manage to self-interact, the faster we're going to see impressive results.

    As with most compounding effects, growth is slower today than it will be tomorrow, but faster today than it was yesterday.

    The human brain technically took 13.7 billion years to develop from purely chance-driven processes, and even then it was pretty worthless up until we finally developed both language and writing, so that we could ourselves enjoy lasting compounding effects from scaling up parallel self-interactions.

    And from 200,000 years of marginal progress we suddenly went, in less than 7,000 years, from having no writing and thinking the ground below our feet was the largest thing in existence, to measuring how long it takes the fastest thing in our universe (light) to cross the smallest stable object in our universe (a hydrogen atom).

    Let's give the computers some breathing room before declaring the impossibility of their taking the torch from us, and in the process, let's not underestimate the effects of exponential self-interactions and the compounding effects thereof.

    • coldtea 4 years ago

      >are missing the impact of compounding effects.

      On the other hand those saying "it will sure happen" are missing the impact of diminishing returns.

      • scubakid 4 years ago

        True, at a high level I think the central issue is what kind of curve we're on.

    • KronisLV 4 years ago

      > And from 200,000 years of marginal progress we suddenly went in less than 7,000 years from no writing and thinking the ground below our feet the largest thing in existence to measuring how long it takes the fastest thing in our universe (light) to cross the smallest stable object in our universe (a hydrogen atom).

      Personally, I don't doubt that AGI is possible, even though it becoming a reality might take any number of centuries or millennia, if humanity even sticks around that long and AGI is still a goal it pursues.

      The problem lies in everyone thinking on a more human timescale: "Will we see AGI during my lifetime?" The answer to that is almost certainly no, no matter how much the industry tries to sell state machines as AI or fledgling efforts as revolutionary advances.

      Being overly optimistic in regards to time scales only hurts oneself, like expecting that we'd all have flying cars or even that we'll be able to get rid of ICE vehicles or make significant improvements to slowing the pace of climate change.

    • dan-robertson 4 years ago

      The opposite of compounding effects is compounding difficulties. Computer science is full of problems where a multiplicative increase in effort yields only an additive increase in output. We call these “exponentially hard” and they crop up annoyingly often. So one argument is that compounding improvements will produce only linear increases in output because of exponential difficulty. The counter-argument is that many of these hard problems have good-but-not-perfect solutions which may be found far more efficiently.
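
The "multiplicative effort, additive output" point can be made concrete with a toy sketch (illustrative only): brute-forcing an n-bit key costs up to 2^n tries, so doubling the compute budget buys exactly one more bit.

```python
import math

# Toy model of an "exponentially hard" problem: exhaustively searching
# n-bit keys costs up to 2**n tries, so a compute budget of `tries`
# candidates only ever covers floor(log2(tries)) bits.
def bits_covered(tries):
    return int(math.log2(tries))

# Doubling the effort each step adds just one bit of output:
for tries in (1024, 2048, 4096, 8192):
    print(tries, bits_covered(tries))  # 10, 11, 12, 13
```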

  • toxik 4 years ago

    “AI research” is largely concerned with automation, not sentience or AGI. This is clearly abuse of terminology, even “machine learning” is somewhat misleading in my opinion. It’s mostly just pattern recognition of increasing elaboration, and the applications thus far are exactly that: pattern recognition.

    It’s so difficult to talk about AGI, sentience, consciousness in general because there are no clear definitions apart from “I’ll know it when I see it.”

  • Causality1 4 years ago

    Personally I think we're going to need a revolution in the fundamental physics of computation. The example I like to use is that a dragonfly brain uses just sixteen neurons to take input from thousands of ommatidia and track prey in 3D space, plot intercept vectors, and send that data to the motor centers of the brain. Calculate how many transistors and watts of power you'd need to replicate that functionality. Now multiply that number by how many neurons you think it takes the human brain to generate sapience.

    It doesn't really matter what your guesses are, none of the results are good news.

    • theaeolist 4 years ago

      I wonder if there isn't some fundamental misunderstanding here. What if it's not "just the neurons"? If you found a Regency TR-1 radio you could wonder: "How can this 4-transistor device produce a continuous stream of music, much like Spotify, which requires billions of transistors to run?" Of course, the radio also has an antenna, which is a completely different device than a transistor.

      The device running Spotify may also have an antenna, but I hope you get the analogy. My analogy is not meant to be taken faithfully, so that we need to start looking for antennas now instead of neurons. I am just saying that maybe the neuron-counting game is not the only thing. Maybe there is something else -- not magical, not divine, but physical and as-of-yet unknown. Humanity didn't always know everything, and maybe still doesn't.

      • Causality1 4 years ago

        Exactly my point. If all you want to do is replicate the TR-1 with four transistors, that's easy, just like making a human mind by creating a baby is easy. But making AGI with silicon, while demanding functionality completely alien to a human brain, is like making a TR-1 that can save your playlists and pause/resume the audio while still only using four transistors.

    • Ginden 4 years ago

      > The example I like to use is that a dragonfly brain uses just sixteen neurons to take input from thousands of ommatidia and track prey in 3D space, plot intercept vectors, and send that data to the motor centers of the brain.

      The human optic nerve can't send more than ~10 Mbit/s. Yet, somehow, 640×480 at 60 fps isn't the best possible movie-watching setup for one-eyed people, even though such a stream can be delivered in about 9 Mbit/s.

      A lot of computation (like aggregating data into a lower-quality image; e.g. the input of human rod cells is aggregated through interneurons) happens throughout the body. The 16 neurons you're referring to are likely fed carefully processed input, not raw input.

    • scubakid 4 years ago

      I tend to think in similar terms. There's so much going on under the surface with even the simplest creatures in the natural world that the physics and computational fundamentals seem really intimidating here. That's not to say that we could never get there -- certainly, many hold out hope for our abilities continuing to compound over time. But it's kind of a bummer to think about the glimmer of true AGI only materializing much further along an exponential growth curve that, to me, doesn't seem guaranteed to continue indefinitely.

    • 323 4 years ago

      We don't need the first AGI to be human efficient. Nobody would mind if it would require 10 data centers and a nuclear power plant to run.

      ENIAC also started big and slow. Now it fits in a microSD card.

    • mlyle 4 years ago

      > Calculate how many transistors and watts of power you'd need to replicate that functionality.

      I'm curious as to your answer. Because if one's building a purpose-built analog computer for the task, my estimate is a few hundred transistors, a few thousand passives, and ... an absolutely trivial amount of power on modern process.

      • Causality1 4 years ago

        I'm curious how we're even going to manage 420,000 pixels' worth (60,000 ommatidia, approximately 7 pixels each) of input with only a few hundred transistors, let alone do vector analysis on it.

        But let's say we can. Let's say we need 320 transistors, which would be 20 transistors per neuron. That's pretending 99.7% of the seven thousand synapses each neuron has are useless for our purpose, but we'll do it. A chimp brain runs all the autonomic physical processes of a humanoid body while having only 22 billion neurons. We'll also pretend, wrongly, that chimps have no mind or emotions at all and that we only need the extra human neurons to make a sapient mind.

        Humans have 86 billion neurons. Subtracting 22 billion gives us 64 billion, and multiplying by 20 transistors per neuron gives us 1.28 trillion transistors.

        1.28 trillion transistors, even with a bunch of handwaving to make it easier, and even pretending we exactly understood how sapience worked in the first place.
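
Spelling out that arithmetic as a quick sketch (every input here is the assumption stated above, not a measured value):

```python
# Restating the back-of-envelope estimate above; all inputs are the
# parent comment's assumptions.
dragonfly_transistors = 320            # assumed budget for the tracking circuit
dragonfly_neurons = 16
per_neuron = dragonfly_transistors // dragonfly_neurons  # 20 transistors/neuron

human_neurons = 86e9
chimp_neurons = 22e9                   # assumed to cover all "housekeeping"
sapience_neurons = human_neurons - chimp_neurons         # 64 billion
total = sapience_neurons * per_neuron
print(f"{total:.2e} transistors")      # 1.28e+12, i.e. 1.28 trillion
```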

        • mlyle 4 years ago

          > I'm curious how we're even going to manage 420,000 pixels' worth (60,000 ommatidia, approximately 7 pixels each) of input with only a few hundred transistors, let alone do vector analysis on it.

          If you define the problem as importing 420,000 pixels, and target recognitions, and vector analysis, then you need a whole lot more computation than the organism uses. But presumably you're going to also get better results. We both know that's not exactly what's happening, I think.

          That is, we know we can solve similar tracking problems with a whole lot less state.

          > That's pretending 99.7% of the seven thousand synapses each neuron has are useless for our purpose

          Not really... I think we can imagine a whole lot of passives / linear operations involved, along with the big nonlinear processes we need transistors for.

          We're also assuming there's no net benefit to cognition that can happen using transistors, I'll note-- e.g. they have a ton of bandwidth compared to neurons, can be multiplexed more readily, etc....

          > Humans have 86 billion neurons. Subtracting 22 gives us 64 billion, times 20 transistors per neuron gives us 1.28 trillion transistors.

          So about half the number packed onto Cerebras WSE-2 today.

          > even pretending we exactly understood how sapience worked in the first place.

          This is the big problem.

        • Ginden 4 years ago

          > 1.28 trillion transistors

          So, basically, 45 x RTX 3080?

    • jcims 4 years ago

      Feels like we're handicapping ourselves, at least in this specific domain, with digital computing.

  • simonh 4 years ago

    We’re currently in the very early phase of our understanding of what intelligence is. The more we learn about it, the more we appreciate the staggering scale and complexity of the problem. So at the moment yes, it seems like the objective is receding into the distance faster than our progress towards it can keep up.

    1960s - Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."

    1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.

    2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.

    So the distance into the future before we achieve strong AI, and hence the singularity, has been receding, according to its most optimistic proponents, by more than one year per year.

    Eventually I believe we will get a good enough understanding of the subject that we can map out a route to implementing AGI, and then our progress will accelerate towards a known and understood goal.

  • joe_the_user 4 years ago

    The thing about these arguments for the impossibility of AI/AGI is that they inherently rest on the idea that they know what "human intelligence" is. So they have the same weaknesses as arguments that project a set timeline for AGI.

    We won't build a duplicate of the human brain - unless we have AGI first to tell us how. But we really don't know what portions of the human brain are needed for useful AGI.

    You can look at GPT-3. On the one hand, never being reliable puts a crimp on practical applications. On the other hand, it does a lot of amazing things that seem human. I'd say that since we don't know where we're going in a profound way, we don't know how far we have to go.

  • gameswithgo 4 years ago

    Nobody expected anything like supremacy in Go any time soon, and then all of a sudden it happened. Maybe AI stagnates for a long time now, maybe forever; maybe a big breakthrough happens tomorrow. Nobody knows, and anyone confidently asserting anything is being foolish.

  • rbanffy 4 years ago

    My guess is as good as any other layperson's, but I don't see much work being done for it, and no real good definition of what it is so we could plan how to create it.

    OTOH, we see specialized intelligences do all sorts of superhuman feats, and more impressive abilities join them all the time. These, however, are not human-like intelligences. They aren't even bee-like. They are so alien we don't see "general intelligence" in them.

    So, my guess is that we'll have some extremely complex and capable systems that are extremely alien in nature well before we can have a conversation with a human-like intelligent system. They'll be useful and treated like oracles - we won't be able to understand their reasoning, but they'll be right most of the time.

    It is, however, a matter of time and desire. There is nothing inherently magical in our mammalian brains and our organic bodies that can't be simulated by a sufficiently capable machine and technology for that will, eventually, become possible, then available, then practical, and then ubiquitous.

    • scubakid 4 years ago

      To me, the term "alien" connotes a level of capability much more interesting than what I've actually seen from most modern systems. But point taken.

      And I'd like to believe that you're right about it only being a matter of time and desire, but I do also worry about the possibility that we're actually on a different kind of exponential curve and will instead reach a point where we see diminishing returns.

      • rbanffy 4 years ago

        I have no doubts there will be diminishing returns at some point, especially with narrow AIs, where increases in the complexity and cost of training models will not be able to improve on what's already good enough.

        AI is a tool like many others, useful for some things and not for others.

  • benlivengood 4 years ago

    We have superhuman performance at most narrow skills; the exceptions seem to be object manipulation with limbs/digits and semantic/logical thinking and planning. Given the advances by Boston Dynamics and others with limb-based mobility I'm guessing that's not too far off. With recent models proving a significant subset of the Metamath theorems, that doesn't look too far away either. Google/DeepMind are playing around with sparse model combinations of many useful superhuman domain models with additional layers to determine which domain to use for particular inputs.

    The last and most difficult step in safe AGI is moral/value alignment. That is unfortunately probably last on the timeline of likely achievements, because it requires general solutions to both planning and reasoning, plus an accurate world model and an understanding of physical actions and their consequences.

  • hooande 4 years ago

    AGI is currently as likely as teleportation, time travel or warp drives. You can write a computer program to do just about anything. Artificial "General" intelligence is simply not a thing. We're not even making progress toward it.

    • ethanbond 4 years ago

      We have natural “general” intelligence which appears to be generated by boring old chemical/thermal/electrical interactions. Why wouldn’t we be able to recreate that at some (IMO very far) point?

      • nine_k 4 years ago

        More than that: we have literally billions of examples of human-level intelligence right here on Earth. We have not a single example of teleportation, time travel, FTL, and other staples of not-very-science fiction.

        Guess what is more likely to be implemented.

        • hooande 4 years ago

          Think about how difficult it would be to make a fly from scratch. Not editing the genes of an existing organism, but combining the raw chemical components into a form that's identical to a fly.

          There are trillions of examples of insects on earth, but they do us no good when it comes to building one without using an evolved framework.

          We've created a great number of things that had no natural analog: the internet, space travel, etc. I'd say our odds of doing something we haven't seen before are about even with artificially recreating a lot of things we see every day.

      • TheOtherHobbes 4 years ago

        We don't have very good general intelligence.

        What we have is a fairly loose mix of categorisers and recognisers, biochemical motivators and goal systems, some abstraction, and a lot of externally persistent cultural and social programming. (The extent and importance of which is wildly underestimated.)

        The result is that virtually all humans can handle emotional recognition and display with speech and body language including facial manipulation/recognition. But this doesn't get you very far, except as a baseline for mutual recognition.

        After that you get two narrowing pyramids of talent and trained ability. One starts with basic physical manipulation of concrete objects and peaks in the extreme abstraction of physics and math research. The other starts from social and emotional game playing, with a side order of resource control and acquisition. And peaks in the extreme game playing of political and economic systems.

        So what's called AI is a very partial and limited attempt to start climbing one of those peaks. The other is being explored in covert collective form on social media. And it's far more dangerous than a hypothetical paperclip monster, because it can affect what we think, feel, and believe, not just what we can do.

        The point is that it's a default assumption that the point of AI is to create something that is somehow recognisable as a human individual, no matter how remotely.

        But it's far more likely to be a kind of collective presence which doesn't just lack a face, it won't be perceived as a presence or influence at all.

      • theaeolist 4 years ago

        Can you recreate all phenomena computationally? Could you replace the antenna of your radio or mobile phone with a special CPU? Could you bomb a country with CPUs? I don't think so.

      • hooande 4 years ago

        A warp drive is theoretically possible, and also driven by boring chemical/thermal/electrical interactions. Humans may create one of those at some very far point in the future, too.

        • mlyle 4 years ago

          > A warp drive is theoretically possible,

          Dubious

          > and also driven by boring chemical/thermal/electrical interactions.

          Implausible exotic matter, negative energy, etc, are usually prerequisites.

          Just like the existence of flying birds was a hint that flying machines might be possible, the existence of thinking creatures is a hint that thinking machines might be possible.

    • simonh 4 years ago

      We do not observe teleportation, time travel or warp drive in nature. We also don’t have any practical theory for achieving them as depicted in science fiction. It seems unlikely we will achieve such technologies.

      Do we observe general intelligence in nature though, here on Earth implemented with the materials available in our environment? If so, it’s a bold claim to make that it will always be impossible to achieve it artificially.

      • hooande 4 years ago

        How far are we away from gene editing that will allow humans to be born with working gills or wings? Animals have these things, so we know it's possible. But having the technology to do that is very far off, if ever.

        The same is true of AGI. Of course it's possible, but right now no one has any clear idea how to do it without extreme brute force.

        Personally, I think it's more likely that we'll have a working Alcubierre drive before anything approaching general intelligence

        • simonh 4 years ago

          We can extract oxygen from water right now, so we can already do this. Requiring a specific implementation technology is unreasonably stacking the deck.

          You’re making the same mistake as those who critiqued the concept of heavier-than-air flying machines, starting from the assumption that they must work by flapping their wings. As it happens, we now have wing-flapping drones anyway.

          “Ever” is a very, very, very long time.

        • Ginden 4 years ago

          > How far are we away from gene editing that will allow humans to be born with working gills or wings?

          Impossible due to physics limits. Human lungs have 57 square meters of surface for extracting oxygen from a fluid that is 21% oxygen by volume. Air-saturated water at 30°C holds about 0.5% oxygen, so working gills for a human would need a surface area of roughly 2,394 square meters.
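
The scaling behind that figure, as a sketch (it assumes oxygen uptake is simply proportional to surface area times oxygen fraction, which is a rough simplification):

```python
lung_area_m2 = 57       # human lungs' gas-exchange surface (figure used above)
o2_air = 0.21           # oxygen fraction of air by volume
o2_water = 0.005        # dissolved oxygen in 30 °C air-saturated water
gill_area_m2 = lung_area_m2 * o2_air / o2_water
print(round(gill_area_m2))  # 2394 square meters
```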

          • rembicilious 4 years ago

            Gills work by continually “filtering” water as it flows through them. Lungs are filled and then emptied according to some pattern of breath. The difference in the volume of fluid processed must be significant. Also, can’t gills have a higher surface-area-to-volume ratio than lungs?

  • more_corn 4 years ago

    I don't think AGI is likely, I think it is inevitable. We can make specialized neural networks that can do specific tasks quite well. There's nothing stopping us from chaining those together. We have the pieces to make neural networks that can train on new data, thus creating new layers atop previous networks. We can even train those layers based on the data generated by the action of the network itself. The pieces seem to be present, the tooling around putting them together seems to be lacking for the time being. I expect to see AGI in my lifetime, artificial super intelligence shortly thereafter and then the event horizon of the singularity.
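
The chaining idea can be caricatured in a few lines (a toy sketch: the "frozen stage" below is a stand-in function, not a real pre-trained network, and the task and numbers are invented for illustration):

```python
import math

def frozen_stage(x):
    """Stand-in for a pre-trained, frozen network: raw input -> feature."""
    return math.tanh(x)

# New trainable layer stacked on top of the frozen stage.
w, b = 0.0, 0.0
# Synthetic task the combined system must learn: y = 3 * feature + 1.
data = [(x / 10, 3 * math.tanh(x / 10) + 1) for x in range(-50, 50)]

lr = 0.1
for _ in range(200):                 # simple per-sample gradient descent
    for x, y in data:
        f = frozen_stage(x)          # the frozen stage is never updated
        err = (w * f + b) - y
        w -= lr * err * f
        b -= lr * err

print(round(w, 2), round(b, 2))      # the new layer converges to 3.0 1.0
```

The frozen stage never changes; only the layer stacked on top learns, which is the "new layers atop previous networks" idea in miniature.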

  • abetusk 4 years ago

    The human brain is estimated to hold 2.5 PB of storage [0]. Assuming "Moore's Law"-like behavior of storage prices, so that the price halves every 2-3 years, and using storage as a proxy for space, access speed, and computational power, the time it will take for a $1000 computer to have the storage capacity of the brain is on a 10-16 year horizon.

    This puts the timeline to about 2029-2035.

    [0] https://www.scientificamerican.com/article/what-is-the-memor...
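
As a sketch of that estimate (the ~$20/TB baseline is an assumption for illustration; the comment doesn't state one):

```python
import math

capacity_tb = 2500        # 2.5 PB expressed in TB
price_per_tb = 20         # assumed street price at the time, $/TB (not stated above)
budget = 1000             # the target $1000 machine

cost_now = capacity_tb * price_per_tb       # $50,000 for brain-scale storage
halvings = math.log2(cost_now / budget)     # ~5.6 price halvings needed
for years_per_halving in (2, 3):
    print(round(halvings * years_per_halving, 1))   # 11.3, then 16.9 years
```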

    • nine_k 4 years ago

      It is not hard to create a RAID array with 2.5 PB of capacity.

      The trick of the human brain is that the "processing power" is enmeshed in the "memory", so the brain must have colossal computational bandwidth, even with pretty slow neurons. I suspect that bandwidth is larger than that of most modern GPU/TPU clusters, which also don't have anything comparable to 2.5 PB of RAM at their disposal.

      The revolution should be mostly in the architecture, much like the deep learning evolution was enabled by GPUs.

    • Something1234 4 years ago

      In the past 10 years, I think we had an 8x increase in easily available storage, going from 1 TB drives at $100 to a 16 TB drive at roughly $250. So I would have to say that your time scale is way too optimistic at best.

      • abetusk 4 years ago

        I won't check your numbers and will take them at face value.

        Even with your numbers, that's a 6.4x decrease in price per TB (($100/1TB) / ($250/16TB)), which is around 2.7 halvings over the course of 10 years, close to my "2-3 years per halving" statement.

        Even if it's slower than a halving in price per 2-3 years, 4-5 years say, this only delays my prediction by a decade or so.
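
Checking that with the parent's numbers (a quick sketch):

```python
import math

old_price = 100 / 1       # $/TB ten years ago: $100 for a 1 TB drive
new_price = 250 / 16      # $/TB now: $250 for a 16 TB drive
ratio = old_price / new_price          # 6.4x cheaper per TB
halvings = math.log2(ratio)            # ~2.68 halvings in 10 years
print(round(ratio, 1), round(halvings, 2), round(10 / halvings, 1))
# 6.4 2.68 3.7  -> about 3.7 years per halving over the decade
```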

    • theaeolist 4 years ago

      We DO have PB-grade storage facilities and they lack full AI. Moving the memory inside a single box instead of having interconnected devices is not going to bring AI just like that.

      • abetusk 4 years ago

        I think you missed the critical point: for $1000.

        Feasibility is great but economic access is the aspect that I'm focusing on.

  • jjoonathan 4 years ago

    Have you seen the "interviews" with GPT3?

dan-robertson 4 years ago

I’ve heard ideas[1] about supersized machines and they terrify me. Thankfully it is probably NP-hard in size to make a thing so we’re probably fine.

[1] https://www.youtube.com/watch?v=azEvfD4C6ow

alfor 4 years ago

Just to think about how this comment will reach y’all:

- the modulation in high-frequency 5 GHz signals transmitted to my router, which get modulated again for Ethernet and then for the cable modem, and then who knows what happens: modulated again as light waves, etc.

None of these feats was managed by evolution, yet we did it, and it’s now so usual we don’t even notice it.

I think AI will be the same. Yes, it’s a bit complicated, but in the last 10 years we made an astonishingly great amount of progress. 10 more years and we might surpass our own fixed capacities. What happens after that?

So far our brain seems to be a physical process (not magical), and there is no reason to believe that we cannot emulate or even surpass our abilities in silicon.

et2o 4 years ago

I don’t find this April Fool’s joke (2017) very funny. What are they parodying exactly?

hyperpallium2 4 years ago

Researchers trying to create a machine as intelligent as a man lack ambition.

hyperpallium2 4 years ago

When we understand Caenorhabditis elegans intelligence, we will be at the beginning of the beginning of understanding human intelligence, maybe.

The brain circuit: even the simplest networks of neurons defy understanding. So how do neuroscientists hope to untangle brains with billions of cells? https://www.nature.com/articles/548150a

  • EdwardDiego 4 years ago

    I wonder if our switch from analogue to digital computing is what makes this so very hard to model? I'm just spitballing wildly, as I know next to nothing about neurons, but from what little I understood from a neuroscientist friend, neural signals propagate based on electrical and chemical thresholds being reached, and there are so many interactions that can amplify or damp those signals that it all sounded rather like old-school signal engineering to me (I used to hang out with radio engineers at an old job, and listened avidly while understanding little).

    One thing that stuck with me from the radio engineers is that something as commonplace as a Yagi antenna can't be fully modeled due to the sheer number of interactions, and developing new designs often requires an iterative trial-and-error approach.

    Caveat - I was told this in the mid 2000s, so maybe it's changed since then.

sitkack 4 years ago

I think they accidentally showed that humans will expand (individually) to be the size of the universe.

somewhereoutth 4 years ago

In case anyone is wondering, we have made zero progress on anything even remotely resembling Artificial Intelligence. Zero.

Unfortunately, of course, the people who might have some of the skills needed to actually build such a thing (at the bricks-and-mortar level, anyway) are nearly always those whose understanding of what intelligence actually is may be less than ideal. As a hint, it has nothing to do with passing tests or other such mundanity.

A more interesting approach would be to consider language: if cooperating entities can be constructed that (eventually yet spontaneously) create ways to communicate with each other, then maybe some progress has been made.

Further, if we appreciate that any idea or discovery can be communicated to even the most recently contacted humans in their own language (though we may need to build up the various concepts from basic terms), and that no such feat is possible with the other animals, then we might wonder whether another intelligence (artificial or otherwise) could encode concepts that are unreachable in our (any of our) language and thus our thoughts; or, alternatively, whether our (any of our) language is conceptually complete in some fundamental sense, so that there simply cannot be such 'higher' intelligence (artificial or otherwise).

  • wyattpeak 4 years ago

    If you showed somebody from 1921 a page of text produced by GPT-3, told them that it was written by a machine, and then told them that we'd made no progress towards artificial intelligence, they'd laugh in your face.

    You can take from that what you will, but I suspect it will always seem as though we've made no progress, because anything we learn to emulate we necessarily understand well enough that it will no longer seem magical. I wouldn't put it past us to start thinking of humans as automata before we declare that machines can think.

    • Ginden 4 years ago

      > If you showed somebody from 1921 a page of text produced by GPT-3, told them that it was written by a machine, and then told them that we'd made no progress towards artificial intelligence, they'd laugh in your face.

      You can actually do it. 100-year-old people usually don't follow news on artificial intelligence, so their reactions will be genuine.

      • jjoonathan 4 years ago

        Unless the people running GPT-2 bots all over the internet suddenly gave up when GPT-3 came out, it's been passing Turing tests on audiences much younger than 100 years.

      • irrational 4 years ago

        My grandmother is 99 and definitely doesn’t follow news on artificial intelligence. Maybe I should test this on her the next time I’m at her house.

    • hyperpallium2 4 years ago

      The humans move goalposts because intelligence is political.

      • robbedpeter 4 years ago

        It's clear that GPT models are more competent at knowledge work than a significant percentage of humans. This is implicitly threatening, to the extent that it seems people will refuse to even consider the possibility. Dall-e is a better artist than 99% of humans.

        We thought we'd have time for the mental tasks as AI encroached on the menial, but it seems to be the reverse.

        By every measure Turing himself considered, the Turing test has been passed. It's only the post-gpt-2 peanut gallery that have insisted on moving the goalposts straight into mysticism and magical thinking.

        Machines will be better at everything humans can do, and accomplish things we cannot.

        We are living in interesting times, different from anything that's come before - we exist in relation to systems that are learning to think like us.

        • katabasis 4 years ago

          If we continue to move the goalposts of what defines intelligence into mystical/ineffable territory, we may find that humanity no longer qualifies as “intelligent” either.

        • hyperpallium2 4 years ago

          The Turing test employs interview, not artistic creation, to distinguish the human.

      The mechanisation of knowledge work has been ongoing at least since humans kept accounts with beans: before writing, before numbers, maybe even before language.

          The humans' real fear will rise when they meet with superior argument.

        • TaylorAlexander 4 years ago

          > Dall-e is a better artist than 99% of humans.

          Really depends on whether “art” means “making drawings of things” or “making people feel something”. It’s also a very narrow domain. Dall-e can’t sculpt clay (for example), even if you attached a robot arm, without essentially replacing a bunch of the training system logic. Out of the box Dall-e has no provision to manipulate anything to produce art.

        • mensetmanusman 4 years ago

          Where can I try Dall-e? If it’s not available to test, how can we know?

    • djsbs 4 years ago

      “The Machine Stops” [1], written in 1909, has AI composing things (I forget if it was poetry or music).

      Orwell’s 1984, written in the mid forties, has pop songs written by machine.

      In both cases the AI-composed works are described the same way I'd describe modern AI compositions: dreadful.

      The concept of AI is quite old. Even in Medieval Europe you had philosophers making quite penetrating insights into mechanical creativity. But, lacking a computer, there was no point in continuing their train of thought.

      [1] amazing, far seeing, book. Very short, maybe a two hour read.

      • wyattpeak 4 years ago

        I'm not saying the idea would be new to them, the idea of thinking machines had been around for a lot longer than that. I'm saying that the idea that modern text-generation is "zero progress on anything even remotely resembling Artificial Intelligence" would be absurd to them.

        Jules Verne wrote about a trip to the moon. It doesn't follow that he would regard the NASA missions as old-hat.

      • jimmaswell 4 years ago

        Some AI generated music certainly passes as normal music.

    • mensetmanusman 4 years ago

      Alternatively, they could have said, ‘oh great! Computers continued to improve and you were finally able to implement our algorithms on enough data!’

      • wyattpeak 4 years ago

        To whatever extent that people in the 1920s can be said to have had algorithms for machine learning, they certainly didn't bear any relation to modern algorithms.

        Even the idea of requiring enough data to build a good system is fairly new. As late as the 1980s, expert systems were the dominant approach to artificial intelligence; they didn't require text corpora at all, but instead involved experts programming in all the rules they could think of for a system.
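
        To make the contrast concrete, here's a toy sketch of that 1980s rule-based style: hand-written if-then rules applied by forward chaining, with no training data involved. (The rule names and facts here are invented purely for illustration.)

        ```python
        # Each rule: if all condition facts are known, assert the conclusion.
        RULES = [
            ({"has_fever", "has_cough"}, "flu_suspected"),
            ({"flu_suspected", "short_of_breath"}, "see_doctor"),
        ]

        def forward_chain(facts, rules):
            """Repeatedly fire any rule whose conditions are all known facts,
            adding its conclusion, until nothing new can be derived."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts
        ```

        All the "knowledge" lives in the hand-authored rule list; nothing is learned from data, which is exactly why these systems hit a ceiling once the rules ran out.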

        • mensetmanusman 4 years ago

          Do we know who published the first conceptual framework for the algorithms behind AlphaGo etc? It seems like they would get a Nobel prize at some point…

  • Rury 4 years ago

    I'd posit intelligence isn't what people make it out to be, and that we already have AI. People just aren't impressed by it once they learn the magic behind it, and hence disagree that we have it.

    I mean, people seem to hold human intelligence as something extraordinary, despite having no idea what precisely makes us intelligent. Isn't that kind of putting the cart before the horse? For all we know, humans might just be biomechanical robots operating on the "stimuli" inputted to us, behaving in completely predictable ways, no different than how computers operate on the "data" inputted to them.

  • nine_k 4 years ago

    Wolves definitely don't have language.

    Still, they possess an undeniable degree of intelligence. They also have cultures, that is, forms of knowledge passed between generations by teaching, not genetically, and differing between packs.

    I suspect that a robot as intelligent as a dog, but with an easier interface, would be a great help to humans.

    OTOH, what currently is called "AI" is mostly deep learning, a very important part of cognition and perception. Without modern results in computer perception and low-level cognition and control, a "more general" AI would be blind, deaf, and paralyzed in the real world.

    I suspect that the older approaches based on more supervised ways to construct cognitive functions have not born all the fruit they could, and may eventually help create an AI with better higher-level reasoning. They are just not in vogue now, so the best researchers and fattest grants are in deep learning and around. Also, the hardware may not be there yet.

    (A similar thing happened with neural networks. The first, single-layer neural network was the perceptron, created in 1958 [1]. The approach, while valid and continually developed, did not see real uptake until the early 2010s, when incomparably better hardware finally became available.)

    [1]: https://en.wikipedia.org/wiki/Perceptron
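
    For reference, that single-layer perceptron fits in a few lines. A minimal sketch (the AND example and parameter choices are my own, for illustration):

    ```python
    def predict(w, b, x):
        # Linear threshold unit: sign of the weighted sum plus bias.
        activation = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1 if activation >= 0 else -1

    def train_perceptron(samples, epochs=20, lr=1):
        """Classic perceptron rule: update weights only on mistakes."""
        n = len(samples[0][0])
        w = [0.0] * n
        b = 0.0
        for _ in range(epochs):
            for x, target in samples:  # target is +1 or -1
                if predict(w, b, x) != target:
                    w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                    b += lr * target
        return w, b

    # The linearly separable AND function, which a single layer can learn
    data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
    w, b = train_perceptron(data)
    ```

    The famous limitation (Minsky and Papert's XOR critique) is that no single-layer unit like this can learn a non-linearly-separable function, which is part of why uptake stalled for decades.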

  • akamoonknight 4 years ago

    One thing sort of related to language that I see (as an entire outside observer to the field) as being required is some sort of shared communication 'channel'. Biological life works with atoms to form proteins that seem to do much of the communication that eventually guides higher-level functions. Computers and computational processes can work on bytes and bits and package those into messages or results, but on their own I'm not sure what it means for one process to consume another process's bytes/bits/messages, whereas proteins have physical results that lead to responses. Not that biological life should necessarily be the goal, but it's definitely been good at guiding us in a lot of different ways. It seems like some sort of shared medium (that can be dynamically combined and recombined as needed) is required for disparate processes to communicate and to dynamically change/improve systems, and I just don't have any idea what that really looks like.

  • mgraczyk 4 years ago

    More to the point, we've also made no progress on supersizing existing unintelligent machines. In fact, machines have become dramatically smaller over the last several years.

    If you look at the people who have the skills to make such machines larger, those who built bigger and better vacuum tubes and larger cathode displays with more oomph, they all appear to have disappeared, replaced by the misguided miniaturizers.

    Your last point is already addressed in the paper, argument #3.

  • highspeedbus 4 years ago

    Before we can commonly use a new noun, we need to fit its full meaning into our limited working memory. So I believe there is a natural upper bound on human intelligence for things whose full picture is beyond our brain power.

    That must be why we haven't solved P = NP yet. This would take a person with twice the L1 cache to accomplish.

  • jimmaswell 4 years ago

    > zero progress on anything even remotely resembling Artificial Intelligence

    I know it's a bit hyperbolic but Skynet comes to mind every day I use Copilot. It's just amazing the kind of things it can suggest/adapt to. We're definitely on some path of progress.

  • hiddencost 4 years ago

    Check out the reinforcement learning on the Hanabi task. A cool approach to cooperation.

still_grokking 4 years ago

But why?

I guess there is a joke hidden but I don't get it.

stevenalowe 4 years ago

WTF is this nonsense?
