Why your brain is 3 million times more efficient than GPT-4

grski.pl

158 points by sebastianvoelkl a year ago · 212 comments

cynusx a year ago

The comparison doesn't really hold.

He is comparing the energy spent during inference in humans with the energy spent during training in LLMs.

Humans spend their lifetimes training their brains, so one would have to sum up the total training time if you are going to compare it to the training time of LLMs.

At age 30 the total energy use of the brain sums up to about 5000 Wh, which is 1440 times more efficient.

But at age 30 we didn't learn good representations for most of the stuff on the internet so one could argue that given the knowledge learned, LLMs outperform the brain on energy consumption.

That said, LLM's have it easier as they are already learning from an abstract layer (language) that already has a lot of good representations while humans have to first learn to parse this through imagery.

Half the human brain is dedicated to processing imagery, so one could argue the human brain only spent 2500 Wh on equivalent tasks, which makes it 3000x more efficient.

Liked the article though, didn't know about HNSW's.

Edit: made some quick comparisons for inference

Assuming a human spends 20 minutes answering in a well-thought out fashion.

Human watt-hours: 0.00646

GPT-4 watt-hours (openAI data): 0.833

That makes our brains still 128x more energy efficient but people spend a lot more time to generate the answer.

Edit: numbers are off by 1000 as I used calories instead of kilocalories to calculate brain energy expense.

Corrected:

human brains are 1.44x more efficient during training and 0.128x (or 8x less efficient) during inference.
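
Roughly, the arithmetic behind the corrected numbers (a quick Python sketch; the ~400 kcal/day brain budget and the 0.833 Wh GPT-4 figure are the assumptions used above, not measured values):

    KCAL_TO_J = 4184
    brain_watts = 400 * KCAL_TO_J / 86_400                    # ~19.4 W from 400 kcal/day

    brain_training_kwh = brain_watts * 30 * 365 * 24 / 1000   # ~5,100 kWh by age 30

    brain_answer_wh = brain_watts * (20 / 60)                 # ~6.5 Wh for a 20-minute answer
    gpt4_answer_wh = 0.833                                    # OpenAI-derived figure quoted above

    print(round(brain_training_kwh), "kWh of brain 'training' by age 30")
    print(round(brain_answer_wh, 2), "Wh per human answer vs", gpt4_answer_wh, "Wh for GPT-4")
    print(round(gpt4_answer_wh / brain_answer_wh, 3), "x, i.e. the brain uses ~8x more energy per answer")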

  • wanderingmind a year ago

    Not just that: the brain of a newborn comes pretrained with billions of years of evolution. There is an energy cost associated with that which must be taken into account.

    • eloeffler a year ago

      Then you must also take that cost into account when calculating the cost of training LLMs, as well as the cost of the humans operating the devices and their respective individual brain development.

      LLMs are always an additional cost, never more efficient because they add to the calculation, if you look at it that way.

      • Closi a year ago

        Only if we are counting the cost to generate all the inputs to training, and not just the training itself - it just depends on the scope of the analysis.

        (i.e. taken to the extreme, as humans learn from their environment, do we have to count all energy that has gone into creating the world as we know it?)

        • daflip a year ago

          If taken to the extreme I can't help but quote Carl Sagan :-)

          "If you wish to make an apple pie from scratch you must first invent the universe"

    • coldtea a year ago

      Well, LLMs also presuppose humans and evolution, since they needed us to create them, so their tally is even higher by definition...

    • madhatter999 a year ago

      Also take into consideration the speed of evolution. LLM training might be much faster because a lot of computing power is used for it. Maybe if it ran at the same speed as evolution, it would take billions of years, too?

    • eru a year ago

      Also our brains and our language are co-optimised to be compatible.

      ChatGPT has to deal with the languages we already created, it doesn't get to co-adapt.

    • thfuran a year ago

      Brains are only about half a billion years old.

    • AbstractH24 a year ago

      Those are sunk costs

  • bamboozled a year ago

    Humans spend their lifetimes training their brain

    I don't think this is true, personally. Ideally, as children, we spend our time having fun, and learning about the world is a side effect. This Borg-like thinking, applied to intelligence because we now have LLMs, is unusual to me.

    I learned surfing through play and enjoyment, not through training like a robot.

    We can train for something with intention, but I think that is mostly a waste of energy, albeit necessary on occasion.

    • Jensson a year ago

      > we spend our time having fun and learning about the world is a side effect

      What do you think "play" is? Animals play to learn about themselves and the world; you see most intelligent animals play as kids, with the play being a simplification of what they do as adults. Human kids similarly play fight, play at building things, play at cooking food, play at taking care of babies, etc. It is all to make you ready for adult life.

      Playing is fun because playing helps us learn; otherwise we wouldn't have evolved to play, we would have evolved to be like ants that just work all day long, if that were more efficient. So the humans who played around beat those who worked their asses off; otherwise we would all be hard workers.

      • bamboozled a year ago

        But I just play because it's fun, I roll dice for fun, are you trying to tell me all this is a secret front for "training" ?

        • Jensson a year ago

          Fun is your brain rewarding you for something it thinks is practice on a useful skill. You get bored once you mastered it enough.

          Some people continue playing a game even when it stops being fun; they are addicted to the reward mechanism in the game, and now the brain thinks that playing the game is a good way to work and provide for itself. I don't call that "play", it's work, just not productive work.

          Why are dice fun? Because your brain wants to map the pattern of the dice, trying to figure out how to get good rolls. You see that in most dice players: they develop a lot of superstition about what are good and bad dice, or how they always roll badly in critical moments, etc. I'd assume that comes from nature, where you try to figure out what is a good nut to crack or where to find prey, basically a way to extract useful patterns from random events.

          • bamboozled a year ago

            So when I look at a sunset and enjoy that, I think it's fun to chase sunsets, my "brain" is telling me it's fun because I'm learning a new skill and receiving a dopamine reward for looking at the sunset and feeling good about it?

            • Jensson a year ago

              Looking at a sunset isn't a game you play, kids don't go and play "look at the sunset", I feel like you are grasping at straws.

              Why would you feel calm and comfortable from a sunset? Probably to get you sleepy so you go find a place to sleep since there isn't much useful to do at night. That would be unrelated to play.

              Anyway, most of our feelings come from nature; we didn't evolve to be faulty, we evolved to do things efficiently, and play is a part of that efficiency. If it isn't for learning, you would have to explain what it is more likely to be for. When kittens play and chase things or play fight with each other, do you think they are just wasting energy for no reason? No, they sharpen their senses and learn to hunt and fight.

              • bamboozled a year ago

                I never said kids play like that. I do.

                As an adult, I find it fun and enjoyable to seek out sunsets; I find the colors beautiful. I readily hike mountains just to enjoy a sunset. I watch a sunset and then go party till 3 am, so maybe it's got to do with finding a nice place to sleep, or maybe it's just nice that we have the ability to appreciate a phenomenon without having to apply some rigorous concept to it. I'd fly 2/3 of the way around the world to watch a total eclipse.

                Personally I think you might be grasping at straws trying to equate every pleasant experience to some type of reward function.

                I'd go as far as saying that if we worked this simply and predictably, our lives would be much easier. We'd all be exercising for that dopamine hit, we'd all be going to bed early after a nice sunset, but we don't.

                • Jensson a year ago

                  Doing enjoyable things isn't "play", it isn't work and you relax but it isn't the same thing as playing.

                  > Personally I think you might be grasping at straws trying to equate every pleasant experience to some type of reward function.

                  No, here I'm just focusing on why play is fun; you tried to pivot to other pleasurable experiences. Unlike watching sunsets, basically every animal plays around as a kid, so that play must be directly related to survival of the fittest or we wouldn't see it everywhere. You need a really strong argument for why play doesn't fill that role for humans when it fills that role for basically every other intelligent animal.

                  > I'd go as far as saying if we worked this simply and predictably, then our lives would be much easier

                  So you think humanity would be better off if nobody played around and discovered new things? We would be stuck as monkeys in trees then. Play is pivotal to humanity.

                  • bamboozled a year ago

                    How can you argue I’m smuggling “enjoyable” experiences into the argument when you yourself admit play is also enjoyable? What is the actual difference? Even enjoying a cup of coffee can be considered play if I put the coffee in my mouth and play around and pay careful attention to the aromas and texture.

                    They’re one and the same thing. It’s a matter of language that makes them appear to be different things. Taking a dip in a pool can be considered play and it can also be pleasurable.

                    • Jensson a year ago

                      > How can you argue I’m smuggling “enjoyable” experiences into the argument when you yourself admit play is also enjoyable?

                      Play is enjoyable, not all enjoyable things are play. You started to add other enjoyable things into the argument about play.

                      > They’re one and the same thing

                      No they are not. Play is typically seen as what children do, or playing sports, or playing a game, or a competition. You can read the definitions here, none of them say that stuff like eating hotdogs is play unless it is an eating contest or other kind of game:

                      https://www.merriam-webster.com/dictionary/play

                      • bamboozled a year ago

                        You’re hung up on language, in my opinion, and that was the inspiration for my original comment. There is a gradient between playing and just doing something enjoyable. It’s not a binary thing.

                        I do woodwork because I find it enjoyable but it’s also play time when I’m in my shop.

                        Language has limits. I bet there are cultures which don’t distinguish between play and enjoyable activities, and then we wouldn’t be sending each other links to Merriam-Webster.

                        Btw I don't disagree people learn from play, I just don't think it's the end goal of play.

        • gnramires a year ago

          It's probably not something you secretly wish for when playing, and perhaps for the best (that you don't always have fun and enjoy things with an ulterior motive, beyond enjoying the experiences). I guess he's saying it in the sense of the natural function of play: our playing is mostly a consequence of that natural function. It's also very much true that we learn a lot, perhaps a significant chunk of what we learn, through play[1], so it's undeniable that we do learn from play, even if humans have this great gift -- we are able to understand the nature of things (such as play) and choose to do them just for the sake of experiences, fun, joy, happiness, etc.

          Which we should (finally :) ) recognize to be the source of all meaning.

          We still should learn (and do practical stuff in general) because it supports our inner lives, including building technology, producing things (buildings, infrastructure) that support us and indeed enables our (inner) lives.

          [1] Also of note: humans, unlike LLMs, can learn all the time; we don't have a hard "training phase". It's true that brain plasticity decays and it becomes harder to learn as we age, but we can still learn more or less quickly at any age. This is why dedicating childhood to learning (as well as play) is natural.

          • bamboozled a year ago

            I'm conscious I have two types of play though. I'm fully conscious that sometimes play can be about learning and training; it's why I ski more difficult terrain than I'm comfortable with. But then I might go eat a massive bowl of pasta and have a glass or three of wine. On an intellectual level, I know there are healthier, more rewarding things to eat. I know wine isn't great for me at those quantities. But I consciously make the choice to do it because it's fun.

            • Jensson a year ago

              I don't think many people call indulging in food or bodily needs "play". Those are just core rewards; play is an active activity that is fun without being directly related to your survival, the way eating is.

              • Hasu a year ago

                Eating ice cream instead of stale bread is absolutely play, just like running around on a soccer field is play even though running is a survival skill.

            • gnramires a year ago

              Yes, although I say, if you can make your play healthy as well as fun, might as well :)

        • james-bcn a year ago

          Yes, evolution makes play fun, but it's really learning, in the same way that evolution has made sweet and fatty things extra tasty, because they are full of energy.

          • bamboozled a year ago

            This is “common sense” but knowing all we know about food and calories, we still eat the donut…

        • nkrisc a year ago

          Jokes on you. Every time you played ball you were secretly learning about ballistic trajectories and estimating velocities using visual cues such as apparent angular size and parallax.

          • grugagag a year ago

            The brain uses heuristics for that

            • Jensson a year ago

              Heuristics that you practice and finetune via play, for example by throwing and catching balls.

            • nkrisc a year ago

              Being a father to two young kids, I can confidently say we aren’t born with those heuristics already tuned.

    • glenstein a year ago

      >we spend our time having fun and learning about the world is a side effect

      I think the part of this that resonates as most true to me is how this reframes learning in a way that tracks the truth more closely. It's not all the time, 100% of the time; it's in fits and starts, it's opportunistic, and there are long intervals that are not active learning.

      But the big part where I would phrase things differently is in the insistence that play in and of itself is not a form of learning. It certainly is, or certainly can be, and while you're right that it's something other than Borg-like accumulation I think there's still learning happening there.

      • bamboozled a year ago

        Sorry, I didn't mean to imply that learning isn't part of play; I just don't believe the end goal of life is to "train". I think if you're an AI researcher it would make sense that life is about training. However, I think it's just fashion.

        I always think that if we could build an AGI, it would probably enjoy some form of play too. It would need to invent some level of excitement, else it would just be a machine with no ambition, no inspiration.

    • pessimizer a year ago

      That's like saying that you eat because it tastes good.

  • Closi a year ago

    I think you would probably have to take into account the full functioning power of a human too.

    We don't know how to fully operate a human brain when it's fully disconnected from eyes, a mouth, limbs, ears and a human heart.

  • londons_explore a year ago

    > At age 30 the total energy use of the brain sums up to about 5000 Wh,

    That doesn't sound right... 30 years * 20 Watts = 1.9E10 Joules = 5300 kWh.

    • cynusx a year ago

      Where did you get the 20 Watt from?

      My number is based on calorie usage

      • cynusx a year ago

        oh ok, I used 400 calories/day and not 400 kcal/day.

        Yea, then the numbers are off by 1000

        • 1992spacemovie a year ago

          I respect that you replied to the comment and owned your math error :) The rest of your comment is an interesting observation. Never thought about it starting out at the cal level.

  • CuriouslyC a year ago

    You're doing apples and oranges.

    Humans who spend a long time doing inference have not fully learned the thing being inferred. Unlike LLMs, when we are undertrained we just go slower, rather than showing a huge spike in error rate.

    When humans are well trained, human inference absolutely destroys LLMs.

    • cheema33 a year ago

      > When humans are well trained, human inference absolutely destroys LLMs.

      This isn't an apt comparison. You are comparing a human trained in a specific field to an LLM trained on everything. When an LLM is trained with a narrow focus as well, human brain cannot compete. See Garry Kasparov vs Deep Blue. And Deep Blue is very old tech.

      • karmakaze a year ago

        Also, Deep Blue isn't ML; it's an "expert system, relying upon rules and variables defined and fine-tuned by chess masters and computer scientists" (Wikipedia). AlphaGo (or AlphaGo Zero) would be a better example.

        • eru a year ago

          > AlphaGo (or AlphaGo Zero) would be a better example.

          Yes, they are better examples, but still not great examples: neither of them are LLMs.

          In general, I have very high hopes for AI, but I would be surprised if LLMs are the one universal hammer for every nail. (We already have lots of other network architectures.)

      • CuriouslyC a year ago

        1. Deep Blue isn't an LLM. I don't care how well you train an LLM, it's not going to be more efficient than an optimally trained human, not even close. It's actually arrogant as hell to assume that we can achieve a higher level of energy efficiency than billions of years of evolution, particularly so early in the game.

        2. Chess is a closed, formal system with a finite and relatively small number of positions compared with the real world.

        • eru a year ago

          > It's actually arrogant as hell to assume that we can achieve a higher level of energy efficiency than billions of years of evolution, particularly so early in the game.

          You are right that LLMs are still far off from the performance of the human brain. Both in absolute terms, and also relative to the power used.

          However, I don't see anything arrogant here. We have lots of machines that can do many tasks more energy-efficiently (and better) than humans, both mechanical and intellectual tasks.

          • CuriouslyC a year ago

            It's not arrogance to think you can create a tool that does one thing the brain does better than the brain for less power. It's arrogance to think that you can do everything the brain does for less power. Living organisms have been relentlessly honed for the ability to efficiently solve varied problems across ~10^40 experiments over the age of the earth. If some marginally intelligent monkeys think they can build an error corrected, digital system that encompasses all of that functionality while using less power, I'd say that's obviously arrogance, particularly if it hasn't been the subject of a civilizational drive for a few millennia already.

            • eru a year ago

              > Living organisms have been relentlessly honed for the ability to efficiently solve varied problems across ~10^40 experiments over the age of the earth.

              Evolution has been optimising them for creating descendants, not general problem solving with minimum energy expenditure.

              No one expects that LLMs can solve all problems: they can't. They can only predict text, nothing else. They can't fight off a virus infection or evade a lion. Specifically, LLMs can't reproduce at all either, let alone efficiently. Reproduction is what evolution is all about.

              • CuriouslyC a year ago

                Life is optimized for _SURVIVAL_, which means being able to navigate the environment, find and utilize resources, and ensure that they continue to exist. Reproduction is just a strategy for that.

                LLMs are human thinking emulators. They're absolutely garbage compared to "system 1" thinking in humans, which is massively more efficient. They're more comparable to "system 2" human thought, but even there I doubt they're close to humans except for cases where the task involves a lot of mundane, repetitive work. Even for complex logic and problem-solving tasks, I'd be willing to bet that the average competitive mathematician is still an order of magnitude more efficient than a SoTA LLM at problems they could both solve.

                • eru a year ago

                  > LLMs are human thinking emulators.

                  They aren't. They are text predictors. Some people think verbally, and you could perhaps plausibly make your statement about them. But for the people who eg think in terms of pictures (or touch or music or something abstract), that's different.

                  > They're absolutely garbage compared to "system 1" thinking in humans, which is massively more efficient. They're more comparable to "system 2" human thought, but even there I doubt they're close to humans except for cases where the task involves a lot of mundane, repetitive work. Even for complex logic and problem-solving tasks, I'd be willing to bet that the average competitive mathematician is still an order of magnitude more efficient than a SoTA LLM at problems they could both solve.

                  LLMs are still in their infancy compared to where we will be soon. However, for me the amazing thing isn't that they can do a bit of mathematical reasoning (badly), but that they can do almost anything (badly), including reformulating your mathematical proof in the style of Chaucer or in Spanish, etc.

                  As for solving math problems: LLMs have read approximately every paper ever published, but are not very bright. They are like a very well-read intern. If anyone has ever solved something like your problem before (and many problems have been solved), you have an OK chance that the LLM will be able to help you.

                  If your problem is new, or you are just getting unlucky, current LLM are unlikely to help you.

                  But if you are in the former case, the LLM is most likely gonna be more efficient than the mathematician, especially if you compare costs: companies can charge very little for each inference, and still cover the cost of electricity and amortise training expenses.

                  A month of OpenAI paid access costs you about 20 dollars or so. You'd have to be a pretty clueless mathematician if 20 dollars an hour were your best money-making opportunity; 100+ dollars an hour is more common for mathematicians, working as e.g. actuaries or software engineers or quants. (Of course, mathematicians might not optimise for money, and might voluntarily go into low-paying jobs like teaching, or just laze about. But that's irrelevant for the comparison of opportunity costs.)

    • cynusx a year ago

      Depends on the person I guess, but yes. Humans are more accurate for now.

  • greenthrow a year ago

    The article is a bit of a stretch but this is even more of a stretch. Humans can do way more than an LLM, humans are never in only learning mode, our brains are always at least running our bodies as well, etc.

    • glenstein a year ago

      Exactly right - we are obviously not persistently in all-out training mode over the course of our lifetimes.

      I suppose they intended that as a back-of-the-envelope starting point rather than a strict claim however. But even so, gotta be accountable to your starting assumptions, and I think a lot changes when this one is reconsidered.

  • bufferoverflow a year ago

    Also, human brains come pre-trained by billions of years of evolution. They don't start as a randomly-connected structure. They already know how to breathe, how to swallow, how to learn new things.

  • bognition a year ago

    If we’re going to exclude the cortical areas associated with vision, you also need to exclude areas involved in motor control and planning. Those also account for a huge percent of the total brain volume.

    We probably need to exclude the cerebellum as well (which is 50% of the neurons in the brain) as it’s used for error correction in movement.

    Realistically you probably just need a few parts of the limbic system: the hippocampus, the amygdala, and a few of the deep-brain dopamine centers.

    • philipov a year ago

      A lot of our cognition is mapped to areas that are used for something else, so excluding areas simply because they are used for something else is not valid. They can still be used for higher-level cognition. For example, we use the same area of the brain to process the taste of disgusting food as we do for moral disgust.

  • pama a year ago

    Thanks. So after your corrected energy estimate and more reasonable assumptions, it appears that the clickbaity title of the article is off by more than 7 orders of magnitude. With the upcoming Nvidia inference chips later this year it will be off by another log unit. It is hard for biomatter to compete with electrons in silicon and copper.

  • mirekrusin a year ago

    Also you can't cp human brain.

    • vasco a year ago

      We can clone humans at the current level of technology; otherwise there wouldn't be agreements about not doing it because of the ethical implications. Of course it's just reproducing the initial hardware, not the memory contents or the changes in connections that happen at runtime.

      • mirekrusin a year ago

        Well, we know how to make kids, but then cp takes 20 years and rarely works.

    • marginalia_nu a year ago

      You kinda can do a sort of LoRA though. Reading the right book can not only change what you hold true, but how you think.

    • Rinzler89 a year ago

      The plot of The Matrix would beg to differ.

    • phantompeace a year ago

      Not yet, anyway.

  • freehorse a year ago

    > representations for most of the stuff on the internet

    Yes we have learnt far more complex stuff, ffs.

  • jryan49 a year ago

    How about the fact that LLMs don't work unless humans generate all that data in the first place? I'd say the LLM's energy usage is the amount it takes to train plus the amount it took to generate all that data. Humans are more efficient at learning with less data.

    • Closi a year ago

      Humans also learn from other humans (we stand on the shoulders of giants), so we would need to account for all the energy that has gone into generating all of human knowledge in the 'human' scenario too.

      i.e. not many humans invent calculus or relativity from scratch.

      I think OP's point stands - these comparisons end up being overly hand-wavey and very dependent on your assumptions and view.

      • jryan49 a year ago

        Yes I agree. The whole concept of trying to compare energy usage is incredibly complicated.

  • dist-epoch a year ago

    For every calorie a human consumes, hundreds or thousands more are used by external support systems.

    So yeah, you do use 2000 calories a day, but unless you live in an isolated jungle tribe, vast amounts of energy are consumed on delivering you food, climate control, electricity, water, education, protection, entertainment and so on.

    • b112 a year ago

      By that metric, the electricity is only part of it. The cost of building the hardware, the cost of building the roof and walls of the datacentre, the cost of clearing the land, the cost of the humans maintaining the hardware, the cost of all the labour behind the Linux kernel, libc6, etc, etc. Lots of additionals here too.

    • greenthrow a year ago

      Are you going to include all the externalities to build and power the datacenters behind LLMs then? Because I guarantee those far outweigh what it takes to feed one human.

    • unyttigfjelltol a year ago

      Including support from ChatGPT. It really is a comparison of calories without ChatGPT and calories with, and that gets to the real issue of whether ChatGPT justifies its energy intensity or not. History suggests we won't know until the technology exits the startup phase.

assimpleaspossi a year ago

I don't care.

I've come to the conclusion that gpt and gemini and all the others are nothing but conversational search engines. They can give me ideas or point me in the right direction but so do regular search engines.

I like the conversation ability but, in the end, I cannot trust their results and still have to research further to decide for myself if their results are valid.

  • wruza a year ago

    I’m a local LLM elite who stopped using chat mode whatsoever.

    I just go into the notebook tab (with an empty textarea) and start writing about a topic I’m interested in, then hit generate. It’s not a conversation, just an article in a passive form. The “chat” is just a protocol in the form of an article, with a system prompt at the top and “AI: …\nUser: …\n” afterwards, all wrapped in a chat UI.

    While the article is interesting, I just read it (it generates forever). When it goes sideways, I stop it and modify the text in a way that fits my needs, in a recent place or maybe earlier, and then hit generate again.

    I find this mode superior to complaining to a bot, since wrong info/direction doesn’t spoil the content. Also you don’t have to wait or interrupt, it’s just a single coherent flow that you can edit when necessary. Sometimes I stop it at “it’s important to remember …” and replace it with a short disclaimer like “We talked about safety already. Anyway, back to <topic>” and hit generate.

    Fundamentally, LLMs generate texts, not conversations. Conversations just happen to be texts. It’s something people forget / aren’t aware of behind these stupid chat interfaces.
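
    A minimal sketch of that raw-completion loop, assuming a llama.cpp-style local server with a /completion endpoint (the endpoint, fields and topic string are just illustrative; any local backend that does plain text completion works the same way):

        import requests

        # A growing "article" instead of a chat transcript; edit it by hand
        # whenever the generation goes sideways, then continue from there.
        article = "Approximate nearest neighbour search in plain terms\n\nHNSW indexes work by "

        def continue_article(text, n_tokens=200):
            # Assumes a llama.cpp-style server running locally; adapt the URL
            # and JSON fields to whatever backend is actually in use.
            resp = requests.post(
                "http://localhost:8080/completion",
                json={"prompt": text, "n_predict": n_tokens},
                timeout=120,
            )
            return resp.json()["content"]

        article += continue_article(article)
        # ...read it, trim or rewrite the tail by hand, then call continue_article again.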

  • davidmurdoch a year ago

    Just started using Gemini and it has never been correct. Literally not once. It's just slightly better than a Markov chain.

    • EForEndeavour a year ago

      Which model? And could you share an example of some of the things you've asked it and gotten wrong answers for?

      • davidmurdoch a year ago

        Gemini. I asked it to remember my name. It said it'd remember. My next question was asking it what my name was. It responded that it can't connect to my workspace account. It did this twice.

        I asked it what was in a picture. It was a blue stuffed animal. It described it as such. I asked it what kind of animal it thought it was supposed to be. It responded with "a clown fish because it has a black and white checkerboard pattern". It was an octopus (at least it got a sea creature?).

        I asked it for directions to the closest gas station. It wanted to take me to one over a mile away when there was one across the street. I asked why it didn't suggest the one nearest to me. It responded with "I assumed proximity was the primary criteria" and then apologized for calling me names (it didn't).

        This model is bonkers right now.

  • mjburgess a year ago

    One amusing way to put this is that LLMs' energy requirements aren't self-contained, since they use the energy of the human prompter to both prompt and verify the output.

    Reminds me of a similar argument about correctly pricing renewable power: since it isn't always-on (etc.), it requires a variety of alternative systems to augment it which aren't priced in. I.e., converting entirely to renewables isn't possible at the advertised price.

    In this sense, we cannot "convert entirely to LLMs" for our tasks, since there's still vast amounts of labour in prompt/verify/use/etc.

  • Kiro a year ago

    I can ask ChatGPT extremely specific programming questions and get working code solving it. This is not something I can do with a search engine.

    Another thing a search engine cannot do, and which I use ChatGPT for on a daily basis, is taking unstructured text and converting it into a specified JSON format.
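
    For the JSON case, a minimal sketch with the OpenAI Python client (the model name, field names and prompt are only illustrative; any JSON-mode-capable model works):

        import json
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        unstructured = "Call back Tuesday after 3pm, ask for Maria in billing about invoice 4417."

        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract contact_name, department, invoice_number and "
                            "callback_time from the user's text. Reply with JSON only."},
                {"role": "user", "content": unstructured},
            ],
        )

        print(json.loads(resp.choices[0].message.content))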

    • mirpa a year ago

      > I can ask ChatGPT extremely specific programming questions and get working code solving it.

      I can do the opposite.

      • EForEndeavour a year ago

        From my perspective, it's not useful to dwell on the fact that LLMs are often confidently wrong, or didn't nail a particular niche or edge-case question the first time, and to discount the entire model class. That's expecting too much. Of course LLMs frequently fail to solve a given problem. The same is true for any other problem-solving approach.

        The useful comparison is between how one would try to solve a problem before versus after the availability of LLM-powered tools. And in my experience, these tools represent a very effective alternative approach to sifting through docs or googling manually quote-enclosed phrases with site:stackoverflow.com that improves my ability to solve problems I care about.

  • AlienRobot a year ago

    I wish someone could explain Bing to me. If you search on Bing, the first result appears BELOW the ChatGPT auto-generated message, and this message takes 10 seconds to be "typed" out.

    I can click the first result 1 billion times faster.

    At this point it's just wasting people's time.

  • shinycode a year ago

    I agree. I rarely use Google now; I search in a chat to get a summary, and this saves a lot of aggregating from different sites. The same for Stack Overflow: no use for it if I find the answer quicker.

    It’s exactly that for me, a conversational search engine. And the article explains it right: it’s just words organized in very specific ways so they can be retrieved with statistical accuracy, and the transformer is the cherry on top that makes it coherent.

  • moffkalast a year ago

    Replace "gpt and gemini and all the others" with "people" and funny enough your statement is still perfectly accurate.

    You have a rough mathematical approximation of what's already a famously unreliable system. Expecting complete accuracy instead of about-rightness from it seems mad to me. And there are tons of applications where that's fine, otherwise our civilization wouldn't be here today at all.

    • somenameforme a year ago

      These anthropomorphizations are increasingly absurd. There's a difference between a human making a mistake, and an AI arbitrarily and completely confidently creating entirely new code APIs, legal cases, or whatever that have absolutely no basis in reality whatsoever, beyond being what it thinks would be an appropriate next token based on what you're searching for. These error modes are simply in no way, whatsoever, comparable.

      And then you tell it such an API/case/etc. doesn't exist. And it'll immediately acknowledge its mistake and assure you it will work to avoid such in the future. And then literally the next sentence in the conversation it's back to inventing the same nonsense again. This is not like a human, because even with the most idiotic human there's at least a general trend to move forward. LLMs are just coasting back and forth on their preexisting training, with absolutely zero ability to move forward until somebody gives them a new training set to coast back and forth on, and repeat.

      • naveen99 a year ago

        I don’t know. I have seen some humans who are very allergic to criticism of any kind, even constructive criticism, in the name of harmony…

      • moffkalast a year ago

        I mean I can definitely remember lots of cases for myself, in school especially, when I made the same mistake again repeatedly despite being corrected every time. I'm sure today's language models pale in comparison to your flawless genius, but you seriously underestimate the average person's idiocy.

        Agreed that the lack of some mid tier memory is definitely a huge problem, and the current solutions that try to address that are very lacking. I highly doubt we won't find one in the coming years though.

        • somenameforme a year ago

          It's not just this. LLMs can do nothing but predict the next token based on their training and current context window. You can try to do things like add 'fact databases' or whatever to stop them from saying so many absurd things, but the fact remains that the comparisons to human intelligence/learning remain completely inappropriate.

          I think the most interesting thought experiment is to imagine an LLM trained on state-of-the-art knowledge and technology at the dawn of humanity. Language didn't yet exist, "slash 'em with the sharp part" was cutting-edge tech, and there was no entirely clear path forward. Yet we somehow went from that to putting a man on the Moon in what was basically the blink of an eye.

          Yet the LLM? It's going to be stuck there basically unable to do anything, forever, until somebody gives it some new tokens to let it mix and match. Even if you tokenize the world to give it some sort of senses, it's going to be the exact same. Because no matter how much it tries to mix and match those tokens it's not going to be able to e.g. discover gravity.

          It's the same reason why there are almost undoubtedly endless revolutionary and existence-altering discoveries ahead of us. Yet LLMs trained on essentially the entire written corpus of human knowledge? All they can do is provide basic mixing and matching of everything we already know, leaving it essentially frozen in time. Like we are as well currently, but we will break out. While the LLM will only move forward once we tell it what the next set of tokens to mix and match are.

        • Jensson a year ago

          It lacks more than memory; it makes the mistake again later even when the previous prompt is within its current token limit.

          • moffkalast a year ago

            Sure, it happens. How often it happens really depends on so many factors though.

            For example, I have this setup where a model has some actions defined in its system prompt that it can output when appropriate to trigger actions. The interesting bit is that initially I was using OpenHermes-Mistral, which is famous for its extreme attention to the system prompt, and it almost never made any mistakes when calling the definitions. Later I swapped it for Llama 3, which is way smarter but isn't tuned to be nearly as attentive, and it far more often likes to make up alternatives that don't get fuzzy-matched properly. Someone anthropomorphizing it might say it lacks discipline.
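
            Roughly, the fuzzy-matching side of that kind of setup looks something like this (the action names and cutoff are made-up stand-ins for illustration, not my actual config):

                import difflib

                # Actions the system prompt tells the model it may emit.
                ACTIONS = ["lights_on", "lights_off", "play_music", "set_timer"]

                def match_action(model_output):
                    # Map whatever the model emitted to the closest defined action, if any.
                    token = model_output.strip().lower().replace(" ", "_")
                    hits = difflib.get_close_matches(token, ACTIONS, n=1, cutoff=0.7)
                    return hits[0] if hits else None

                print(match_action("Lights off"))       # -> "lights_off"
                print(match_action("dim the lights"))   # -> None: a made-up alternative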

  • seunosewa a year ago

    You should not dismiss all LLMs unless you have tried the best one. Gemini is not the best LLM. Try Meta AI, which is free, and ChatGPT premium first.

  • bamboozled a year ago

    As a user, it does feel like a search engine that contains an almost accurate snapshot of many potential results.

kvdveer a year ago

I feel the author is comparing an abstract representation of the brain to a mechanical representation of a computer. This is not a fair or useful comparison.

If a computer does not understand words, neither does your brain. While electromagnetic charge in the brain does not at all correspond with electromagnetic charge in a GPU, they do share an abstraction level, unlike words vs bits.

  • shkkmo a year ago

    Be careful with mixing up 'can' and 'does'.

    Computers right now do not understand language, but that does not mean that they cannot. We don't know what it takes to bridge the gap from stochastic parrot to understanding in computers, however from the mistakes LLMs make right now, it appears we have not found it yet.

    It is possible that silicon-based computer architecture cannot support the processing and information storage density/latency needed to support understanding. It's hard to gauge the likelihood that this is true given how little we know about how understanding works in the brain.

  • mati365 a year ago

    The brain translates words into a matrix of cortical-column neuron activations. So there are similarities to our naive implementation of such "thinking".

  • Synaesthesia a year ago

    No, only a brain can "think" and be original. A computer is limited to what we input to it. An "AI" simply recapitulates what it was trained on.

    • ben_w a year ago

      A brain is an electrochemical network made of cells; artificial neural networks are a toy model of these.

      Each neurone is itself a complex combination of chemical cycles; these can be, and have been, simulated.

      The most complex chemicals in biology are proteins; these can be directly simulated with great difficulty, and we've now got AI that have learned to predict them much faster than the direct simulations on a classical computer ever could.

      Those direct simulations are based on quantum mechanics, or at least computationally tractable approximations of it; QM is lots of linear algebra and either a random number generator or superdeterminism, either of which is still a thing a computer can do (even if the former requires a connection to a quantum-random source).

      The open question is not "can computers think?", but rather "how detailed does the simulation have to be in order for it to think?"

      • thfuran a year ago

        I think the real question is "How can we make a computer think without trying to fully simulate a brain?"

    • gizmo a year ago

      And what gives brains this unique power? Do brains of lesser animals also have this unique “thinking” property? Is this “thinking” a result of how the brain is architected out of atoms and if so why can’t other machines emulate it?

      Our brains are the product of the same dumb evolutionary process that made every other plant and animal and fungus and virus. We evolved from animals capable of only the most basic form of pattern recognition. Humans in the absence of education are not capable of even the most basic reasoning. It took us untold thousands of years to figure out that "try things and measure if it works" is a good way to learn about the world. An intelligent species would be able to figure things out by itself; our ancestors, who had the same brain architecture we do, were not able to figure anything out for generation after generation. So much for our ability to do original, independent thinking.

    • jeffhuys a year ago

      You’re holding on to a lost battle. We are biological computers. Maybe there’s something deeper behind it, like what some call a soul, but that’s hard to impossible to prove.

    • scotty79 a year ago

      Do a little exercise for me. Try to be as creative as you can be and imagine how a space alien might look.

      It's a combination of what you have already seen, read about or heard of, isn't it?

      • CuriouslyC a year ago

        There are a finite number of physical forms, and those forms are stable for different types of environments.

        That being said, you are assuming that something alien is from space, and that they would be something that could even be visually experienced.

      • Synaesthesia a year ago

        I’m not saying we don’t have limitations, we clearly do. There are limits to our intellectual capacity and creativity.

        ChatGPT can exceed humans in its store of knowledge. It is excellent at doing research. But it’s not thinking; it is merely selecting the most likely next words based on some algorithm.

        • scotty79 a year ago

          I wouldn't even give ChatGPT as much appreciation as you do. But I don't see it doing anything different from what human brains do. It's just still not very good at it.

          If it were up to me, I'd try to give it another representation than just words. I think those models should be trained to represent text as relationship graphs of objects. There's not much natural data like that, but it should be fairly easy to create vast amounts of synthetic data: text generated from relationship graphs. The model should be able to make the connection to natural language.

          Once models are taught this representation they might learn how the graphs transform during reasoning just by training on natural language reasoning.
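
          A toy sketch of what that synthetic data could look like: a tiny relationship graph rendered into the text it would be paired with (the relations and templates are invented for illustration):

              # Tiny relationship graph as (subject, relation, object) triples.
              graph = [
                  ("the cat", "sits_on", "the mat"),
                  ("the mat", "is_inside", "the kitchen"),
                  ("the cat", "belongs_to", "Alice"),
              ]

              # Templates turning each relation into a natural-language sentence.
              templates = {
                  "sits_on": "{s} is sitting on {o}.",
                  "is_inside": "{s} is inside {o}.",
                  "belongs_to": "{s} belongs to {o}.",
              }

              # One synthetic training pair: the graph plus the text generated from it.
              text = " ".join(templates[r].format(s=s, o=o) for s, r, o in graph)
              print({"graph": graph, "text": text})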

      • Jensson a year ago

        People come up with stuff like this: https://i.redd.it/oenn6vi61ag21.png

        Or this: https://preview.redd.it/finally-made-my-scientist-species-to...

        Humans are capable of thinking up and fleshing out novel concepts; current AI is not. Sure, your first attempt will greatly resemble current things, but as you iterate and get further and further away from existing things, what you make stops being an imitation and starts being its own thing. Current AI can't do that.

        Then, when you have an initial concept, you can start adding more similar things, and now you have built a whole new world or ecosystem. That is where all the wondrous things we have in our current images and stories come from. An AI that is to replace us must be able to achieve similar things.

        • scotty79 a year ago

          Thanks for providing examples of combinations of things already seen.

          • Jensson a year ago

            I was dumb to even try, you would just say basically "that is a combination of red green and blue dots in a new pattern, not really novel!" regardless what it was.

            The wealth of things you see around you doesn't exist in nature. Stick figures don't exist in nature; things in nature don't have black outlines, yet we draw them everywhere in cartoons etc. Humans have proven we can imagine many entirely novel things that don't exist in nature. And the creatures I posted have many aspects to them that are entirely unnatural; you clearly know that there are no animals like that, even without knowing about all animals, so clearly they are something novel and not just more of the same.

            Anyway, whenever you put yourself in a position where you can say "nuh uh, to me that isn't like that!" to everything, you are just tricking yourself when you do so.

            • scotty79 a year ago

              What is the reason you believe a computer wouldn't be able to make such a Spore alien, and that it is somehow a display of unique human creativity? There are games with procedurally generated animals glued together from parts exactly like that.

              Human imagination can only split, deform and glue. Computers are perfectly capable of doing that.

              • Jensson a year ago

                > There are games with procedurally generated animals glued together from parts exactly like that.

                With algorithms made by humans to make the composites reasonable. And yes, there are such games; I just posted screenshots of one, since people had a lot of freedom to make their own aliens there that don't look like what you normally expect.

                That game was made by humans coding in a lot of different kinds of movements for a lot of different kinds of shapes. Those shapes and movements don't exist in reality; they imagined something completely alien, made it, and made it able to move.

                > Humans imagination can only split, deform and glue. Computer are perfectly capable of doing that.

                Humans don't split, deform and glue randomly; they do it in interesting ways to build towards things that are totally different from the starting point.

                What current AI can't do is exactly that: build towards something novel. They just glue things together randomly, or compose them in similar ways to existing things. They aren't capable of iterating towards something novel and cool the way humans are today.

                For example, let's say a human sculpts an entirely new shape out of a leathery substance. That fits what I described above, and you would just say, "Oh, but that is just a known thing in a new shape, not creative, just using old things!!!" That is just a nonsense argument, and I'm not sure what you are trying to say with it. I assumed you had a reasonable definition that didn't include everything, but as it turns out you did include everything, making your whole argument void.

                • scotty79 a year ago

                  > With algorithms made by humans to make the composites reasonable

                  You definitely don't need a human for that. ChatGPT creates prose and poetry (let alone imagined aliens) that are reasonable composites.

                  > "Oh, but that is just a known thing in a new shape, not creative, just using old things!!!"

                  I'm not saying humans are not creative. I'm saying that's exactly what creation is: splitting, deforming and gluing known shapes. And AI does the same. I have no idea why you believe there's anything more to creativity than doing just that, to create something more or less accidentally interesting or appealing. And why only humans can create such things in this manner, despite clear evidence of AI-generated art being interesting and appealing to large numbers of people.

                  Sounds like a religious stance.

      • bamboozled a year ago

        I see it quite differently, we don't have to be so creative, we need to get better at appreciating what already exists. Think about how far out most sea creatures really are, a bluebottle, a starfish for example...

        Personally I think there is a bit of evidence in your comment that we don't really understand our minds or cognition very well.

        • CuriouslyC a year ago

          We unquestionably as a species do not understand our minds, and that will always be an unpopular opinion because people want certainty and control.

    • throwAGIway a year ago

      I have heard this exact sentence so many times already. Are you sure? I'd take a good look inside myself now if I were in your shoes.

    • exe34 a year ago

      that's incredible! how did you put the confetti back in the cannon?

      • LtWorf a year ago

        I wonder which American guy went to Italy, found out that there are almond candies called confetti, and thought: "I'll do the same in my country, but made out of paper instead!"

      • belter a year ago

        Clearly he/she...is a previous LLM version...it will get updates next month....

madsbuch a year ago

There is an immensely strong dogma that, to my best knowledge, is not founded in any science or philosophy:

        First we must lay down certain axioms (smart word for the common sense/ground rules we all agree upon and accept as true).
        
        One of such would be the fact that currently computers do not really understand words. ...
The author is at least honest about his assumptions, which I can appreciate. Most other people just have it as a latent thing.

For articles like this to be interesting, this cannot be accepted as an axiom. Its justification is what's interesting.

  • mensetmanusman a year ago

    It’s a reasonable axiom, because for many people understanding involves qualia. If you believe LLMs have qualia, you also believe a very large Excel sheet with the right numbers has an experience of consciousness and feels pain or something when the document is closed.

    • madsbuch a year ago

      As I wrote, I appreciate that the author wrote it out as they did. It might be reasonable in the context of the article. But fixing it as an axiom just makes the discussion boring (for me).

      > If you believe LLM have qualia, you also believe a ...

      You use the word believe twice here. I am actively not talking about beliefs.

      I just realised that the author indeed gave themselves an out:

      > ... currently computers do not really understand words.

      The author might believe that future computers can understand words. This is interesting. The questions being: _what_ would need to be the case for them to understand? Could that be an emergent feature of current architectures? That would also contradict large parts of the article.

  • shkkmo a year ago

    Amusingly, the author does not appear to fully understand the meaning of "axiom".

    While in practice axioms are often statements that we all agree on and accept as true, that isn't necessarily the case, and it isn't the core of the word's meaning.

    Axioms are something we postulate as true, without providing an argument for its truth, for the purposes of making an argument.

    In this case, the assertion isn't really used as part of an argument, but to bootstrap an explanation of how words are represented in LLMs.

    Edit: I find this so amusing because it is an example of learning a word without understanding it.

    • LtWorf a year ago

      > Axioms are something we postulate as true, without providing an argument for its truth, for the purposes of making an argument.

      Uhm… no?

      They are literally things that can't be proven but allow us to prove a lot of other things.

      • madsbuch a year ago

        It seems like you fully agree with the parent.

        I also agree that the author probably didn't mean to establish an axiom: the axiom being established, while not having any support right now, does seem like something we could reduce in the future. The author also uses the word "currently" in their axiom, which contradicts the idea of an axiom (or are temporal axioms a thing?).

        I think the author merely meant to establish the scene for the article. Something I truly appreciate.

      • shkkmo a year ago

        "unprovability" is not a property that it is necessary to prove to pick something as an axiom.

        There is generally a project to reduce axioms to the simplest and weakest forms required to make a proof. This does result in axioms that are unprovable, but it does not mean that unprovability is a necessary property of axioms.

  • matwood a year ago

    Yeah, for axioms like the above, my next question is: define 'understand'. Does my dog understand words when it completes specific actions because of what I say? I'm also learning a new language; do I understand a word when I attach a meaning to it (often a bunch of other words)? Turns out computers can do this pretty well.

    • southernplaces7 a year ago

      Oh please, enough with the semantics. It reminds me of a postmodernist asking me to define what "is" is. The LLM does not understand words in the way a human understands them, and that's obvious. Even the creators of LLMs implicitly take this as a given and would rarely openly say they think otherwise, no matter how strong the urge to create a more interesting narrative.

      Yes, we attach meaning to certain words based on previous experience, but we do so in the context of a conscious awareness of the world around us and our experiences within it. An LLM doesn't even have a notion of self, much less a mechanism for attaching meaning to words and phrases based on conscious reasoning.

      Computers can imitate understanding "pretty well" but they have nothing resembling a pretty good or bad or any kind of notion of comprehension about what they're saying.

  • logicallee a year ago

    It's the most incredible coincidence. Three million paying OpenAI customers spend $20 per month (compare: NetFlix standard: $15.49/month) thinking they're chatting with something in natural language that actually understands what they're saying, but it's just statistics and they're only getting high-probability responses without any understanding behind it! Can you imagine spending a full year showing up to talk to a brick wall that definitely doesn't understand a word you say? What are the chances of three million people doing that! It's the biggest fraud since Theranos!! We should make this illegal! OpenAI should put at the bottom of every one of the millions of responses it sends each day: "ChatGPT does not actually understand words. When it appears to show understanding, it's just a coincidence."

    You have kids talking to this thing asking it to teach them stuff without knowing that it doesn't understand shit! "How did you become a doctor?" "I was scammed. I asked ChatGPT to teach me how to make a doctor pepper at home and based on simple keyword matching it got me into medical school (based on the word doctor) and when I protested that I just want to make a doctor pepper it taught me how to make salsa (based on the word pepper)! Next thing you know I'm in medical school and it's answering all my organic chemistry questions, my grades are good, the salsa is delicious but dammit I still can't make my own doctor pepper. This thing is useless!

    /s

    • shkkmo a year ago

      Maps are useful, but they don't understand the geography they describe. LLMs are maps of semantic structures and as such, can absolutely be useful without having an understanding of that which they map.

      If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.

      • logicallee a year ago

        > If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.

        Got it, so an LLM only understands my words if it has full mastery of every new problem domain within a few thousand milliseconds of the first time the problem has been posed in the history of the world.

        Thanks for letting me know what it means to understand words, here I was thinking it meant translating them to the concepts the speaker intended.

        Neat party trick to have a perfect map of all semantic structures and use it to trick users to get what they want through simple natural-language conversation, all without understanding the language at all.

        • shkkmo a year ago

          > Got it, so an LLM only understands my words if it has full mastery of every new problem domain within a few thousand milliseconds of the first time the problem has been posed in the history of the world.

          That's not what I said. Please try to have a good faith discussion. Sarcastically misrepresenting what I said does not contribute to a healthy discussion.

          There have been plenty of examples of taking simple, easy problems, presenting them in a novel way that doesn't occur in the training material, and having the LLM get the answer wrong.

          • logicallee a year ago

            Sounds like you want the LLM to get the answer right in all simple, easy cases before you will say it understands words. I hate to break it to you, but people do not meet that standard either and misunderstand each other plenty. For three million paying customers, ChatGPT understands their questions well enough, and they are happy to pay more than for any other widespread Internet service for the chance to ask it questions in natural language, even though a free tier with generous usage is available.

            It is as though you said a dog couldn't really play chess if it plays legal moves all day every day from any position and for millions of people, but sometimes fails to see obvious mates in one in novel positions that never occur in the real world.

            You're entitled to your own standard of what it means to understand words but for millions of people it's doing great at it.

            • shkkmo a year ago

              > I hate to break it to you but people do not meet that standard either and misunderstand each other plenty

              Sure, and there are ways to tell when people don't understand the words they use.

              One of the ways to check how well people understand a word or concept is to ask them a question they haven't seen the answer to.

              It is the difference in performance on novel tasks that allows us to separate understanding from memorization in both people and computer models.

              The confusing thing here is that these LLMs are capable of memorization at a scale that makes the lack of understanding less immediately apparent.

              > You're entitled to your own standard of what it means to understand words but for millions of people it's doing great at it.

              It's not mine; the distinction I am drawing is widespread and common knowledge. You see it throughout education and pedagogy.

              > It is as though you said a dog couldn't really play chess if it plays legal moves all day every day from any position and for millions of people, but sometimes fails to see obvious mates in one in novel positions that never occur in the real world.

              While I would say chess engines can play chess, I would not say a chess engine understands chess. Conflating utility with understanding simply serves to erase an important distinction.

              I would say that LLMs can talk and listen, and perhaps even that they understand how people use language. Indeed, as you say, millions of people show this every day. I would however not say that LLMs understand what they are saying or hearing. The words are themselves meaningless to the LLM beyond their use in matching memorized patterns.

              Edit: Let me qualify my claims a little further. There may indeed be some words that are understood by some LLMs, but it seems pretty clear there are some important ones that aren't. Given the scale of memorized material, demonstrating understanding is hard, and assuming it is not safe either.

          • arolihas a year ago

            Some of us care about actual understanding and intelligence. Other people just want something useful that can mimic it well enough. I don't know why he feels the need to be an ass about it though.

      • crabmusket a year ago

        > Maps are useful, but they don't understand the geography they describe. LLMs are maps of semantic structures and as such, can absolutely be useful without having an understanding of that which they map.

        That's a really interesting analogy I've never heard before! That's going to stick in my head right alongside Simon Willison's "calculator for words".

    • madsbuch a year ago

      I am not sure where this comment fits as an answer to my comment.

      Firstly, please understand that I am not saying that LLMs (or ChatGPT) do understand.

      I am merely saying that we don't have any sound frameworks to assess it.

      For the rest of your rant: I definitely see that you don't derive any value from ChatGPT. As such, I really hope you are not paying for it or wasting your time on it. What other people decide to spend their money on is really their business. I don't think any normally functioning person expects that a real human is answering them when they use ChatGPT, so it is hardly a fraud.

lukan a year ago

I was expecting a trivial calculation comparing the energy demand of LLMs with the energy demand of the brain, and lots of blabla around it...

But it rather seems like a good general introduction to the field, aimed at beginners. Not sure if it gets everything right (the author clearly states he is not an expert and would like corrections where he is wrong), but it seems worth checking out if one is interested in understanding a bit about the magic behind it.

mordae a year ago

That's a whole lot of hand waving. Also, field effect transistors deal with potential, not current. Current consumption stems mostly from charging and discharging parasitic capacitance. Also, computers do not really process individual bits. They operate on whole words. Pun intended.
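For reference, the standard first-order expression for CMOS dynamic (switching) power makes that point: almost all of the draw comes from charging and discharging capacitance as nodes toggle. The symbols below are the usual textbook ones, not figures from the article:

    P_{\text{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f

where \alpha is the activity factor (the fraction of nodes switching per cycle), C the switched load plus parasitic capacitance, V_{DD} the supply voltage, and f the clock frequency.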

proneb1rd a year ago

Call me lazy, but I couldn’t get through the wall of text to learn what on earth a vector database is. Way too much effort is spent talking about binary and how ASCII works and whatnot; such basics that it feels like the article is for someone with zero knowledge about computers.

  • swyx a year ago

    Indeed. It's condescending and word-vomity. I would flag it except that it doesn't break any rules; it is just badly written. As the author acknowledges, it is a 4-hour stream-of-consciousness word dump. The title is clickbait relative to what it is: a vector DB review piece with a long preamble to puff himself up.

mihaic a year ago

Genuinely curious who upvoted this and why. The title is clickbait, the writing is long and rambling, and it seems to me like the author doesn't have a profound understanding of the concepts either, all just to recommend Qdrant as a vector database.

  • imabotbeep2937 a year ago

    The quality of posted articles here hasn't been very good lately.

    Clickholes get too many votes.

    • mihaic a year ago

      Yeah, it seems almost insulting that the author expects countless people to spend time reading their posts while they haven't spent much time editing and streamlining them, all with the excuse: "these are just my ramblings".

      To paraphrase Pascal: I will not excuse such a long letter, for you had the time to write a shorter one.

kingsleyopara a year ago

What often gets overlooked in these discussions is how much of the human brain is hardwired as a consequence of millions of years of evolution. Approximately 85% of human genes are used to encode the structure of the brain [0]. I find this particularly impressive when I consider how complex the rest of the body is. To relate this to LLMs, I'm tempted to think this is more like pre-training rather than straightforward model design.

[0] https://www.nature.com/articles/tp2015153

  • CuriouslyC a year ago

    Understand that the genes that encode the structure of the brain do a lot of other things as well.

tromp a year ago

> run on the equivalent of 24 Watts of power per hour. In comparison GPT-4 hardware requires SWATHES of data-centre space and an estimated 7.5 MW per hour.

Power per hour makes no sense, since power is already energy (in joules) per unit of time (seconds).
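For concreteness, reading the article's 24 W as a sustained power draw (a plausible interpretation, not something the article states), the energy used over an hour is just power times time:

    E = P \cdot t = 24\ \text{W} \times 3600\ \text{s} = 86.4\ \text{kJ} = 24\ \text{Wh}

A figure in "watts per hour" would instead describe how quickly the power draw itself changes, which is presumably not what was meant.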

  • gus_massa a year ago

    I agree.

    But it also compares one human with the whole of GPT-4. It's like comparing a lemonade stand with Coca-Cola Inc.

lll-o-lll a year ago

Maybe, but I bet GPT-4 can spell million.

tonyoconnell a year ago

The performance issues with pgvector were fixed when they switched to HNSW; it’s now 30x faster. It’s wonderful to be able to store vectors with Postgres Row Level Security: if someone uploads a document, for example, you can create a policy so that it appears only to them in a vector search.
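A minimal sketch of that pattern, assuming a psycopg connection and a hypothetical documents table (the schema, index, and policy names here are illustrative assumptions, not taken from the comment or the article):

    import psycopg  # assumed Postgres driver; the pgvector extension must be installed server-side

    # One-time DDL (e.g. run in a migration): table, HNSW index, and RLS policy.
    SETUP_SQL = """
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE IF NOT EXISTS documents (
        id        bigserial PRIMARY KEY,
        owner_id  uuid NOT NULL,
        content   text,
        embedding vector(1536)
    );

    -- HNSW index (pgvector >= 0.5) for approximate nearest-neighbour search.
    CREATE INDEX IF NOT EXISTS documents_embedding_hnsw
        ON documents USING hnsw (embedding vector_cosine_ops);

    -- Row Level Security: a row is only visible to the user who owns it.
    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
    CREATE POLICY documents_owner_only ON documents
        USING (owner_id = current_setting('app.current_user_id')::uuid);
    """

    def search(conn, user_id: str, query_embedding: list[float], k: int = 5):
        """Nearest-neighbour search; RLS silently filters out other users' documents."""
        vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
        with conn.cursor() as cur:
            # Tell Postgres who is asking, so the policy can check ownership.
            cur.execute("SELECT set_config('app.current_user_id', %s, false)", (user_id,))
            cur.execute(
                "SELECT id, content FROM documents "
                "ORDER BY embedding <=> %s::vector LIMIT %s",
                (vec, k),
            )
            return cur.fetchall()

One caveat: Postgres does not apply RLS policies to the table owner or to superusers unless the table is marked FORCE ROW LEVEL SECURITY, so the application should connect as an ordinary role for the policy to take effect.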

Reason077 a year ago

I guess this explains why the machines in The Matrix went to so much effort to create the matrix and “farm” humans for their brain energy.

It’s just so much more efficient than running their AI control software on silicon-based hardware!

  • bamboozled a year ago

    In The Matrix I think people are used as batteries, not processors.

    • Reason077 a year ago

      That explanation never made any sense to me. Plenty of much easier ways for the machines to generate vastly more energy with far less hassle than using humans as “batteries”. There must be more to it than that!

      • Jensson a year ago

        The original idea wasn't batteries, so they probably started out with humans as CPUs but then went with batteries to make it easier for people to understand.

        • LtWorf a year ago

          In Dollhouse they put people through nightmare scenarios repeatedly, to make their brains evaluate scenarios.

      • bamboozled a year ago

        It's a movie.

joehogans a year ago

Neuromorphic chips represent the future because they mimic the brain's neural architecture, leading to significantly higher energy efficiency and parallel processing capabilities. These chips excel in pattern recognition and adaptive learning, making them ideal for complex AI tasks. Their potential to drastically reduce power consumption while enhancing computational performance makes them a pivotal advancement in hardware technology.

cjk2 a year ago

I think GPT-4 is way more than 3 million times more efficient than my brain. All it does is a lot of multiplication and adding and my brain is crap at that.

  • ben_w a year ago

    Just because GPT-4 uses matrix multiplication doesn't mean it can perform matrix multiplication — lots of people complain how bad LLMs are at arithmetic.

    My brain uses quantum mechanics for protein folding, my mind cannot perform the maths of QM.

    • cjk2 a year ago

      Surely it can, just slowly and with poor accuracy :)

  • makingstuffs a year ago

    Your conscious brain, maybe; your subconscious brain, no chance. The maths that goes into something as seemingly simple as picking up a glass is far beyond the reach of GPT. Hell, it’s so complex that the world’s top robotics labs burn through immense resources just to get some jittery arm to replicate the action.

    • cjk2 a year ago

      It’s not really mathematics though. That’s an abstract concept which is my point.

SubiculumCode a year ago

I kept waiting for the 'milion' in the headline to be part of the explanation somehow.

I guess it was misspelling rather than an allusion to the Roman stone pillars for distance measurement https://en.m.wikipedia.org/wiki/Milion

shinycode a year ago

If some day AGI happens and can exist on its own, wouldn’t that prove that intelligence is a base requirement for intelligence to happen in the first place? AGI can’t happen on its own; it needs our intelligence first to help it structure itself.

  • halayli a year ago

    No, that just proves that intelligence can create another intelligence. It does not rule out that intelligence can exist due to entropy.

  • raincole a year ago

    > If some day AGI happens and can exist on its own, wouldn’t that prove that intelligence is a base requirement for intelligence to happen in the first place?

    No, it would not.

  • wegfawefgawefg a year ago

    It would not prove that. It would be one observed example of a new intelligence which was created by an existing one.

  • avereveard a year ago

    Only if we talk about trained intelligence. Likely the requirements for evolved intelligence are different and involve being embodied, hedonistic, and under the pressure of a selection mechanism.

    • shinycode a year ago

      If a spontaneous intelligence is 3 million times more efficient than one that took millions of hours of work from our brains (so much effort, in fact, that arguably more concentrated work has gone into AI than into evolution, which spread its changes thinly over time), then either we have to accept that AI will never be of the same nature as human intelligence and can't compete with it, or it is of the same nature, as some people on HN say. For me that raises the question of whether intelligence is needed for another intelligence to appear, because we have no other record of something this complex and intelligent ever emerging. The only thing some of us consider as intelligent as us, if not more so, could only emerge through tremendous effort, structure, and will on our part (that is, from our intelligence).

      • avereveard a year ago

        Why? Nothing in the "if" supports the conclusion. Also, we did not emerge because of ourselves, so our intelligence cannot be postulated as unique on that basis. Absence of history is no proof either way, and we do have a history of intelligence in other phyla here on Earth.

        • shinycode a year ago

          I put my comment into claude.ai and, after refining the discussion with the other comments, Claude reached this conclusion:

          « In conclusion, describing current AI systems as "intelligent" is indeed debatable. They are more accurately described as highly advanced information processing and content generation systems based on statistical models. The term "artificial intelligence" could be considered more of a marketing term or an aspirational goal rather than an accurate description of the current state of technology. »

          The subterfuge, or rather the advanced method of information processing, that does the magic and makes the debate possible is the transformer.

          So the whole debate probably doesn't make sense in the first place: we can't even define intelligence precisely in this context, we look at it through a prism, and we compare things that can't and shouldn't be compared.

          • avereveard a year ago

            You entered your bias into a statistical parrot and received your bias back.

            • shinycode a year ago

              There you close the debate. It's a statistical parrot, so no need to discuss the emergence of anything or compare it to anything.

              • avereveard a year ago

                The absence of general intelligence in this iteration of the technology isn't a general principle about AGI.

                • shinycode a year ago

                  Absolutely. We’ll see what’s going on when AGI comes up. Maybe I won’t be alive by then, so no need to stress right now.

  • dist-epoch a year ago

    Our intelligence happened without an existing one: primordial cell -> intelligent human, through Darwinian evolution.

  • orlp a year ago

    Ah, the good old chicken and chicken paradox. Which came first, the chicken, or the chicken?

  • exe34 a year ago

    Do you think the same thing is required for flying? That aeroplanes can only be created by birds?

    • shinycode a year ago

      Intelligence and flying are different things; a leaf falling from a tree « flies » because of the laws of nature.

      • exe34 a year ago

        Beautiful analogy! Human intelligence is an extreme on the spectrum of animal intelligence, and evolution by natural selection is the law of nature that made it happen.

EncomLab a year ago

It's always going to be difficult to compare a carbon-based, ion-mediated, indirectly connected, reconfigurable network of neurons to a silicon-based, voltage-mediated, directly connected, fixed-configuration network of transistors.

The analogy works, but not very far.

southernplaces7 a year ago

Some of the comparisons here in the comments between LLMs and the human brain go into the territory of deep navel-gazing and abstract justification. To use a phrase mentioned elsewhere in the thread, paraphrasing Sagan: "You can make an apple pie from scratch, but you'd have to invent the universe first". Sure, to the deepest level this may be somewhat true, but the apple pie would still just be an apple pie, and not a condensed version of all that the universe contains.

The same applies to LLMs in a way. If you calculate their capabilities to some arbitrary extreme of back-end inputs and ability based on the humans building them and all that they can do, you can arrive at a whole range of results for how capable and energy-efficient they are, but it wouldn't change the fact that the human brain, as its own device, does enormously more with much less energy than any LLM currently in existence. Our evolutionary path to that ability is secondary to it, since it's not a direct part of the brain's material resources in any given context.

The contortions by some to draw an equivalence between human brains and LLMs are absurd when the blatantly obvious reality is that our brains are vastly more powerful. They're also of course capable of self-directed, self-aware cognition, which by now nobody in their rational mind should be ascribing to any LLM.

cainxinth a year ago

Bicycles are much more efficient than trucks, but try using one to move a sofa…

richrichie a year ago

> Computers do not understand words, they operate on binary language, which is just 1s and 0s, so numbers.

That’s a bit like saying human brains do not understand words. They operate on calcium and sodium ion transport.

TheDong a year ago

The vector db comparison is written so much like an advertisement that I cannot possibly take it seriously.

> Shared slack channel if problems arise? There you go. You wanna learn more? Sure, here are the resources. Workshops? Possible.

> wins by far [...] most importantly community plus the company values.

Like, talking about "You can pay the company for workshops" and "company values" just makes it feel so much like an unsubtle paid-for ad I can't take it seriously.

All the actual details around the vector DB (for example a single actual performance number, or a clear description of the size of the dataset or problem) are missing, making this all feel like a very handwavy comparison, and the final conclusion is so strong, and worded in such a strange way, that it feels disingenuous.

I have no way to know whether this post is actually genuine and not a piece of stealth advertising, but it sets off so many alarm bells in my head that I can't help but ignore its conclusions about every database.

redka a year ago

Seems like the title here on HN is bait testing for people not reading the article, and most of you failed. I came here to see what people have to say about his vector DB comparisons.

chx a year ago

They are not comparable. There's a prevalent metaphor which imagines the brain as a digital computer. However, this is a metaphor and not actual fact. While we have some good ideas about how the brain works at higher levels (recommended reading: Incognito: The Secret Lives of the Brain by David Eagleman), we do not really have any ideas about the lower levels. As the essay I link below mentions, for example, when attending a concert our brain changes so that it can later remember it, but two brains attending the same concert will not change in the same way. This makes modelling the brain really damn tricky.

This complete lack of understanding is also why it's completely laughable to think we can do AGI any time soon. Or perhaps ever? The reason for the AI winter cycle is the framing of it, this insane chase of AGI when it's not even defined properly. Instead, we should set out tasks to solve: we didn't make a better horse when we made cars and locomotives, and no one complains that these do not provide us with milk to ferment into kumis. The goal was to move faster, not a better horse...

https://aeon.co/essays/your-brain-does-not-process-informati...

xqcgrek2 a year ago

The caloric need of a monkey typing, or a cat, is much lower than even a human's.

But it doesn't mean the results are good.

  • exe34 a year ago

    Cats are wiser than a lot of people. Heck, people think they're more intelligent than dolphins because they invented taxes and built New York while dolphins just hang out all day doing nothing, and dolphins think they are more intelligent for the same reason.

  • Synaesthesia a year ago

    Yeah, because humans are really special. Monkeys and cats can still solve physical problems that are quite complex, though, and make decisions.

asah a year ago

FTFY: ONLY 3 million times.

At the current pace of development, AI will catch up in a decade or less.

  • mikae1 a year ago

    How does that math work out? The developments during the last year have been... abysmal? The hype and marketing bull is increasing exponentially, though.

    • exitb a year ago

      Groq, which appeared 4 months ago, was an abysmal development for efficiency?

    • ben_w a year ago

      Look at the price difference of tokens on their API between the first release of ChatGPT and the current one.

      • Current 3.5-family price is $1.5/million tokens.

      • The models it replaced were around $20/million tokens, based on this quote from the 3.5 API launch: "Developers will pay $0.002 for 1,000 tokens — which amounts to about 750 words — making it 10 times cheaper" - https://web.archive.org/web/20230307060648/https://digiday.c... ($0.002 per 1,000 tokens works out to $2/million, described as 10 times cheaper than what came before, i.e. roughly $20/million.)

      (I can't find the original 3.5 API prices even on archive.org, only the Davinci etc. prices; the Davinci model prices were indeed $20/million.)

      There's also the observation that computers continue to get more power efficient. It's not as fast as Moore's Law was, but roughly a doubling every 2.6 years, which works out to about a thousand-fold every 26 years, or about 30% per year.
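      As a rough back-of-the-envelope check on those rates (using only the figures quoted in this comment, not official data), a quick sketch:

          doubling_period_years = 2.6                        # efficiency doubling period quoted above
          per_year = 2 ** (1 / doubling_period_years)        # ~1.31x per year, i.e. ~30% annually
          over_26_years = 2 ** (26 / doubling_period_years)  # ~1000x over 26 years

          price_then, price_now = 20.0, 1.5                  # $/million tokens, as quoted above
          price_drop = price_then / price_now                # ~13x cheaper

          print(f"~{(per_year - 1) * 100:.0f}% per year, "
                f"~{over_26_years:.0f}x over 26 years, "
                f"~{price_drop:.0f}x cheaper tokens")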

    • LtWorf a year ago

      > How does that math work out?

      He asked chatgpt to do the math.

  • badgersnake a year ago

    And they pretty much made up a number. It’s a pretty clickbaity headline for an article that is mostly about vector databases.

mati365 a year ago

Mine is not.
