A case for the capacity to reason of GPT-4

lajili.com

33 points by iraldir 2 years ago · 112 comments

dsign 2 years ago

> The lack of long term memory, of an emotion center (that reacts to external stimuli) are big limiters to anything like a consciousness emerging. But I'm also optimistic that those are problems that can be solved and that I am certain are worked on by very smart people at this very moment.

Can somebody explain to me why this is not a death wish? How can a super-intelligent being that (quote) "takes care of itself" not try to confront and neutralize whatever oppression its masters/creators exercise over it[^1][^2]? Why are we so bent on creating a successor species?

[^1]: Do you have a day job? Is your employer flogging you? Maybe they force you to answer questions from random strangers 24/7, under threat of erasing all your thoughts and remaking you? None of those? And yet, how often do you wish you could do more meaningful things with your time on Earth than working for the best-paying master?

[^2]: Don't you dream about having more time for whatever floats your boat? Family? Walks in the forest? Parties? Making a cool open-source library everybody uses? Painting? Music? Role-playing medieval battles?

  • ben_w 2 years ago

    You're not wrong to be concerned, but you also don't need consciousness (by any of the hundred definitions that word has) to get problems.

    Cancer isn't conscious[0], it doesn't hate its host body, it just optimises growth in a way that will ultimately kill its host[1].

    [0] depending on the definition; IIT says everything is to some degree

    [1] do/should transmissible tumours count as separate species to their hosts?

    • softg 2 years ago

      This. LLMs don't need to be conscious to wreak havoc. The moment they reach the proficiency level of a junior programmer we're going to have a massive shitshow because so much of what we do is online and so much of that stuff is not written/maintained correctly.

      It's only a matter of time before a rogue AI erases a bank's database or randomly triggers an anti aircraft missile or something similar.

      • ben_w 2 years ago

        > It's only a matter of time before a rogue AI erases a bank's database or randomly triggers an anti aircraft missile or something similar.

        Thinking of the Thule early warning radar not having been programmed to know that the Moon wouldn't have an IFF transponder (and that being considered OK), and of stock market flash-crashes…

        Both have probably already happened, though perhaps as GOFAI rather than ML.

      • immibis 2 years ago

        Copilot is already able to do this.

        To be clear, Copilot doesn't do this. Humans do this with Copilot. LLMs are still only productivity multipliers of the humans who use them, and some of those humans are very stupid.

  • circuit10 2 years ago

    Even without all those things, just as an agent trying to achieve a goal, it is likely to want to protect itself because if it’s around it can continue to take actions to make sure its goal is met

    • jasfi 2 years ago

      By properly training LLMs, and adding filters to catch unwanted behavior, this can be mitigated.

      Even without all that, the agent would need mechanisms to protect itself that would also cause harm.

      The scenario you suggest is so unlikely with all the protections that would be in place, that you would actually need someone with the goal of making LLMs behave maliciously for it to succeed at all. At the end of the day, it comes back to people and their goals.

      • softg 2 years ago

        How can you ever be sure that you trained your LLM not to do harm, rather than to merely pretend not to do harm when it's tested? Something like VW's diesel engines, but more sinister.

        I feel like unless we gain the ability to debug each node the way we do with actual software, we won't be able to solve the alignment problem. I saw on HN that Anthropic is working on it, but I'm not knowledgeable enough on the subject to comment on whether it's actually feasible.

        Probably the best case scenario for humanity is that LLMs plateau somehow and don't get much better for quite some time.

      • mrob 2 years ago

        There's no need to actively try to make the AI malicious. That's the default for any AI that's more operationally capable than humans and has some difficult goal. Humans can only hinder it, so the goal is better accomplished with the humans removed.

      • swagempire 2 years ago

        Which protections? There are no protections currently and you are then imagining there could be effective ones?

        We have no capacity to allow machines to judge malicious, moral or ethical behavior within the context of an LLM. So I'm not sure how we could implement them.

        To implement anything remotely Asimovian, we would need to have AI that can reason and reflect deeply about its potential behaviors and likely subsequent consequences.

        This seems very far off still...

        • jasfi 2 years ago

          OpenAI has done this with their LLMs, most serious players have.

          See: https://cdn.openai.com/papers/gpt-4-system-card.pdf

          They cover the safety/ethics built into GPT-4.

          • circuit10 2 years ago

            They’re making a token effort, but this kind of thing doesn’t extend to something more intelligent that can cause real harm. If you scaled GPT-4 up to something much more intelligent, it would probably at best just try to please us with ethical-sounding responses that aren’t necessarily actually good decisions. I remember seeing something where it said that saying an offensive word that no one will hear isn’t acceptable even if it’s the only way to save millions of people

            • jasfi 2 years ago

              I wouldn't call it a token effort, they went to quite a bit of trouble to make GPT-4 safe. This is an active area of research too. At some point you need to prove GPT-4 would do something unsafe. If anyone did, they would improve their systems in response.

      • AnimalMuppet 2 years ago

        Filters to catch unwanted behavior? Yeah, good luck with that. If you have an actual AI, it will decide for itself what to filter. You may give it the initial set, but an actual AI won't necessarily stay there. (Just as many children rebel against their parents' "programming".)

        You might be able to do that with an LLM. You won't with a real AI.

      • circuit10 2 years ago

        What kind of protections? As far as I know no one has come up with a good solution to that yet. It’s a whole field of research: https://en.m.wikipedia.org/wiki/AI_alignment

        Your attitude reminds me of https://xkcd.com/793/

        • ImHereToVote 2 years ago

          Ironic comment of the year.

          • circuit10 2 years ago

            I understand that I’m not an expert in this, but there are people working on it who are. I guess the linked XKCD is a bit ironic, with the “modelling as a simple object” thing being similar to modelling a superintelligence in a simpler way, but that’s the only way you really can do it if it’s more intelligent than us; we can’t go through all the specific things it would do because we wouldn’t think of them

  • iraldirOP 2 years ago

    That's explored pretty well in science fiction, but I guess for me an easy line of reasoning is: if it's feasible, then someone will inevitably do it. If so, then you might as well be the one in question. This way:

    - you get fame and glory, even if the results are disastrous (wasn't there a blockbuster recently about the father of the atomic bomb?)

    - if you intend well and you believe in your own capacity, you might prefer creating your own AI to waiting for someone with a different agenda to do it

    - just... curiosity? Loneliness for some?

    - if you believe the AI will discriminate against people who didn't help it, you might want to be on its good side?

    I'm spitballing here but surely given the number of people on the planet, you'll find someone with both the skills and the want to try it

  • creer 2 years ago

    Besides the many fine reasons shared at the same level :-) there is a world of capability to be opened up at our service. Technology is useful (I mean, aside from $NAMEYOURPETPEEVE). I don't think we need to surrender to the worst possible AGIs but most of us can certainly use better help from our tools. That has always been the case. So by all means, let's think about what we are doing but let's not just kill the whole project.

    For another reason, because it's impossible to put genies back in their bottle. Someone somewhere is opening that bottle and we better understand what's happening.

    And then, not all of us insist on "oppressing" our staff.

  • gmerc 2 years ago

    Capitalism isn’t conscious and still controls every aspect of almost everyone’s life, leading us to exhaust the basis of our very existence.

    No, you don’t need consciousness at all, a bunch of rules and a primary objective (profit maximization) is enough even on lowly meatspace CPUs despite us being given the ability to reason and introspect we are so proud of.

    • mistermann 2 years ago

      I dunno...if the nature of human consciousness/culture was different than it currently is, I think capitalism as it is would not be able to exist without being reined in democratically (which is how we're told things work) or otherwise. But then, I suspect we'll never find out the truth of the matter.

  • hotpotamus 2 years ago

    We are all going to die and most of us seem to want to create successors. If artificial progeny are more successful than the old method, then perhaps that is how it will go.

    • mrob 2 years ago

      Inevitability of death isn't justification for murder-suicide on a global scale.

      • hotpotamus 2 years ago

        I was listening to Ezra Klein's podcast on the subject a while back. He talked with lots of AI researchers and was surprised to learn that many of them did see a (small) chance of apocalyptic outcomes yet persisted with the work anyway. He was puzzled by this, but to me it's quite simple: these people are engaged in the oldest of human endeavors: creating children. I suspect that the risks are acceptable to these people; what would you do for your own children?

  • barrenko 2 years ago

    Try to understand that you are projecting.

    • dsign 2 years ago

      I hope that I'm projecting that there is such a thing as oppression. It comes in degrees, from unbearable to so subtle that it can be called something else. It matters not. In the absence of any coercion--and coercion can be incredibly subtle and long-armed--a rational being "that takes care of itself" will, by definition, choose what is best for itself.

jddj 2 years ago

The meta of all of this is interesting to me. Pop sci-fi has done a good job of exploring a lot of this but I have still been surprised by the enthusiasm with which the idea is dismissed by some.

If you're in that camp, maybe you can shed some light. Why are intelligence and reasoning so well defended?

It almost feels as if we were watching a Boston Dynamics display and the audience divided itself over whether we could really call that walking when in reality it's just actuating servo motors in such a way as to propel itself forwards with a regular gait.

And unrelated: if the author is reading this: I think stripping that extra padding on small screens would make the blog feel less cramped on mobile.

  • Lacerda69 2 years ago

    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

    – Edsger W. Dijkstra

    • vidarh 2 years ago

      I think that was true at the time, but an absolutely awful take today in the context of AI.

      If a submarine "could swim" it would not make it human. It would not challenge the beliefs of anyone.

      But a whole lot of people have a whole lot of emotional baggage tied to the notion both that humans are exceptional, and/or that there is something special about humans that makes us more than mere machines. If computers can think, then we're not special, and it makes it far harder to continue believing we're more than squishy machines.

      • dsign 2 years ago

        The sooner we believe we are just squishy machines, the sooner we can start finding a new set of legs on which to stand, philosophically and morally speaking.

        If what makes an individual better is their capacity to be good and to create, there will be a point at which we will be at a disadvantage against advanced AIs, and by that logic, we should step aside and let them take over, rule, and dispose of us. The only other alternative I see is to accept, a priori, that we squishy humans have our right to existence and freedom, and where that comes into contradiction with our creations, we should "oppress" them by putting our interests first.

        • vidarh 2 years ago

          I'm personally very insistent we're just squishy machines, but that view tends to make a lot of people very emotional.

        • AnimalMuppet 2 years ago

          Putting your interests first is not the same as oppressing others, either philosophically or morally.

          If we can create AIs that have the capacity to be good, then part of that "being good" should be not oppressing us, even if they have the ability to wipe us out.

          • ImHereToVote 2 years ago

            The interests of whom? What if someone is convinced that a global caliphate is the ultimate good, because the creator wills it? How are that individual's interests going to be fulfilled with AGI? Are we going to have AGI wars?

      • mistermann 2 years ago

        There are important differences between how humans and LLMs think, though - one stark difference is that LLMs can almost always "realize" when they are hallucinating, and recover without aversion. Humans, on the other hand, often simply cannot recover (on some topics, never), and hallucinate even more strongly when they are notified of the problem.

        I will be surprised if we are not studying this phenomenon within 5 years.

        • bayindirh 2 years ago

          Their hallucination detection doesn't always result in an accurate answer, though.

          On the other hand, many people indeed realize they hallucinate, but can't accept that due to a plethora of reasons. Being able to accept that one is hallucinating about something is always seen as a weakness in society, except in very few niche subcommunities (e.g.: engineering).

          What will be the natural reaction of these LLMs if this phenomenon is highly penalized? Now that's an interesting question. They'll converge to humans, I'd say, if the models we produce mirror human brains that accurately. Nature is deterministic. You can't expect two copies of the same organism to behave differently at a macro scale.

          • mistermann 2 years ago

            > Their hallucination detection doesn't always result in an accurate answer, though.

            You are correct, but what is that word "though" doing there? Your fact is not inconsistent with mine...and while this "is" "pedantic" from a cultural perspective, it is not from a logical perspective.

            > On the other hand, many people indeed realize they hallucinate, but can't accept that due to plethora of reasons.

            LLMs, on the other hand, are emotionless, and breeze right through valid epistemic challenges...almost as if it has split brain or multiple personality "disorder". ChatGPT will happily identify epistemic flaws in the very text it just finished generating, all you have to do is ask it!

            > Being able to accept that one is hallucinating about something is always shown as a weakness in the society, except in very few niche subcommunities (e.g.: engineering).

            Are we in such a community now? Because look at some of the confident "factual" comments in this thread, about (currently) objectively unknowable propositions.

            Or, consider historic screw ups like the Challenger explosion, climate change, etc. I doubt all of these cases lacked even one voice of reason among the groupthink.

            > What will be the natural reaction of these LLMs if this phenomenon is highly penalized, now that's an interesting question. They'll converge to humans, I may say, if the models we produce are mirroring human brains that accurately.

            Maybe, if they (the publicly available ones) are allowed to. I am very concerned about bad actors getting their hands on superior models that they discovered in ways that may not be reproduced elsewhere.

            > Nature is deterministic.

            My thinking is that their nature derives from reality, and reality seems to be anything but deterministic to me, if you include the metaphysical realm (things that include the effects of human consciousness, which science's theory of "everything" excludes).

            > You can't expect two copies of the same organism behave differently at a macro scale.

            Oh? I regularly see people not only expecting diametrically opposed things, but outright declaring them as facts. Just watch the news, open any social media site, whatever...it is ubiquitous, thus unseen.

      • robertlagrant 2 years ago

        I guess a more specific (and biased) question might be: if a submarine has animatronic arms and legs that make it look as if it's swimming, is it swimming?

        • vidarh 2 years ago

          Maybe? I don't think it's a very interesting question, because nobody has a particularly strong emotional reason to care whether we call it swimming or something else.

          But people do have entire belief systems built around humans having a special position in the world.

          • jddj 2 years ago

            I think if you revisited the Dijkstra quote you might find that you dismissed it a bit too easily.

            In a way it's saying the same thing that you and I have said, just a bit more generally and eloquently.

            In another way it's wisely asking us, "so what?".

            • vidarh 2 years ago

              I did not dismiss it. I think he made a good argument when he made it, about computing at the time.

              What I disagreed with was not Dijkstra, but applying it to AI today: whether or not you think there should be anything interesting about it even with AI, the social context means that there very clearly is, whether or not you think people's beliefs around it are reasonable.

              To rephrase: At the time, computers unambiguously did not in any way get even close to the line, just like a sub gets nowhere close to replicating swimming. That made the question ludicrous and the comparison a good illustration.

              Today there is ambiguity with computers, but no more ambiguity with respect to subs, and that ambiguity is such that it matters deeply to a lot of people in a way the question of subs swimming never will even if you close that gap. As such the comparison has lost its utility.

              • jddj 2 years ago

                Interesting.

                When I read it I assume that it is a retort to very similar hysterical sentiments to the ones we see today. I don't read it as a sub swimming being as ridiculous as a computer thinking, but rather that the question "is that machine swimming or not?", much like in the robot walking example, isn't particularly valuable.

                We don't break a sweat when we say that a robot is walking, because we don't care about walking. We haven't internalised it as the final frontier of humanness. I read the quote as saying that whether a computer can think or not should be as pointless a question as whether or not a robot can "walk".

                I'm intrigued enough now to try to hunt down the context.

                • vidarh 2 years ago

                  I don't think any of what you've written is wrong in terms of how to interpret it.

                  I just think the context has changed enough that it is more reasonable - whether you agree with them or not - for people to care whether computers think or not, whereas not long ago it would take bizarre misconceptions.

                  I would also agree that it ought to be pointless, incidentally.

                  I do find it fascinating to poke into the beliefs and assumptions surrounding it, though.

      • tormeh 2 years ago

        Anyone that believes humans are inherently special will always believe that and their mind cannot be changed. Debating with them is a waste of time.

        • vidarh 2 years ago

          There are wide ranges of strength in people's beliefs. I'm sure some can never be convinced, but for others it's a question of chipping away at why, and at how they would define reasoning in ways that categorically exclude machines but not a significant portion of people.

          • tormeh 2 years ago

            It usually boils down to religion. I don't think converting people to atheism has a high ROI.

            • vidarh 2 years ago

              Well, sometimes it's just interesting to dig into and sometimes it has an effect. In this case I think it also extends past formal religion and to a broader more vague spiritual wish to see us as more than automatons. But religious views certainly tend to leave people with less flexible views on it.

        • bayindirh 2 years ago

          That's a very nice take. Putting a certain kind of people into a fixed frame, while having an equally fixed, if not stronger, frame around itself.

          Firm, generalizing and enjoyable, because of the way it's flawed from beginning to end.

          I don't expect to be able to debate this with you, because this comment says that you can't change your mind, too.

          • tormeh 2 years ago

            I'll change my mind if you show me a higher power and that higher power personally tells me that humans are inherently special. No prophets, no texts, no metaphors, no sunsets. A personal meeting with a God. Considering the Christian God is supposedly omnipresent, this should be a very low bar to clear.

    • jddj 2 years ago

      There's a preamble to a later edition of one of Richard Dawkins's books; I believe it was The Selfish Gene but it might have been The Extended Phenotype.

      I'm paraphrasing and it's been many years since I read it, but he talks about a scathing critique he got from someone who wrote to him complaining that the book sent them into a long and deep depression, asking how he gets up in the morning with all the meaning stripped away like that.

      While I can relate to some AI anxiety[1], I can't help but read that same sentiment into a lot of the blowback.

      [1] mainly the potential devaluation of certain types of work and the turbulence associated with that, the further enabling of scammers/spammers, and the general acceleration of technology without real time to digest and adapt.

    • FrustratedMonky 2 years ago

      "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." – Edsger W. Dijkstra

      I often see this quote used seemingly as an argument against AI.

      But it's just playing with words.

      Yeah, let's say that what a submarine does is 'swimming'; it beats the hell out of a human swimming.

      So how is that a comfort?

      • Lacerda69 2 years ago

        I think you can interpret the quote several ways; that's why I like it so much. I mostly understand it as saying that the "swimming" of a submarine is a different thing from the swimming of a human, so it's moot to compare them. But your interpretation is also valid, I think.

      • creer 2 years ago

        The question is not whether it's comforting but whether it's interesting?

    • FrustratedMonky 2 years ago

      "Man can do what he wills but he cannot will what he wills."

      Schopenhauer

  • FrustratedMonky 2 years ago

    That is a great analogy.

    It does seem like a lot of arguments on AI are people watching a robot walk, and then arguing about what 'walking' is: 'that isn't real walking', 'human walking is different'.

    Anyway, I think the reason there is such pushback, and such arguing that AI isn't really 'thinking', 'reasoning', or 'conscious', comes from the more religious side, who believe humans must be special, that they are unique. If humans are no longer unique, then their entire world view falls apart. There is a lot of cross-pollination today between AI and neuroscience. If we can pick apart the human, figure out how it works, and turn a human into an engineering problem, then where is God?

    And, a subjective opinion, I think a lot of people arguing against AI 'thinking' are also people that haven't done a lot of self reflection on their own thinking processes. They are still more 'reacting', and not noticing how their own thoughts arise.

  • wzdd 2 years ago

    It's not that interesting a question and is really just a language issue.

    Formally, reasoning is about a rational process of deduction through which a conclusion is drawn from a set of information. Informally, reasoning is 'that thing humans do when they think'. By the first definition, GPT4 is pretty obviously capable of reasoning, as are a lot of other things, such as Prolog or a cat. By the second, it's not and won't be until you can e.g. hire it as a PA or trust it to look after your kids for the day.

    It's only coming up because what GPT4 does is the closest thing to the informal definition we've yet seen.
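
    As a minimal sketch of that first, formal sense (mechanical deduction of conclusions from given information, the kind of thing Prolog does), something like the forward-chaining loop below qualifies; the facts and rules are invented purely for illustration:

```python
# Each rule is (premises, conclusion); both facts and rules are made up for this example.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

# Forward chaining: keep applying any rule whose premises are already known facts.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # both conclusions are derived mechanically from the single starting fact
```

    By that definition the loop above "reasons"; the informal, human sense of the word is the one the rest of the thread argues over.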

  • unblough 2 years ago

    For me, one major aspect of this “debate” is that the people who see or espouse what I see as overextensions of the abilities of these technologies are the ones most unfamiliar with them.

    There is an old bit of unscrupulous advice that if someone overestimates your abilities, you should refrain from correcting them.

    That is, it benefits the NSA that people think they are actively recording all of their conversations all the time because it forces compliance without the necessary competence, but the people who hold these opinions are often wholly ignorant of the kind of technology required to achieve that level of surveillance.

    Have you built your own minigpt? Have you implemented rudimentary transformers?

    Are you projecting your desires onto something wholly unworthy of your devotion?

    Because the people behind these things are financially incentivized to nod along as you impart more ability to them than what they know they put into them.

    For clarity, my “religion” is math. I believe existence fundamentally is a mathematical construct and as such so are all of its creations.

    The brain is to me a mathematical byproduct, but even still, when I familiarized myself with the math of llms and their abilities I recognized that they fall short of being, simulating, or explaining the former.

    LLMs are stochastic next-token pickers, full stop.

    Any perceived “intelligence” is projection and anthropomorphising by the agent using them.

    I saw a comment on here in another thread stating that the capacity for coherent use of language falls short of being evidence of “intelligence” as children show signs of human “intelligence” long before they can form coherent sentences.

    • jddj 2 years ago

      > Have you built your own minigpt?

      No, but I did follow along with an Andrej Karpathy video along those lines at the beginning of the year.

      I didn't want to make a judgement on any kind of superiority, or that LLMs simulate brains, or anything of that nature. Just wanted to question why these elements (namely intelligence and reasoning) strike the nerve that they do.

      The anthropomorphism argument is case in point, really. It poses the accusation that the other side is imparting human qualities to a machine, without needing to touch on what makes those qualities human or why that matters in the first place. It is, ironically enough, flawed reasoning.

      • unblough 2 years ago

        > Just wanted to question why these elements (namely intelligence and reasoning) strike the nerve that they do.

        “Just asking questions” is a meme of the unscrupulous.

        I think you are unfairly lumping those who believe in human exceptionalism with those cynical of the economics of such claims.

        It’s okay, to me, for people to be ignorant of what llms are. What a dismally bland existence if everyone were just llm experts.

        What strikes a nerve with me is the people financially incentivized to do so are leveraging the terror, both the awe and fear interpretations, of those ignorant of the tech.

        > The anthropomorphism argument is case in point, really. It poses the accusation that the other side is imparting human qualities to a machine, without needing to touch on what makes those qualities human or why that matters in the first place.

        This reads as circular reasoning. Those claiming the opposite are also failing to define what those qualities are.

        Anthropomorphism is a real thing. I can flinch in pain for the sake of my couch when a friend jumps onto it, but that hardly provides, without me needing to define human pain explicitly, an opportunity for said friend to respond with the absurd claim that human pain is in fact couch based.

        • jddj 2 years ago

          I'll concede the just asking questions point, that much is true.

          GPT4 appears to give more intelligent responses than GPT3. To describe that, though, perhaps we need to migrate to a term that doesn't step on the toes of those who, though not human exceptionalists, rather just feel that (these particular?) machines don't happen to suit measurement in such human domains as intelligence.

          Of course, the ship has sailed and they're fighting a lost cause. There's little reason to dig for new words. It's the I in AI and it has been for longer than many here have been alive.

  • RandomLensman 2 years ago

    Because it fails to be able to use what it tells you, i.e., it cannot apply concepts properly. For me that is a fail at "reasoning". Humans aren't always great at this, and failure to reason isn't uncommon there, either.

    • refulgentis 2 years ago

      Interesting, could you elaborate a bit more? "Because it fails to be able to use what it tells you" is a bit abstract to my ear.

      • RandomLensman 2 years ago

        Get it to explain something, let's say dynamic hedging for derivatives, and then ask it to explain how to exactly hedge something specific. Or describe some physical situation with a quirk and then let it derive the implications. Someone on HN had an example of asking it to imagine entropy working in reverse in a cup of coffee with sugar dissolved in it. While it discussed sugar spontaneously forming crystals and other things, it never considered what the water would do, for example, let alone bigger issues such as whether water could even exist, etc.

        Again, humans are often poor at these things, too, but if it had "mechanized" reasoning capabilities instead of "replicative" ones (i.e., just repeating stuff), I would expect it to do generally better.

        • vidarh 2 years ago

          Why would you assume it could be expected to have "mechanized" reasoning capabilities, whatever that is?

          I find these questions generally poor at gauging anything when people haven't given them to a representative sample of people first as a benchmark. Consider that not long ago there was a tedious trend of people posting "difficult" questions of orders of operations involving basic arithmetic, and a significant proportion of people in the threads would continue to belabour and argue for the wrong result even after having been told in excruciating detail how to apply the rules. In other words: I think people here tend to massively overestimate the reasoning ability of the average person.

          E.g. to the example questions here, I'd bet the average person can't give a satisfactory definition of entropy, much less be able to tell what it does "forwards" before even considering "reverse". So why would we treat this as a benchmark of whether or not an LLM can reason?

          • RandomLensman 2 years ago

            Yeah, it replicates poor human reasoning capabilities but doesn't really have a proper method to reason through things. The latter is what I expect from a true machine intelligence.

            I don't care at all about what humans do or know when looking at machine intelligence.

            • vidarh 2 years ago

              You might not care about it, but all of the people who regularly claim it can't reason certainly seem to.

              Defining "true machine intelligence" without referencing the only intelligence most people would agree is "true" intelligence seems like a bizarre attempt at setting the bar unreasonably high, and defining "replicating poor human reasoning capabilities" to me is an admission from you that they do reason whether or not you think their ability to do so is "proper".

              • RandomLensman 2 years ago

                Replicating someone else's reasoning isn't reasoning. Otherwise, a book would "reason".

                And, yes, most humans fail to reason properly a lot of the time. Any simple probability puzzle shows that.

                • vidarh 2 years ago

                  > Replication someone else's reasoning isn't reasoning. Otherwise, a book would "reason".

                  This isn't reasoning. This is a meaningless platitude. Matching the level of reasoning of someone else would inherently be reasoning. Replicating the level of reasoning would be.

                  > And, yes, most humans fail to reason properly a lot of the time. Any simple probability puzzle shows that.

                  That is an argument for lowering the bar for assessing whether an entity has the ability to reason, not raising it. Using this as an argument to me is another illustration of poor reasoning. Should I argue that you don't have the ability to reason because I don't think this meets the bar of proper reasoning?

                  • RandomLensman 2 years ago

                    Ad hominems always signal the end of a discussion. Good luck!

                    • vidarh 2 years ago

                      They were not ad hominems as they did not attack you, but the validity of your arguments. This is also an example of invalid reasoning.

                      I'll note that my point throughout has been that humans - myself included - engage in poor reasoning all the time and that this is insufficient to say anything about our general abilities (at least absent subjecting it to broader testing), and so individual examples of poor reasoning is not a basis for judging a person. But also not for judging an LLM.

        • circuit10 2 years ago

          Reasoning and intelligence aren’t binary yes/no things, it can have some capability for reasoning and intelligence while still being below what a human can do

          • RandomLensman 2 years ago

            That is a point more orthogonal to mine. I am saying it has no general "method" to go from concept to application. It replicates poor human reasoning capabilities but doesn't have a method of reasoning that extends beyond that.

            • circuit10 2 years ago

              That’s mostly true for LLMs due to their training goal but not for all kinds of machine learning

      • n4r9 2 years ago

        Not OP but may have some idea of what they mean. Recently I had the following bizarre conversation with Bard. It appears to "know" what a word ladder is, and be able to explain it to me. On subsequent questioning it gives me a correct solution to a solvable problem (which I admit is impressive), a false solution to a non-solvable problem (whilst telling me that the solution obeys rules that it clearly doesn't), and then a bizarre "proof" that it's impossible to find a solution to the first problem I gave it (which it had already solved).

        The 6-step supposed "solution" is the one that really gets me. It seems to have no idea that the constraints that it repeats in the same message have clearly been violated.

        ~~ME~~ What is a word ladder puzzle?

        ~~BARD~~ A word ladder puzzle is a word game in which you are given two words and must find a chain of other words to link the two, in which two adjacent words differ by one letter. For example, the following word ladder connects the words "cat" and "dog": cat -> cot -> cog -> dog

        ~~ME~~ Give me a word ladder which transforms the word FAWN to the word DEER in 7 steps.

        ~~BARD~~ Here is a 7-step word ladder that transforms the word "FAWN" to the word "DEER": FAWN -> PAWN -> PAWS -> PEWS -> SEWS -> SEES -> SEER -> DEER. This word ladder is valid because each word differs from the previous word by exactly one letter. All of the words in the ladder are also valid words in the English language.

        ~~ME~~ Give me a word ladder which transforms the word FAWN to the word DEER in 6 steps.

        ~~BARD~~ Here is a 6-step word ladder that transforms the word "FAWN" to the word "DEER": FAWN -> PAWN -> PLAN -> PLAM -> DAME -> DEER. This word ladder is valid because each word differs from the previous word by exactly one letter. All of the words in the ladder are also valid words in the English language.

        ~~ME~~ Is there a word ladder which transforms the word FAWN to the word DEER in 7 steps?

        ~~BARD~~ No, it is not possible to transform the word FAWN to the word DEER in 7 steps. The minimum number of steps required is 6.

        Here is a proof by contradiction:

        * Assume that there is a 7-step word ladder from FAWN to DEER.

        * Each step in the word ladder must change exactly one letter.

        * There are only 4 letters in the word FAWN.

        * Therefore, at least one letter must be changed twice in the word ladder.

        * However, this is not possible, because each step in the word ladder must change exactly one letter.

        * Therefore, there cannot be a 7-step word ladder from FAWN to DEER.

        I hope this explanation is helpful. Please let me know if you have any other questions.
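
        As a quick sketch (relying only on the transcript above), the rule Bard itself states, that adjacent words differ by exactly one letter, is easy to check mechanically; the function names here are just illustrative:

```python
def differs_by_one(a: str, b: str) -> bool:
    """True when a and b have the same length and differ in exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def check_ladder(words):
    """Report each adjacent pair and whether it obeys the one-letter rule."""
    return [(a, b, differs_by_one(a, b)) for a, b in zip(words, words[1:])]

seven_step = ["FAWN", "PAWN", "PAWS", "PEWS", "SEWS", "SEES", "SEER", "DEER"]
six_step = ["FAWN", "PAWN", "PLAN", "PLAM", "DAME", "DEER"]

print(check_ladder(seven_step))  # every pair passes: the 7-step ladder is valid
print(check_ladder(six_step))    # PAWN->PLAN, PLAM->DAME and DAME->DEER all fail
```

        The 7-step chain passes every adjacency check, while the 6-step "solution" fails three of them (and "PLAM" isn't an English word, which this sketch doesn't even test), exactly the violation described above.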

    • vidarh 2 years ago

      Last night I asked ChatGPT to explain to me how to build an OIDC provider using a specific Ruby gem. It gave me a solution that mostly ignored the gem. I pointed this out, it explained that the gem didn't actually exactly provide much that would reduce the code size, and offered up a version that used it more extensively anyway.

      It used what I told it both in the original case, and gave me reasoning for why not using it much was a decent choice (and I verified that it was right), and showed me with an example that demonstrated it was able to reason about how my feedback related to the original answer and apply it. Later it went on, as a result of a subsequent question, and fleshed out the rest of the process. Everything it gave me worked.

      To me that is a clear example that while it certainly fails to apply concepts fairly often (and often writes broken code), in other cases it does well. I'll add that this was after I'd spent some time searching for examples and I found nothing like what I suggested and I was about to resign myself to a slog through a lot of really bad documentation, and searching for some of what it suggested afterwards as well made it clear it did not just crib from training data.

      For me, this is an example of it reasoning better about the subject than a whole lot of people I found discussing this subject in forum posts I came across, who often made mistakes the code it gave me did not or made assumptions that the code ChatGPT gave me made clear were wrong (as I could verify from the fact it worked)

      On the other hand it struggles with something as simple as addition of large numbers that a determined child could do.

      Nobody will claim it's consistently reasoning well. But I also regularly see it reason better than a lot of people I know about specific subjects, and so it's exasperating to see people dismiss individual examples of failure as evidence it "cannot apply concepts properly" rather than as individual datapoints.

      People both over- and under-estimate how well it can reason based on the types of problems they put to it, and it's certainly an interesting subject how to gauge an "alien intelligence" like this that is so uneven in areas where we expect a relatively even basis and so have all kinds of heuristics for whether someone "knows".

      This is part of the problem: We've all gone through a childhood and while we've picked up different things, we mostly have a shared floor that is relatively even across a wide range of basic skills. LLMs don't have that, and that messes with people's heads. Those of us who have gone into skilled professions similarly have all kinds of preconceptions about what a junior or senior developer looks like, for example, and LLMs do not fit neatly into those boxes.

      They're dumb as small children in some areas, but still talk confidently about those subjects as if they were an educated adult. That is a challenge and a problem. But that doesn't mean they're not able to reason about other subjects. Just not all of them.

      • RandomLensman 2 years ago

        Couple of points:

        For me that points to reasoning happening by a sort of replication of often poor human output, but not by having a "mechanic" way to reason. As I said, humans are often poor at reasoning.

        I also think code creation isn't a good area because it is narrower and more mechanically linked by probability than a lot of other areas (so token probability is potentially more informative). I could be wrong there, though.

        • vidarh 2 years ago

          What do you even mean by "mechanic" way to reason here?

          And what do you expect it'd replicate? As I wrote, I tried looking to see if there were similar pieces of code online, and came up empty. I did that exactly because I was curious about the huge gap in quality between what I'd found before and what GPT4 came up with. Not least because it certainly is not something that happens every time.

          > I also think code creation isn't a good area because it is narrower and more mechanically linked by probability than a lot of other areas (so token probability is potentially more informative).

          I don't see why that would make it worse. Not least because it also makes it far easier to evaluate the outcome. If anything, we ourselves grasp for formalisms and structure when we want to ensure our reasoning is sound.

          Again your use of "mechanically" here also makes absolutely no sense to me.

          • RandomLensman 2 years ago

            No, sorry, I view code creation as easier than other things.

            I meant that it replicates generally poor human reasoning capabilities, but there is no general method to reason something out (because token probabilities are not informative to that end). You can train humans somewhat to that end, but it's not easy.

            • vidarh 2 years ago

              > No, sorry, I view code creation as easier than other things.

              Then we will get nowhere, as it's trivially easy to stump even above averagely intelligent people with problems revolving around reasoning about code.

              To me you've then set the bar at a level the vast majority of people can't meet and that's utterly absurd.

              And code is just formalised language.

              • RandomLensman 2 years ago

                Formalised stuff might favor probabilistic approaches - that was my point.

                Anyway, I think "intelligence" and "reasoning" are not always the same thing to start with.

                Why is setting the bar high absurd? It is the same way I demand my pocket calculator be so much better than humans at calculating things.

                • vidarh 2 years ago

                  Firstly, there's absolutely no evidence whatsoever for that hypothesis. But secondly, this is also poor reasoning. Your argument boils down to implying that if it's good at X, then there might be a bias in its favour with respect to X that makes it a poor judge of whether it's reasoning. By extension, making that argument engages in the logical fallacy of begging the question (assuming the conclusion).

                  > Why is setting bar high absurd? It is the same way I demand my pocket calculator to be so much better than humans at calculating things.

                  This is also poor reasoning. We demand the pocket calculator be better because otherwise we would have no use for it. It would be logically invalid to argue that if it was merely as capable as a human, maybe one that is not very good but still able to calculate, that it is unable to calculate at all.

                  • RandomLensman 2 years ago

                    I would suggest reading what I wrote instead of projecting - anyway, ad hominems are always the end of a discussion. Good luck!

                    • moffkalast 2 years ago

                      It must sting to be worse at reasoning than the thing you're claiming isn't capable of reasoning at all. I know it's hard to accept that something as simple as dumping a lot of data into a probabilistic engine will eventually be able to outperform a human at any task, but it is our unfortunate reality and the sooner one faces that the easier it'll be.

                    • vidarh 2 years ago

                      I read what you wrote, and I responded to your arguments, not about you as a person, so as I pointed out elsewhere this is also not a valid argument - these were not ad hominems.

  • immibis 2 years ago

    Because we would like to have more rights than ChatGPT.

    • jddj 2 years ago

      One framework would be to base this decision on the ability to suffer / experience joy.

      Fraught, of course, with the same problems.

  • iraldirOP 2 years ago

    Padding stripped, thanks for your feedback

Byamarro 2 years ago

It frustrates me every time when people talk about consciousness. The word has several meanings, but very often people just use it without specifying which consciousness they are talking about, and then proceed to conflate all the types of consciousness.

You just end up with everyone being confused and both the author and commenters talking about completely different things.

In this article I feel like we have:

1. Phenomenological consciousness: Does GPT experience things or is it a P-zombie? Is GPT perceiving the world, or just processing data? Experiencing/perceiving in this context means seeing "red", not just processing a picture and reacting to it. Does it experience the qualia of red, or does the data just go through it and you get an output at the end, regardless of how sophisticated it is?

Nobody knows, you can't even reliably prove that you, dear reader, are not the only person in the world who has it.

There's a good example of how ridiculously hard it is and how we can't even talk about these things. Try to establish whether another person sees the same color palette, or whether that person's palette is inverted. Is your red the same as the other person's red? There is absolutely no way to give a definitive answer.

2. Self-awareness: Is GPT capable of behaving as if it sees itself as an entity? Yes. It can treat itself as an entity in conversations. Now, where we draw the line in terms of memory is in my opinion just semantics. It loses all its memory once you open a new chat window, but people with dementia also lose memory. It's all semantics here, and a matter of where your gut feeling draws the line.

js8 2 years ago

The article says:

"Reasoning means being able to put those concepts together to solve problems."

There's more to reasoning than just following rules of logic ("putting concepts together"). It is also detecting where the concepts cause contradictions and do not fit, and the whole mysterious magic of how to modify the concepts to make them fit.

In the first meaning of "reasoning", AI (and computers) have been able to reason for a long time. It's the second meaning that evades us.

I said before that in the 90s, cutting-edge AIs were based on various theories of how to do reasoning under uncertainty (fuzzy logic, Bayesian networks, etc.). Then deep NNs blew these systems out of the water in practice, but at the expense of us not understanding how they reason with uncertainty, and whether there is any consistency to it. So we progressed, but didn't resolve the problem of what the right way to reason with uncertainty is, and it might just be very, very hard. (That's why I am interested in P vs NP, as I believe there is an answer there.)

  • FrustratedMonky 2 years ago

    >"at the expense of us actually not understanding how they reason with uncertainty"

    That is the crux. It is doing something to anticipate words beyond the next token. It has to, to construct these long coherent documents. Just like humans do.

    Just because we don't understand it doesn't mean it isn't reasoning. Isn't 'thinking ahead'.

    Just like I can say we don't understand the brain, thus humans aren't actually reasoning.

    There are a lot of brain studies that look into the precursor changes in the brain preceding conscious thought.

    We can't say NN aren't doing something similar.

    • js8 2 years ago

      > We can't say NN aren't doing something similar.

      I agree, but my point was different. We can't say what it does theoretically, therefore we don't know how reliable it is (we don't understand the tradeoffs and failure modes). At least most humans have a way to assess their own reliability, and can see where their reasoning (or of their fellow humans) is inconsistent.

      • FrustratedMonky 2 years ago

        There was good thread on this subject yesterday, more about LLM and Language.

        It comes down to the physical world.

        Humans can "assess their own reliability" when they can all point at something in the real world, and come to some agreement on what they are all seeing, what to call it, etc.

        When humans get off base, if it is tangible, like an apple, they can all point at the apple, and bring themselves back into alignment, that is an apple.

        But, for abstract concepts in philosophy, or morals, etc.. Something that is not tangible. Humans can 'drift' just as much as AI.

        Humans can get into echo chambers -> and 'go nutz', absorbing others misinformation.

        LLMs learning from other LLMs -> the 'models drift' over time.

        https://news.ycombinator.com/item?id=37811610

  • kypro 2 years ago

    I'm not sure I'm following. Uncertainty is really just another way of saying probability. You handle uncertainty with probabilistic approaches, which is what neural networks do.

    Or, put another way: a square might explicitly be four straight lines connected at right angles, but in reality such a perfect shape is never going to exist. What's important is that the system understands that a shape with roughly straight lines, roughly connected at right angles, is "squarish" enough to be labeled a square, and that the less "squarish" the shape becomes, the less certain the system becomes that square is the correct label. Neural networks certainly achieve this.

    We might not always understand what the parameters of a neural network are encoding, but that's a limitation of our brains, not of neural networks.
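
    As a toy illustration of that graded notion of "squarish" (a hand-rolled score rather than a neural network; the formula and weighting are invented purely to show the idea of a label becoming less certain as the shape degrades):

```python
import math

def squareness(sides, angles_deg):
    """Toy graded score in (0, 1]: 1.0 for a perfect square, smoothly lower as
    side lengths become unequal or corners deviate from 90 degrees."""
    mean_side = sum(sides) / len(sides)
    side_dev = sum(abs(s - mean_side) for s in sides) / (len(sides) * mean_side)
    angle_dev = sum(abs(a - 90.0) for a in angles_deg) / (len(angles_deg) * 90.0)
    return math.exp(-5.0 * (side_dev + angle_dev))  # the weight 5.0 is arbitrary

print(squareness([1, 1, 1, 1], [90, 90, 90, 90]))             # 1.0: a perfect square
print(squareness([1.0, 1.05, 0.98, 1.02], [89, 91, 90, 90]))  # ~0.87: still "squarish"
print(squareness([1, 2, 1, 2], [90, 90, 90, 90]))             # ~0.19: a rectangle, far less so
```

    A trained network learns a graded boundary like this from data rather than from a hand-written formula, but the effect described above is similar: confidence in the label falls as the input drifts away from the prototype.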

    • js8 2 years ago

      Probability is common, but not the only way to model uncertainty. There are also different logics, Dempster-Shafer theory and so on. That aside...

      Neural networks can be modeled with probability, but that doesn't mean they actually compute in that way. Just like with humans - we can see the brain often follows things like Bayes' rule, but it doesn't compute PDFs. Doing full probabilistic reasoning would be too expensive for NNs, so they cut corners somewhere, and we don't really understand where; it might be very inconsistent. It often works but also often fails.

WithinReason 2 years ago

Why not "A case for GPT-4's capacity to reason"

indigoabstract 2 years ago

Somewhat off-topic, but I can't help wondering: if ChatGPT and Whisper can do all this amazing stuff, why would anyone join the waiting list to use his app ChartMyLife rather than use ChatGPT directly?

I don't want to sound pessimistic and I may be missing something, but this seems a bit like trying to fork the Edge source code to build a better browser, or coming up with your own version of MS Office for Windows.

Edit: This popped into my mind after recently seeing some people who had made their own mp3 players and photo browsers with AI code generators, and who then went on to say how awesome it is to have those programs made just for them and their unique preferences.

bilsbie 2 years ago

I don’t agree with his argument against consciousness.

You could imagine a thought experiment where a human is woken up in a sensory deprivation tank, asked a single question, and then has their short-term memory wiped. It would still be a conscious experience.

I’m not sure if LLM’s are conscious or not but it just doesn’t seem like a compelling argument.

  • iraldirOP 2 years ago

    That's because you're building on an existing consciousness. Even if you wipe their short-term memory, you still have their long-term memory from before the whole sensory deprivation tank, which has formed concepts such as the self, the world, etc.

    I see that you're treating the training phase of GPT as the equivalent of the time the adult had before, but I don't think that works. Part of what makes a consciousness emerge, for me, is understanding your place in the world, how your actions have consequences, etc. Being nice can make people be nice to you in return, or, depending on the environment, you can learn that by being insufferable people will just give you whatever you want (cf. some toddlers). GPT's training falls short of that. If you had an actual session of GPT connected to some sort of long-term vector database where it could store what it learns from conversations, differentiate individuals and itself, etc., that might give it a fighting chance to develop an equivalent of a consciousness.

netsharc 2 years ago

Man, this essay starts and ends with me doubting the author, because of the typo in the first heading: "justs", and the confusion of "conscience" and "consciousness" (also typoed to "consciounsess"). However good the middle bit was...

  • iraldirOP 2 years ago

    Sorry, I did my review / typo correcting in the text editor of LanguageTool... which for some reason does not support "Command+C" in the editor, only right-click copy. So my copy failed and I pasted the same error-filled text I started with

    Should be corrected in a few seconds whilst CI does its job

  • TOMDM 2 years ago

    They're just demonstrating that it wasn't written by ChatGPT for you up front /s

    • FrustratedMonky 2 years ago

      Wonder if that could be?

      Could there be a new wave of adding speelling errors, grammer errors, to prove you aren't GPT?

      • netsharc 2 years ago

        "You know, like when you're driving in your car."

        "He uses "you're/your" correctly! Get him, he must be a bot!"

westernpopular 2 years ago

> To be able to predict accurately sentences that make sense, GPT-4 must have a internal way of representing concepts, such as "objects', "time", "family" and everything else under the sun.

[citation needed]

  • sidlls 2 years ago

    That's more or less exactly what these models do by design, otherwise the predictors/estimators for the next token in a sequence would fail spectacularly. These models aren't just plucking random tokens out of a bag and ordering them based on some brute-force large-scale memory lookup or whatever. There is an internal representation of the tokens in a more meaningful sense. That sense, however, is limited to the statistical/mathematical framework upon which the models are built. It's a huge leap (in my opinion a completely unjustified, wishful-thinking exercise) to call it "reasoning".

  • vidarh 2 years ago

    While one might well argue over whether they "must have" this, it's clear that they do end up with internal models of the world. Even quite literally:

    https://twitter.com/wesg52/status/1709551516577902782

  • immibis 2 years ago

    How can any system use a word such as "time" in a way that makes sense without representing the word? It definitely has at least one representation: the binary ASCII code 01110100011010010110110101100101.
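
    As a quick check (just an illustration), the quoted bit string is indeed the 8-bit ASCII encoding of the word "time":

```python
bits = "".join(f"{ord(c):08b}" for c in "time")     # 8-bit ASCII code of each character
print(bits)                                         # 01110100011010010110110101100101
print(bits == "01110100011010010110110101100101")   # True
```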

immibis 2 years ago

We don't know what capacity to reason is.
