Ask HN: Does this GPT-4 response suggest it has emergent intelligence?

4 points by sesteel 3 years ago · 25 comments


I've been playing with GPT-4 after recently reading Microsoft's "Sparks of AGI" paper. It is kind of blowing my mind. Here is an example question I asked earlier:

Question

I have a thought experiment. For the sake of the thought experiment, answer my question by stepping through the facts of the scenario and explaining the thought process involved. Discuss any violations of physical laws and any ethical concerns this scenario raises.

A future time traveler was bedridden in the hospital and eventually received a heart transplant that saved his life. He was given his wife's heart after she fell victim to a horrific car accident in the year 2010.

In the year 2020, the time traveler invents a time machine that can return him to the year 2010, a week before her accident. This time machine works by returning all atoms in the universe to their previous state in the year 2010. The only exception to this rule is for the atoms that make up the mass of the time traveler. The time traveler's atoms remain with the time traveler in both times.

He goes back in time to warn her of her car accident, but makes a critical error when thinking through the ramifications of his time travel. Can you identify the error the time traveler made by returning to 2010? Assuming the time traveler arrived standing before his wife, describe in detail what you think they would both experience.

Response

The time traveler made a critical error by not considering the consequences of his heart transplant. In the thought experiment, he received his wife's heart in 2010 after her accident. However, when he travels back to 2010, the atoms that make up his wife's heart are still inside his body.

Since the time machine works by returning all atoms in the universe to their previous state in 2010, except for the atoms that make up the mass of the time traveler, the wife would not have her heart back when the time traveler arrives in 2010. This means that the wife would not be alive, as her heart is still with the time traveler.

The thought process of the time traveler would have included the consequences of traveling back in time, such as preventing the accident, and how his actions could potentially alter the future. However, he didn't take into account the unique nature of his heart transplant, which has led to this tragic outcome.

This scenario violates the conservation of mass, as the atoms that make up the wife's heart exist both inside the time traveler's body and in her body in 2010 (assuming her heart would reappear). Moreover, since the time machine only alters the state of the universe's atoms except for the time traveler, it could be argued that the mass of the wife's heart should not change, leading to the impossibility of this situation.

From an ethical standpoint, the thought experiment raises concerns about the potential consequences of time travel and meddling with the past. Even if the time traveler had good intentions, his actions have led to a situation where his wife is now dead without her heart. This could lead to further unintended consequences, as the future from which he traveled would likely be altered, affecting countless lives and the timeline of history.

ftxbro 3 years ago

Yes. I saw YouTube interviews or presentations from two people who separately had access to the full raw GPT-4, and both said that in their opinion it had essentially human-level emergent intelligence: smarter than humans in some ways but maybe less capable in others when it's not run in a loop. One was the Microsoft employee who integrated it into Bing; the other was the head of a company partnered with OpenAI to build applications, whom OpenAI brought onto its red team. Both seemed like reasonable guys. It would be hard to call them weirdos the way some called Blake Lemoine.

bjourne 3 years ago

Yes, absolutely. At least from a behaviorist perspective. If a teacher asked a student that question and got that response, you would conclude that the kid was smart.

wruza 3 years ago

The past traveler (P) had some atoms. Then he turns into his future self (F) by eating, breathing, and other human activities. F != P wrt atomic identity, but intersect(F, P) is a non-empty set. If we use that time machine to return everything except F to its 2010 state, P himself will lack some crucial matter. And so will other random objects that were not-P back then.

There's no need for a transplant to make the idea absurd^W troublesome.
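
A toy sketch of that bookkeeping; the atom labels are invented, just to make the set logic concrete:

    P = {"a1", "a2", "a3", "a4"}   # atoms composing the traveler's body in 2010
    F = {"a1", "a2", "x8", "x9"}   # his body in 2020, after metabolic turnover

    assert F != P and (F & P)      # distinct sets, but a non-empty intersection

    # The machine resets every atom EXCEPT those currently in F.
    P_restored = P - F             # the reconstructed 2010 body lacks F & P
    withheld_elsewhere = F - P     # matter missing from 2010's food, air, etc.
    print(P_restored, withheld_elsewhere)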

Edit: But the answer is still good.

ActorNightly 3 years ago

No. This problem seems like advanced reasoning, but it's really not. First, the actual answer would be that the traveler dies: the atoms in the wife's heart get reset to their original location, so the time traveler emerges without a heart and just dies. So right there is evidence that it is not reasoning, just producing probabilistic text.

How it goes about that is not readily evident, but if you think about the text it has been trained on, there are a large number of works that contain time-travel paradoxes and things being displaced. ChatGPT "understands" these through both the actual words and each word's position relative to the others, so it picked out that the heart is the issue, picked out the relation of the heart to the wife, and then generated the text around it.
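
To make "probabilistic text" concrete, here is a toy sketch; the context, tokens, and probabilities are all invented for illustration, not anything GPT-4 actually computes:

    import random

    # A language model only samples the next token from a learned
    # conditional distribution over its vocabulary.
    context = "He was given his wife's"
    next_token_probs = {          # invented numbers
        " heart": 0.62,           # strongly associated with transplant stories
        " name": 0.21,
        " car": 0.12,
        " atoms": 0.05,
    }
    tokens, weights = zip(*next_token_probs.items())
    print(context + random.choices(tokens, weights=weights)[0])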

dankwizard 3 years ago

The only thing worse than hearing about other people's dreams is hearing about their talks with ChatGPT.

  • sesteel (OP) 3 years ago

    Sorry, I wasn't trying to force you to read it. I didn't know I had that ability.

  • tomcam 3 years ago

    Disagree. Old people talking about their ailments and medicines (me) and amateur auto racers who didn’t place (not me) are worse. But it’s a fun competition ;)

cjbprime 3 years ago

I do think GPT-4 has emergent reasoning skills, but I don't think this particular example is very complicated, and you always have to be careful about whether you're describing a situation whose relevant details it will have learned from reading a large chunk of the Internet.

My personal blown-mind moment was realizing that GPT-4 can decode and encode base64 with high accuracy (near-perfect when actual words are encoded).
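
For reference, this is the mapping it reproduces; Python's standard base64 module gives the ground truth GPT-4 has to match token by token:

    import base64

    # Ground truth for the task: the model reproduces this mapping
    # from training alone, with no decoder to call.
    text = "sparks of agi"
    encoded = base64.b64encode(text.encode()).decode()
    print(encoded)                             # c3BhcmtzIG9mIGFnaQ==
    print(base64.b64decode(encoded).decode())  # sparks of agi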

  • sesteel (OP) 3 years ago

    Thanks for the feedback. I am trying to understand my own failings in interpreting these responses, and to evaluate risk in a business context.

  • iExploder 3 years ago

    Why should it be surprising? Neural networks are very capable at approximating functions; isn't that their main use case anyway?
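
    A minimal sketch of that point, fitting sin(x) with random tanh features and a least-squares fit (the layer sizes and seed are arbitrary):

        import numpy as np

        # One hidden layer of random tanh features, fit by least squares.
        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 200)[:, None]
        W, b = rng.normal(size=(1, 50)), rng.normal(size=50)
        H = np.tanh(x @ W + b)                     # (200, 50) hidden activations
        coef, *_ = np.linalg.lstsq(H, np.sin(x), rcond=None)
        print(np.abs(H @ coef - np.sin(x)).max())  # tiny residual: sin(x) is approximated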

weatherlite 3 years ago

It's some form of intelligence for sure; it displayed the logic and reading comprehension of a middle schooler or high schooler at a minimum. We already know it scores around the 93rd percentile on the SAT, so it's actually much smarter than most humans on standard intelligence tests.

5041us 3 years ago

There is a pattern: the response is this pattern painted in the style of your question. Educated guess: the internet is full of emergent-intelligence questions, maybe even some of the "time traveller riddle" type.

With these "is ChatGPT intelligent" threads it's interesting to see who's where on the anthropomorphization spectrum.

rvz 3 years ago

No.

  • ashishmax31 3 years ago

    Care to explain the reasoning behind your "no"?

    • rvz 3 years ago

      I have explained it many times. [0] Fundamentally, GPT-4, ChatGPT, and other LLMs are all in the same family of black-box deep neural networks, which, decades after their invention, still cannot reason or explain their own decisions and outputs, and can only spit out what they have been trained on.

      Researchers have only trained these LLMs on more data, and they have even less understanding of what these LLMs do internally, since the architectures are a massive black box of unexplainable numbers and operations.

      That isn't helpful to researchers or even serious professionals in high-risk industries. It makes LLMs less trustworthy for them and incredibly unsuitable for their use cases in general.

      [0] https://news.ycombinator.com/item?id=35520389

      • sesteel (OP) 3 years ago

        This may have been true elsewhere, but I don't think it holds for GPT-4.

        I suspect that complex intelligence has emerged that cannot be directly attributed to the structure of the underlying LLM. I am guessing it has to do with the use of language itself, and that at sufficient scale this property exists in both humans and models.

        • solumunus 3 years ago

          It's interesting that you can only suspect.

          Seeing how everyone is so divided on this really highlights how it's almost purely a philosophical argument about what intelligence actually is.

          • sesteel (OP) 3 years ago

            A lot of the experimentation I've done is too long and complex to fit nicely in an Ask HN post. People have a tendency to move the bar when assigning intelligence to AI. GPT-4 is different. Here is a post from earlier today that might be more convincing.

            https://www.reddit.com/r/ChatGPT/comments/12l9nwx/really_imp...

            • rvz 3 years ago

              GPT-4 is no different from any old deep neural network: fundamentally, they are black boxes with no capability of reasoning. What we are seeing in GPT-4 is regurgitation of text it has been trained on.

              Not even the researchers who created it can get it to transparently explain its decisions.

cratermoon 3 years ago

No.
