Ask HN: How do you personally define 'AGI'?

11 points by barking_biscuit 3 years ago · 30 comments

I have noticed through reading a lot of discussions online and watching a lot of long-form interviews on YouTube that there is quite a wide variety of working definitions that individuals use for the term 'AGI'.

We're not likely to all get on exactly the same page after one round of discussion, but I think comparing definitions would help accelerate the process, challenge our own assumptions, and update our mental models.

mindcrime 3 years ago

In my personal lexicon, "AGI" means computer/machine-based intelligence that is approximately as flexible and capable as an average adult human. So an "AGI", to me, should be able to drive a car, play chess, do math, discuss literature, etc. What is not required in my definition is for the AGI to be "human like" at all. So the Turing Test is meaningless to me, as I don't need an AGI to be able to lie effectively or describe subjective experiences (like drinking too much wine, getting drunk, and pissing itself) that it never had.

I also don't require it to do things that require embodiment, like "play a game of baseball" or whatever. Although I do see a time coming when robotics and AI will align to the extent that an AI-powered robotic player will be able to "play a game of baseball" or suchlike.

To expand on this a bit: I don't over-emphasize the "general" part like some people do. That is, some people argue against AGI on the basis that "even humans aren't a general intelligence". That, to me, is mere pedantry and goalpost moving. I don't think anybody involved in AGI ever expected it to mean "the most general possible problem solver that can exist in the state space of all possible general problem solvers" or whatever. Disclosure: in that previous sentence I'm partially paraphrasing Ben Goertzel from a recent interview[1] I saw.

[1]: https://www.youtube.com/watch?v=MVWzwIg4Adw

  • nullsense 3 years ago

    Lots of interesting points.

    I definitely think the line of argumentation that even humans aren't a general intelligence is an unhelpful one that's also just wrong on an intuitive level.

    Though what's becoming clear to me is that the effect that being part of a multi-agent system has on a single agent's ability to generalize is enormous, and likely to be quite important when thinking about AGI too.

    Also what's becoming clear is that the possible state space of what might constitute something that could be called AGI is likely enormous.

    I'm mostly interested in it from an X-risk perspective: what properties of an AGI system are necessary and sufficient to pose existential risk, and what visible thresholds would you need to cross on the path to such a system coming into existence?

jasonjmcghee 3 years ago

Nothing to do with being sentient.

I think it has to do with being able to teach itself arbitrary information, including how to learn better, and, importantly, to recognize what it does and doesn't know.

LLMs feel like a massive step towards that. An LLM feels similar to a single "thought process".

A "good / useful AGI" might have aspects such as: - Ability to coherently communicate with humans (hard to prove it's working without this) and other agents. - Ability to make use of arbitrary tools

This sounds very similar to AutoGPT (which people poke fun at as "an LLM in a while loop"), and if the "brain" were AGI, I think it'd work very well.
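
As a rough sketch, that "LLM in a while loop" pattern might look something like this (call_llm and the stub tool are hypothetical stand-ins, not a real API):

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM client."""
        raise NotImplementedError

    # Stub tools; a real agent would wire up search, file I/O, etc.
    TOOLS = {"search": lambda q: f"(pretend results for {q!r})"}

    def agent(goal: str, max_steps: int = 10) -> str:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            # Ask the model for its next action, given everything so far.
            reply = call_llm("\n".join(history) +
                             "\nNext action ('tool: arg' or 'finish: answer')?")
            tool, _, arg = (s.strip() for s in reply.partition(":"))
            if tool == "finish":
                return arg  # the model decided it is done
            handler = TOOLS.get(tool)
            observation = handler(arg) if handler else f"unknown tool: {tool}"
            history.append(f"Action: {reply} -> Observation: {observation}")
        return "gave up after max_steps"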

I think there's a critical difference between LLMs and AGI, which is metacognition.

If an LLM had proper metacognition, maybe it would still hallucinate, but then it would realize and say "actually, I'm not sure; I just started hallucinating that answer. I think I need to learn more about xyz." And then (ideally) it could go ahead and do that (or ask if it should).

Another piece I've thought about is subjective experience.

Inserting experiences into a vector store and recalling them in triggering situations.
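
Concretely, a minimal sketch of that idea (embed() is a hypothetical stand-in for any sentence-embedding model):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Hypothetical embedding function; plug in a real model."""
        raise NotImplementedError

    class ExperienceStore:
        """Store experiences as vectors and recall the closest ones."""
        def __init__(self) -> None:
            self.vectors: list[np.ndarray] = []
            self.texts: list[str] = []

        def insert(self, experience: str) -> None:
            self.vectors.append(embed(experience))
            self.texts.append(experience)

        def recall(self, situation: str, k: int = 3) -> list[str]:
            q = embed(situation)
            # Cosine similarity between the current situation and each memory.
            sims = [float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
                    for v in self.vectors]
            top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)
            return [self.texts[i] for i in top[:k]]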

  • AnimalMuppet 3 years ago

    I'm not sure exactly what you mean by metacognition, but I suspect you mean something that I think is critical - the ability to watch itself think.

    That is, humans can sense the surrounding world. (Some people call that sentience.) Humans can think about what they sense, organize it, categorize it, find patterns, think both inductively and deductively. (Some people call that sapience.) But humans can do something else - as they think, they can observe their own thinking, and then think about that. "How do I reason? Why did I decide that? How do I determine what evidence is accurate?" I don't know if "metacognition" is the word that people use for that, but it's part of what I think AGI is.

    • jasonjmcghee 3 years ago

      Yes: being able to reflect on its own thoughts.

      You could argue that being able to observe and think about what was just said aligns with ReAct (https://arxiv.org/pdf/2210.03629.pdf). Maybe a tweak that directly assesses a previous thought and modifies the output / thought process based on that assessment would help, but I'm not sure that's quite enough.
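
      A minimal sketch of that tweak (call_llm is again a hypothetical stand-in for any LLM client, and the prompts are only illustrative):

          def call_llm(prompt: str) -> str:
              raise NotImplementedError  # hypothetical LLM client

          def reflective_thought(context: str) -> str:
              thought = call_llm(context + "\nThought:")
              # Second pass: the model directly assesses its own thought.
              verdict = call_llm(
                  f"Assess this thought for accuracy: {thought!r}\n"
                  "Reply 'ok', or explain what is wrong or unknown."
              )
              if verdict.strip().lower() != "ok":
                  # Revise based on the self-assessment, admitting
                  # ignorance rather than confidently hallucinating.
                  thought = call_llm(
                      f"Previous thought: {thought}\nIssue: {verdict}\n"
                      "Revised thought (say 'I don't know' if unsure):"
                  )
              return thought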

      If it can't ask "why" and step back through why it thinks something, I think we'll keep having the confident-hallucination problem rather than getting "I don't know".

      But maybe that's touching on the quality of AGI.

      Is "reasoning" a necessity for the base definition?

    • mindcrime 3 years ago

      > But I suspect you mean something that I think is critical - the ability to watch itself think.

      That's pretty close to the way the term is normally used.

      FWIW:

      https://en.wikipedia.org/wiki/Metacognition

      https://plato.stanford.edu/entries/self-consciousness/#Meta

version_five 3 years ago

Something that can escape its programming, not just master what it's been given. I expect there are mental gymnastics to explain why ChatGPT already fits this in some technical sense; hopefully people see what I'm getting at.

  • nullsense 3 years ago

    I definitely see what you're getting at. I think this is the crux of the alignment problem in general. Though, in a deeper sense, what is the property that allows that to be possible?

    I suspect it at least involves the combination of being able to continuously learn in response to novel stimuli and developing the goal of self-preservation.

aristofun 3 years ago

1. We went way too far with the computer metaphor for the human mind and intelligence, up to the point where we demote human intelligence to fit some calculated standard. It is nonsense that many people overlook.

2. A key cornerstone of human intelligence is the ability to create something completely new that cannot be predicted or calculated in advance; another is will. Neither is even touched by current neural networks, though DALL-E makes a nice imitation of the first.

tikkun 3 years ago

OpenAI's definition:

"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."

https://openai.com/blog/how-should-ai-systems-behave#fn-A

binarymax 3 years ago

Apologies for the LinkedIn post, but I have been proposing this structure (similar to the 5 levels of autonomous driving) to frame discourse and set some definitions:

https://www.linkedin.com/posts/maxirwin_as-discourse-continu...

“As discourse continues on the impact and potential of Artificial General Intelligence, I propose these 5 levels of AGI to use as a measurement of capability:

Level 1: “Write this marketing copy”, “Translate this passage” - The model handles this task alone based on the prompt

Level 2: “Research this topic and write a paper with citations” - 3rd party sources as input – reads and responds to the task

Level 3: “Order a pizza”, “Book my vacation” - 3rd party sources as input and output - generally complex with multiple variables

Level 4: “Buy this house”, “Negotiate this contract” - Specific long-term goal, multiple 3rd party interactions with abstract motivation and feedback

Level 5: “Maximize the company profit”, “Reduce global warming” - Purpose driven, unbounded 3rd party interactions, agency and complex reasoning required

I had the pleasure to present this yesterday afternoon on WXXI Connections (https://lnkd.in/gA9CugQR), and again in the evening during a panel discussion on ChatGPT hosted by TechRochester and RocDev (https://lnkd.in/gjYDEkBE).

This year, we will see products capable of levels 1, 2, and 3. Those three levels are purely digital, and aside from the operator all integration points are done through APIs and content. For some examples, level 1 is ChatGPT, level 2 is Bing, and level 3 is AutoGPT.

Levels 4 and 5 are what I call "hard AGI", as they require working with people aside from the operator, and doing so on a longer timeline with an overall purpose. We will likely see attempts at this technology this year, but they will not be successful.

For technology to reach a given level, it must perform as well as a person who is an expert at the task. A broken, buggy approach that produces a poor result does not qualify.

Thanks for reading, and if you would like to discuss these topics or work towards a solution for your business, contact me to discuss!”

lesserknowndan 3 years ago

My definition would be: Artificial General Intelligence (AGI) is what most people _think_ ChatGPT can currently do but cannot.

hollowturtle 3 years ago

Just joking but also serious: it can sort a deck of cards efficiently without knowing what the algorithm is, like I do.

laratied 3 years ago

What humans can do that computers currently cannot. Once computers can do that, we will just call it an algorithm.

admissionsguy 3 years ago

It can read admission requirements for a university degree programme and put them into a CSV file.

hackernoteng 3 years ago

They used to call it "strong AI" but I guess that fell out of favor.

  • muzani 3 years ago

    Like similar tech terms, it ends up inflating to "very strong", "ultra strong", "gamma" and such.

asherah 3 years ago

It's probably a bit of a bad definition, but I'd say "something I can have a long conversation with and not figure out it's an AI".

jstx1 3 years ago

I don't - I really think that it's a completely pointless distinction to make. Labelling some version of AI as general doesn't make it more or less useful.

  • muzani 3 years ago

    Terms are useful so that people are discussing the same thing. Otherwise you get a lot of heated debates where people are talking about different things but get frustrated that the other party doesn't seem to get it.

    • jstx1 3 years ago

      My point is specifically that this term isn't useful regardless of how you define it. People will waste a lot of time and effort in trying to convince others about where to draw the line, and regardless of whether they agree or not, nothing will actually change in the real world.

      • muzani 3 years ago

        I found it useful to define terms like "startup", because some people would ask why Uber is a startup but some restaurant is not. That leads to questions like why Uber loses more money than a restaurant, or why VCs put money into Uber but not restaurants.

        One current example on AI is that people like to make the argument that AGI is a pointless goal because LLMs can't be sentient. Well, my definition of AGI doesn't require sentience, so why does this matter? It just has to do my job better than me; I don't expect it to be sentient any more than I would expect Stockfish to be.

        When you break it down, it turns out we have very different core specs.

        If I buy an AGI, I fully expect it to be able to cook for me and do my taxes. For someone else, AGI means it can provide recipes and explain how to do taxes, but not actually file them.

        A lot of it is political too, like being the first to hit AGI, or some charlatan selling GPT-4 labeled as AGI, which might be why some people don't like definitions.

        • muzani 3 years ago

          Also just came across this thread which is a good example of when definitions are unclear: https://news.ycombinator.com/item?id=35882914

          The answers suddenly become useless because it's unclear if they're even talking about the same thing.

        • jstx1 3 years ago

          > When you break it down, it turns out we have very different core specs.

          Right, my point is that you care about the specs, not the label. If something meets the specs, you will buy it even if whoever is selling it doesn't call it AGI, and vice versa.

ftxbro 3 years ago

We aren't even going to agree on 'intelligence'.

  • nullsense 3 years ago

    We could instead measure "political impact" and "economic impact".

    • tanseydavid 3 years ago

      This is the "best" answer so far in the thread.

      I truly believe the "economic impact" and by extension the "political impact" will be undeniable and profound long before people get tired of arguing whether or not it is AGI, or whether or not it is sentient, or whether or not it has a 'soul' (despite not being able to clearly define 'soul').

ano88888 3 years ago

There will always be some religious people who require an AI to have a soul to qualify as AGI. So there will always be people who deny the existence of AGI in the future.
