Since the ChatGPT phenomenon in 2023, when (or whether) we will get AGI has become a hotly debated question. A very informal and vague definition of AGI is a computer that is “smarter” than the average (alternatively, median or 90th-percentile) human being at a wide range of tasks. However, smartness is not quantifiable, so we can never unequivocally say AGI has been achieved under such a definition. Many would claim that, under the average-intelligence variant of this definition, LLMs have already achieved AGI. The most common counter-argument is that LLMs are simply repeating what they have learned from the internet and lack the “creativity” that most individual humans possess. However, this objection is itself extremely vague. Current LLMs already exhibit fairly creative behavior in specific areas: the International Mathematical Olympiad (IMO) is probably the toughest high-school mathematics competition, and Google DeepMind's model was able to compete with some of the most “creative” mathematics students and attain a silver-medal score.
This post is purely concerned with the definition of AGI and the possibility or impossibility of achieving it. In particular, the ethical and social debate on whether or how to develop AGI (including AI alignment) is not part of this discussion. To be clear, that is not to say it is any less important.
Formal definition/techniques
A big source of confusion on this topic is the definition itself. Following are some related concepts and definitions:
- AGI: While there are a few alternative definitions of AGI, the Wikipedia definition seems fairly in line with the widely accepted one: “Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks.”
- Technological singularity: This is roughly defined as the point in time when machine intelligence starts increasing exponentially, making it obvious that the intelligence of computers/technology will surpass that of human beings. Some claim we have already reached the singularity. However, I do not agree with that claim because, in theory, human beings are still very capable of stopping AI progress. In his book The Singularity Is Near, Ray Kurzweil predicted that we will reach the singularity by 2045 (which does not seem entirely out of the question).
- ASI: Artificial superintelligence (ASI) is defined similarly to AGI but takes it to the extreme: a machine considerably smarter than the smartest humans.
While these are all intuitively meaningful, it’s very hard to agree on what it means to achieve AGI based on these definitions. Additionally, even if we were to define the question more formally, mathematics and complexity theory are far too underdeveloped to answer such questions. Many more basic questions in computational complexity (including the famous P vs NP problem) remain unsolved.
One way to formalize the definition of AGI could be: “Can a Turing machine simulate the human brain?” However, this is still vague. The Turing machine is a formal definition of our current computing models, but a comparable formal model of the human brain is missing, precisely because we do not understand the brain well enough. While this is therefore not yet a formal definition, we will use it to make some interesting arguments about why fairly radical innovation in computing may be needed before we can achieve AGI.
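To make the Turing-machine side of this question concrete, here is a minimal toy simulator (the machine and all names are my own illustrative choices, not a standard library): a transition table maps (state, symbol) pairs to an action, and the machine halts when no rule applies. The example machine simply flips every bit on its tape.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). A missing key means the machine halts.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, new_symbol, move = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip 0 <-> 1, moving right; halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flip, "10110"))  # -> 01001
```

The point of the sketch is how little machinery the formal model needs: a finite rule table and an unbounded tape. The open question in the text is whether this kind of machinery can, even in principle, capture what a brain does.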
The halting problem
The most basic impossibility proof that every computer science student encounters concerns the halting problem. In simple terms, it shows that a computer (a Turing machine) cannot decide whether an arbitrary given program will ever terminate. More generally, problems that cannot be solved by any Turing machine are called undecidable problems.
If human brains are capable of solving even one of these problems, that would essentially mean that current computers cannot achieve AGI. We lack a formal model of the human brain, so such claims cannot be proved. Loosely speaking, however, one can argue that a human being can often reason in “creative” ways about whether a given piece of software will ever terminate.
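A concrete way to feel the difficulty: the tiny loop below halts for every input anyone has ever tried, yet whether it halts for all positive integers is the unsolved Collatz conjecture. No known method, human or mechanical, can currently certify its termination in general, which illustrates why a universal halting decider would need to settle arbitrarily hard mathematical questions.

```python
def collatz(n: int) -> int:
    """Apply the Collatz step (n/2 if even, 3n+1 if odd) until reaching 1.

    Returns the number of steps taken. Whether this loop terminates for
    every positive integer n is an open problem in mathematics.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz(27))  # halts after 111 steps
```

A halting decider fed this program would, in effect, have to resolve the Collatz conjecture; Turing's proof shows that no single algorithm can handle every program of this kind.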
Classical computers vs Quantum computers
Quantum computers offer proven efficiency advantages over classical computers in restricted settings, and conjectured exponential speedups for problems such as integer factoring. They cannot, however, solve any problem that is undecidable for classical computers: a classical Turing machine can simulate a quantum computer, just possibly with exponential slowdown. It is also debated whether the human brain relies on quantum effects for computing (e.g., the Orch-OR theory). If that turns out to be true, it might mean that the brain is exponentially more efficient than classical computers. It must be emphasized, though, that the arguments here are highly speculative.
Open-Endedness
DeepMind published an interesting paper this year on this topic, titled Open-Endedness is Essential for Artificial Superhuman Intelligence. The paper roughly formalizes an open-ended system as one that produces a sequence of artifacts that are both novel and learnable. The “learnable” part is important because it ensures the output is sensible; otherwise, generating novel output is trivial: just keep expanding the digits of π. This definition, while a bit stricter, is a good formalization of the concept. Open-endedness therefore seems like a necessary, but probably not sufficient, condition for AGI.
It’s not clear whether LLMs as they stand are open-ended under this definition, because the output they generate is unbounded only if they receive an infinite stream of sensible prompts. This definition at least lets us pose a well-defined problem: can such a system be built on a computer? Proving or disproving this seems likely to be as hard as the other open problems in computational complexity.
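The two-part criterion can be sketched as a toy filter (everything here is my own illustrative proxy, not the paper's formalism): an artifact is accepted only if it is novel relative to the history so far, and learnable, approximated crudely here by an observer whose prediction error on the artifact decreases with exposure.

```python
def is_novel(artifact, history):
    # Novelty proxy: the artifact has not been produced before.
    return artifact not in history

def is_learnable(observer_errors):
    # Learnability proxy: the observer's prediction error shrinks over
    # time, i.e. the artifact has structure that can be picked up.
    return observer_errors[-1] < observer_errors[0]

def open_ended_step(artifact, history, observer_errors):
    """Accept the artifact only if it is both novel and learnable."""
    if is_novel(artifact, history) and is_learnable(observer_errors):
        history.append(artifact)
        return True
    return False

history = []
# A structured, unseen artifact whose learning curve improves: accepted.
print(open_ended_step("abab", history, [0.9, 0.5, 0.2]))  # True
# The same artifact again is no longer novel: rejected.
print(open_ended_step("abab", history, [0.9, 0.5, 0.2]))  # False
```

The hard part the paper grapples with, and which this toy skips entirely, is defining novelty and learnability with respect to a real observer; the sketch only shows why both conditions are needed to rule out trivial generators like a π-digit expander.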
Conclusion
It’s important to point out that this post is simply a collection of “guesses” based on some hints. Nevertheless, it takes the stance that, to achieve AGI in the truest sense, we may have to move away from Turing-machine models of computing.
The rapid advancements in AI technology mark a pivotal moment in the course of humanity. It’s very hard to guess when, or whether, we will achieve AGI and ASI. However, AI models will continue to get “smarter” in specific domains; in some, they are already smarter than human beings. The social impact of AI and alignment is a critical topic in itself but has been intentionally left out of this discussion.