Stop Saying AGI
I want to preface my point by saying that I love LLMs. I am very excited by what's on the horizon and by their many useful applications. They are worth investing in and exploring. But it seems to me that many members of the tech community have adopted OpenAI's terminology of AGI without really thinking about it. There is an obvious financial benefit for the developers of LLMs in implanting this idea as a north star for everything people do with LLMs. Similar to how crypto advocates have a financial stake in crypto's adoption, every time someone puts AGI in their package name, it's free marketing for Sam and Satya. AGI is a science fiction concept that would magically solve all our problems, just like crypto was meant to replace the banks and usher in a new age of finance. I personally think we will look back on their pontificating about magic, world-changing AIs in a similar light.
That's not to say LLMs aren't going to find their killer apps, but I will not be participating in any "to the moon" behavior; I will focus on finding tangible uses. Here is why I wish people would stop saying AGI: it doesn't have a well-defined meaning, and its most common connotation doesn't fit the word and is not necessarily a good goal.

For some people, AGI means general-purpose AI: a system that can do many different tasks. It does not need to have exactly the same characteristics or capabilities as a person or another animal, and it does not need to be alive in any way. So for some people, like me, GPT-4 qualifies as a type of AGI. For many other people, AGI means something like a complete simulation of a person in digital form: it is alive and has precisely all of the characteristics and capabilities of a human. The most popular connotation now is something like the latter, but with some large multiple of intelligence automatically applied: a digital god with a 1000 IQ that can solve problems we have not even thought of, yet interacts with us as if it were just like us.

I think we do need a word for general-purpose AI; maybe just "general AI". To a large degree, GPT-4 qualifies. But people need to distinguish among the various types of cognition and the qualities that animals like humans have. High-bandwidth senses, a subjective stream of consciousness, instincts and mechanisms for survival and reproduction, and different kinds of intelligence are all things an AI does not need in order to be generally useful.

In fact, unless there is a total break with the history of computing, AI performance will continue to accelerate until it is orders of magnitude faster at outputting thoughts and actions than humans are, within a few years, certainly fewer than five. It would be deadly foolish to combine hyperspeed AI with instincts for controlling its environment, surviving, and reproducing. Digital intelligent life will likely be the next stage of evolution, but it is absolutely stupid and unnecessary to try to deliberately enact that. We can create something like the Star Trek ship's computer without trying to build Data and make him truly humanlike. That would not end well, because of the extreme difference in cognitive performance.

Artificial intelligence with superhuman cognitive capabilities has never been made before, so of course a whole constellation of new words, terminology, and connotations will grow up around it. There is also a corporate rush to stake claims on these words: 'perplexity', 'scale', 'stability', and every word like that is already the name of a company, some of them billion-dollar companies. And some old words will take on new connotations.