There is a 50-50 chance AI will get more intelligent than humans (twitter.com)
Yes, 50-50 in the old joke sense of "it will either happen or not happen".
For me, I'm not worried about artificial beings that are smarter than us and can reason about things better than we do.
I am far, far, far more worried about what a corporation would do with it. Look at the TikTok algorithm and how good it is at hijacking our thought processes. Do I need to fear it in and of itself? No. I fear the company using it to drive behavior a certain way because it is profitable.
I think Westworld had a great version of this (season 3). They create a superintelligent computer, and they could just as easily have asked it to solve climate change as to manipulate stocks. Ultimately, the machine wasn't after anything, just working on a directive. And when asked to shut down, it did.
That's the point here: it's all about who gets to direct it, and if it's Elon Musk, I fear for our future.
Astonishing! I assumed it was only 49.5-50.5 max.
I can see a situation where multiple AI systems will be connected to simulate human intelligence, so you can ask it any question and it will have an answer. It will also be able to outline a whole project into a plan that people can follow. In that sense, it will be smarter; that's a 100% chance. It's a way to easily access the world's knowledge. Search engines are on their way out, and Google search will eventually be a relic.
But will it be AGI? No. That's a close-to-0% chance in the next 20 years; not 0, but close.
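As a rough illustration only, a toy router over hypothetical specialist systems; the function names and keyword rules below are placeholders I made up, not anyone's actual design:

    # Purely hypothetical: route questions to specialist "expert" systems.
    # ask_math / ask_code / ask_general stand in for whatever gets wired up.
    def ask_math(q): return f"[math system] {q}"
    def ask_code(q): return f"[code system] {q}"
    def ask_general(q): return f"[general system] {q}"

    ROUTES = {"prove": ask_math, "integral": ask_math,
              "code": ask_code, "function": ask_code}

    def answer(question):
        # Trivial keyword routing; a real system might use a classifier model.
        for keyword, handler in ROUTES.items():
            if keyword in question.lower():
                return handler(question)
        return ask_general(question)

    print(answer("Write code to reverse a string"))  # -> [code system] ...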
Could it be possible that nothing is truly conscious? Maybe we're all cogs in the wheel of a giant chemical reaction. In the future, someone may build a robot with a ChatGPT brain, and maybe that will be what passes for AGI. I haven't found any human endeavor that adequately explains consciousness. Consciousness and intelligence are probably two different things. AI is not conscious, and its intelligence varies, sometimes very stupid and sometimes very smart. It's a system inside of another system.
Humanity is possibly racing to its own destruction while making a buck along the way.
Humanity is possibly about to breed home appliances that pathfind our way to greater knowledge than we could ever have gotten to on our own.
AI might hit a wall and not progress for many years. Only time will tell...
The denial is strong.
Find anyone, anywhere, who can answer as broad a range of questions, as quickly, and as well as ChatGPT does.
No human can match it on even one of those three dimensions of performance.
AI won several years ago.
The question is about "intelligence" and your argument is regarding memory and access to information.
Are there standardized tests proxying intelligence in which GPT4o underperforms relative to the median human?
GPT4o can't pass a driving license test.
The written portion or the driving portion? I'm actually surprised that you can't stuff whichever state's manual into the AI's context and have it out-perform the median human. Do you have a citation for that?
If you mean the driving portion, that falls somewhat outside the scope of an intelligence test since it relies on actually moving heavy machines in the real world.
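A minimal sketch of that context-stuffing idea, assuming the OpenAI Python client; the model name, manual path, and sample question are all placeholders, and a real manual might need chunking to fit the context window:

    # Sketch only: load a (hypothetical) state driving manual and ask a
    # written-test-style question with the manual stuffed into the context.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("state_driving_manual.txt") as f:  # placeholder path
        manual = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only this driving manual:\n\n" + manual},
            {"role": "user",
             "content": "What does a flashing yellow traffic light mean?"},
        ],
    )
    print(response.choices[0].message.content)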
> The question is about "intelligence" and your argument is regarding memory and access to information.
I think my answer concerns intelligence. Intelligence is the ability to solve problems.
That's YOUR definition of intelligence anyway. LLMs fundamentally aren't intelligent because they cannot solve novel problems. It's literally just complex autocomplete that interpolates data from problems already solved by humans. It has no capacity to actually reason about anything.
> That's YOUR definition of intelligence anyway.
Please tell me your definition of intelligence.
On a related note, I keep hearing that LLMs will never do X. As a teacher, I doubt that very many, if any, of my students can do X. LLMs may not be the perfect/God-like problem solver, but they are much smarter than anyone I work with at the community college.
You're at community college, what do you expect? Geniuses? In my basic math undergrad I encountered the limits of GPT daily, with a complete inability to solve anything above maybe freshman- or sophomore-level proofs. Forget anything complex that hasn't been answered extensively on Stack Overflow already.
This is a good point because “solving problems” and “generating convincing-sounding text” are the same thing. The fact that language models can answer heretofore unsolvable questions like “how many flimbops does it take to fill a fuzzlebugle to the halfway point?” means that Sam Altman has stolen fire from the gods as a modern day Prometheus
I asked my parrot the time complexity of quicksort, and he said "squawk N-squared!" He is clearly reasoning and I would like $500 million, please.
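For the record, the parrot is quoting quicksort's worst case. A toy sketch with a naive first-element pivot shows where the N-squared comes from; the comparison counting here is my own illustration, not anything standard:

    # Naive quicksort, first element as pivot; count pivot comparisons to show
    # the worst case really is quadratic (already-sorted input), while random
    # input lands near N log N on average.
    import random

    def quicksort(xs, counter):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        counter[0] += len(rest)  # every element gets compared to the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    n = 500
    for label, data in [("random", random.sample(range(n), n)),
                        ("sorted", list(range(n)))]:
        counter = [0]
        quicksort(data, counter)
        print(label, counter[0])  # sorted: n*(n-1)/2 = 124750 comparisons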
This leaves out the key bit: "in the next 20 years".
I think that if we leave out any deadlines then the probability is 100%.
Surely the question is when, not if?
Why isn't it 100%?
The remainder of the quote is “in the next 20 years”.
Still doesn't make sense. So is it 100% after 20 years, or does that mean that if it doesn't happen in 20 years, it's never going to happen?
In any case, these predictions have been made for 50 years, and it's always 20 years away.
Insightful.