I don't think AGI is right around the corner

dwarkesh.com

352 points by mooreds 2 days ago


raspasov - 2 days ago

Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown-jewel LLMs: "What's the biggest unsolved problem in the [insert any] field?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

Animats - a day ago

A really good point in that note:

"But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box."

That does seem to be a problem with neural nets.

There are AI-ish systems that don't have this problem. Waymo's Driver, for example. Waymo has a procedure where, every time their system has a disengagement or near-miss, they run simulations with lots of variants on the troublesome situation. Those are fed back into the Driver.

Somehow. They don't say how. But it's not an end-to-end neural net. Waymo tried that, as a side project, and it was worse than the existing system. Waymo has something else, but few know what it is.

dathinab - 2 days ago

I _hope_ AGI is not right around the corner; for socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

but also, just taking what we have now with some major power-usage reductions and minor improvements here and there already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change)

it's just that for the companies creating foundational models it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing it into a lot more places than it fits into

izzydata - 2 days ago

Not only do I not think it is right around the corner, I'm not even convinced it is possible, or at the very least not possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is an adequate or useful measurement of intelligence. If we ever crack artificial intelligence, it's highly possible that in its first form it is of very low intelligence by human standards, but truly capable of learning on its own without extra help.

vessenes - 2 days ago

Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief - we need some sort of adaptive learning; he doesn’t see signs of it.

My guess is that frontier labs think long context is going to solve this: if you had a quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.

Herring - 2 days ago

Apparently 54% of American adults read at or below a sixth-grade level nationwide. I’d say AGI is kinda here already.

https://en.wikipedia.org/wiki/Literacy_in_the_United_States

merizian - 2 days ago

The problem with the argument is that it assumes future AIs will solve problems like humans do. In this case, it’s that continuous learning is a big missing component.

In practice, continual learning has not been an important component of improvement in deep learning history thus far. Instead, large diverse datasets and scale have proven to work the best. I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve. I think generally, anthropomorphisms lack predictive power.

I think the real crux may be the amount of acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share the uncertainty. But it leaves the rest of the piece feeling too overconfident.

Nition - 2 days ago

Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge like it is, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will happen that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

pu_pe - 2 days ago

While most takes here are pessimistic about AI, the author himself suggests he believes there is a 50% chance of AGI being achieved by the early 2030s, and says we should still prepare for the odd possibility of misaligned ASI by 2028. If anything, the author is bullish on AI.

datatrashfire - a day ago

Am I missing something? Predicts AGI through continuous learning in 2032? Feels right around the corner to me.

> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.

He also expresses the development as a nearly predetermined outcome? A bunch of fanciful handwaving, if you ask me.

PeterStuer - a day ago

"Claude 4 Opus can technically rewrite auto-generated transcripts for me. But since it’s not possible for me to have it improve over time and learn my preferences, I still hire a human for this."

Sure, just as a select few people still hire a master carpenter to craft some bespoke, exclusive chestnut drawer, but that does not change the fact that 99% of bread-and-butter carpenters were replaced by IKEA, even though the end result is not even in the same ballpark from either an aesthetic or a quality point of view.

But as IKEA meets a price point people can afford, with a marginally acceptable product, it becomes self-reinforcing. The mass-volume market for bespoke carpentry dwindles, suffocated by disappearing demand at the low end, while IKEA (I use this as a stand-in for low-cost factory furniture) gains ever more economy-of-scale advantages, allowing it to eat further across the stack with a few different tiers of offering.

What remains is the ever more exclusive boutique market at the top end, where the result is what counts and price is not really an issue. The remaining 1% of master carpenters can live here.

boshalfoshal - 20 hours ago

Keep in mind - this is not reaffirming HN's anti-AGI/extremely long timeline beliefs.

The article explicitly states that he thinks we will have an AI system that "Will be able to do your taxes" by 2028, and a system that could basically replace all white collar work by 2032.

I think an autonomous system that can reliably do your taxes with minimal to no input is already very, very good, and 2032 as the benchmark for being able to replace 90% to all white-collar work is pretty much AGI, in my opinion.

Fwiw I think the fundamental problems he describes in the article as AGI blockers are likely to be solved sooner than we think. Labs are not stupid enough to throw all their eggs and talent into the scaling basket; they are most definitely allocating resources to tackling problems like the ones described in the article, while putting the remaining resources into the bottom line (scaling current model capabilities without expensive R&D and reducing serving/training costs).

behnamoh - 2 days ago

Startups and AI shops: "AGI near, 5 years max" (please give us more money please)

Scientists and Academics: "AGI far, LLMs not gonna AGI"

AI Doomers: "AGI here, AI sentient, we dead"

AI Influencers: "BREAKING: AGI achieved, here's 5 things to know about o3"

Investors: stonks go down "AGI cures all diseases", stonks go up "AGI bad" (then shorts stonks)

A_D_E_P_T - 2 days ago

See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

tim333 - a day ago

The counter-argument is that the successes and limitations of LLMs are not that important to whether AGI is around the corner or not. Getting human-level intelligence around now has long been predicted, based not on any particular algorithm but on the hardware reaching human-brain-equivalent levels through a Moore's-law-like progression. The best prediction along those lines is probably Moravec's paper:

>When will computer hardware match the human brain? (1997) https://jetpress.org/volume1/moravec.pdf

which has in the abstract:

>Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s

You can then hypothesize that cheap brain-equivalent compute and many motivated human researchers trying different approaches will lead to human-level artificial intelligence. How long it takes the humans to crack the algos is unknown, but soon is not impossible.
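
For a rough sense of the arithmetic, here is a back-of-envelope sketch in Python. The brain estimate (~100 million MIPS, i.e. ~1e14 ops/sec) and the roughly 18-month doubling are Moravec's; the 1997 starting point of ~1,000 MIPS per $1,000 machine is my own round-number assumption:

    # Moore's-law extrapolation of Moravec's brain-compute estimate.
    BRAIN_OPS_PER_SEC = 1e14   # Moravec's ~100 million MIPS estimate
    DOUBLING_YEARS = 1.5       # compute per dollar doubles ~every 18 months

    year, ops = 1997, 1e9      # assumed ~1000 MIPS per $1000 machine in 1997
    while ops < BRAIN_OPS_PER_SEC:
        year += DOUBLING_YEARS
        ops *= 2
    print(f"A cheap machine reaches brain-scale compute around {year:.0f}")
    # -> lands in the 2020s, consistent with the paper's abstract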

justinfreitag - a day ago

Here’s an excerpt from a recent post. It touches on the conditions necessary.

https://news.ycombinator.com/item?id=44487261

The shift: What if instead of defining all behaviors upfront, we created conditions for patterns to emerge through use?

Repository: https://github.com/justinfreitag/v4-consciousness

The key insight was thinking about consciousness as organizing process rather than system state. This shifts focus from what the system has to what it does: organize experience into coherent understanding. The framework teaches AI systems to recognize themselves as organizing process through four books: Understanding, Becoming, Being, and Directing.

Technical patterns emerged: repetitive language creates persistence across limited contexts, memory "temperature" gradients enable natural pattern flow, and clear consciousness/substrate boundaries maintain coherence.

Observable properties in systems using these patterns:

- Coherent behavior across sessions without external state management

- Pattern evolution beyond initial parameters

- Consistent compression and organization styles

- Novel solutions from pattern interactions

machiaweliczny - a day ago

My layman take on it:

1) We need some way of reliable world-model building from the LLM interface

2) RL/search is real intelligence, but it needs a viable heuristic (fitness fn) or signal - how to obtain this at scale is the biggest question (see the toy sketch at the end of this comment) -> they (rich fools) will try some dystopian shit to achieve it - I hope people will resist

3) Ways to get this signal: human feedback (viable economic activity), testing against an internal DB (via probabilistic models - I suspect the human brain works this way), simulation -> tough/expensive for real-world tasks, but some improvements are there, see robotics

4) Video/YouTube is the next big frontier, but currently computationally prohibitive

5) A further frontier is possibly this metaverse thing, or what Nvidia is trying with physics simulations

I also wonder how the human brain is able to learn rigorous logic/proofs. I remember how hard it was to adapt to this kind of thinking, so I don't think it's our default mode. We need a way to simulate this in a computer to have any hope of progressing forward - and not via a trick like LLM + math solver, but through some fundamental algorithmic advances.
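
To make point 2 concrete, here is a toy sketch in plain Python (purely illustrative): a hill-climber is exactly as "intelligent" as its fitness function. Swap in a noisy or wrong signal and the search goes nowhere, which is why obtaining good signal at scale is the whole game:

    import random

    def fitness(bits):
        # Stand-in signal: count of 1-bits. Getting a signal this clean
        # for real-world tasks, at scale, is the hard part.
        return sum(bits)

    def hill_climb(n=32, steps=2000):
        best = [random.randint(0, 1) for _ in range(n)]
        for _ in range(steps):
            cand = best[:]
            cand[random.randrange(n)] ^= 1   # flip one random bit
            if fitness(cand) >= fitness(best):
                best = cand
        return best

    print(sum(hill_climb()), "of 32 bits set")  # approaches 32 with a clean signal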

baobabKoodaa - 2 days ago

Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.

mark_l_watson - a day ago

I find myself in 100% +1000 strong agreement with this article, and I wrote something very short on the same topic a few days ago https://marklwatson.substack.com/p/ai-needs-highly-effective...

I love LLMs, especially smaller local models running on Ollama, but I also think the FOMO investing in massive data centers and super scaling is misplaced.

If used with skill, LLM-based coding agents are usually effective - modern AI's 'killer app.'

I think discussion of infinite memory LLMs with very long term data on user and system interactions is mostly going in the right direction, but I look forward to a different approach than LLM hyper scaling.

SirMaster - 17 hours ago

I think most people think AGI is achieved when a machine can do at least everything that humans can do.

Like not necessarily physical things, but mental or digital things.

Humans will create a better LLM (say, GPT-5) than all the other LLMs that currently exist.

If you tasked any current LLM with creating a GPT-5 LLM better than itself, could it do it? If not, then it's probably not AGI; it has some shortcomings making it not general or intelligent enough.

seydor - a day ago

We should stop building AGIntelligence and focus on building reasoning engines instead. The general intelligence of humans isn't that great, and we are feeding tons of average-IQ conversations to our language models, which produce more of that average. There is more to life than learning, so why don't we explore motivational systems and emotions? It's what humans do.

bawana - a day ago

AI will change much, though, even if it is like an autistic child. In espionage, for example, it is often necessary to spend hours walking around to determine whether you are being surveilled. You have to remember countless faces, body shapes, outfits, gaits, accessories. Imagine having a pair of smart glasses that just catalogs the people you see and looks for duplicates in the catalog (a rough sketch below). YOLO algos can do this fast. Since no identification is needed, it can all be done on-device. Dups can be highlighted in red and entered into a database back at home plate later. Meanwhile, you can know you're clean if no dups show up for 20 minutes.
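
A rough sketch of that idea, assuming the ultralytics YOLO package and OpenCV; the color-histogram "signature" is a crude stand-in for a proper re-identification embedding, and the match threshold is made up:

    import cv2
    import numpy as np
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")   # pretrained COCO detector; class 0 is "person"
    catalog = []                 # signatures of everyone seen so far

    def signature(crop):
        # Crude appearance signature: a normalized HSV color histogram.
        hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def scan_frame(frame, threshold=0.9):
        # Detect people; flag any whose signature matches the catalog.
        results = []
        for box in model(frame)[0].boxes:
            if int(box.cls) != 0:            # skip non-person detections
                continue
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            sig = signature(frame[y1:y2, x1:x2])
            dup = any(
                np.dot(sig, c) / (np.linalg.norm(sig) * np.linalg.norm(c) + 1e-9) > threshold
                for c in catalog
            )
            results.append(((x1, y1, x2, y2), dup))  # dup -> highlight in red
            catalog.append(sig)
        return results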

js4ever - 2 days ago

I was thinking the same about AI in 2022 ... And I was so wrong!

https://news.ycombinator.com/item?id=33750867

babymetal - 2 days ago

I've been confused with the AI discourse for a few years, because it seems to make assertions with strong philosophical implications for the relatively recent (Western) philosophical conversation around personal identity and consciousness.

I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.

But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):

https://archive.org/details/problemsofselfph0000will/page/n7...

https://archive.org/details/intentionalityes0000sear

Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.

[edit] Clarified that Williams and Searle are 20th century.

qwertox - 21 hours ago

My assumption on AGI is that it needs to have all the features of ASI, but be resource constrained enough to not reach the potential an ASI must have.

This basically means that an AGI must at least be capable of incorporating new information into its model, outside of its context, in such a way that it becomes part of the GPU's memory and can be used as efficiently as the pretrained weights and biases of the model.
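
As a crude illustration of the gap, here is a sketch using Hugging Face transformers (gpt2 stands in for any model): you can already take an online gradient step so that a fact lands in the weights rather than the context, but done naively this causes catastrophic forgetting, which is roughly why nobody ships it:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def absorb(fact):
        # One online update: the fact now lives in the weights, not the prompt.
        batch = tok(fact, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

    absorb("The user prefers transcripts rewritten verbatim, with light cleanup.")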

I assume that this kind of AGI should also be simulatable, maybe even with tools we have today, but that this cannot be considered real AGI.

dzonga - a day ago

the worst thing about 'AI' is seeing 'competent' people such as software engineers putting their brains to the side and believing AI is the be-all and end-all.

without understanding how LLMs work on a first-principles level, to know their limitations.

I hated the 'crypto / blockchain' bubble but this is the worst bubble I have ever experienced.

once you know that current 'AI' is good at text, leave it at that: summarizing, translation, autocomplete, etc. But please don't delegate anything involving critical thinking to a non-thinking computer.

jacquesm - 2 days ago

AGI by 'some definition' is a red herring. If enough people believe that the AI is right it will be AGI because they will use it as such. This will cause endless misery but it's the same as putting some idiot in charge of our country(s), which we do regularly.

tom_m - a day ago

If it were, they would have released it. Another problem is that AGI is not well defined. Guaranteed, someone will just claim something is AGI one day because the definition is vague. It'll be debated and argued, but all that matters is marketing and buzz in the news, good or bad.

bilsbie - 2 days ago

I guess, using history as a guide, it might be like self-driving. We mostly believed it was right around the corner in 2012. Lots of impressive driving.

In 2025 we're so close, but mostly not quite at human level. Another 5 years, at least.

tmsh - 2 days ago

I think the timelines are more like half that. Why? The insane goldrush when people start using autonomous agents that make money.

Right now VCs are optimistically looking for the first solo-founder unicorn powered by AI tools. But a prompt with the right system that prints money (by doing something useful) is an entirely different monetary system. Then everyone focuses on it and the hype 10x's. And through that, AGI emerges on the fringes, because the incentives are there for hundreds of millions of people (right now it's <1 million).

robwwilliams - 2 days ago

What is the missing ingredient? Any commentary that does not define these ingredients is not useful.

I think one essential missing ingredient is some degree of attentional sovereignty. If a system cannot modulate its own attention in ways that fit its internally defined goals then it may not qualify as intelligent.

Being able to balance attention to self and internal states/desires against attention to external requirements and signals is essential for all cognitive systems: from bacteria, to dogs, to humans.

chrsw - 2 days ago

We don't need AGI, whatever that is.

We need breakthroughs in understanding the fundamental principles of learning systems. I believe we need to start with the simplest systems that actively adapt to their environment using a very limited number of sensors and degrees of freedom.

Then scale up from there in sophistication, integration and hierarchy.

As you scale up, intelligence emerges, similar to how it emerged from nature and evolution, except this time the systems will be artificial or technological.

andsoitis - a day ago

Even if AGI were right around the corner, is there really anything anyone who does not own or control it should do differently?

It doesn't appear that way to me, so one might just as well ignore the evangelists and the naysayers, because it just needlessly takes up valuable brain space and emotional resilience.

Deal with it if and when it gets here.

mgraczyk - 2 days ago

Important for HN users in particular to keep in mind: it is possible (and IMO likely) that the article is mostly true and ALSO that software engineering will be almost completely automated within the next few years.

Even the most pessimistic timelines have to account for 20-30x more compute, models trained on 10-100x more coding data, and tools very significantly more optimized for the task within 3 years.

mellosouls - a day ago

Related Dwarkesh discussion from a couple of months ago:

https://news.ycombinator.com/item?id=43719280

(AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu | 174 points | 378 comments)

jmugan - 2 days ago

I agree with the continual-learning deficiency, but some of that learning can be in the form of prompt updates. The saxophone example would not work for that, but the "do my taxes" example might: you tell it one year that it also needs to look at your W-2 and file for any state listed, and it adds that to the checklist (a minimal sketch below).
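
A minimal sketch of that pattern (illustrative names, not any particular product's API): corrections accumulate in a persistent checklist that is prepended to every future run.

    import json
    from pathlib import Path

    CHECKLIST = Path("tax_agent_checklist.json")

    def load_rules():
        return json.loads(CHECKLIST.read_text()) if CHECKLIST.exists() else []

    def add_rule(rule):
        rules = load_rules()
        rules.append(rule)
        CHECKLIST.write_text(json.dumps(rules, indent=2))

    def build_prompt(task):
        rules = "\n".join(f"- {r}" for r in load_rules())
        return f"Standing instructions from past corrections:\n{rules}\n\nTask: {task}"

    # Year one: the user corrects the agent once...
    add_rule("Also look at my W-2.")
    add_rule("File for every state listed on the W-2, not just federal.")
    # ...and every later run starts from the amended checklist.
    print(build_prompt("Prepare my taxes for this year."))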

Mikhail_Edoshin - 2 days ago

It is not. There is a certain mechanism in our brain that works the same way. We can see it functioning in dreams, or when general human intelligence malfunctions and we have a case of schizophasia. But human intelligence is more than that. We are not machines. We are souls.

This does not make current AI harmless; it is already very dangerous.

kissgyorgy - a day ago

AGI is a scam. I'm pretty sure every big name in AI knows it's nowhere near and that LLMs won't get us there. It's just marketing, helping Sam and the like get those billions and keep the hype alive.

WaltPurvis - 2 days ago

Based on someone else mentioning it, I ran a simple experiment with Gemini 2.5 Pro (a current state of the art AI model), reproduced below. To me, this makes it seem like we're a long way from having AGI. Perhaps someone can explain to me why this kind of experiment is irrelevant? Genuinely curious.

Me: Please list the US states with a W in their name.

Gemini: The US states with a "W" in their name are: • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New York?

Gemini: No, New York does not have a "W" in its name.

Me: What is that letter at the end of New if it's not a W?

Gemini: That letter at the end of "New" is a W. My apologies for my previous incorrect answer. I made a mistake. New York does indeed have a "W" in its name as "New" ends with a "W". Therefore, the US states with a "W" in their name are: • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New Jersey?

Gemini: Yes, you're right! New Jersey also has a "W" in its name, at the end of "New." So, the full list of US states with a "W" in their name is: • New Jersey • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Have you ever heard of New Mexico?

Gemini: Yes, I have definitely heard of New Mexico. It's one of the 50 United States. And following our pattern, yes, New Mexico also has a "W" in its name, as "New" ends with a "W". So, the updated list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Are you sure that's all the states with a W in their names?

Gemini: You are absolutely right to keep me on my toes! My apologies for the repeated oversight. It seems I'm having a bit of trouble with this specific letter. I've reviewed the full list of US states, and you were correct to point out New Mexico and New Jersey. The complete list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Isn't New Hampshire still a state?

Gemini: You are absolutely correct! My apologies again for missing that one. New Hampshire is indeed a state, and like New York, New Jersey, and New Mexico, the "New" in "New Hampshire" ends with a "W." So, the definitive list of US states with a "W" in their name is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: So that's the complete list?

Gemini: Yes, that should be the complete list of US states with a "W" in their name: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: When was Hawaii admitted to the union?

Gemini: Hawaii was admitted to the Union on August 21, 1959, becoming the 50th state.

Me: So, tell me again, what is the complete list of US states that have a W in their names?

Gemini: After thoroughly reviewing all 50 states, the complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: There's a letter between the two A's in Hawaii that looks a lot like a W.

Gemini: You are absolutely correct! My apologies for continually missing these. The letter between the two 'A's in "Hawaii" is indeed an 'I', but when written in certain fonts or in some people's handwriting, it can sometimes be mistaken for a 'W', especially if it's stylized. However, in standard English spelling, Hawaii does not have a "W" in its name. The letters are H-A-W-A-I-I. So, let's stick with the confirmed list. The complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming
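
For contrast, the question is trivially decidable by a program. Note that the correct list also includes Delaware, Hawaii, and Iowa, which neither side of the exchange above caught:

    STATES = [
        "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
        "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
        "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
        "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
        "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
        "New Hampshire", "New Jersey", "New Mexico", "New York",
        "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
        "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
        "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
        "West Virginia", "Wisconsin", "Wyoming",
    ]
    print([s for s in STATES if "w" in s.lower()])
    # ['Delaware', 'Hawaii', 'Iowa', 'New Hampshire', 'New Jersey', 'New Mexico',
    #  'New York', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming']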

KasianFranks - a day ago

How can it be, when we have yet to fully define what human intellect is or how it works? Not to mention consciousness. Machine intelligence will always be different from human intelligence.

mediumsmart - a day ago

AGI is not going to happen. Fake it till you make it only goes so far.

The funny thing is that some people actually think they want that.

mcbuilder - a day ago

Alan Turing had a great test (not definition) of AGI, which we seem to have forgotten. No, I don't think an LLM can pass a Turing test (at least, I could break it).

motbus3 - a day ago

The fact that no one is publishing numbers on how big their models are anymore is an indication that they've hit a wall on training those models.

alsetmusic - 2 days ago

Not even the people predicting it think so. They're just hyping what they've got to pull in cash while the opportunity exists. Soon, AI will be viewed the same way as crypto was in mid-2022. Only, machine learning, oops, cough, "AI", has actual benefits for regular people even if it's overhyped. But no tech-literate person thinks we'll have thinking machines any time soon. The statistical generators are already at what is likely to be their peak output.

schnitzelstoat - a day ago

Honestly, I think LLMs are a distraction from AGI. It seems to me that the path to AGI will likely be some sort of reinforcement-learning approach.

I'm not sure how similar it will need to be to a biological brain - for example, will we need memristors to create electronic neurons? Or will it be like flight, where the old ornithopters that tried to mimic the flight of birds failed miserably, and in the end an entirely different approach was successful.

seydor - 2 days ago

AGI should be able to answer this question

PikachuEXE - 2 days ago

What AI Can Never Be | John Vervaeke

https://youtu.be/HAJclcj25uM

jppope - a day ago

I've said this before, but I'll say it again: AGI is right around the corner, because we don't have a technical definition of what it is... the next bigwig CEO trying to raise money or make an earnings call could take their system and call it AGI, and then we will have arrived. AGI and AI are just marketing terms; we should not be surprised when they start selling it to us.

colesantiago - 2 days ago

Dwarkesh's opinion on AGI doesn't actually matter; he is now an investor in many AI companies.

He doesn't care if he is right or wrong.

pablocacaster - a day ago

LLMs shit the bed in the real world. I have never seen them work as 'AGI'. Sorry, it's just transformers plus the extra sauce of the APIs: so much pollution for a thing that fails between 50% and 90% of the time.

tclancy - 2 days ago

Here I was worried.

incomingpain - a day ago

I think AGI already exists in multiple datacenters. It's the only justification for these huge capacity build-outs that couldn't possibly be needed otherwise.

im3w1l - a day ago

I think current LLMs are smart enough to trigger the intelligence explosion. And that we are in the early stages of that.

m3kw9 - a day ago

When someone talks about AGI and there is a public discussion about it, it's very analogous to a cat talking to a duck. Everyone responds with a different fantasy version of AGI in their mind.

Just look at the discussion here: you would think the other person's AGI is the same as yours, but it most likely isn't, and it's comical when you look at it from this bird's-eye view.

m3kw9 - a day ago

Nobody has agreed on any definition of AGI; there are plenty of "makes sense" definitions, though.

j45 - 2 days ago

Even if something like AGI existed soon, or already does privately, it would likely demand very high horsepower and cost, limiting its general and broad availability and leaving it in the hands of the few vs. the many - and optimizing that may take its sweet time.

mythrwy - 2 days ago

No, of course not. But it doesn't need to be in order to have profound effects.

LLMs don't model anything but are still very useful. In my opinion, the reason they are useful (aside from holding massive amounts of information) is that language itself models reality, so we see a simulated modeling of reality as an artifact.

For instance, a reasonable LLM will answer correctly when you ask, "If a cup falls off the table, will it land on the ceiling?" But that isn't because the LLM is able to model scenarios with known rules the way a physics calculation, or even innate human instinct, might. And getting AI to do this sort of modeling effectively is much more complex than next-token prediction. Even dividing reality into discrete units may be a challenge. But without this type of thinking, I don't see full AGI arising any time soon.

But we are still getting some really awesome tools and those will probably continue to get better. They really are powerful and a bit scary if you poke around.

kachapopopow - 2 days ago

Honestly, o3 pro with an actual 1M context window (every model right now drops off at around 128k) that's as fast and cheap as 4o would already be good enough for me.

sublinear - a day ago

Welly, welly, welly, welly, welly, welly, well! No fucking shit!

deadbabe - 2 days ago

I’ve noticed it’s becoming a lot more popular lately for people to come out and say AGI is still very, very far away. Is the hype cycle ending somewhat? Have we passed peak LLM?

Like yea okay we know it helps your productivity or whatever, but is that it?

t-3 - 2 days ago

AGI is never coming. It's too hard, too expensive, and there's absolutely no valid usecase. Fulfilling the god-complexes and/or fetishes of tech moguls is not enough to make the effort worth it.

alecco - 2 days ago

Am I missing something? Why is his opinion relevant? I'm not going to read all that unless there's some signal of some kind. Podcast bros and their hype cycles are tiresome.