I'm Sorry to Burst Your Bubble: You Are Being Fooled About AI, and You Will Soon Feel Really Stupid


Update: I am on a mission against misinformation and pseudoscience. Therefore, I wrote the following article:

AI Learned to Write Like You. Detection Is Mathematically Impossible.

This is a small part of a larger research project on how LLMs scale and what happens with their distributions when models double in size. I also wrote a paper where I explain and prove, mathematically, why it is impossible for AI detectors to work.

In this article, I use a linguistic construction known as “contrastive negation followed by a corrective assertion.” I do it because it is how I talk. But mostly, because I can.

I am here on Substack, contributing my two cents to undo at least part of the confusion the “AI bros” continuously produce by neglecting institutional knowledge.

I know this is not the most apologetic article you will read from me. But I want to make something crystal clear upfront: I have enormous respect for every person I’m about to discuss. What I have zero respect for is the fog machine that has replaced honest technical discourse about artificial intelligence.

Now let’s get into it.

Yann LeCun is the voice of sobriety about AI in today’s world. A genuine expert. One of the vanishingly few public figures who actually understands what AI is and how it works, and whose motivations don’t appear tempered by conflicting agendas. He seems firmly rooted in objective reality and technical merit.

Which is exactly why his voice is drowned out by louder, more celebrated ones.

If there is a voice that “they” want to silence, it’s LeCun’s. Not with censorship but with irrelevance. His perspective destroys the hype. It dismantles the fear-mongering. It makes the venture capital narrative collapse under its own weight and makes everyone involved feel (and look) utterly stupid.

Why does his voice kill the hype? Because LeCun has been saying, clearly and repeatedly, what the industry doesn't want to hear: Large Language Models are a dead end on the path to human-level intelligence. They are, at their core, text-prediction engines, extraordinarily good at retrieving, recombining, and generating language, but fundamentally incapable of understanding the world they talk about. They lack common sense, causal reasoning, and any model of physical reality. A child learns physics by dropping a spoon from a high chair a thousand times; an LLM learns "physics" by reading sentences about gravity. LeCun argues that no amount of scaling, that is, bigger models, more data, more compute, will ever bridge that gap. To get anywhere near genuine intelligence, AI must go far beyond text and learn from high-bandwidth sensory experience: video, spatial data, interaction with the physical world. That is not a minor technical objection. It is a wholesale rejection of the trajectory that virtually every major AI company is spending billions to pursue.

So what did Meta do? They poured $14.3 billion into Scale AI, brought in its founder Alexandr Wang, and built a new “Superintelligence” division around him. Then they restructured FAIR, the research lab LeCun had founded and led for over a decade, and placed it under Wang’s chain of command. The man who had been Meta’s Chief AI Scientist for seven years was now reporting to someone he would later publicly call “inexperienced.”

LeCun didn’t wait around. In November 2025, he announced his departure to launch Advanced Machine Intelligence Labs, a venture focused on the very “world model” approach that Meta had been starving of resources in favor of the LLM bet he considers a dead end.

The message was clear: Meta chose the hype trajectory. LeCun chose the science. And Meta made sure the science reported to the hype before he walked out the door.

Sam Altman is a fundraiser. An extraordinarily successful one. He displays a level of confidence that deserves to be studied in business schools worldwide. He has pushed OpenAI’s valuation to record after record without a remotely profitable business and he does it unapologetically.

There is a lot to acknowledge and respect about what he has accomplished.

But make no mistake: he is the last person you should listen to if you want to learn anything real about AI. His mission is to raise money for OpenAI. Full stop. If what you’re after is a grounded, technical understanding of where AI actually stands today, what it can do, what it can’t, and why, there is nothing there for you to seriously consider.

When Sam Altman posts something on X like, “I was using my own product and it was so incredible that I felt useless,” isn’t the play obvious? Isn’t that just... marketing? Since when do we take a founder’s breathless endorsement of their own product as a technical assessment?

And yet millions do. Every single time.

Dario Amodei is primarily an AI researcher, a physicist, and an entrepreneur, in that order. He understood early that a full-blown probabilistic system chasing artificial general intelligence was a hopeless trajectory on its own. So he proposed structure. Standards. Protocols. His company, Anthropic, developed what they call Skills: bounded, verifiable capabilities rather than mystical leaps toward machine consciousness.

He knows what he is talking about.

Any statement from him that sounds like it flirts with hype? Consider the source of the pressure. When your competitor is making weekly proclamations about the imminent arrival of superintelligence, the public expects you to match the energy or be seen as falling behind. That’s not a technical problem. That’s a communications problem. And it’s one the entire industry now shares.

Before we go further, let’s pay our respects to the people who actually created this field, not as a Silicon Valley narrative, but as a scientific discipline:

As I wrote before, John McCarthy coined the term “Artificial Intelligence” in 1955 for a workshop that would take place at Dartmouth in 1956, the seminal event where AI was formally established as a research field. Alan Turing proposed the Turing Test in 1950 to measure machine intelligence. Newell and Simon built the Logic Theorist. Minsky advanced neural networks. Shannon gave us information theory.

These people were not selling anything. They were asking questions.

That distinction matters more now than ever.

And then there is Geoffrey Hinton.

A true expert. A true researcher. A man who collects awards the way others collect conference badges. How about receiving the Nobel Prize in Physics in 2024? His contributions to deep learning are monumental to say the least. The man is called the Godfather of AI for good reasons.

But when Hinton speaks mysteriously about AI, issuing existential warnings in an ominous tone and making vague suggestions of emergent consciousness (one of his most attacked takes), one can't help but ask: what happened? This is the man whose life's work is the architecture. He helped build the backpropagation techniques. He advanced gradient descent methods. He knows, better than almost anyone alive, that what sits inside these systems is matrix multiplication, weight adjustment, and statistical optimization, not a mind, not a will, not an emergent being. His own body of work is the single most powerful argument against his most alarming claims.

So how can one reconcile deep admiration for Hinton with strong opposition to his current views on modern AI?

I find comfort in this thought: Every human genius is more human than genius.

That’s not an insult. It’s the most compassionate observation anyone can make. Brilliance does not make someone immune to narrative capture, to social pressure, to the seduction of relevance in a world that suddenly cares very much about your life’s work.

Here is what I notice: Hinton’s most recent narrative helps the hype. It adds gravitas to the fear-mongering. It gives investors and fundraiser-founders the frisson they need to keep the machine running. “Even the godfather of AI is scared!” Come on! What a pitch deck slide that makes.

Marketing is most of what is done and said about AI nowadays. Even the CEO of an AI company, an Elon Musk, will pay the tolls of AI hype here and there.

AI feels magical. It isn’t.

It is built from spectacularly ordinary pieces. At the foundation, you have basic math: addition, multiplication, averages, probability estimates. On top of that, you stack statistics, linear algebra, and massive grids of numbers called matrices. Then come algorithms that aren’t mysterious in the slightest: they simply adjust those numbers by small increments whenever the computer guesses wrong.

That is all “learning” means.

You feed in enormous volumes of data like sentences, images, recordings, and the computer repeats the same routine billions of times, slowly tuning itself. Add hardware that can perform these tiny operations at extraordinary speed, and suddenly you get a model that can spot patterns and generate predictions. Wrap that model in software that lets you chat with it or assign it tasks, and you get what most people now call AI.
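That tuning routine can be sketched in a few lines. This is a toy illustration under my own assumptions, not any real model’s code: a single number playing the role of billions of weights, nudged slightly whenever its guess is wrong.

```python
# Toy illustration of what "learning" means: nudge a number a little
# whenever the guess is wrong, and repeat many times.
# Hypothetical example; real models do this with billions of weights.

def train(data, steps=1000, lr=0.01):
    w = 0.0  # the single "weight" we will tune
    for _ in range(steps):
        for x, target in data:
            guess = w * x            # the model's prediction
            error = guess - target   # how wrong the guess was
            w -= lr * error * x      # adjust w by a small increment
    return w

# Learn the rule y = 3x from a handful of examples.
examples = [(1, 3), (2, 6), (3, 9)]
weight = train(examples)
print(round(weight, 2))  # converges close to 3.0
```

Nothing in that loop resembles understanding. Repeat it at the scale of trillions of operations and you get the behavior people call intelligence.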

And AI agents? They are even simpler than people imagine. An agent is just a model paired with a to-do list and some tools. It reads a goal, breaks it into steps, decides what to do first, uses whatever tools are available such as search, email, code execution, and checks its own work. There is no inner mind. No mystery. Just structured workflows, conditional logic, and repeated loops.
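The agent loop above can be sketched just as briefly. The “model” and tool names here are stand-ins I invented for illustration; the point is the structure: plan, pick a step, call a tool, record the result, repeat.

```python
# Minimal sketch of an "agent": a model paired with a to-do list and tools.
# fake_model and the tool names are hypothetical stand-ins, chosen only
# to show the loop: plan, decide, act, check, repeat.

def fake_model(goal):
    # Stand-in for an LLM call: "plans" by splitting the goal into steps.
    return [step.strip() for step in goal.split(",")]

TOOLS = {
    "search": lambda q: f"results for '{q}'",
    "email":  lambda msg: f"sent: '{msg}'",
}

def run_agent(goal):
    todo = fake_model(goal)            # 1. break the goal into steps
    log = []
    while todo:                        # 2. loop until the list is empty
        step = todo.pop(0)             # 3. decide what to do first
        tool = "search" if "find" in step else "email"
        log.append(TOOLS[tool](step))  # 4. use a tool, record the work
    return log

print(run_agent("find flight prices, notify the team"))
```

Conditional logic and a while loop. That is the “autonomy.”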

AI and AI agents are not magic. They are not secret brains. They are layers of simple ingredients running at massive scale: math, data, repetition, compute, and a bit of organized decision-making.

I have used AI many times as a brainstorming mechanism, a sounding board for early-stage ideas.

And here is what I’ve found: every time I had an idea that was genuinely outside the box, AI discouraged me from pursuing it. When I persisted, it didn’t just push back, it practically begged me to stop. To not continue. To not insist.

Think about what that means. The tool that everyone tells you is “creative” and “revolutionary” is architecturally incapable of doing anything other than averaging what already exists. It is a statistical summary of the past masquerading as a window into the future.

When it tells you your unconventional idea is bad, it’s not evaluating your idea. It’s telling you that your idea doesn’t resemble the data it was trained on.

That’s not intelligence. That’s pattern-matching with a confidence problem.

There is a meme in Brazil that goes:

“How do you manage to do so much in a single day?”

“I do everything very poorly.”

That is AI in one exchange.

AI does incredible things, very fast, in the sloppiest possible way. It is the most productive intern you’ve ever had, one who never sleeps, never complains, and often fails to check whether the work is actually correct.

And we are building entire industries on top of this.

Take a deep breath.

Not because AI is worthless. Far from that, actually. It is a powerful, useful, and genuinely impressive engineering achievement. But it is an engineering achievement, not a miracle. Not a mind. Not a harbinger of human obsolescence.

The people selling you the apocalypse and the utopia are often the same people selling you the product. The fear and the wonder serve the same function: they keep you paying attention, and they keep the money flowing.

The math hasn’t changed. The architecture hasn’t achieved consciousness. The models are not “thinking.” They are executing matrix multiplications at a scale that makes the output feel like thought.

And if today there is a broad misconception about what AI is and does, that's not a testament to AI's power. It's a testament to how aggressively and relentlessly the hype has been orchestrated. So breathe. And then look closer.

If you enjoyed this piece, consider subscribing. I promise to keep bursting bubbles in the most informative and respectful way, of course.
