Intellectual AI Bubble.

There are two kinds of financial bubbles. Inflection bubbles are useful: they allow an unrestricted flow of capital to fund progress, like the dot-com bubble, which built the infrastructure underlying the current internet. Inflection bubbles always grow on technological innovation. Greed bubbles, like the subprime mortgage bubble, are not fueled by ideas or progress, but by the desire to make a quick buck. What the two have in common is that in both, people lose money and companies go bankrupt. Howard Marks, in his Oaktree memo [0], goes into great detail describing AI from the financial-bubble perspective. I would highly recommend reading it if you don't want to go broke.

If you, however, are not an investor but a white-collar worker, there is another bubble you should care about: the intellectual AI bubble. There is no doubt that AI is a real thing and not just fueled by greed. I use it every day to prototype, validate my ideas, and create one-off tools. For a non-native speaker, it's tremendously useful for checking grammar, spelling, and sentence order. But even though it's the good kind of bubble, that doesn't mean you will benefit from it. Just as it doesn't make sense to put all of your money into Nvidia stock, it doesn't make sense to put all your intellect and time into prompts.

Don’t do things just for the AI’s sake.

Don't be a leader who formulates goals in terms of LLM adoption. Language models are not a goal; they are merely tools. Just as you wouldn't force your organization to use Vim instead of Emacs, there is no point in forcing it to use large language models instead of time-series models. Rewriting your code base with LLMs for the rewrite's sake is just as good an idea as rewriting Netscape was in the nineties. Your competition will use the AI productivity gains to build a product that puts you out of business before your rewrite is completed. As trivial as it may sound, it's now more important than ever to put your customers first. Prototyping new ideas is faster than ever. A good leader will understand that and apply their judgment to select what matters to their customers.

Don’t gamble your future.

Writing, programming, or generating images with AI is like a slot machine. You pull the handle and wait, at the edge of your seat, for the results. The better you can guide the model, the bigger your chances of success, but after a while you stop caring about that and just pull the handle, hoping the next output will do it. If you struggle, like I do, with a short attention span, this endless cycle of excitement, disappointment, and potential reward will cause you long-term suffering. Tasks that require deep work and focus will feel unrewarding. This is best summarized by the METR study [1]: "Before being given the task to complete, the 16 developers were asked to estimate how much time using AI would help them save while completing the task. Despite thinking they would work 24% quicker, in reality, it took them 19% longer than without AI assistance." AI is exciting, easy, and productive; everything else feels boring and difficult. Getting back your ability to focus and just do things by yourself will be tough.

Your ideas differentiate you.

It gets easier and easier to build things; soon, startups may no longer require tireless work and sky-high funding. Going from an idea to a product will be faster than ever, and single-person unicorns may become possible. The only differentiators left will be your ideas and judgment. People are already interrogating models for unique startup ideas, and those ideas will all be the same. If you are there only to do the work, but no longer understand what you are doing, you can't generate new ideas. I hate to break it to you: you are not employed to work, you are employed to be blamed.

What will matter going forward?

Large language models are inherently easy to learn. Their entire premise is that you can use them with nothing but vague, imprecise natural language, the same language you have been using your entire life. Learning and perfecting a programming language takes months, if not years; figuring out which prompts work is a matter of hours, maybe days. Even assuming you could master prompting in some meaningful way, the next model may have different preferences. It's like a programming language that changes not only its syntax but also its semantics with each release. Even if prompting can be learned, let's not pretend it's worth the time to master.

What is worth your time is understanding the underlying paradigms. Whether imperative, declarative, functional, or object-oriented, those concepts will make you better at what you do and make it easier for you to adapt. The analogy breaks down here, though: learning a language's syntax doesn't inhibit your ability to understand its paradigms, while using language models to produce content you don't fully understand, or even review, inhibits your ability to think. You can deliver quicker while you still understand what you are doing, pumping the bubble, but the longer your thinking stays turned off, the less you can blow. Can you still review code after months without writing a single line? Surely you have seen good software engineers who, after converting to managers, abruptly lose their ability to program. Stop wasting your time mastering prompting, vibe-coding the next feature, or learning the peculiarities of Cursor or Windsurf to manage context better.
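To make the paradigm point concrete, here is a small sketch of my own (purely illustrative, not from any cited source): the same task, summing the squares of the even numbers in a list, written imperatively and then declaratively. The syntax is incidental; recognizing the two ways of thinking is what transfers between languages.

```python
def sum_even_squares_imperative(numbers):
    # Imperative: spell out step by step HOW to compute the result,
    # mutating an accumulator as we go.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_declarative(numbers):
    # Declarative/functional: describe WHAT the result is,
    # leaving the iteration mechanics to the language.
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_imperative([1, 2, 3, 4]))   # 4 + 16 = 20
print(sum_even_squares_declarative([1, 2, 3, 4]))  # 20
```

Both produce the same answer; a reader who understands the paradigms can move between them, and between languages, without relearning from scratch.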

You are the context; make sure it's dense, otherwise it may pop. Don't blow it, your future depends on it.

[0] https://www.oaktreecapital.com/insights/memo/is-it-a-bubble
[1] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/