The Great AI Contraction: 5 Contrarian AI Investment Theses


Happy Sunday and welcome to Investing in AI. Be sure to check out the AI in NYC show, where we discuss all kinds of interesting topics. I also want to plug some awesome research we’ve been doing at Neurometric on auto-generating small, task-based models for agentic workflows. If you are into SLMs and model ensembles, check it out. (If you aren’t into those things, you are missing the next AI wave.)

Back in summer of 2023, I wrote about 5 Contrarian Theses in AI Investing. You can go back and see how those played out. Today I want to do that again, because I think there are tons of things investors are getting wrong about AI right now.

For the past few years the dominant thesis has gone like this: whoever gets to AGI first, even just by a few days, will have a model that starts innovating and improving itself faster than the rival labs, who are still using humans to improve their AI. The winning lab’s intelligence then accelerates away, and basically it wins everything. I’ve written multiple times about why I think this thesis is idiotic despite being the dominant AI narrative for so long. Many of my contrarian viewpoints below are driven by the opposite view: that within 3 months of AGI arriving (whatever that ends up meaning), it will be open sourced, the whole world will have access to superintelligence, and you will have to invest assuming “quality of raw intelligence” is a commodity, not a moat.

So below are 5 theses that seem to be contrarian, based on my conversations with AI folks. Four of them I feel pretty strongly about, and one I’m so-so on but figured I would include anyway. (I’ll let you guess which one is least likely in my opinion.)

Everyone is looking for the next OpenAI. The next frontier lab breakthrough. The next $10B round. But while that search is happening, they are missing the quieter and far more consequential shift that is underway: AI is commoditizing intelligence itself. And when intelligence becomes free, the value of everything else — trust, physical assets, distribution, regulatory moats — goes through the roof.

Here’s my uncomfortable thesis: AI won’t just expand markets. It will contract some of them first. It will disrupt trust, collapse pricing models, and commoditize the software features that VCs spent twenty years unbundling. The companies that survive and thrive won’t be the ones building the flashiest AI. They’ll be the ones who own the things AI cannot replicate, and the ones who own the system infrastructure causing all the rapid collapsing.

I’ve been thinking about this through the lens of my Post-Model World framework — the idea that the model itself is becoming a shrinking share of total AI value, and the systems and assets surrounding it are what actually matter. If you take that seriously, you end up in some places that look nothing like the consensus AI bull case. I’m bullish enough on thesis #4 that I started Neurometric just to take advantage of it.

Here are my 5 contrarian theses.

Thesis #1: Removing the human from the loop removes the trust

I am starting with this one because it is the least discussed in the AI world. We’re deploying AI agents into sales cycles, procurement workflows, and customer success — and the implicit assumption is that removing humans from these loops makes them faster and cheaper. That’s true. But it also makes them less trusted.

AI agents optimize for efficiency. Humans buy based on risk mitigation. When the human in the loop disappears, the buyer’s defenses go up. Salesforce’s “State of the AI Connected Customer” report showed that trust in companies using AI ethically dropped from 58% to 42% between 2023 and 2024. That’s a sixteen-point collapse in a single year. Meanwhile, Forrester found that 43% of B2B buyers now make “defensive” purchase decisions — favoring the safe, known vendor over the innovative one — more than 70% of the time.

Think about what that means. You’re building an AI-first sales process to move faster, and your buyer is simultaneously slowing down because they don’t trust the process. That’s not a growth story. That’s friction.

The investment opportunity here isn’t in the agents themselves. It’s in the verification layer — the startups building provenance, reputation, and auditability for machine-mediated transactions. Think of it like the early days of e-commerce: nobody bought anything online until SSL certificates and credit card guarantees made it feel safe. We need the SSL certificate for AI agents, and the companies building that layer are going to be wildly important.

Thesis #2: Physical assets are the new scarcity

Here’s a mental model I keep coming back to: if AI makes thinking cheap, then doing becomes the bottleneck.

The market hasn’t priced this in at all. AI-native software startups are commanding 25x to 30x revenue multiples. Traditional SaaS is at 6x. And asset-heavy incumbents — logistics companies, energy firms, industrial operators — are trading at even lower multiples despite owning the physical infrastructure, proprietary datasets, and regulatory licenses that AI needs to actually do anything useful in the real world.

This is backwards. EQT’s research on AI value migration points to the same conclusion: the durable winners are companies like Palantir and Oracle (I disagree on the latter), which combine deep proprietary data with massive physical and security moats that no startup can replicate with code alone. You can build a brilliant AI model for optimizing freight routes in a weekend. You cannot build a fleet of trucks in a weekend. You cannot conjure a warehouse network, a utility grid, or a government security clearance out of a GitHub repo.

I think about this as the “scarcity inversion.” For decades, software was the scarce, high-margin layer and physical assets were the commodity. AI flips that. Software intelligence is becoming abundant. The trucks, the licenses, the warehouses, the proprietary datasets that took decades to accumulate — those are the new scarcity.

The most contrarian trade in AI right now might be going long on boring, asset-rich incumbents who are quietly using AI to fortify moats they’ve spent decades building. The market is handing you a discount on scarcity because it’s infatuated with abundance.

Thesis #3: The great rebundling

For twenty years, the venture playbook was unbundling. Take a feature from a big platform, make it a standalone product, charge per seat. It worked brilliantly. AI is about to reverse the entire trend.

Users don’t want ten AI point solutions. They want one platform where AI handles ten workflows. The standalone AI note-taker, the standalone AI scheduler, the standalone AI email writer — these are all getting absorbed as features in other platforms or products. A 2025 SAP/CIO survey found that 90% of IT leaders are now prioritizing software consolidation, with a goal of reducing vendor counts by at least 20%.

And here’s the structural problem for point solutions: the per-seat pricing model collapses when AI agents are performing the tasks. If an AI agent handles what three humans used to do, you’re not paying for three seats anymore. You’re paying for one outcome. The platforms that survive this will be the ones that charge for outcomes across a wide surface area, not for headcount against a narrow feature. The pricing model has to change because the unit of work is changing.

If you’re evaluating an AI startup and their moat is a single feature that a horizontal platform could build in a quarter, that’s not a moat. That’s a temporary head start. And the head start is getting shorter every month. The venture math that worked for the unbundling era — find a wedge, land and expand, build a category — breaks down when the expansion path leads directly into Microsoft’s living room.

Thesis #4: The real battle is inference economics

I’ve written about this before and I’ll keep beating the drum: we have passed the capability threshold. The frontier model race has entered diminishing returns for most commercial applications, and the real battle has moved to inference economics — who can deliver good-enough intelligence at the lowest cost and the lowest latency.

The numbers are staggering. Between late 2022 and late 2025, the cost of GPT-4 class performance dropped from roughly $20 per million tokens to about $0.40. That’s a 50x reduction in three years. By mid-2024, open-source models like Llama 3 and Mistral had crossed the commercial utility threshold for 90% of business tasks, which means the competition shifted from “who is smartest” to “who is cheapest and fastest.”
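As a back-of-envelope check on that curve (the endpoint prices are the figures cited above; the implied halving time is my own arithmetic):

```python
import math

# Cost of GPT-4-class output, USD per million tokens (figures cited above).
cost_2022 = 20.00   # late 2022
cost_2025 = 0.40    # late 2025
years = 3

reduction = cost_2022 / cost_2025            # overall price drop
halving_time = years / math.log2(reduction)  # years per 2x price cut

print(f"{reduction:.0f}x cheaper")                   # 50x cheaper
print(f"price halves every {halving_time:.2f} yr")   # ~0.53 yr
```

In other words, the price of GPT-4-class intelligence has been halving roughly every six months.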

This is a classic commoditization curve, and it has a clear implication: stop investing in the model and start investing in the system. Model distillation, inference optimization, intelligent routing, small language models fine-tuned for specific tasks — this is where the margin lives now. The company that’s 10x cheaper and 10x faster will beat the company that’s 2% more accurate on a benchmark, every single time.
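To make “intelligent routing” concrete, here is a minimal cost-aware router sketch. The model names, prices, and the keyword-based difficulty check are all made-up placeholders, not any real vendor’s API — a production router would use a learned classifier and per-task benchmarks:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_m_tokens: float  # blended price, hypothetical
    capability: int          # 1 = small/fast, 3 = frontier

# Hypothetical model tiers; real systems would benchmark these per task.
TIERS = [
    Model("slm-task-tuned", 0.05, 1),
    Model("mid-generalist", 0.40, 2),
    Model("frontier-xl", 8.00, 3),
]

def difficulty(prompt: str) -> int:
    """Crude stand-in for a learned task-difficulty classifier."""
    hard_markers = ("prove", "multi-step", "legal", "novel")
    if any(m in prompt.lower() for m in hard_markers):
        return 3
    return 1 if len(prompt) < 200 else 2

def route(prompt: str) -> Model:
    """Pick the cheapest model whose capability covers the task."""
    need = difficulty(prompt)
    return min((m for m in TIERS if m.capability >= need),
               key=lambda m: m.usd_per_m_tokens)

print(route("Summarize this email.").name)             # slm-task-tuned
print(route("Prove the bound holds for all n.").name)  # frontier-xl
```

The design point is that the router, not any single model, owns the margin: most traffic lands on the cheap task-tuned tier, and the frontier model becomes a rarely-invoked fallback.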

I started Neurometric to make intelligence free. Our tools help companies identify the best model on a per-task basis, rather than relying on broad benchmarks. Mature businesses are always looking for ways to reduce COGS, and now that some companies are spending high five to low seven figures on monthly inference, that same scrutiny is coming for AI. AT&T recently went through this process after its AI systems grew to 8 billion tokens per month.
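To put rough numbers on the COGS point: the 8-billion-tokens-per-month figure is from above, but the per-token prices and the routing split below are my own illustrative assumptions, not AT&T’s actual economics:

```python
TOKENS_PER_MONTH = 8_000_000_000  # workload scale cited above

# Hypothetical blended prices, USD per million tokens.
FRONTIER_PRICE = 8.00
SMALL_PRICE = 0.05

def monthly_cost(frontier_share: float) -> float:
    """Blended monthly inference spend for a given routing mix."""
    frontier = TOKENS_PER_MONTH * frontier_share
    small = TOKENS_PER_MONTH - frontier
    return (frontier * FRONTIER_PRICE + small * SMALL_PRICE) / 1_000_000

print(f"all frontier: ${monthly_cost(1.0):,.0f}/mo")  # $64,000/mo
print(f"90% to SLMs:  ${monthly_cost(0.1):,.0f}/mo")  # $6,760/mo
```

Under these assumed prices, routing 90% of traffic to task-tuned small models cuts the bill by roughly 90% — which is exactly the kind of line item a CFO notices.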

Thesis #5: Efficiency gains flow to buyers, not sellers

Finally, here is the thesis that should make every AI investor lose sleep. In competitive markets, efficiency gains don’t accrue to the producer. They accrue to the consumer through lower prices.

If every company uses AI to cut costs by 30%, competition forces prices down by 30%. You haven’t created margin. You’ve created consumer surplus. This isn’t speculation — it’s economic history. During the 1990s internet boom, labor productivity surged, but corporate margins in many digitized sectors actually thinned as price transparency and competition intensified.

Man Group’s research makes this even more pointed: while R&D spending has surged across public companies, research productivity — measured as profit growth per dollar of R&D — has declined by nearly 40% annually at firms like Apple since 2005. AI might fix your internal efficiency, but your competitors are using the exact same tools to undercut you. It’s a Red Queen race. You have to run just to stay in place.

This is why I’m skeptical of the “AI boosts margins across the board” narrative. In concentrated markets with pricing power — maybe. In competitive markets where five companies all deploy the same AI tools to optimize the same workflows — the savings get competed away. The surplus flows to the buyer, not the seller.

The winners in this environment aren’t the ones using AI to cut costs. They’re the ones using AI to do things that were previously impossible — to enter markets, create new categories, or deliver outcomes that didn’t exist before. Cost optimization is table stakes. Category creation is the alpha.

Stop looking for the next OpenAI. The LLM layer is commoditizing, and the frontier labs know this, which is why they are becoming application companies. The winner-take-all dynamics everyone is betting on are far less certain than the consensus suggests. Start looking for trust-layer startups and asset-rich incumbents who are using AI to fortify existing moats rather than build new castles in the cloud.

I believe there will still be some money in models, but you have to play in the areas not popular today: new architectures, foundation models for stuff other than language and media, smaller task-specific models that can ensemble together. And then there will be a lot of opportunity in the things an algorithm cannot generate but that are needed to implement many AI systems: physical presence, verified reputation, and regulatory control. The Great AI Contraction isn’t a bear case. It’s a realist’s roadmap. The expansion is coming, but first we have to survive the squeeze. And the companies that come out the other side won’t be the ones the current AI consensus is betting on.

Position accordingly, and thanks for reading.
