A developer opens Cursor and types: “add email to this app.”
The agent doesn’t Google anything. It doesn’t compare products or check pricing pages. It just picks Resend, sets it up, and the whole thing is done in under a minute.
That happened hundreds of thousands of times last year. It’s why Resend, launched in 2023 with zero users, now has 500K developers. Their competitor, SendGrid, has been around since 2009. Twilio paid $3B for it. They have a full enterprise sales motion, dedicated IPs, and 15 years of deliverability track record.
When Claude Code adds email to a project, it picks SendGrid 7% of the time. It picks Resend 63% of the time. Nobody at Resend made this happen. Nobody at SendGrid stopped it.
I spent 7 years at Weights & Biases leading our growth efforts. We grew from 100 users to millions (every AI engineer at every major AI lab) through product-led growth. It was a product developers loved, with a meticulously optimized growth engine behind it.
That was in 2019. We were talking to other ML engineers like us, researchers at labs, practitioners at startups, academics building the next generation of models. Then one of those models learned to write the code.
The two most interesting growth stories of the past year (Supabase going from 1M to 4.5M developers in 12 months; Resend going from 0 to 500K users while beating a Twilio-backed incumbent) have almost nothing to do with human-facing growth tactics. An AI agent picked both of them. Millions of times. Autonomously.
Most growth teams are still optimizing for humans. The defaults of 2027 are being set right now.
Every company is product-led now—but the “user” has changed. It’s no longer just a developer clicking through your UI. It’s an agent writing and running code on their behalf.
When an agent adopts your product, it doesn’t try it once. It uses it every time that task comes up—automatically, at scale, often without the human ever thinking about it again. One developer adding your MCP server to their setup can quietly turn into thousands of executions.
Your product is no longer your UI. It’s your API.
And it changes what matters. Technical superiority isn’t enough if an agent can’t easily understand how to use you. In practice, agents will choose something slightly worse if it’s easier to implement, because from their perspective, a product they can execute is strictly better than one they can’t.
Resend isn’t better email infrastructure than SendGrid for enterprise use. But it’s one command, one environment variable, and one clear doc page an agent can read and execute. At the scale agents operate, that simplicity is everything.
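To make that contrast concrete, here is a sketch of what an agent-executable integration looks like: one environment variable and one HTTP request. The endpoint, payload fields, and the `RESEND_API_KEY` name follow Resend's published conventions, but treat this as an illustration of the shape, not a verified client.

```python
import json
import os


def build_send_request(to: str, subject: str, html: str) -> dict:
    """Assemble the single API call an agent needs to execute.

    Assumes the Resend-style convention of one env var and one
    JSON POST; field names are illustrative, not an official SDK.
    """
    api_key = os.environ["RESEND_API_KEY"]  # the one environment variable
    return {
        "url": "https://api.resend.com/emails",  # the one endpoint
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "from": "you@yourdomain.com",
            "to": [to],
            "subject": subject,
            "html": html,
        }),
    }
```

No OAuth dance, no dashboard step, no branching: everything an agent needs fits in a single doc page it can read and execute.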
AARRR (Acquisition, Activation, Retention, Referral, Revenue) gave the growth community a shared vocabulary for 15 years. Before we walk through how each stage transforms, it’s worth understanding why this becomes a flywheel, not just a remapped funnel. Turning our funnel into a predictable flywheel was the reason we succeeded at growing W&B.
In a funnel, growth is linear with effort. Your team builds growth programs, users come in, some activate, some retain, and the loop starts over at the same effort level.
In a flywheel, each cycle makes the next cycle easier. Existing users create assets that help the next users discover your product with less work from you. The effort compounds down as the flywheel spins up.
From the W&B playbook
The flywheel is how we grew W&B. Developers discovered us through complementary tools, integrated W&B, built publicly, published tutorials. Those tutorials brought new developers who integrated it themselves, without us sending a single email. Each rotation made the next cheaper. By 2021, a meaningful percentage of new signups came through channels we hadn’t actively seeded in months.
The agent flywheel compounds faster and more durably. When agents successfully complete tasks using your product, those trajectories can become training data for the next model generation. More completions → stronger model memory → higher default selection → more completions. And the community loop runs in parallel: developers who succeed with your product publish their configs, other developers clone them, propagating your tool across thousands of agent deployments at once.
This is why the gap between a product like Supabase and its competitors is widening, not narrowing. Not a feature gap. A compounding flywheel.
Two terms worth defining:
“Training memory” (parametric memory) is what a model absorbed during training, the way a doctor’s clinical knowledge comes from years of medical education, not from Googling before each patient. The model already has opinions about your product before any search begins.
“First Successful Execution” is the first time an agent completes a task using your product end-to-end without a human intervening. It’s invisible, no session recording, no celebration screen. You have to instrument it.
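Since nothing in a standard analytics stack fires on this event, you have to define and log it yourself. A minimal sketch of that instrumentation (all names here are hypothetical, not a real SDK): record, per API key, the first run that completes with no human step in the trajectory.

```python
from datetime import datetime, timezone


class FseTracker:
    """Tracks First Successful Execution per API key: the first
    end-to-end task completion with no human intervention.
    Illustrative only; names are hypothetical."""

    def __init__(self) -> None:
        self._first_success: dict[str, datetime] = {}

    def record(self, api_key: str, *, succeeded: bool,
               human_intervened: bool) -> bool:
        """Log one agent run. Returns True only for the run that
        counts as the First Successful Execution for this key."""
        if (succeeded and not human_intervened
                and api_key not in self._first_success):
            self._first_success[api_key] = datetime.now(timezone.utc)
            return True
        return False
```

A run with a human in the loop never counts; only the first fully autonomous completion flips the flag, and that is the moment worth alerting on.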
Two engines, one flywheel.
The flywheel is already spinning for some category-defining companies. Every successful agent execution reinforces the next one—through training data, through community configs, through repeated use.
The rules of this competition are now clear. The window to enter it is still open.
“It’s 2026. Build. For. Agents.” – Andrej Karpathy
But that raises a deeper question: how exactly does this growth happen? If agents are the new users, what are the actual paths through which products spread?
That’s what we’ll break down next in Part 2.
Thank you to Lukas Biewald, James Cham, Amy Tam and Phil Gurbacki for early feedback on this draft.
_________________________
Lavanya Shukla is the Managing Partner of Improbability.vc, an early-stage fund backed by Sequoia, Coatue, Village Global, Bloomberg Beta, Lukas Biewald, Adrien Treuille, and AI leaders at OpenAI, DeepMind, Turing, et al.
She spent seven years running Growth and AI at Weights & Biases, scaling it from 100 users to millions (every AI engineer at every major lab) through product-led growth.
Improbability Engine’s thesis: invest in the 1–2 AI companies that matter every year. If you’re building one of them, reach out: lavanya@improbability.vc


