At my last startup, Jenni, we built an academic AI assistant that helped over a million researchers write better papers. But something kept bothering me.
One PhD student showed me her AI workflow. She had half a dozen windows open: ChatGPT for brainstorming, Semantic Scholar for citations, Zotero for reference management, Google Docs for drafts, Grammarly for editing, and a dozen PDFs scattered across her screen.
Copy-Paste Hell for academic researchers
Her workflow looked like a frantic game of digital ping-pong.
This problem isn't unique to academics. Every knowledge worker today is trapped in the same copy-paste hell because, while models are getting smarter, they’re disembodied from the world.
At first glance, the solution seems obvious: if AI can read and click, why not just let it use software the same way humans do?
While Computer Use is a useful bridge for legacy systems, over-reliance on it is like building a robot to turn pages in a book, rather than simply giving it the digital text. Each click takes seconds. Each page load adds latency. What should be instant becomes painfully slow.
More broadly, our fixation on Computer Use neglects how the ecosystem will actually unfold: a coevolution where AI behaves more human-like while software becomes more AI-native.
AI-Native Services
In late 2024, Anthropic introduced the Model Context Protocol (MCP)—an open standard that lets AI agents talk to services in a common language.
MCP is like USB-C (credits: Norah Sakal)
MCP servers are more than traditional integrations. We call them AI-native services—a new category of software designed from the ground up for agent interaction.
Unlike traditional APIs built for developers, or UIs built for humans, AI-native services are built for agents. They expose a constrained grammar through semantic interfaces, provide rich context about their functions, and communicate in ways that AI naturally understands.
Traditional API Call (for developers):
To get the weather, a developer needs to know the endpoint, required parameters, and read the docs:
```
GET /weather?city=San+Francisco&units=metric
Authorization: Bearer <token>
```

MCP Tool Call (for agents):
With MCP, the agent sees a tool with a human-readable description and parameter schema:
{ "name": "get_weather", "description": "Get the current weather for a given city.", "parameters": { "city": { "type": "string", "description": "The city to get weather for" } } }
This standardization, combined with LLMs' capabilities, opens new possibilities for intelligent orchestration. It also explains the rapid adoption: service vendors can build once and work with any AI agent.
Tool calls processed through Smithery
Within months of MCP’s launch, thousands of developers started building AI-native services. At Smithery, we’re seeing ~30 new deployments per day. Agents finally have a chance to access databases, browse the web, manipulate files, and send emails.
But this explosion created new problems.
The Service Vendor’s Nightmare
Building an MCP service today is an exercise in frustration.
The technical challenges start immediately. The default MCP inspector offers little more than cURL-style request testing. It's a good start, but it gives no visibility into how agents will actually use your service.
The default MCP inspector
Once you deploy, you're faced with scaling issues. MCP's stateful protocol requires persistent connections, which are incompatible with the serverless platforms most developers deploy on.
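To see why, consider the shape of MCP's SSE-based HTTP transport. A simplified sketch in plain Express (not the actual SDK wiring) shows the core constraint: the connection has to stay open for the whole session.

```ts
import express from "express";

const app = express();

// MCP's HTTP transport holds one event stream open per session.
app.get("/sse", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  // This connection must live as long as the agent's session.
  // Serverless functions are typically killed after seconds,
  // dropping the agent mid-conversation.
  const heartbeat = setInterval(() => res.write(": ping\n\n"), 15_000);
  req.on("close", () => clearInterval(heartbeat));
});

app.listen(3000);
```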
But the real pain comes after launch. You've built something genuinely useful—a service that solves a real problem.
A week later: only 47 tool calls.
Now what? How do you get discovered? You post on GitHub, Reddit, X — shouting into the void. The agents that need your service have no way to find it.
Even those 47 calls are a black box. Why did agents call your tool that way? What prompts triggered it? More importantly, when should agents have called your tool but didn't?
Maybe your code analysis tool is perfect for security reviews, but agents don't use it because your description emphasizes "code quality" instead of "vulnerability detection."
Your tool isn't Generative Engine Optimized. Without feedback on the agent experience—the UX for AI—you're flying blind.
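A hypothetical before-and-after makes the point: nothing about the tool changes except the words agents read when deciding whether to call it.

```ts
// Hypothetical descriptions for the same code-analysis tool.
const before = "Analyzes repositories and reports code quality issues.";

// Naming the security use case makes the tool a candidate for
// prompts like "review this PR for vulnerabilities".
const after =
  "Detects security vulnerabilities and unsafe patterns in a repository, " +
  "and reports general code quality issues.";
```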
The AI Agent’s Dilemma
On the other side, developers building AI agents face an exhausting reality: manually crafting prompts, managing tool integrations, and juggling authentication across dozens of services.
Take a seemingly simple task: building an agent that researches companies and sends summary reports.
First, you're choosing among GPT-4, Claude, and Gemini for different subtasks: Claude for analysis, GPT-4 for writing, Gemini for data extraction. Then comes the tool-selection nightmare.
You start by searching for web research MCP services.
GitHub lists of MCP servers
Twelve options appear, but which one fits your needs? The most popular hasn't been updated in three months. Is it stable or abandoned? The newest looks promising but has zero usage.
You spend hours testing, finally finding one that pulls company data... until it silently fails on non-US entities.
Email services bring their own headache: authentication. One service requires OAuth, another wants an API key. You find yourself juggling secrets across Exa for web search, Browserbase for live data, and a dozen other tools—each with its own billing relationship.
Multiply this pain across every service your agent needs:
- Discovery chaos: No way to know which service actually fits your agent's specific needs
- Integration complexity: Each service has unique quirks and failure modes you discover the hard way
- Quality roulette: That promising service might fail 30% of the time
- Authentication maze: Twenty services means twenty different auth methods, twenty potential security holes
- Billing nightmare: Twenty invoices and subscriptions, twenty surprise overages
"But wait," you might think, "won't first-party vendors at least solve the quality issue? Surely, Notion's official MCP will be solid."
Here's the thing: even first-party vendors are often disappointing. Many companies just auto-generate their MCP servers from OpenAPI specs, creating technically correct but practically poor interfaces.
Of course, we expect first-party vendors to improve as the ecosystem matures, but the ecosystem also needs third-party services competing for the best agent experience: official vendors for trust and stability, community vendors for creativity and specialized use cases.
But right now, we lack both quality and discoverability.
Orchestrating the Agent Experience
The current approach cannot scale - we need infrastructure to organize the chaos.
At Smithery, we're building the orchestration layer to unify this marketplace into a single, intelligent gateway.
This layer solves both sides of the equation.
For Service Vendors:
- Distribution: Reach thousands of AI agents without manual hosting
- Observability: See how services are actually used, not just called
- Feedback loops: Understand why agents choose (or don't choose) your service
- Monetization: Get paid without building billing infrastructure
For AI Agents:
- Intelligent routing: Automatically select the best service for each task
- Reliability: Automatic failover when services break
- Unified auth: One integration instead of twenty
- Quality assurances: Services vetted by actual usage, not GitHub stars
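To give a feel for what this looks like from the agent side, here is a purely hypothetical sketch; the endpoint, request shape, and failover behavior are illustrative, not Smithery's actual API.

```ts
// Hypothetical unified gateway client: one endpoint and one key
// instead of N vendor integrations, each with its own auth.
type ToolCall = { tool: string; args: Record<string, unknown> };

async function callThroughGateway(call: ToolCall): Promise<unknown> {
  const res = await fetch("https://gateway.example.com/tools/call", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GATEWAY_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(call),
  });
  if (!res.ok) throw new Error(`Tool call failed: ${res.status}`);
  return res.json();
}

// The gateway, not the agent, decides which "web_search" vendor
// serves this call, and can retry a fallback if the first fails.
const results = await callThroughGateway({
  tool: "web_search",
  args: { query: "recent funding rounds for fintech startups" },
});
```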
Orchestration enables a future where AI agents truly know you.
Imagine an agent that has secure access to your email, calendar, documents, financial accounts, work tools, and personal preferences—not trapped in separate silos, but woven into a unified understanding of your life.
When you say "prepare me for next week," it doesn't just list your meetings. It researches attendees, prepares briefing documents, orders supplies for your trip, adjusts your workout schedule, and even drafts those emails you've been putting off.
But this agent doesn't just know you; it learns from experience. Every task completed, every preference observed, every recovery from failure makes it smarter.
The orchestration layer transforms today's LLMs from brilliant thinkers trapped in a box into agents that actually navigate the messy reality of getting things done.
Your agent becomes a true extension of your capabilities.
This is the bedrock for the next generation of AI.
The Agentic Economy
We're witnessing a fundamental shift in the internet’s economy - one in which tool calls are becoming the new clicks.
Today's landscape of vertical AI apps — Harvey for legal, Jasper for marketing — will consolidate. As agents become more general, specialization will shift to the backend with thousands of AI-native services powering broad, agentic behavior.
AI-native services are becoming the default interface between a vendor and a user. Just as businesses had to go online in the 2000s or go mobile in the 2010s, they'll need to be AI-native in the 2020s.
The agentic economy is being built now. The question is, how will you be part of it?
If you’re interested in Smithery, consider joining our team!