MCP is the New GraphQL


For the longest time, I’ve thought MCP is a weird protocol.

I understand the problem it’s trying to solve, but I also think it drags along a bunch of problems we didn’t need. The cost–benefit tradeoff here isn’t universally favourable, and in many cases it probably does more harm than people realise.

At a high level, safely calling a set of pre-defined APIs is a solved problem. We already do this via tool calling, which is itself a slightly weird abstraction built on top of LLM text generation.
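To make that concrete, here’s roughly what plain tool calling looks like, as a minimal sketch against the Anthropic Python SDK. The get_weather tool, its schema, and the model ID are illustrative placeholders, not anything from a real system.

```python
# A minimal sketch of plain tool calling, assuming the Anthropic Python SDK.
# The get_weather tool and its schema are hypothetical placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # model ID is illustrative
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
)

# The model never calls anything itself: it emits a structured tool_use block,
# and our code decides whether and how to execute it.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # e.g. get_weather {'city': 'Berlin'}
```

The whole mechanism is just constrained text generation plus a schema; deciding what actually runs, and with what permissions, stays on our side.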

Yet the MCP hype train has already achieved escape velocity. It’s starting to acquire the same status GraphQL once had.

If you remember, GraphQL was trying to solve a very specific problem: overfetching in web and mobile clients. As a secondary concern, it also tried to solve REST versioning (/v1, /v2) by expressing APIs as operations rather than as REST-style representations of internal state. [1]

In practice, this trade wasn’t a clear win.

Yes, front-end clients got flexibility. They could pick and choose fields and reduce round-trips. But this came with real costs: recursive access into internal state, queries of unbounded complexity, and a whole class of operational problems that didn’t exist before.

One that doesn’t get discussed enough is observability at the edge. As a personal anecdote, an engineering org I ran had a pretty solid rate-limiting and abuse-prevention setup using Cloudflare’s endpoint discovery over REST APIs. You could do very targeted things. For example, rate-limit a POST /sms-otp endpoint at the edge and effectively shut down SMS pumping attacks.

The moment we moved to GraphQL, all of that collapsed into a single /graphql endpoint. All the edge-layer configuration we relied on just disappeared. To get back to parity, we had to seriously consider buying a GraphQL gateway from another vendor, just to re-implement things REST gave us for free.
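Here’s a hedged sketch of why (hypothetical schema and endpoint): to anything that only sees method and path, a trivial query and a pathologically nested one are the same request, so there’s nothing like POST /sms-otp left for an edge rule to match on.

```python
# Hypothetical sketch: both requests below look identical to edge tooling
# that keys on method + path. The schema and endpoint are made up.
import requests

CHEAP = "{ viewer { id } }"

# Unbounded nesting: each level fans out, but the request shape at the
# edge is still just POST /graphql with a JSON body.
EXPENSIVE = """
{
  users(first: 100) {
    posts(first: 100) {
      comments(first: 100) {
        author {
          posts(first: 100) {
            comments(first: 100) { body }
          }
        }
      }
    }
  }
}
"""

for query in (CHEAP, EXPENSIVE):
    requests.post("https://api.example.com/graphql", json={"query": query})
```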

I’m not saying GraphQL was bad. But I did work at large tech companies where the explicit OKR was to build one graph over all internal REST APIs, not because it was clearly the right abstraction, but because Shopify had done it. Around that time, there was this sense of “getting tricked” by authority figures into using GraphQL. [2]

MCP feels like the latest reincarnation of that same authority bias.

We’re extremely early in LLM tooling, which means we should be more critical of early abstractions. Early abstractions are often wrong: chances are they solve a very constrained problem and don’t generalise well.

For example, giving Claude quick access to external tools (personal Gmail account, Linear workspace) is a real, unsolved problem. Standing up an MCP server is a perfectly reasonable solution for that use case. It works. It’s convenient. It’s probably the fastest way to get there today.
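For that kind of personal glue, standing one up really is quick. Here’s a minimal sketch assuming the official MCP Python SDK’s FastMCP helper; the Linear tool is a hypothetical stub standing in for a real integration.

```python
# A minimal MCP server sketch using the MCP Python SDK's FastMCP helper.
# The Linear tool below is a hypothetical stub, not a real integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("personal-tools")

@mcp.tool()
def create_linear_issue(title: str, description: str) -> str:
    """Create an issue in my Linear workspace (stubbed out here)."""
    # A real server would call Linear's API with a personal token here.
    return f"Created issue: {title}"

if __name__ == "__main__":
    # Defaults to the stdio transport: an MCP client like Claude Desktop
    # spawns this process and speaks the protocol over stdin/stdout.
    mcp.run()
```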

But that doesn’t automatically make MCP the right abstraction for calling a set of pre-defined APIs in a production system, especially one with fine-grained AuthZ, audit requirements, and clear ownership boundaries. Those concerns don’t go away just because the interface looks nicer to an LLM.

Talking to friends, I’m starting to get the sense that the number of MCP servers is already someone’s OKR in a sufficiently large org. That should sound familiar. It’s not too different from companies trying to build their own version of GitHub’s one graph a few years ago, long before they were clear on why they needed it.

MCP isn’t bad technology. But history suggests that reason goes out the door once an abstraction becomes a status symbol.


[1] It tried to solve a few other problems too, but I’d argue that REST tooling was good enough even in 2020 that the value proposition wasn’t really solid. For example, runtime type checking and client generation were already solved by tools like Swagger. A modern take on this is ts-rest.

[2] Still can’t believe we all got tricked into using GraphQL.