Why Integrated Copilots Suck

Most ordinary people’s experience of AI in 2026 is pressing a small blue button inside a tool they already resent using. It says “Ask AI” or “Copilot” or, if the product team was feeling ambitious, “Your AI Assistant.” It knows nothing about you beyond what exists inside that one application. It gives you a mediocre answer. You close the panel and go back to doing the thing manually.

This is the median AI experience right now, and it often makes things actively worse.

The value of an AI response is largely a function of the relevant context it can access.

Consider what happens when Slack ships an AI feature. It can see your Slack messages. It cannot see your email, your calendar, your codebase, your CRM, your documents, or the contents of your head. So when you ask it to help you prepare for a meeting, it can summarise what people said in the channel last week. Helpful, in the way that a calculator is helpful to someone who already knows the answer.

Now consider what happens when your AI can see Slack and your calendar and your email and your CRM. You ask it to prepare for a meeting and it tells you that the client emailed yesterday with concerns about pricing, that the last three meetings ran over by twenty minutes, that the proposal in Google Docs references numbers that were updated in the spreadsheet last week, and that you have a hard stop in forty-five minutes. This is a categorically different experience. Not incrementally better. Categorically better.

The returns to context are superlinear across domains. Within a single domain, they diminish quickly: more Slack history adds little to what Slack history already told you. But adding email context to Slack context produces increasing returns, because cross-domain synthesis is where the actual value lives.

Every vendor shipping an integrated copilot is, by construction, massively limiting the value they provide to users.

Here is the situation in practice. You have an AI in Slack. An AI in Google Docs. Copilot in Excel. An AI assistant in your CRM. An AI in your email client. Each of these knows almost nothing about you. Each gives generic, context-starved responses.

To get anything useful done across these systems, you have to copy context between them. You paste the email into the Slack AI. You re-explain the project to the Excel copilot. You manually summarise the CRM data for the Docs assistant. You are performing the integration. You are the integration layer.

This is, to put it gently, not what the brochure promised.

The explanation is not technical. The protocols for cross-application AI access exist: Anthropic’s Model Context Protocol and a dozen open-source equivalents all solve the plumbing problem. The reason every vendor ships a context-starved copilot instead of opening their data to a unified agent is straightforward: concern that opening the platform up would undermine their moat.
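How simple the plumbing is becomes clear if you sketch it. The following is a minimal, hypothetical illustration in Python of a unified agent's context assembly; each `ContextSource` stands in for what a vendor's MCP server would expose, and all names and stub data here are invented for illustration, not a real integration:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical context sources. In practice each would be an MCP
# server published by the vendor (Slack, email, calendar, CRM).
@dataclass
class ContextSource:
    name: str
    fetch: Callable[[str], list[str]]  # query -> relevant snippets

def prepare_briefing(query: str, sources: list[ContextSource]) -> str:
    """Assemble one cross-domain briefing instead of N siloed answers."""
    sections = []
    for src in sources:
        snippets = src.fetch(query)
        if snippets:
            sections.append(f"[{src.name}] " + "; ".join(snippets))
    # A real agent would hand this merged context to a model;
    # here we just return the synthesised input.
    return "\n".join(sections)

# Stub data standing in for live integrations.
slack = ContextSource("slack", lambda q: ["channel recap: pricing concerns raised"])
email = ContextSource("email", lambda q: ["client emailed yesterday about pricing"])
calendar = ContextSource("calendar", lambda q: ["hard stop in 45 minutes"])

print(prepare_briefing("prep for client meeting", [slack, email, calendar]))
```

The point of the sketch is that the hard part is not the loop; it is getting vendors to expose the `fetch` side at all.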

The vendor’s fear is this: if your AI assistant can access Slack as just another data source, you stop caring about Slack’s interface. Slack becomes a message database with a legacy frontend. So the copilot ships as a retention mechanism. It keeps you inside the product surface. The fact that it gives worse answers than a unified agent would is, from the vendor’s perspective, an acceptable cost. They don’t pay it, you do.

This is the same logic that produced walled gardens, proprietary file formats, and the app store model. The twist is that this time, the thing being locked inside the wall is your own context: your data, your history, your preferences, the accumulated record of how you work and what you’re working on.

The further twist is that the fear is wrong, and the protectionism it produces is self-defeating.

Slack’s interface has real brand equity, real muscle memory, real switching costs. These advantages would survive just fine in a world of unified agents. People will keep using Slack to talk to each other. They’ll just also have an agent that can read and act on Slack alongside everything else.

The danger for Slack is not that agents replace its interface. The danger is that users migrate to a different messaging tool that cooperates better with their agent. A unified agent experience is so much better than a thin copilot that switching costs stop mattering. Slack’s competitive threat is not unified agents, it’s rivals that better embrace them (see WhatsApp and Telegram).

The equilibrium is obvious once you see it. Every tool designed for humans will eventually need two interface layers: one for humans and one for agents. The tools that figure this out early, that provide excellent agent interfaces while maintaining their human interface, will keep their users. The tools that hide behind a walled copilot will discover that they were not protecting their moat. They were digging their grave.

Credit where it is due: Slack already ships an MCP server, which is exactly the right move. This is what playing both sides of the equilibrium looks like. Lots of smaller SaaS players have not figured this out yet. They are still shipping copilots and calling it an AI strategy.

To those players: good luck with the blue button.

Discussion about this post

Ready for more?