The 4-Plane Architecture of AI-Native Content Engineering

Why the Modern CMS Should Look More Like a Git Repository Than a Publishing Tool, and How Four Communicating Planes Replace the Traditional Content Pipeline

Gabriel L. Manor

WordPress was released in 2003. Contentful, the first serious headless CMS, launched in 2013. Between them, almost every content management system built over the last two decades has shared the same baseline assumption: humans write content, machines render it.

For most of those 20+ years, this was the right assumption. Writing was the expensive part of the pipeline. A typical long-form post could take a team more than a week, starting with research and ideation conversations, moving through outline, draft, review, and finally publication. The writer was the bottleneck, and the tools were built to make that bottleneck move faster: better editors, faster previews, smarter publishing flows, richer media libraries.

Around 2024, this assumption broke. Writing stopped being the expensive part of the pipeline, and most CMSes haven’t absorbed the shift yet.

In this article, I want to propose a different architecture for content systems, one built on four planes that run in parallel and communicate with each other, rather than a sequential pipeline from idea to publication. I’ll walk through each plane, explain why they need to talk to each other, and then describe what the modern CMS should actually look like as a result. Spoiler: it looks more like a Git repository than a publishing tool.

The Old Assumption Is Broken

Before we look at the new model, it’s worth being specific about what AI actually broke in the old one.

In the WordPress era, the CMS owned three things: authoring, storage, and rendering. Writers wrote in the CMS, the database stored the post, and the theme rendered it to HTML. It was a coherent single system for a time when most content lived on the company’s own website.

The headless era (Contentful, Strapi, Sanity, and others) decoupled two of those three concerns: storage and rendering. You stored content as structured data behind an API, and any front end could render it. This made sense once content started living in many places at once: the main website, the mobile app, a widget embedded in someone else’s product, a newsletter, a sales deck.

But authoring didn’t change. The writer still sat in an editor, typed words into a box, and clicked publish. The tools got smarter about where the content could go after it was written, but they didn’t change the assumption about who was writing it or why.

This is the assumption AI actually breaks. Specifically:

  • The writer stopped being the bottleneck. A half-decent LLM can convert a context bundle into a long-form blog post, three LinkedIn variants, a Reddit thread, a short-video script, and an AEO-optimized FAQ page in a few minutes. The production step that used to take a week takes minutes.
  • The channel count exploded. A single topic might need to reach a human reader on LinkedIn, an AEO-optimized page for LLM crawlers, a Reddit comment for authority signal, a short-form video for distribution, and internal sales enablement. Each one has its own format rules and cadence.
  • The decision layer got harder. Because producing content is cheap, deciding what to produce, when, and where is now the expensive part. We used to optimize for throughput of a single pipeline; now we need to optimize for scheduling across many parallel ones.
  • Human attention became the scarce resource. Content teams no longer struggle to produce; they struggle to review, steer, and prioritize. A CMS that optimizes the “writing a post” workflow is optimizing the cheap step while ignoring the expensive one.

So what should we build instead?

The Four Planes

The model I want to propose is built on four planes that operate simultaneously. Not sequential stages of a pipeline, but parallel systems, each with its own job, each communicating with the others.

  • Context: the knowledge base of the organization.
  • Signaling and Prioritization: the agents that decide what content matters, listen for signals, and direct the other planes.
  • Production: the agents that generate text, graphics, and video artifacts.
  • Distribution: the agents that understand channels and automate the push, with humans in the loop where needed.
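
To make the division of labor concrete, here is a minimal Python sketch of the four planes as cooperating components. All class names, methods, and behaviors are illustrative assumptions for this article, not any real product's API; in practice each plane would be backed by agents, stores, and channel integrations.

```python
from dataclasses import dataclass, field

@dataclass
class ContextPlane:
    """Structured, machine-readable organizational knowledge."""
    nodes: dict = field(default_factory=dict)  # node_id -> markdown text

    def query(self, topic: str) -> list:
        # Naive keyword scan; a real system would use retrieval/embeddings.
        return [nid for nid, text in self.nodes.items()
                if topic.lower() in text.lower()]

@dataclass
class SignalingPlane:
    """Decides what matters now, based on outside signals."""
    signals: list = field(default_factory=list)

    def prioritize(self) -> str:
        # Naive: the most frequently observed signal wins.
        return max(set(self.signals), key=self.signals.count)

class ProductionPlane:
    """Generates artifacts from context (the LLM call is elided here)."""
    def generate(self, context_text: str, channel: str) -> str:
        return f"[{channel}] draft based on: {context_text[:40]}"

class DistributionPlane:
    """Pushes artifacts to channels and reports back."""
    def push(self, artifact: str, channel: str) -> dict:
        return {"channel": channel, "artifact": artifact, "status": "queued"}
```

The point of the sketch is the shape, not the internals: each plane exposes a small surface the others can call, rather than sitting as a stage in a one-way pipeline.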

Let’s walk through each one.

Plane 1: Context

The context plane is the knowledge base of everything the organization knows: what the product does, how customers describe the problem it solves, the positioning decisions that have been made, customer meeting transcripts, engineering documentation, the founder’s opinions, the sales objections that keep coming up.

Critically, the context plane is decoupled from any representation. It’s not blog posts, not tweets, not slides. It’s structured, machine-readable knowledge, the kind of thing an agent can query to answer “what have we said about ABAC policies in the last six months?” or “what is our current position on authorization for AI agents?”

Most organizations treat this plane as a byproduct of day-to-day work. Meeting notes pile up in Notion, docs live in Google Drive, positioning decisions happen in Slack threads. The context exists, but it isn’t engineered. In the architecture I’m proposing, the context plane is the primary product of the content system. Everything else draws from it.
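
As a sketch of what "structured, machine-readable knowledge" can mean in practice, here is a context node stored as Markdown with a small front-matter header, plus a naive query over it. The field names and parsing are illustrative assumptions; a real context plane would use a proper front-matter parser and semantic retrieval rather than substring matching.

```python
NODE = """\
---
id: abac-positioning
topics: abac, policy-as-code
updated: 2025-06-01
---
Our position: ABAC policies should be authored as code and reviewed
like any other change, not configured through an admin UI.
"""

def parse_node(raw: str) -> dict:
    """Split the naive '---' front matter from the Markdown body."""
    _, header, body = raw.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

def matches(node: dict, query: str) -> bool:
    """Does this node mention the query, in its topics or its body?"""
    q = query.lower()
    return q in node["meta"].get("topics", "").lower() or q in node["body"].lower()

node = parse_node(NODE)
```

The representation matters less than the properties: diff-able text, explicit metadata, and a query interface an agent can call.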

Plane 2: Signaling and Prioritization

The signaling plane is the brain of the system. Its job is to understand why content exists and what should be produced now.

It listens to signals from the outside world: trends on X, ABM triggers from sales tools, AEO ranking gaps, competitor announcements, themes recurring across three customer calls this week. It uses those signals to make decisions:

  • What should be added to the context plane? For example, if a new customer pain point comes up repeatedly in sales calls, the signaling plane can request that the context plane pull in the transcripts and expand that area of the knowledge base.
  • What should the production plane generate next? For example, a timely LinkedIn post on a trending topic where we have a clear point of view.
  • What should the distribution plane push, to which channels, and in what order?

The signaling plane also owns prioritization across channels. You can’t post three times a day from one LinkedIn account. The algorithm won’t reward it, and the audience won’t read it. But you might have 15 active channels, each with its own ideal cadence, audience, and format. Coordinating across all of them is a scheduling problem that no human can sustain manually. Signaling is the plane that solves it.

Plane 3: Production

The production plane is what most CMSes already think they are. It’s the generators: agents specialized by content type and channel.

A production agent for an AEO answer is a fundamentally different agent from one that writes a long-form blog post, which is different from one that cuts a short-form video script, which is different from one that composes a Reddit reply designed to read like a developer rather than a marketer. Each needs its own prompt, its own templates, its own style rules, its own validation.

The good news: this is the easiest plane to stand up. The LLM ecosystem has already solved most of the hard problems here. The important design decision isn’t how the production agents generate. It’s what they pull from (the context plane) and what they push to (the distribution plane). A production agent without a rich context plane produces generic LLM slop. A production agent without a well-defined distribution target produces content that no channel actually wants.

Plane 4: Distribution

The distribution plane is where a lot of content strategies quietly fail.

Distribution agents understand each channel deeply: its format requirements, algorithmic preferences, optimal timing, audience context, and retargeting implications. They automate the actual push to the channel, with humans in the loop on high-stakes moves (a founder’s LinkedIn announcement) and fully autonomous on the sub-channels where approval would be absurd overhead. A good example of the latter is a programmatic update to an AEO-optimized FAQ page when the underlying docs change. Nobody should be approving those one by one.

Distribution is not “scheduling a post.” Distribution is understanding that one context artifact fans out into many representations across many channels, each with its own measurement loop, each reporting back to the signaling plane so priority can be updated in real time.
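
A minimal sketch of that fan-out, with invented per-channel rules: one artifact becomes several channel representations, and each push produces a measurement record for the signaling plane to consume. Real channel constraints and metrics APIs would replace the stubs here.

```python
# Invented per-channel format rules for illustration.
CHANNEL_RULES = {
    "linkedin": {"max_chars": 3000, "tone": "first-person"},
    "aeo-faq": {"max_chars": 1200, "tone": "direct-answer"},
    "reddit": {"max_chars": 800, "tone": "peer-to-peer"},
}

def fan_out(artifact: str) -> dict:
    """Render one artifact once per channel, truncated to channel limits."""
    return {
        channel: artifact[: rules["max_chars"]]
        for channel, rules in CHANNEL_RULES.items()
    }

def push_and_measure(representations: dict) -> list:
    """Simulate pushing each representation and collecting a metrics stub."""
    return [
        {"channel": ch, "length": len(text), "impressions": None}  # filled in by the channel API
        for ch, text in representations.items()
    ]
```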

How the Planes Communicate

Here’s the insight that most architectures miss: the four planes are not sequential. They don’t run as a pipeline from context to signaling to production to distribution. They’re a mesh. Each plane needs continuous information from the others, and the overall behavior of the system emerges from their ongoing communication.

A few concrete examples:

  • The context plane needs the signaling plane to know what to expand next. If signaling detects rising interest in “policy-as-code for AI agents,” context should pull relevant customer meetings, engineering docs, and competitor analysis into that corner of the knowledge base.
  • The context plane needs the production plane to detect positioning drift. When a production agent keeps rewording the value prop in ways that don’t match the ground truth, context needs to either adopt the drift (if it’s an improvement) or correct it (if it’s a regression).
  • The signaling plane needs the context plane to understand what’s already covered. It needs the production plane to know what artifacts exist. It needs the distribution plane to know what actually landed in the wild.
  • The production plane needs the context plane as its baseline. It needs the signaling plane to know what to make and when.
  • The distribution plane feeds measurement back into the signaling plane. It tells the context plane what messaging worked in the wild. It pings the production plane when a channel’s format has changed and existing assets need to be re-cut.

If the planes don’t communicate, you don’t have a content system. You have four disconnected tools pretending to be one.

The comparison to distributed systems engineering is deliberate. The planes behave more like microservices communicating over a message bus than stages in a Unix pipeline. Each operates on its own timeline, with its own state, and the system’s overall behavior is the sum of those interactions.
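
That bus-not-pipeline distinction can be shown with a toy in-process publish/subscribe bus. Topic names and handlers here are invented; a production system would use a real message broker, but the coupling pattern is the same.

```python
from collections import defaultdict

class Bus:
    """A minimal topic-based pub/sub bus for in-process plane communication."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(payload)

bus = Bus()
signaling_backlog = []  # the signaling plane's work queue

# Distribution reports a measurement; signaling reacts by queueing follow-up
# work for that channel, without either plane calling the other directly.
bus.subscribe("metrics.posted", lambda p: signaling_backlog.append(p["channel"]))
bus.publish("metrics.posted", {"channel": "linkedin", "ctr": 0.04})
```

Neither plane holds a reference to the other; each only knows the topics it cares about, which is exactly what lets the planes evolve on their own timelines.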

What This Means for the CMS

Given the four-plane model, what does the actual content management system need to look like? I’d argue for six principles.

1. Git Is the Right Mental Model, Not WordPress

The single most important shift: the CMS should be built on Git principles rather than publishing-tool principles.

Concretely:

  • Markdown and YAML in a repository as the source of truth. The context plane lives as diff-able, branch-able, version-controlled text.
  • Agents read from and write to the repository directly, proposing changes through pull-request-like workflows. Humans review, approve, and merge.
  • Representation is generated downstream. No WYSIWYG editor. No “this is what it will look like” preview panel. Agents take context nodes and compose representations for whatever channel needs them.
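
A hedged sketch of the pull-request-style workflow: an agent proposes a change to a context file as a diff, and nothing lands until a human approves the merge. Real systems would use actual Git branches and reviews; the in-memory "repo" below is purely illustrative.

```python
import difflib

def propose(current: str, proposed: str, path: str) -> dict:
    """An agent packages its suggested edit as a reviewable unified diff."""
    diff = "".join(
        difflib.unified_diff(
            current.splitlines(keepends=True),
            proposed.splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )
    )
    return {"path": path, "diff": diff, "proposed": proposed, "approved": False}

def merge(repo: dict, proposal: dict) -> None:
    """Apply a proposal only after explicit human approval."""
    if not proposal["approved"]:
        raise PermissionError("proposal requires human review")
    repo[proposal["path"]] = proposal["proposed"]
```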

This sounds radical, but it mirrors how modern software engineering already works. GitOps replaced manual server configuration. Infrastructure-as-code replaced point-and-click cloud consoles. The same shift is due for content.

2. Decouple Three Things, Not One

Headless CMSes decoupled content from representation. The modern CMS needs to decouple three things:

  • Context from representation. A context node isn’t a blog post or a tweet. It’s a piece of knowledge.
  • Context from meaning. Meaning is an annotation layer over context. A single paragraph about how we handle ABAC policies has different meanings in different settings: a feature proof point on a product page, a credibility signal in a sales deck, an example in a technical blog post.
  • Meaning from channel. The same meaning renders differently depending on where it’s going. A LinkedIn post articulates meaning one way, an AEO-optimized answer renders it another way, a Reddit comment yet another.
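
An illustrative data model for this three-way decoupling: a context node holds the knowledge, a meaning annotates how that knowledge functions in a setting, and a renderer maps a (meaning, channel) pair to a representation. The templates and role names are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextNode:
    id: str
    text: str  # the knowledge itself, channel-agnostic

@dataclass(frozen=True)
class Meaning:
    node: ContextNode
    role: str  # e.g. "proof-point", "credibility-signal", "example"

def render(meaning: Meaning, channel: str) -> str:
    """One meaning, rendered differently per channel (templates invented)."""
    templates = {
        "linkedin": "Hot take: {text}",
        "aeo": "Q&A answer: {text}",
        "sales-deck": "Why it matters ({role}): {text}",
    }
    return templates[channel].format(text=meaning.node.text, role=meaning.role)
```

Because the node never changes, adding a channel means adding a template, and re-annotating a paragraph's role means constructing a new `Meaning`, with no re-authoring in either case.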

This decoupling is what lets one piece of context fan out into many artifacts without manual re-authoring.

3. Agents Are First-Class Citizens, Not Integrations

4. Pluggable by Default

Your organization already has asset generators you love, video pipelines you’ve tuned, transcription services that work, a design system you’ve invested in. The modern CMS shouldn’t replace any of them. It should plug them in as first-class participants.

The CMS orchestrates the four planes. It doesn’t monopolize them. Bring-your-own production agents, bring-your-own distribution integrations, bring-your-own analytics. This is the same architectural principle that turned Postgres into an engine rather than just a database. The platforms that win are the ones that enable an ecosystem, not the ones that try to own every layer themselves.

5. GTM Goals Are First-Class State

Content isn’t measured in vanity metrics in this architecture; it’s measured in funnel movement, attribution, and pipeline contribution.

The signaling plane is wired directly to GTM KPIs. Urgency, priority, cadence, and archival decisions all derive from GTM goals rather than editorial preference. A great piece of content that doesn’t serve a GTM goal is a hobby. In an architecture built for scale, hobby content is actively harmful, because it consumes human review attention that should be spent on content that actually matters.

6. Human Attention Is an Architectural Concern

The last principle is the one most often overlooked: humans cannot inspect all four planes at once without burning out.

A reviewer who’s in focus mode for approvals should see queued approvals and nothing else. A strategist auditing positioning should see context diffs and messaging divergence, not a calendar grid. An operator tuning production should get the artifacts in flight, not a firehose of distribution analytics.

Context switching is the silent killer in content work at AI scale. If approving a post requires checking three dashboards, compressing images manually, and re-reading last week’s posts to maintain tone, then the context-switch cost exceeds the actual work. The system must understand which human is doing which job right now and surface only what is relevant to that job.
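
A sketch of that attention-aware surfacing: each role sees only the work items relevant to its job. The role names and item kinds below are invented for illustration.

```python
# Which item kinds each role should ever see (illustrative mapping).
VIEWS = {
    "reviewer": {"approval"},
    "strategist": {"context-diff", "positioning-drift"},
    "operator": {"artifact-in-flight"},
}

def surface(items: list, role: str) -> list:
    """Filter the global work queue down to what this role should see."""
    wanted = VIEWS[role]
    return [item for item in items if item["kind"] in wanted]
```

The filter is trivial; the architectural point is that it exists at the system boundary, so no role's view is ever the raw firehose.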

This is an architectural concern, not a UI-polish concern. Designing for it as an afterthought is how teams burn out on tools that were supposed to help them.

Start with Context

There are three ways to start building toward this architecture today, even if no off-the-shelf product fully implements it.

The first is to start with the context plane. Before you build any agents, move your organizational knowledge into a structured, version-controlled, machine-readable format. Markdown in a Git repo is a perfectly fine starting point. Meeting transcripts, positioning docs, FAQs, customer pains, all of it. Agents can’t produce distinctive content without distinctive context, and this is the foundational investment.

The second is to add signaling before production. Resist the urge to build production agents first just because they’re the fun ones. Without a signaling layer that decides what to produce and why, your production agents will generate noise. Start with a simple signaling agent that watches two or three channels you care about and nominates topics based on trends and gaps.

The third is to instrument distribution as a measurement loop, not a push mechanism. Every piece of content that goes out should come back with data: what landed, what didn’t, what’s ranking, what’s getting ignored. This is what lets signaling actually prioritize in a data-driven way.

We’ve Seen This Pattern Before

The shift from a production-centric CMS to four-plane content engineering is, in my view, the natural next step in a pattern we’ve already seen in this space.

The move from WordPress to headless wasn’t random. It happened because two things changed at the same time. First, writing frontend code became a commodity relative to backend, storage, and management concerns. Anyone could spin up a React app, and the expensive engineering moved elsewhere. Second, publishing got more flexible, and the cadence of frontend updates accelerated as mobile apps, embedded widgets, and third-party platforms all needed to render the same content in different ways. The CMS had to stop assuming the website was the only destination.

The shift to four-plane content engineering follows the same pattern, one layer up. First, channels have expanded far beyond “write to my website.” A single piece of context might need to reach LinkedIn, AEO crawlers, Reddit, short-form video, sales decks, and internal enablement, each with its own format, cadence, and success criteria. Second, the coding of everything else in the CMS has become a commodity. Authoring, storage, retrieval, orchestration, the things that used to be the real engineering work, can now be assembled quickly with LLMs and agents.

When both of those things happen at the same time, the architecture has to change. WordPress couldn’t survive mobile and the API economy. Traditional CMSes won’t survive AI-native content distribution.

The tools will catch up. The mental model needs to shift first.

Let’s Connect

In this blog, I write about GTM engineering, developer marketing, and the architecture behind modern growth systems, drawing on my experience as a software engineer and my work leading growth and DevRel at Permit.io. Following my profile will make sure you get notified when I publish the next one 😉

If you enjoyed this post, I’d love to hear what you think. You can 👏🏻 it, leave a comment, or reach out on LinkedIn at https://www.linkedin.com/in/gmanor/.

This story is published on Generative AI.