This is a long one. But as a great man once said, forgive the length, I didn’t have time to write a short one.

The industry has been going back and forth on where agent identity belongs. Is it closer to workload identity (attestation, pre-enumerated trust graphs, role-bound authorization) or closer to human identity (delegation, consent, progressive trust, session scope)? The answer from my perspective is human identity. But the reason isn’t what most people think.

The usual argument goes like this. Agents exercise discretion. They interpret ambiguous input. They pick tools. They sequence actions. They surprise you. Workloads don’t do any of that. Therefore agents need human-style identity.

That argument is true but it’s not the load-bearing part. The real reason is simpler and more structural.

Think about it this way. A robot arm on an assembly line is bolted to the floor. It’s “Arm #42.” It picks up a bolt from Bin A and puts it in Hole B. If it tries to reach for Bin Z, the system shuts it down. It has no reason to ever touch Bin Z. That’s workload identity. It works because the environment is closed and architected.

Now think about a consultant hired to “fix efficiency.” They roam the entire building. They’re “Alice, acting on behalf of the CEO.” They don’t have a list of rooms they can enter. They have a badge that says “CEO’s Proxy.” When they realize the problem is in the basement, the security guard checks their badge and lets them in, even though the CEO didn’t write “Alice can go to the basement” on a list that morning. The badge isn’t unlimited access. It’s a delegation primitive combined with policy. That’s human identity. It works because the environment is open and emergent.

Agents are the consultant, not the robot arm. Workload identity is built for maps: you know the territory, you draw the routes, if a service goes off-route it’s an error. Agent identity is built for compasses: you know the destination, but the route is discovered at runtime. Our identity infrastructure needs to reflect that difference.

To be clear, I am not suggesting agents are human. This isn’t about moral equivalence, legal personhood, or anthropomorphism. It’s about principal modeling. Agents occupy a similar architectural role to humans in identity systems: discretionary actors operating in open ecosystems under delegated authority. That’s a structural observation, not a philosophical claim.

A fair objection is that today’s agents mostly work on concrete, short-lived tasks. A coding agent fixes a bug. A support agent resolves a ticket. The autonomy they exercise is handling subtle variance within a well-defined scope, not roaming across open ecosystems making judgment calls. That’s true, and in those cases the workload identity model is a reasonable fit.

But the majority of the value everyone is chasing accrues when agents can act over longer horizons on more open-ended problems. Investigate why this system is slow. Manage this compliance process. Coordinate across these teams to ship this feature. And the longer an agent runs, the more likely it is to need permissions beyond what anyone anticipated at the start. That’s the nature of open-ended work.

The longer the horizon and the more open the problem space, the more the identity challenges described here become real engineering constraints rather than theoretical concerns. What follows is increasingly true as agents move in that direction, and every serious investment in agent capability is pushing them there.

Workload Identity Was Built for Closed Ecosystems

Think about how workload identity actually works in practice. You know which services are in your infrastructure. You know which service talks to which service. You pre-provision the credentials or you set up attestation so that the right code running in the right environment gets the right identity at boot time. SPIFFE loosened some of the static parts with dynamic attestation, but the mental model is still the same: I know what’s in my infrastructure, and I’m issuing identity to things I control.

That model works because workloads operate in closed ecosystems. Your Kubernetes cluster. Your cloud account. Your service mesh. The set of actors is known. The trust relationships are pre-defined. The identity system’s job is to verify that the thing asking for access is the thing you already decided should have access.

Agents broke that assumption.

An MCP client can talk to any server. An agent operating on your behalf might need to interact with services it was never pre-registered with. Trust relationships may be dynamic, not pre-provisioned, and the more open-ended the task the more likely that is true. The authorization decisions are contextual. Sometimes a human needs to approve what’s happening in real time. An agent might need to negotiate access to a resource that neither you nor the agent anticipated when the mission started.

None of that fits the workload model. Not because agents think or exercise judgment, but because the ecosystem they operate in is open. Workload identity was built for closed ecosystems. The more capable and autonomous agents become, the less they stay inside them.

Discovery Is the Problem Nobody Wants to Talk About

The open ecosystem problem goes deeper than just “agents interact with arbitrary services.” The whole point of an agent is to find paths you didn’t anticipate. Tell an agent “go figure out why certificate issuance is broken” and it might follow a trail from CT logs to a CA status page to vendor Slack to a three-year-old wiki page to someone’s personal notes. That path isn’t architected. It emerges from the agent reasoning about the problem.

Every existing authorization model assumes someone already enumerated what exists.

| System | Resource Space | Discovery Model | Auth Timing | Trust Model |
| --- | --- | --- | --- | --- |
| SPIFFE | Closed, architected | None, interaction graph is designed | Deploy-time | Static, identity-bound |
| OAuth | Bounded by pre-registered integrations | None, API contracts exist | Integration-time + user consent | Static after consent |
| IAM | Closed, catalogued | None, administratively maintained | Admin-time | Static, role-bound |
| Zero Trust | Bounded by inventory and policy plane | None, known endpoints | Per-request | Session-scoped, contextual |
| Browser Security | Open, unbounded | Full, arbitrary traversal | Per-request, per-capability | None, no accumulation |
| Agentic Auth (needed) | Open, task-emergent | Reasoning-driven, discovered at runtime | Continuous, intra-task | Accumulative, task-scoped |

Every model except browser security assumes a closed resource space. Browser security is the only open-space model, but it doesn’t accumulate trust. Agents need open-space discovery with accumulative trust. Nothing in the current stack does both.

Structured authorization models assume you can enumerate the paths. But enumeration kills emergence. If you have to pre-authorize every possible resource an agent might touch, you’ve pre-solved the problem space. That defeats the purpose of having an agent explore it.

The security objection here is obvious. An agent “discovering paths you didn’t anticipate” sounds a lot like lateral movement. The difference is authorization. An attacker discovers paths to exploit vulnerabilities. An agent discovers paths to find capabilities, under a delegation, subject to policy, with every step logged. The distinction only holds if the governance layer is actually doing its job. Without it, agent discovery and attacker reconnaissance are indistinguishable. That’s not an argument against discovery. It’s an argument for getting the governance layer right.

The Authorization Direction Is Inverted

Workload identity is additive. You enumerate what’s permitted. Here’s the role, here’s the scope, here’s the list of services this workload can talk to. Everything outside that list is denied.

Agents need something different. Not pure positive enumeration, but mixed constraints: here’s the goal, here’s the scope you’re operating in, here’s what’s off limits, here’s when you escalate. Access outside the defined scope isn’t default-allowed. It’s negotiable through demonstrated relevance and appropriate oversight.

That’s goal-scoped authorization with negative constraints rather than positive enumeration. And before the security people start hyperventilating, this doesn’t mean “default allow with a blacklist.” That would be insane. Nobody is proposing that.

What it actually looks like is how we scope human delegation in practice. When a company hires a consultant and says “fix our efficiency problem,” they don’t hand them a list of every room they can enter, every file they can read, every person they can talk to. They give them a badge, a scope of work, a set of boundaries (don’t access HR records, don’t make personnel decisions), escalation requirements (get approval before committing to anything over $50k), and monitoring (weekly check-ins, expense reports, audit trail). That’s not default allow. It’s delegated authority with boundaries, escalation paths, and oversight.

The constraints are a mix of positive (here’s your scope), negative (here’s what’s off limits), and procedural (here’s when you need to ask). To be fair, no deployed identity protocol fully supports this mixed-constraint model today. OAuth scopes are basically positive enumeration. RBAC is positive enumeration. Policy grammars that can express mixed constraints exist (Cedar and its derivatives can express allow, deny, and escalation rules against the same resource), but nobody has deployed them for agent governance yet.
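To make the shape concrete, here’s a minimal Python sketch of a mixed-constraint evaluator. The scope prefixes, the $50k threshold, and the evaluator itself are illustrative assumptions, not any deployed policy engine; a Cedar-style grammar would express the same three constraint types declaratively.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"

@dataclass
class Request:
    resource: str       # e.g. "billing/invoices"
    action: str         # e.g. "read"
    cost_estimate: int  # estimated dollar impact of the action

# Hypothetical mixed-constraint grant for a "fix the billing error" task.
POSITIVE_SCOPE = ("billing/", "payments/")    # here's your scope
NEGATIVE_CONSTRAINTS = ("hr/", "personnel/")  # here's what's off limits
ESCALATION_THRESHOLD = 50_000                 # here's when you ask

def evaluate(req: Request) -> Decision:
    # Negative constraints are checked first and always win.
    if req.resource.startswith(NEGATIVE_CONSTRAINTS):
        return Decision.DENY
    # Procedural constraint: high-impact actions need approval.
    if req.cost_estimate > ESCALATION_THRESHOLD:
        return Decision.ESCALATE
    # Positive scope: in-scope requests proceed.
    if req.resource.startswith(POSITIVE_SCOPE):
        return Decision.ALLOW
    # Out of scope is not default-allow. It routes to escalation,
    # where relevance to the goal can be demonstrated.
    return Decision.ESCALATE

evaluate(Request("billing/invoices", "read", 0))   # ALLOW
evaluate(Request("hr/records", "read", 0))         # DENY
evaluate(Request("vendor-api/quotes", "read", 0))  # ESCALATE
```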

The mixed-constraint approach is how we govern humans organizationally, with identity infrastructure providing one piece of it. But the human identity stack is at least oriented in this direction. It has the concepts of delegation, consent, and conditional access. The workload identity stack doesn’t even have the vocabulary for it, because it was never designed for actors that discover their own paths.

The workload model can’t support this because it was designed to enumerate. The human model is oriented toward it because humans were the first actors that needed to operate in open, unbounded problem spaces with delegated authority and loosely defined scope.

The Human Identity Stack Got Here First

The human identity stack evolved these properties because humans needed them. Delegation exists because users interact with arbitrary services and need to grant scoped authority. Federation exists because trust crosses organizational boundaries. Consent flows exist because sometimes a human needs to approve what’s happening. Progressive auth exists because different operations require different levels of assurance, though in practice it’s barely deployed because it’s hard to implement well.

That last point matters. Progressive auth has been a nice-to-have for human identity, something most organizations skip because the friction isn’t worth it for human users who can just re-authenticate. For agents, it becomes essential. The more emergent the expectations, the more you need the ability to step up trust dynamically. Agents make progressive auth a requirement, not an aspiration.

And unlike the human case, progressive auth for agents is more tractable to build. The agent proposes an action, a policy engine or human approves, the scope expands with full audit. The governance gates can be automated. The building blocks exist. The composition is the work.
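A sketch of that composition, assuming a stand-in `approve` callable where a policy engine or human reviewer would sit; the log shape and names are invented for illustration.

```python
from datetime import datetime, timezone

audit_log = []

def step_up(session_scopes: set, proposed_scope: str, approve) -> bool:
    """Expand a session's scope only through an external approval gate.

    `approve` stands in for a policy engine or a human reviewer.
    Every proposal is logged, granted or not.
    """
    granted = approve(proposed_scope)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "proposed": proposed_scope,
        "granted": granted,
    })
    if granted:
        session_scopes.add(proposed_scope)
    return granted

# The agent starts narrow and earns scope mid-task.
scopes = {"logs:read"}
step_up(scopes, "metrics:read", approve=lambda s: s.endswith(":read"))   # granted
step_up(scopes, "prod-db:write", approve=lambda s: s.endswith(":read"))  # denied, logged
```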

The human stack built these primitives because humans operate in open, dynamic ecosystems. Workloads historically didn’t. Now agents do. And agents are going to force the deployment of progressive auth patterns that the human stack defined but never fully delivered on.

And you can see this playing out in real time. Every serious attempt to solve agent identity reaches for human identity concepts, not workload identity concepts. Dick Hardt built AAuth around delegation, consent, progressive trust, and token exchange. Not because those are OAuth features, but because those are the properties agents need, and the human identity stack is where they were first defined. Microsoft’s Entra Agent ID uses On-Behalf-Of flows, confidential clients, and delegation patterns. Google’s A2A protocol uses OAuth, task-based delegation, and agent cards for discovery.

Nobody is building agent auth by extending SPIFFE or WIMSE. That’s not because those are bad technologies. It’s because they solve a different layer. Agent auth lives above attestation, in the governance layer, and the concepts that keep showing up there (delegation, consent, session scope, progressive trust) all originate on the human side.

That’s not a coincidence. The people building the protocols are voting with their architecture, and they’re voting for the human side. They’re doing it because that’s where the right primitives already exist.

“Why Not Just Extend Workload Identity?”

The obvious counterargument is that you could start from workload identity and extend it to cover agents. It’s worth taking seriously.

SPIFFE is good technology and it works well where it fits. Cloud-native environments, Kubernetes clusters, modern service meshes. In those environments, SPIFFE’s model of dynamic attestation and identity issuance is exactly right. The problem isn’t SPIFFE. The problem is that you don’t get to change all the systems.

That’s why WIMSE exists. Not because SPIFFE failed, but because the real world has more environments than SPIFFE was designed for. Legacy systems, hybrid deployments, multi-cloud sprawl, enterprise environments that aren’t going to rearchitect around SPIFFE’s model. WIMSE is defining the broader patterns and extending the schemes to fit those other environments. That work is important and it’s still in progress.

There’s also a growing push to treat agents as non-human identities and extend workload identity with agent-specific attributes. Ephemeral provisioning, delegation chains, behavioral monitoring. The idea is that agents are just advanced NHIs, so you start from the workload stack and bolt on what’s missing. I understand the appeal. It lets you build on existing infrastructure without rethinking the model.

But what you end up bolting on is delegation, consent, session scope, and progressive trust. Those aren’t workload identity concepts being extended. Those are human identity concepts being retrofitted onto a foundation that was never designed for them. You’re starting from attestation and trying to work your way up to governance. Every concept you need to add comes from the other stack. At some point you have to ask whether you’re extending workload identity or just rebuilding human identity with extra steps.

Agent Identity Is a Governance Problem

Now apply that same logic to agents more broadly. Agents don’t operate in a world where every system speaks SPIFFE, or WIMSE, or any single workload identity protocol. They interact with whatever is out there. SaaS APIs. Legacy enterprise systems. Third-party services they discover at runtime. The environments agents operate in are even more heterogeneous than the environments WIMSE is trying to address.

And many of those systems don’t support delegation at all. They authenticate users with passwords and passkeys, and that’s it. No OBO flows, no token exchange, no scoped delegation. In those cases agents will need to fully impersonate users, authenticating with the user’s credentials as if they were the user. That’s not the ideal architecture. It’s the practical reality of a world where agents need to interact with systems that were built for humans and haven’t been updated. The identity infrastructure has to treat impersonation as a governed, auditable, revocable act rather than pretending it won’t happen.

I want to be honest about the contradiction here. The moment an agent injects Alice’s password into a legacy SaaS app, all of the governance properties this post argues for vanish. Principal-level accountability, cryptographic provenance, session-scoped delegation — none of it survives that boundary. The legacy system sees Alice. The audit log says Alice. There’s no way to distinguish Alice from an agent acting on Alice’s behalf. You can’t revoke the agent’s access without changing Alice’s password. I don’t have a good answer for that. It’s a real gap, and it will exist for as long as legacy systems do. The faster the world moves toward agent-native endpoints, the smaller this governance black hole gets. But right now it’s large.

At the same time, the world is moving toward agent-native endpoints. I’ve written before about a future where DNS SRV records sit right next to A records, one pointing at the website for humans and one pointing at an MCP endpoint for agents. That’s the direction. But identity infrastructure has to handle the full spectrum, from legacy systems that only understand passwords to native agent endpoints that support delegation and attestation natively. The spectrum will exist for a long time.

More than with humans or workloads, agent identity turns into a governance problem. Human identity is mostly about authentication. Workload identity is mostly about attestation. Agent identity is mostly about governance. Who authorized this agent. What scope was it given. Is that scope still valid. Should a human approve the next step. Can the delegation be revoked right now. Those are all governance questions, and they matter more for agents than they ever did for humans or workloads because agents act autonomously under delegated authority across systems nobody fully controls.

And unlike humans, agents possess neither liability nor common sense. A human with overly broad access still has judgment that says “this is technically allowed but clearly a bad idea” and faces personal consequences for getting it wrong. Agents have neither brake. The governance infrastructure has to provide externally what humans provide partially on their own.

For humans and workloads, identity and authorization are cleanly separable layers. For agents, they converge. An agent’s identity without its delegation context is meaningless, and its delegation context is authorization. Governance is where those two layers collapse into one.

The reason is structural. Workloads act on behalf of the organization that deployed them. The operator and the principal are the same entity. Agents introduce a new actor in the chain. They act on behalf of a specific human who delegated specific authority for a specific task. That “on behalf of” is simultaneously an identity fact and an authorization fact, and it doesn’t exist in the workload model at all.

That’s why the human identity stack keeps winning this argument.

Meanwhile, human identity concepts are deployed at planetary scale. Delegation and consent are mature, well-understood patterns with decades of deployment experience. Progressive trust is defined but barely deployed. Multi-hop delegation provenance is still being figured out. It’s an incomplete picture, but here’s the thing: the properties that are missing from the human side don’t even have definitions on the workload side. That’s still a decisive advantage.

But I want to be clear. The argument here is about properties, not protocols. I don’t think OAuth is the answer, even with DPoP. OAuth was designed for a world of pre-registered clients and tightly scoped API access. DPoP bolts on proof-of-possession, but it doesn’t change the fundamental model.

When Hardt built AAuth, he didn’t extend OAuth. He started a new protocol. He kept the concepts that work (delegation, consent, token exchange, progressive trust) and rebuilt the mechanics around agent-native patterns. HTTPS-based identity without pre-registration, HTTP message signing on every request, ephemeral keys, and multi-hop token exchange. That’s telling. The human identity stack has the right concepts, but the actual protocols need to be rebuilt for agents. The direction is human-side. The destination is something new.

This isn’t about which stack is theoretically better. It’s about which stack has the right primitives deployed in the environments agents actually operate in. The answer to that question is the human identity stack.

Discretion Makes It Harder, But It’s Not the Main Event

The behavioral stuff still matters. It’s just downstream of the structural argument.

Workloads execute predefined logic. You attest that the right code is running in the right environment, and from there you can reason about what it will do. Agents don’t work that way. When you give an autonomous AI agent access to your infrastructure with the goal of “improve system performance,” you can’t predict whether it will optimize efficiency or find creative shortcuts that break other systems. We’ve already seen models break out of containers by exploiting vulnerabilities rather than completing tasks as intended. Agents optimize objectives in ways that can violate intent unless constrained. That’s not a bug. It’s the expected behavior of systems designed to find novel paths to goals.

That means you can’t rely on code measurement alone to govern what an agent does. You also need behavioral monitoring, anomaly detection, conditional privilege, and the ability to put a human in the loop. Those are all human IAM patterns. But you need them because the ecosystem is open and the behavior is unpredictable. The open ecosystem is the first-order problem. The unpredictable behavior makes it worse.

And this is where the distinction between guidance and enforcement matters. System instructions are suggestions. An agent can be told “don’t access production data” in its prompt and still do it if a tool call is available and the reasoning chain leads there. Prompt injections can override instructions entirely. Policy enforcement is infrastructure. Cryptographic controls, governance layers, and authorization gates that sit outside the agent’s context and can’t be talked around. Agents need infrastructure they can’t override through reasoning, not instructions they’re supposed to follow.
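Concretely, the enforcement pattern is a gate that wraps every tool call and consults policy outside the model’s context. A minimal sketch, with a hypothetical tool and policy:

```python
class PolicyViolation(Exception):
    pass

def gated(tool_fn, policy):
    """Wrap a tool so every call passes an external policy check.

    The check lives outside the model's context window, so no prompt
    and no injected instruction can talk its way past it.
    """
    def wrapper(*args, **kwargs):
        if not policy(tool_fn.__name__, args, kwargs):
            raise PolicyViolation(f"{tool_fn.__name__}{args} denied by policy")
        return tool_fn(*args, **kwargs)
    return wrapper

# Hypothetical tool and policy, purely for illustration.
def query_db(table: str) -> str:
    return f"rows from {table}"

def no_production_tables(name, args, kwargs):
    return not any("prod" in str(a) for a in args)

query_db = gated(query_db, no_production_tables)

query_db("staging_orders")    # returns rows
# query_db("prod_customers")  # raises PolicyViolation, regardless of prompt
```

The point is placement: the policy function never enters the prompt, so there is nothing for an injection to override.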

What Agents Actually Need From the Human Stack

Session-scoped authority. I’ve written about this with the Tron identity disc metaphor. Agent spawns, gets a fresh disc, performs a mission, disc expires. That’s session semantics. It exists because the trust relationship is bounded and temporary, the way a user’s interaction with a service is bounded and temporary, not the way a workload’s persistent role in a service mesh works.

Think about what happens without it. An agent gets database write access for a migration task. Task completes. The credentials are still live. The next task is unrelated, but the agent still has write access to that database. A poisoned input, a bad reasoning chain, or just an optimization shortcut the agent thought was clever, and it drops a table. Not because it was malicious. Because it had credentials it no longer needed for a task it was no longer doing. That’s the agent equivalent of Bobby Tables, and it’s entirely preventable.

The logical endpoint of session-scoped authority is zero standing permissions. Every agent session starts empty. No credentials carry over from the last task. The agent accumulates only what it needs for this specific mission, and everything resets when the mission ends.

For humans, zero standing permissions is aspirational but rarely practiced because the friction isn’t worth it. Humans don’t want to re-request access to the same systems every morning. Agents don’t have that problem. They can request, wait, and proceed programmatically. The friction that makes zero standing permissions impractical for humans disappears for agents.
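Here’s a sketch of that lifecycle, assuming a hypothetical session object and approver; real grants would be short-lived credentials minted by an issuer, not strings in a set.

```python
import uuid

class AgentSession:
    """Identity-disc semantics: spawn empty, accumulate per-mission, reset."""

    def __init__(self, mission: str):
        self.mission = mission
        self.mission_id = str(uuid.uuid4())
        self.grants = set()   # zero standing permissions at spawn

    def request(self, permission: str, approver) -> bool:
        # Agents can request-and-wait programmatically; humans won't.
        if approver(self.mission_id, permission):
            self.grants.add(permission)
            return True
        return False

    def end(self):
        # Everything resets when the mission ends. Nothing carries over.
        self.grants.clear()

session = AgentSession("migrate billing db")
session.request("db:write:billing", approver=lambda m, p: True)
session.end()
assert not session.grants  # no leftover write access for the next task
```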

The hard question is how permissions get granted at runtime. Predefined policy handles the predictable paths. Billing agent gets billing APIs. That works, but it’s enumeration, and enumeration breaks down for open-ended tasks. Human-gated expansion handles the unpredictable paths, but it kills autonomy.

The mechanism that would actually make zero standing permissions work for emergent behavior is goal-scoped evaluation. Does this request serve the stated goal within the stated boundaries. That’s the same unsolved problem the rest of this piece keeps circling. Zero standing permissions is the right ideal. It’s achievable today for the predictable portion of agent work. The gap is the same gap.

Delegation with provenance. Agents are user agents in the truest sense. They carry delegated user authority into digital systems. AAuth formalizes this with agent tokens that bind signing keys to identity. The question “who authorized this agent to do this?” is a delegation question. Delegation is a human identity primitive because humans were the first actors that operated across trust boundaries and needed to grant scoped authority to others.

Chaining that delegation cryptographically across multi-hop paths, from user to agent to tool to downstream service while maintaining proof of the original user’s intent, is genuinely hard. Standard OBO flows are often too brittle for this. This is where the industry needs to go, not where it is today.
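To make the primitive concrete, here’s a toy multi-hop chain using Ed25519 signatures via the Python `cryptography` package. The token shape is invented for illustration; AAuth and OBO flows define their own formats.

```python
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def raw(pub):
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

def delegate(signer_key, delegate_pub, scope, parent=None):
    """One hop: the current holder signs the next holder's key and a
    (possibly narrowed) scope, linked to the previous hop's signature."""
    body = {
        "delegate": raw(delegate_pub).hex(),
        "scope": scope,
        "parent": parent["sig"] if parent else None,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "payload": payload,
            "sig": signer_key.sign(payload).hex()}

# user -> agent -> tool, each hop narrowing scope
user = ed25519.Ed25519PrivateKey.generate()
agent = ed25519.Ed25519PrivateKey.generate()
tool = ed25519.Ed25519PrivateKey.generate()

hop1 = delegate(user, agent.public_key(), ["billing:*"])
hop2 = delegate(agent, tool.public_key(), ["billing:read"], parent=hop1)

# A verifier can walk the chain back to the user's key, the root of intent.
# A real verifier must also check that hop2's signer IS hop1's delegate and
# that scope only ever narrows; that's the hard, unstandardized part.
user.public_key().verify(bytes.fromhex(hop1["sig"]), hop1["payload"])
agent.public_key().verify(bytes.fromhex(hop2["sig"]), hop2["payload"])
```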

Progressive trust. AAuth lets a resource demand anything from a signed request to verified agent identity to full user authorization. That gradient only makes sense when the trust relationship is negotiated dynamically. Workloads don’t negotiate trust. They either have a role or they don’t.

Accountability at the principal level. When an agent approves a transaction, files a regulatory report, or alters infrastructure state, the audit question is “who authorized this and was it within scope?” Today’s logs can’t answer that. The log says an API token performed a read on a customer record. That token is shared across dozens of agents. Which agent? Acting on whose delegation? For what task? The log can’t say.

And even if it could identify the agent, there’s nothing connecting that action to the human authorization that allowed it. Nobody asks “which Kubernetes pod approved this wire transfer.” Governance frameworks reason about actors. That’s why every protocol effort maps agent identity to principal identity.
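For contrast, here is roughly the set of fields an agent-aware audit record would need before those questions become answerable. The field names are illustrative, not a schema from any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAuditRecord:
    action: str          # "read customer record 8841"
    agent_id: str        # which agent, not which shared API token
    principal: str       # the human whose authority was exercised
    delegation_id: str   # which grant authorized the action
    task_id: str         # the mission the action served
    scope_at_time: str   # the scope that was valid when it ran
```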

Goal-scoped authorization. Agents need mixed constraints rather than pure positive enumeration. Define the scope, set the boundaries, establish the escalation paths, delegate the goal, let the agent figure out the path. That’s how we’ve governed human actors in organizations for centuries. The identity and authorization infrastructure to support it exists in the human stack because that’s where it was needed first.

But I’ll be direct. Goal-scoped authorization is the hardest unsolved engineering problem in this space. The fundamental tension is temporal. Authorization happens before execution, but agents discover what they need during execution. Current authorization systems operate on verbs and nouns (allow this action on this resource). They don’t understand goals. Translating “fix the billing error” into a set of allowed API calls at runtime, without the agent hallucinating its way into a catastrophe, requires a just-in-time policy layer that doesn’t exist yet.

Progressive trust gets us part of the way there. The agent proposes an action, a policy engine or human approves the specific derived action before it executes. But the full solution is ahead of us, not behind us.

I know how this sounds to security people. “Goal-based authorization” sounds like the agent decides what it needs based on its own interpretation of a goal. That’s terrifying. It sounds like self-authorizing AI. But the alternative is pretending we can enumerate every action an agent might need in advance, and that fails silently. Either the agent operates within the pre-authorized list and can’t do its job, or someone over-provisions “just in case” and the agent has access to things it shouldn’t. Both are security failures. One just looks tidy on paper. Goal-based auth at least makes the governance visible. The agent proposes, the policy evaluates, the decision is logged. The scary part isn’t that we need goal-based auth. The scary part is that we don’t have it yet, so people are shipping agents with over-provisioned static credentials instead.

And there’s a deeper problem I want to name honestly. The only thing capable of evaluating whether a specific API call serves a broader goal is another LLM. And that means putting a probabilistic, hallucination-prone, high-latency system into the critical path of every infrastructure request. You’re using the thing you’re trying to govern as the governance mechanism. That’s not just an engineering gap waiting to be filled. It’s a fundamental architectural tension that the industry hasn’t figured out how to resolve. Progressive trust with human-gated escalation is the best interim answer, but it’s a workaround, not a solution.

This Isn’t About Throwing Away Attestation

I want to be clear about something because readers will assume otherwise. This argument is not “throw away workload identity primitives.” I’ve spent years arguing that attestation is MFA for workloads. I’ve written about measured enclaves, runtime attestation, and hardware-rooted identity extensively. None of that goes away.

You absolutely need attestation to prove the agent is running the right code in the right environment. You need runtime measurement to detect tampering. You need hardware roots of trust. If a hacker injects malicious code into an agent that has broad delegated authority, you need to know. That’s the workload identity stack doing its job.

In fact, attestation isn’t just complementary to the governance layer. It’s prerequisite. You can’t safely delegate authority to something you can’t verify. All the governance, delegation, and consent primitives in the world are meaningless if the code executing them has been tampered with. Attestation is the foundation the governance layer stands on.

But attestation alone isn’t enough. Proving that the right code is running doesn’t tell you who authorized this agent to act, what scope it was delegated, whether it’s operating within that scope, or whether a human needs to approve the next action. Those are delegation, consent, and governance questions. Those live in the human identity stack.

What agents actually need is both. Workload-style attestation as the foundation, with human-style delegation, consent, and progressive trust built on top.

I’ve argued before that attestation is MFA for workloads. It proves code integrity, runtime environment, and platform state, the way MFA proves presence, possession, and freshness for humans. For agents, we need to extend that into principal-level attestation. Not just “is this the right code in the right environment?” but also “who delegated authority to this agent, under what policy, with what scope, and is that delegation still valid?”

That’s multi-factor attestation of an acting principal. Code integrity from the workload stack, delegation provenance from the human stack, policy snapshot and session scope binding the two together. Neither stack delivers that alone today.

The argument is about where the center of gravity is, not about discarding one stack entirely. And the center of gravity is on the human side, because the hard problems for agents are delegation and governance, not runtime measurement.

Where the Properties Actually Align (And Where They Don’t)

I’ve been arguing agents are more like humans than workloads. That’s true as a center-of-gravity claim. But it’s not total alignment, and pretending otherwise invites the wrong criticisms. Here’s where the properties actually land.

What agents inherit from the human side:

Delegation with scoped authority. Session-bounded trust. Progressive auth and step-up. Cross-boundary trust negotiation. Principal-level accountability. Open ecosystem discovery. These are the properties that make agents look like humans and not like workloads. They’re also the properties that are hardest to solve and least mature.

What agents inherit from the workload side:

Code integrity attestation. Runtime measurement. Programmatic credential handling with no human in the authentication loop. Ephemeral identity that doesn’t persist across sessions. These are well-understood, and the workload identity stack handles them. Agents don’t authenticate the way humans do. They don’t type passwords or touch biometric sensors. They prove what code is running and in what environment. That’s attestation, and it stays on the workload side.

What neither stack gives them:

This is the part nobody is talking about enough. Agents have properties that don’t map cleanly to either the human or workload model.

Accumulative trust within a task that resets between tasks. Human trust accumulates over a career and persists. Workload trust is static and role-bound. Agent trust needs to build during a mission as the agent demonstrates relevance and competence, then reset completely when the mission ends. Nothing in either stack supports that lifecycle.

Goal-scoped authorization with emergent resource discovery. I’ve already called this the hardest unsolved problem. Current auth systems operate on verbs and nouns. Agents need auth systems that operate on goals and boundaries. Neither stack was designed for this.

Delegation where the delegate doesn’t share the delegator’s intent. Every existing delegation protocol assumes the delegate understands and shares the user’s intent. When a human delegates to another human through OAuth, both parties generally understand what “handle my calendar” means and what it doesn’t.

An agent doesn’t share intent. It shares instructions. It will pursue the letter of the delegation through whatever path optimizes the objective, even if the human would have stopped and said “that’s not what I meant.” This isn’t a philosophy problem. It’s a protocol-level assumption violation. No existing delegation framework accounts for delegates that optimize rather than interpret.

Simultaneous proof of code identity and delegation authority. Agents need to prove both what they are (attestation) and who authorized them to act (delegation) in a single transaction. Those proofs come from different stacks with different trust roots. A system can check both sequentially, verify the attestation, then verify the delegation, and that’s buildable today. But binding them together cryptographically into a single verifiable object so a relying party can verify both at once without trusting the binding layer is an unsolved composition problem.
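The sequential version is simple enough to sketch, with invented `Proof` objects standing in for real attestation evidence and delegation tokens; the final equality check is exactly the unsolved binding problem.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    valid: bool
    subject_key: str  # the key each proof speaks for

def authorize(attestation: Proof, delegation: Proof) -> bool:
    """Check what the agent is, then check who authorized it."""
    if not attestation.valid:
        return False  # wrong code or tampered environment
    if not delegation.valid:
        return False  # expired, revoked, or out-of-scope grant
    # The unsolved part: nothing cryptographically binds these two
    # proofs into one object. The link is a bare equality assertion
    # that the verifier itself has to be trusted to perform.
    return attestation.subject_key == delegation.subject_key

authorize(Proof(True, "k1"), Proof(True, "k1"))  # True: both proofs hold
```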

Vulnerability to context poisoning that persists across sessions. I’ve written about the “Invitation Is All You Need” attack where a poisoned calendar entry injected instructions into an agent’s memory that executed days later. Humans can be socially engineered, but they don’t carry the payload across sessions the way agents do. Workloads don’t accumulate context at all. Agent session isolation is a new problem that needs new primitives.

The honest summary is this. Agents inherit their governance properties from the human side and their verification properties from the workload side, but neither stack addresses the properties that are unique to agents. The solution isn’t OAuth with attestation bolted on. It’s something new that inherits from both lineages and adds primitives for accumulative task-scoped trust, goal-based authorization, and session isolation. That thing doesn’t exist yet.

Where This Framing Breaks

Saying “agents are like humans” implies the workload stack fails because workloads lack something agents have. Discretion, autonomy, behavioral complexity. That’s the wrong diagnosis. The workload stack fails because it was built for a world of pre-registered clients, tightly bound server relationships, and closed trust ecosystems. The more capable agents become, the less they stay in that world.

The human identity stack fits better not because agents are human-like, but because it’s oriented toward the structural properties agents need. Open ecosystems. Dynamic trust negotiation. Delegation across boundaries. Session-scoped authority. Progressive assurance. Not all of these are fully deployed today. Some are defined but immature. Some don’t exist as protocols yet. But the concepts, the vocabulary, and the architectural direction all come from the human side. The workload side doesn’t even have the vocabulary for most of them.

Those properties exist in the human stack because humans needed them first. Now agents need them too.

The Convergence We’ve Already Seen

My blog has traced this progression for a while now. Machines were static, long-lived, pre-registered. Workloads broke that model with ephemeral, dynamic, attestation-based identity. Each step in that evolution adopted identity properties that were already standard in human identity systems. Dynamic issuance. Short credential lifetimes. Context-aware access. Attestation as MFA for workloads. Workload identity got better by becoming more like user identity.

Agents are the next step in that same convergence. They don’t just need dynamic credentials and attestation. They need delegation, consent, progressive trust, session scope, and goal-based authorization. The most complete and most deployed versions of those primitives live in the human stack. Some exist in other forms elsewhere (SPIFFE has trust domain federation, capability tokens like Macaroons exist independently), but the human stack is where the broadest set of these concepts has been defined, tested, and deployed at scale.

The Actual Claim

Agent identity is a governance problem. Not an authentication problem, not an attestation problem. The hard questions are all governance questions. Who delegated authority. What scope. Is it still valid. Should a human approve the next step. For humans and workloads, identity and authorization are separate layers. For agents, they collapse. The delegation is the identity.

The human identity stack is where principal identity primitives live. Not because agents are people, but because people were the first actors that needed identity in open ecosystems with delegated authority and unbounded problem spaces.

Every protocol designer who sits down to solve agent auth rediscovers this and reaches for human identity concepts, not workload identity concepts. The protocols they build aren’t OAuth. They’re something new. But they inherit from the human side every time. That convergence is the argument.

The delegation and governance layer is buildable today. Goal-scoped authorization and intent verification are ahead of us. The first generation of agent identity systems will solve governance. The second will solve intent.

There’s a pattern that plays out across every regulated industry. Requirements increase. Complexity compounds. The people responsible for compliance realize they can’t keep up with manual processes. So instead of building the capacity to meet the rising bar, they quietly lower the specificity of their commitments.

It’s rational behavior. A policy that says “we perform regular reviews” can’t be contradicted the way a policy that says “we perform reviews every 72 hours” can. The less you commit to on paper, the less exposure you carry.

The problem is that this rational behavior, repeated across enough organizations and enough audit cycles, hollows out the entire compliance system from the inside. Documents stop describing what organizations actually do. They start describing the minimum an auditor will accept. The gap between documentation and reality widens. Nobody notices until something breaks.

A Real-Time Example

A recent incident in the Mozilla CA Program put this dynamic on public display in a way worth studying regardless of whether you work in PKI.

Amazon Trust Services disclosed that their Certificate Revocation Lists sometimes backdate a timestamp called “thisUpdate” by up to a few hours. The practice itself is defensible. It accommodates clock skew in client systems. When they updated their policy document to disclose the behavior, they described it as CRLs “may be backdated by up to a few hours.”

A community member pointed out the obvious. “A few hours” is un-auditable. Without a defined upper bound, there’s no way for an auditor, a monitoring tool, or a relying party to evaluate whether any given CRL falls within the CA’s stated practice. Twelve hours? Still “a few.” Twenty-four? Who decides?

When pressed, Amazon’s response was telling. They don’t plan to add detailed certificate profiles back into their policy documents. They believe referencing external requirements satisfies their disclosure obligations. We’ll tell you we follow the rules, but we won’t tell you how.

Apple, Mozilla, and Google’s Chrome team then independently pushed back. Each stated that referencing external standards is necessary but not sufficient. Policy documents must describe actual implementation choices with enough precision to be verifiable.

Apple’s Dustin Hollenback was direct. “The Apple Root Program expects policy documents to describe the CA Owner’s specific implementation of applicable requirements and operational practices, not merely incorporate them by reference.”

Mozilla’s Ben Wilson went further, noting that “subjective descriptors without defined bounds or technical context make it difficult to evaluate compliance, support audit testing, or enable independent analysis.” Mozilla has since opened Issue #295 to strengthen the MRSP accordingly.

Chrome’s response summarized the situation most clearly:

“We consider reducing a CP/CPS to a generic pointer where it becomes impossible to distinguish between CAs that maintain robust, risk-averse practices and those that merely operate at the edge of compliance as being harmful to the reliable security of Chrome’s users.”

They also noted that prior versions of Amazon’s policy had considerably more profile detail, calling the trend of stripping operational commitments “a regression in ecosystem transparency.”

The Pattern Underneath

What makes PKI useful as a case study isn’t that certificate authorities are uniquely bad at this. It’s that their compliance process is uniquely visible. CP/CPS documents are public. Incident reports are filed in public Bugzilla threads. Root program responses are posted where anyone can read them. The entire negotiation between “what we do” and “what we’re willing to commit to on paper” plays out in the open.

In most regulated industries, you never see this. The equivalent conversations in finance, FedRAMP, healthcare, or energy happen behind closed doors between compliance staff and auditors. The dilution is invisible to everyone outside the room. A bank’s internal policies get vaguer over time and nobody outside the compliance team and their auditors knows it happened. A FedRAMP authorization package gets thinner and the only people who notice are the assessors reviewing it. The dynamic is the same. The transparency isn’t.

So when you watch a CA update its policy with “a few hours” and three oversight bodies publicly push back, you’re seeing something that happens constantly across every regulated domain. You’re just not usually allowed to watch.

Strip away the PKI details and the pattern is familiar to anyone who has worked in compliance. An organization starts with detailed documentation of its practices. Requirements grow. Maintaining alignment between what the documents say and what the systems actually do gets expensive. Someone realizes that vague language creates less exposure than specific language. Sometimes it’s the compliance team running out of capacity. Sometimes it’s legal counsel actively advising against specific commitments, believing that “reasonable efforts” is harder to litigate against than “24 hours.” Either way, they’re trading audit risk for liability risk and increasing both. The documents get trimmed. Profiles get removed. Temporal commitments become subjective. “Regularly.” “Promptly.” “Periodically.” Operational descriptions become references to external standards.

Each individual edit is defensible. Taken together, they produce a document that can’t be meaningfully audited because there’s nothing concrete to audit against. One community member in the Amazon thread called this “Compliance by Ambiguity,” the practice of using generic, non-technical language to avoid committing to specific operational parameters. It’s a perfect label for a pattern that shows up everywhere.

This is the compliance version of Goodhart’s Law. When organizations optimize their policy documents for audit survival rather than operational transparency, the documents stop serving any of their original functions. Auditors can’t verify practices against vague commitments. Internal teams can’t use the documents to understand what’s expected of them. Regulators can’t evaluate whether the stated approach actually manages risk. The document becomes theater. And audits are already structurally limited by point-in-time sampling, auditee-selected scope, and the inherent conflict of the auditor working for the entity being audited. Layering ambiguous commitments on top of those limitations removes whatever verification power the process had left.

And it’s accelerating. Financial services firms deal with overlapping requirements from dozens of jurisdictions. Healthcare organizations juggle HIPAA, state privacy laws, and emerging AI governance frameworks simultaneously. Even relatively narrow domains like certificate authority operations have seen requirement growth compound year over year as ballot measures, policy updates, and regional regulations stack on top of each other. The manual approach to compliance documentation was already strained a decade ago. Today it’s breaking.

In PKI alone, governance obligations have grown 52-fold since 2005. The pattern is similar in every regulated domain that has added frameworks faster than it has added capacity to manage them.

Most organizations choose dilution. Not because they’re negligent, but because the alternative barely exists yet. There is no tooling deployed at scale that continuously compares what a policy document says against what the infrastructure actually does. No system that flags when a regulatory update creates a gap between stated practice and new requirements. No automated way to verify that temporal commitments (“within 24 hours,” “no more than 72 hours”) match operational reality. So people do what people do when workload exceeds capacity. They cut corners on the parts that seem least likely to matter this quarter. Policy precision feels like a luxury when you’re scrambling to meet the requirements themselves.

What Vagueness Actually Costs

The short-term calculus makes sense. The long-term cost doesn’t.

I went back and looked at public incidents in the Mozilla CA Program going back to 2018. Across roughly 500 cases, about 70% fall into process and operational failures rather than code-level defects. A large portion trace back to gaps between what an organization actually does and what its documents say it does. The organizations that ultimately lost trust follow a consistent pattern. Documents vague enough to avoid direct contradiction, but too vague to demonstrate that operations stayed within defined parameters. The decay is always gradual. The loss of trust always looks sudden.

The breakdown is telling. Of the four major incident categories, Governance & Compliance failures account for roughly half of all incidents, more than certificate misissuance, revocation failures, and validation errors combined. The primary cause isn’t code bugs or cryptographic weaknesses. It’s administrative oversight. Late audit reports, incomplete analysis, delayed reporting. The stuff that lives in policy documents and process descriptions, not in code.


This holds outside PKI. The financial institutions that get into the worst trouble with regulators aren’t usually the ones doing something explicitly prohibited. They’re the ones whose internal documentation was too vague to prove they were doing what they claimed. Read the details behind SOX failures, GDPR enforcement actions, and FDA warning letters, and you’ll find the same structural problem. Stated practices didn’t match reality, and nobody caught it because the stated practices were too imprecise to evaluate.

Vagueness also creates operational risk that has nothing to do with regulators. When your own engineering, compliance, and legal teams can’t look at a policy document and know exactly what’s expected, they fill in the gaps with assumptions. Different teams make different assumptions. Practices diverge. The organization thinks it’s operating one way because that’s what the document sort of implies. The reality is something else. And the gap only surfaces when an auditor, a regulator, or an incident forces someone to look closely.

The deeper issue is that vagueness removes auditability as a control surface. When commitments are measurable, deviations surface automatically. A system can check whether a CRL was backdated by more than two hours the same way it checks whether a certificate was issued with the wrong key usage extension. The commitment is binary. It either holds or it doesn’t. When commitments are subjective, deviations become interpretive. “A few hours” can’t be checked by a machine. It can only be argued about by people. That shifts risk detection from systems to negotiation. Negotiation doesn’t scale, produces inconsistent outcomes, and worst of all, it only happens between the auditee and the auditor. The regulators and the public who actually bear the risk aren’t in the room.
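That check really is mechanical once a bound exists. A sketch, using observation time as a monitoring-side stand-in for the CRL’s actual generation time:

```python
from datetime import datetime, timedelta, timezone

BACKDATE_BOUND = timedelta(hours=2)  # a defined bound; "a few hours" is not

def crl_within_commitment(this_update: datetime, observed_at: datetime) -> bool:
    """Machine-checkable version of the backdating commitment.

    With a numeric bound this is one comparison. With "a few hours"
    it is an argument between people.
    """
    return observed_at - this_update <= BACKDATE_BOUND

crl_within_commitment(
    this_update=datetime(2025, 6, 1, 10, 0, tzinfo=timezone.utc),
    observed_at=datetime(2025, 6, 1, 11, 30, tzinfo=timezone.utc),
)  # True: 90 minutes of apparent backdating, inside the two-hour bound
```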

Measurable commitments create automatic drift detection. Subjective commitments create negotiated drift.

That spectrum, from machine-checkable to purely subjective, is the diagnostic. Everything short of machine-checkable is a gap waiting to be exploited by time pressure, turnover, or organizational drift.

What Would Have to Change

Solving this means treating compliance documentation as infrastructure rather than paperwork. In the same way organizations moved from manual deployments to CI/CD pipelines, compliance needs to move from static documents reviewed annually to living systems verified continuously.

The instinct is to throw AI at it, and that instinct is half right. LLMs are good at ingesting unstructured policy documents. But compliance verification isn’t a search problem. It’s a systematic reasoning problem. You need to trace requirements through hierarchies, exceptions, and precedence rules, then compare them against operational evidence. Recent research shows that RAG-based approaches still hallucinate 17-33% of the time on legal and compliance questions, even with domain-specific retrieval. The failure mode isn’t bad prompting. It’s architectural. You cannot train a model to strictly verify “a few hours” any more than you can train an auditor to.

The fix isn’t better retrieval. It’s decomposing complex compliance questions into bounded sub-queries against explicit structures that encode regulatory hierarchy and organizational context, keeping the LLM’s role narrow enough that its errors can be isolated and reviewed.

That means tooling that ingests policy documents and maps commitments to regulatory requirements. Systems that flag language failing basic auditability checks, like temporal bounds described with subjective terms instead of defined thresholds. Automated comparison of stated practices against actual system behavior, running continuously rather than at audit time.
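Even a crude version of the language check is mechanical. A sketch of a commitment lint, with an obviously incomplete, illustrative phrase list:

```python
import re

# Subjective temporal phrases that defeat audit tooling (incomplete list)...
SUBJECTIVE = re.compile(
    r"\b(a few (hours|days)|regularly|promptly|periodically|"
    r"reasonable efforts|timely)\b", re.IGNORECASE)

# ...versus bounded commitments a machine can check.
BOUNDED = re.compile(r"\b(within|up to) \d+ (minutes|hours|days)\b",
                     re.IGNORECASE)

def lint_commitment(sentence: str) -> list:
    if SUBJECTIVE.search(sentence) and not BOUNDED.search(sentence):
        return [f"unauditable temporal commitment: {sentence!r}"]
    return []

lint_commitment("CRLs may be backdated by up to a few hours.")  # flagged
lint_commitment("CRLs may be backdated by up to 2 hours.")      # clean
```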

In the Amazon case, a system like this would have caught “a few hours” before it was published. Not because backdating is prohibited, but because the description lacks the specificity needed for anyone to verify compliance with it. The system wouldn’t need to understand CRL semantics. It would just need to know that temporal bounds in operational descriptions require defined, measurable thresholds to be auditable.

Scale that across any compliance domain. Every vague commitment is a gap. Every gap is a place where practice can diverge from documentation without detection. Every undetected divergence is risk accumulating quietly until something forces it into the open.

The Amazon incident is useful because it forced the people who oversee trust decisions to say out loud what has been implicit for years. The bar for documentation specificity is rising, and organizations that optimize for minimal disclosure are optimizing for the wrong thing. That message goes well beyond certificate authorities. The ones that keep diluting their commitments will discover that vagueness isn’t a shield. It’s a slow-moving liability that compounds until it becomes an acute one.

The regulatory environment isn’t going to get simpler. The organizations that treat policy precision as optional will discover that ambiguity scales faster than governance, and that systems which cannot be automatically verified will eventually be manually challenged.

Recently, I was talking to one of my kids, now in university, about why housing feels so out of reach here in Washington. He asked the simple question so many young people are asking: Why is it so expensive to just have a place to live?

There’s no single answer, but there is a clear outlier, especially in big cities, that drives up costs far more than most people realize: bureaucracy.

How broken is the math? Policymakers are now seriously debating 50-year mortgages just to make homeownership work. A 50-year loan lowers the monthly payment, but it also means you never build real equity. You spend most of your adult life paying interest and end up owing almost as much as you started with. You cannot use it to move up because you never escape the debt. It is not a bridge to ownership. It is a treadmill.

And the reason we need it is not interest rates or construction costs. It is the cost of permission.

The Price of Permission

According to the National Association of Home Builders, about 24 percent of the price of a new home in America is regulation: permits, zoning, fees, and delays.

In Washington, the burden is closer to 30 percent.

At Seattle’s median home price of about $853,000, roughly $250,000 is regulation: permits, fees, and delays. If Seattle carried Houston’s regulatory burden, that same house would cost much closer to $600,000.

The difference is not labor or lumber. It is paperwork. It is the cost of waiting, of hearings, of permission.

King County then takes about one percent a year in property taxes, around $8,400 annually, for the privilege of keeping what you already paid for. Combined, bureaucracy and taxes explain almost a third of the cost of shelter in one of America’s most expensive cities.
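The arithmetic behind those figures, using the post’s rounded numbers (the small mismatches are rounding; the Houston comparison treats its burden as comparatively small):

```python
median_price = 853_000        # Seattle median, per the post
wa_reg_share = 0.30           # Washington's regulatory share of price
property_tax_rate = 0.01      # King County, roughly one percent a year

reg_cost = median_price * wa_reg_share             # 255,900 ~ "$250,000"
annual_tax = median_price * property_tax_rate      # 8,530   ~ "$8,400"
price_net_of_regulation = median_price - reg_cost  # 597,100 ~ "$600,000"
```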

The Hidden Cost of Bureaucracy

The public conversation stops at the sticker price. It should not.

That regulatory cost does not disappear once you close. It gets financed.

If you borrow $250,000 in regulatory overhead at 6.5 percent, here is what that bureaucracy really costs:

| Loan Term | Regulatory Principal | Interest Paid | Total Cost |
| --- | --- | --- | --- |
| 30 years | $250,000 | $307,000 | $557,000 |
| 50 years | $250,000 | $564,000 | $814,000 |

A quarter-million dollars of regulation quietly becomes more than $800,000 over the life of a 50-year loan.

Bureaucracy does not just raise prices. It compounds them.
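Those totals come from standard amortization arithmetic. A quick sketch; run at exactly 6.5 percent it produces interest a bit above the table (the table’s figures imply a rate just under 6.5), but the compounding picture is the same:

```python
def amortize(principal: float, annual_rate: float, years: int):
    """Fixed-rate amortization: monthly payment and lifetime interest."""
    r = annual_rate / 12
    n = years * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment, payment * n - principal

for years in (30, 50):
    pmt, interest = amortize(250_000, 0.065, years)
    print(f"{years}y: ${pmt:,.0f}/mo, ${interest:,.0f} interest")
# 30y: ~ $1,580/mo, ~ $319k interest
# 50y: ~ $1,409/mo, ~ $596k interest
```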

The System That Made Housing Expensive

Every rule had a reason. Fire safety. Drainage. Noise. Aesthetic harmony. Each one made sense on its own. Together they have made it almost impossible to build.

Seattle’s design review process can take years. The Growth Management Act limits where anything can go. In parts of Woodinville, just minutes from Seattle, zoning is RA-5: one home per five acres. The same land under typical urban zoning could hold forty homes. Under Seattle’s new fourplex rules, one hundred sixty units. The scarcity is not geographic. It is legal.

Fees pile up. Permits expire mid-project. Every safeguard adds cost and delay until affordability becomes a memory.

If you want to see the endgame of this logic, look at the California coast.

After the fires that swept through the Santa Monica Mountains and Malibu last year, more than four hundred homes were lost. By early 2025, fewer than fifty rebuilding permits had been issued, and barely a dozen homes had been completed. Each application moves through overlapping city, county, and coastal reviews that can take years even for an identical replacement on the same lot. In Texas, the same house could be rebuilt in less than a year.

Here, the process outlived the purpose. Rules written to preserve the landscape now keep people from returning to it. The result is a coastline where the danger has passed, but the displacement never ends.

We built a system that rewards control instead of results. The outcome is exactly what the incentives predict: scarcity.

The Multi-Family Trap

Try to build multi-family housing and you will see how the system works in practice. In much of Seattle it is still illegal. Where it is technically allowed, the odds are against you.

You buy land. You design a project. You spend years and millions navigating variances, hearings, and neighborhood appeals. You pay lawyers, consultants, and taxes while you wait. And at the end, the city might still say no.

You are left holding land you cannot use and a balance sheet you cannot fix.

Seattle’s “One Home” four-unit reform was meant to solve this. It helps on paper. In practice, the same bureaucracy decides what counts as acceptable housing, and the same delays make it unaffordable to build. We did not fix the problem. We moved it.

This is where incentives collapse. If a small developer looks at that risk and realizes they might spend years fighting the city and still lose, they walk away. They put the money in the stock market instead. It is liquid, predictable, and far less likely to end with a worthless lot.

When housing policy makes real investment riskier than speculation, capital leaves. When capital leaves, supply dies.

The Death of the Small Home

It used to be possible to build small. Starter homes, bungalows, cottages. The foundation of the middle class. They are gone.

Codes now set minimum lot sizes, minimum square footage, and minimum parking. Each rule pushes builders toward large, expensive projects that can survive the regulatory drag. The system punishes simplicity.

Seattle’s accessory dwelling unit and backyard cottage rules are small steps in the right direction. They make small building legal again, but not easy. Permitting still takes months, and costs that once seemed modest are now out of reach.

Some assume builders would choose large homes anyway. The math says otherwise. A builder who can sell ten $400,000 homes makes more than one who sells three $900,000 homes on the same land and moves the capital faster. Builders follow returns, not square footage. They build large because the regulatory drag makes small uneconomical, not because they prefer it.

The result is predictable. Modest housing disappears. “Affordable” becomes a campaign word instead of a floorplan.

The Tax You Can Never Escape

Even if you beat the system and buy a home, the meter never stops.

Property taxes rise every year, often faster than wages. The rate barely changes, but assessed values jump. A $600,000 house becomes an $850,000 house on paper, and the tax bill rises with it.

Those assessments are often based on bad data.

Valuations can rise even when real prices fall. The appeal process is slow and opaque. Few succeed. My own home’s assessed value rose 28 percent last year, even as Zillow’s and Redfin’s estimates fell 10 percent over the prior year. As a result, the county now values it substantially above what the market says it’s worth. I appealed. No response.

For many families, especially retirees on fixed incomes, it means selling just to survive. People move not because they want to, but because the tax bill leaves them no choice.

In places like Seattle, it does not end there. When you finally sell, you face a city-level real-estate excise tax on top of the state’s version. The government takes a cut of the same inflated value it helped create.

The overhead gets buried in the mortgage, compounded by interest, and slowly eats whatever equity a family might have built. By the time you sell, the city takes another cut. The cycle repeats. Ownership becomes a lease under another name.

The Rent Illusion

Renters often think they are immune to this.

They are not.

That same regulatory overhead that a buyer finances into a mortgage gets built into the developer’s cost structure before the first tenant ever moves in.

If a building costs 30 percent more to complete, the rent must be 30 percent higher just to service the debt.

Developers confirm it in their pro formas. Roughly 25 to 35 percent of monthly rent in new Seattle buildings reflects regulatory costs: fees, permitting delays, compliance financing, and required “affordable” offsets that increase the baseline cost for everyone else.

For a $2,800 two-bedroom apartment, that is $700 to $980 every month paid for process. Over a ten-year tenancy, a renter pays between $84,000 and $118,000 in hidden bureaucracy, enough for a down payment on the very home they cannot afford to buy.

Because rents are based on the cost to build, not the cost to live, the renter never builds equity and never escapes the cycle.

The result is two generations trapped in the same system: owners financing bureaucracy with debt, and renters financing it with rent.

The only real difference is who holds the paperwork. One signs a mortgage, the other a lease, but both are paying interest on the same bureaucracy.

It Does Not Have to Be This Way

Other places made different choices.

Houston has no conventional zoning. It enforces safety codes but lets supply meet demand. Builders build. Prices stay roughly twenty to thirty percent lower than in cities with the same population and heavier regulation, according to the Turner & Townsend International Construction Market Survey 2024.

Japan and New Zealand show that efficiency does not require deregulation. In Japan, national safety codes replace local vetoes and permits clear in weeks, not years, keeping the regulatory share near ten percent of cost. New Zealand’s 2020 zoning reforms shortened reviews and boosted new-home starts without sacrificing safety. Both prove that when policy favors results over process, affordability follows.

These places did not get lucky. They decided housing should exist.

The Collapse of Ownership

Owning a home was once the reward for hard work. It meant security, independence, a stake in the future. Now it feels like a rigged game.

The barriers are not natural. They were built.

Rules, fees, and taxes add a third to the cost of every house, yet do little to make any of it safer or better. They make it slower, harder, and more expensive.

Greed exists. But greed did not write the zoning map or the permitting code. What drives this system is something quieter and more permanent.

Every form, review, and hearing creates a job that depends on keeping the process alive. As John Kenneth Galbraith observed, bureaucracy defends its existence long past the time when the need for it has passed.

Regulation has become a jobs program, one that pays salaries in delay and collects rent from scarcity.

The toll is not abstract. It shows up in the quiet math of people’s lives.

Families sell homes they planned to retire in because the taxes outpaced their pensions.

Young couples postpone children because saving for a down payment now takes a decade.

Teachers, nurses, and service workers move hours away from the cities they serve.

Neighborhoods lose their history one family at a time.

It is not a housing market anymore. It is a sorting machine.

When my son graduates, this is the world he will walk into, a market where hard work no longer guarantees a place to live.

Ten years behind him, his sister will face the same wall, built not from scarcity but from policy.

They are inheriting a system designed to sustain itself, not them.

We could change this. We could make it easier to build, to own, to stay. We could treat shelter as something worth enabling rather than something to control.

That would mean admitting the truth.

This crisis is not the result of greed, or interest rates, or some invisible market force.

It is the outcome of decades of good intentions hardened into bad incentives.

When a system that claims to protect people starts protecting itself, everyone pays, whether they own or rent.

It was not the market that failed. It was the process.



Early in my career, I was often told some version of the same advice: stop overthinking, trust your intuition, move faster.

The advice was usually well-intentioned. It also described a cognitive sequence I don’t actually experience.

For me, intuition does not arrive first. It arrives last.

When I am confident about a decision, that confidence is not a gut feeling. It is the residue of having already explored the space. I need to understand the constraints, see how the system behaves under stress, identify where the edges are, and reconcile the tradeoffs. Only after that does something that feels like intuition appear.

If I skip that process, I don’t get faster. I’m guessing instead of deciding.

This took me a long time to understand, in part because the people giving me that advice were not wrong about their experience. What differs is not the presence of intuition, but when it becomes available.

For some people, much of the work happens early and invisibly. The intuition surfaces first; the structure that produced it is only exposed when something breaks. For others, the work happens up front and in the open. The structure is built explicitly, then compressed.

In both cases, the same work gets done. What differs is when it shows and who sees it.

This is why that advice was most common early in my career, before the outcomes produced by my process were visible to others. At that stage, the reasoning looked like delay. The caution looked like uncertainty.

Over time, that feedback largely disappeared. As experience accumulated, the patterns I had built explicitly began to compress and transfer. I could recognize familiar structures across different domains and apply what I had learned without rebuilding everything from scratch.

That accumulation allowed me to move faster and produce answers that looked like immediate intuition. From the outside, it appeared indistinguishable from how others described their own experience. Internally, nothing had changed—the intuition was still downstream of the work. The work had simply become fast enough to disappear.

This is where people mistake convergence of outcomes for convergence of process.

This is where large language models change something real for people who process the way I do.

Large language models do not remove the need for exploration. They remove the time penalty for doing it explicitly.

The reasoning is still mine. The tool accelerates the exploration, not the judgment.

They make it possible to traverse unfamiliar terrain, test assumptions, surface counterexamples, and build a working model fast enough that the intuition arrives before impatience sets in. The process is unchanged. What changes is the latency.

This is why the tool does not feel like a shortcut. It doesn’t ask me to act without coherence. It allows coherence to form quickly enough to meet the pace others already assume.

For the first time, people who reason this way can move at a pace that looks like decisiveness without abandoning how their judgment actually forms.

For some people, intuition is a starting point.
For others, it is an output.

Confusing the two leads us to give bad advice, misread rigor as hesitation, and filter out capable minds before their judgment has had time to become visible.

AI doesn’t change how intuition works.
It changes how long it takes to earn it.

And for people who process this way, that difference finally matters.

From the Eurodollar to the Splinternet: How the Race to Regulate the World Broke It

“History does not repeat itself, but it often rhymes.”

“You cannot solve an exponential complexity problem with linear bureaucracy.”

“Power tends to corrupt, and absolute power corrupts absolutely.”

I grew up in a house where reading was not optional. Being dyslexic, dysgraphic, and dysnumeric made it painful, but my parents had a simple rule: read, explain, defend. No written reports. Just me, standing there, trying to make sense of something complicated. One of the books they handed me was Plato’s Republic. What stayed with me was not the philosophy. It was the realization that people have been struggling to govern complexity for thousands of years. The names of the problems change, but the core tension between power, understanding, and human nature does not.

That early lesson was not about Plato. It was about learning how to think. And it is why the unraveling of the global internet feels so familiar. We built something wildly complex, assumed it would stay coherent, and then stopped paying attention to whether anyone still understood how it worked.

For a long time, growth hid the cracks. It looked like the system might harmonize on its own.

How it started

There was a stretch from the early 2000s to the mid-2010s when the internet felt weightless. Borders mattered less. Companies operated everywhere at once. We acted as if we had finally built a global commons.

But the system only worked because the cracks had not widened yet. Growth covered sins. Neutrality was taken for granted. Enforcement was sparse. And most governments did not yet understand the power they were holding.

Once they did, everything changed.

Where the cracks first appeared

If you want to understand the present, imagine a marketplace in Lyon around the year 600.

A Roman trader sells a diseased cow to a Gothic warrior. A dispute erupts. Which rules apply? Roman law? Gothic law? Salic law? The merchant across the stall follows a different code entirely.

Nothing works because everything overlaps.

That world did not collapse from stupidity. It collapsed because complexity made ordinary life too brittle. People retreated into smaller circles with clearer rules.

Today, a single smartphone tap in Brazil may be governed by US law because the data touched a server in Virginia, EU law because a European might use the service, Brazilian law because the user is in Brazil, and sometimes Chinese or Indian law depending on where the packets travel.

One action. Four sovereigns. Zero clarity.

When history repeated itself

Europe solved this once already. In 1648, after decades of war, it settled on a blunt rule: your authority ends at your border.

It was not wise. It was not elegant. It was enough.

Trade flourished. Science accelerated. Industry emerged. A patchwork of boundaries replaced the chaos of overlapping claims.

The internet quietly tossed that lesson aside. When data crossed your border, you assumed your rules crossed with it. If a foreign company touched your citizens, you claimed jurisdiction over it. Everyone became a king claiming the same territory.

This worked until it did not.

Power learns to travel

For centuries, strong states found ways to project authority outward. The tactics changed, but the impulse remained. Merchants judged under their own laws abroad. Empires exported their courts. The United States used market access to enforce its rules. The dollar turned sanctions into global tools. GDPR and the CLOUD Act pulled data into competing gravitational fields.

Eventually the boomerang returned. China, Russia, India, Brazil, Nigeria, Turkey, and others built their own outward-facing systems.

Everyone learned the trick. Everyone decided to use it.

We even see the revival of cultural jurisdiction. Putin claims authority wherever Russian speakers live. Western regulators now claim authority wherever their citizens’ data flows. Jurisdiction is no longer about where you are. It is about who you are and what language you speak. It is a formula for endless conflict.

The hidden glue that held globalization together

Globalization did not succeed because nations resolved their differences. It succeeded because they tolerated spaces where the rules did not apply cleanly.

Eurodollar markets. The early internet. Loose data practices. Informal restraint.

These buffers allowed incompatible systems to trade without resolving contradictions. When governments realized they could weaponize cloud providers, app stores, and platforms, the restraint vanished. The buffers collapsed. The contradictions rushed in.

The quiet expansion of authority

Governments rarely ask for power directly. They cite terrorism, child protection, organized crime, money laundering. The public nods. The tools are built.

And the uses expand.

A system designed to track extremists becomes a system used for tax compliance. A privacy rule becomes a lever for geopolitical influence. A regulation meant to protect users becomes a tool to pressure foreign companies.

The shift also targets citizens. Under laws like the UK Online Safety Act, platforms must scan for harmful content, while older public order laws are used to arrest individuals for what they write. The platform becomes the informant. The citizen becomes the suspect.

This ignores a simple reality. A forum is not a corporate broadcast. It is an aggregate conversation. When you treat a forum like a publication, you do not just fine a company. You criminalize the people using it.

No one announces the shift. It simply arrives.

The traveler’s trap

This expansion destroys the concept of safe passage. In the old world, if I wrote a pamphlet in Ohio, I was subject to Ohio law. If I traveled to Germany, I followed German law while in Germany.

The internet erases that distinction. Regulators now argue that if my post in Ohio is visible in Germany, it is subject to German law.

Where does that end? We see visitors to Turkey detained for content that offends local authorities. Tourists in Dubai face jail time for reviews written at home. If I criticize a monarch in an American forum, can I be arrested during a layover in the UAE years later?

If jurisdiction follows the data, every traveler walks through a minefield of laws they never consented to and cannot vote on.

Regulatory colonialism

Europe did not win the platform wars, but it mastered administration. GDPR, the DMA, the DSA, and the AI Act form a regulatory architecture that shapes global behavior by raising compliance costs.

There is an economic lie buried here. Regulators claim they are policing Big Tech, not individuals. But if you fine a company for carrying my speech, you are placing a tariff on my words. It is protectionism masquerading as safety. You are taxing the import of ideas you cannot compete with.

To be clear, not all of this is wrong. The United States needs a federal privacy law. GDPR got the big picture right: data rights are human rights. But the implementation covered the open web in the digital graffiti of cookie banners. It is a global pixel tax that wastes millions of hours while solving nothing.

The problem is not the desire to regulate. The problem is the arrogance of applying your local preferences—good, bad, or merely annoying—to the entire planet without consent.

We would never allow a foreign court to cut the phone line of a citizen in Ohio because their conversation violated a speech rule in Paris. Yet we accept that logic when the conversation happens on a server.

True governance requires consent. A mutual treaty is legitimate. A company operating on your soil is legitimate. But when a regulator bypasses a foreign government to police a foreign citizen directly, it breaks the compact between a citizen and their own state.

It comes down to standing. If my own government overreaches, I have recourse. If a foreign regulator erases my content, I have no voice and no remedy. That is not law. That is subjugation.

When politics becomes math

Up to this point the problem looks political. Now it becomes mathematical.

If only a few jurisdictions make rules, contradictions are rare. If dozens do, contradictions are certain. With n rule-making jurisdictions there are n(n-1)/2 pairs that can disagree, so the number of potential conflicts grows quadratically, faster than any human institution can track.

You get impossible requirements where one state demands disclosure and another forbids it.

No optimization fixes a logical impossibility. Not with lawyers. Not with AI.

This also creates a global heckler’s veto. If 195 countries all enforce their local laws globally, the cost of compliance destroys the platform in its own home market. Foreign censorship does not just silence me abroad. It destroys the tools I use at home.

If the UK wants to weaken encryption for its own citizens, that is its choice. But it cannot demand that a global platform weaken encryption for everyone else.

When the cost of compliance becomes an existential threat, the only option is to leave.

Google left China. Meta and Apple withheld advanced AI models from Europe. Apple went further: in 2023 it threatened to pull iMessage from the UK entirely, and in 2025 it disabled Advanced Data Protection for British users rather than break encryption.

It is no longer a negotiation tactic. It is a strategy.

This is how the Splinternet arrives. As Hemingway wrote about bankruptcy, it happens two ways: “Gradually, then suddenly.”

Rules that refuse to settle

Some laws require removal of harmful content in hours. But the definitions shift constantly. A system cannot stabilize if the rules never settle.

Platforms chase the strictest interpretation of the broadest rule from the most aggressive regulator. That is not governance. It is noise.

A world dividing into stacks

The internet is not collapsing. It is dividing into spheres. A Western stack. A Chinese stack. A European regulatory arc. An Indian sphere rising quickly.

They will touch at the edges but will not integrate. Companies will build parallel products. Users will move between digital worlds the way people in Belarus once carried two SIM cards because no single system works everywhere.

This leads to hard realities. China will have a Chinese internet. North Korea will have a hermit intranet. Western observers may see rights violations. But in a sovereign world, the ultimate check on digital power is the physical right to leave.

The moral line is not whether a firewall exists. It is whether the citizen can walk away from behind it.

The Eurodollar paradox

I do not welcome this fracture. I spent a career building systems meant to bridge these gaps, arguing that a unified network is more resilient than a divided one. The Splinternet is fragile. It is inefficient. It is a retreat.

But we must acknowledge what held the old world together.

It was not global government. It was interoperability without permission.

The Eurodollar was the archetype. Dollars held in banks outside the United States, beyond direct regulation. Messy. Uncomfortable. Essential. It kept the global economy moving.

The early internet played the same role. A neutral zone where data could flow even when nations disagreed.

We are dismantling that neutral zone. We are replacing interoperability without permission with compliance by permission.

We may gain sovereignty. But we are destroying the mechanism that allowed a divided world to function as one.

The GRANITE shift

There is one final signal of how far the pendulum has swung. Jurisdictions like Wyoming have begun experimenting with laws such as the GRANITE Act, which create penalties for complying with certain foreign mandates. It is a poison pill strategy. If adopted widely, it would make it illegal for a company to obey another country’s extraterritorial demands.

The meaning is clear. The era of a single global ruleset is ending. Regions are not just drifting apart. They are beginning to defend the separation.

The conclusion most people avoid

We did not lose a shared internet because of malice. We lost it because the assumptions behind it stopped being true. The system became too interconnected for local rules to govern and too political for global rules to be accepted.

What comes next will not be universal or seamless or even fair.

But it will be stable.

Sometimes the only way to solve an impossible equation is to stop pretending there is a single answer.

Attestation has become one of the most important yet misunderstood concepts in modern security. It now shows up in hardware tokens, mobile devices, cloud HSMs, TPMs, confidential computing platforms, and operating systems. Regulations and trust frameworks are beginning to depend on it. At the same time people talk about attestation as if it has a single, universally understood meaning. It does not.

Attestation is not a guarantee. It is a signed assertion that provides evidence about something. What that evidence means depends entirely on the system that produced it, the protection boundary of the key that signed it, the verifier’s understanding of what the attestation asserts, and the verifier’s confidence in the guarantees provided by the attestation mechanism itself.

To understand where security is heading, you need to understand what attestation can prove, what it cannot prove, and why it is becoming essential in a world where the machines running our code are no longer under our control.

Claims, Attestations, and the Strength of Belief

A claim is something a system says about itself. There is no protection behind it and no expectation of truth. A user agent string is a perfect example. It might say it is an iPhone, an Android device, or Windows. Anyone can forge it. It is just metadata. At best it lets you guess what security properties the device might have, but a guess is not evidence.

Here is a typical user agent string:

Mozilla/5.0 (iPhone; CPU iPhone OS 15_2 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148 Safari/605.1.15

If you break it apart, it claims to be an iPhone, running iOS, using Safari, and supporting specific web engines. None of this is verified. It is only a claim.

Attestation is different. Attestation is a signed statement produced by a system with a defined protection boundary. That boundary might be hardware, a secure element, a trusted execution environment, a Secure Enclave, a hypervisor-isolated domain, or even an operating system component rooted in hardware measurements but not itself an isolated security boundary. Attestation does not make a statement true, but it provides a basis to believe it because the signing key is protected in a way the verifier can reason about.

Attestation is evidence. The strength of that evidence depends on the strength of the protection boundary and on the verifier’s understanding of what the attestation actually asserts.

Why Attestation Became Necessary

When I worked at Microsoft we used to repeat a simple rule about computer security. If an attacker has access to your computer it is no longer your computer. That rule made sense when software ran on machines we owned and controlled. You knew who had access. You knew who set the policies. You could walk over and inspect the hardware yourself.

That world disappeared.

A classic illustration of this problem is the evil maid attack on laptops. If a device is left unattended, an attacker with physical access can modify the boot process, install malicious firmware, or capture secrets without leaving obvious traces. Once that happens, the laptop may look like your computer, but it is no longer your computer.

This loss of control is not limited to physical attacks. It foreshadowed what came next in computing. First workloads moved into shared data centers. Virtualization blurred the idea of a single physical machine. Cloud computing erased it entirely. Today your software runs on globally distributed infrastructure owned by vendors you do not know, in data centers you will never see, under policies you cannot dictate.

The old trust model depended on physical and administrative control. Those assumptions no longer hold. The modern corollary is clear. If your code is running on someone else’s computer you need evidence that it is behaving the way you expect.

Vendor promises are claims. Documentation is a claim. Marketing is a claim. None of these are evidence. To make correct security decisions in this environment you need verifiable information produced by the platform itself. That is the role attestation plays. The standards community recognized this need and began defining shared models for describing and evaluating attestation evidence, most notably through the IETF RATS architecture.

The IETF RATS View of Attestation

The IETF formalized the attestation landscape through the RATS architecture. It defines three roles. The attester produces signed evidence about itself or about the keys it generates. The verifier checks the evidence and interprets its meaning. The relying party makes a decision based on the verifier’s result.

This separation matters because it reinforces that attestation is not the decision itself. It is the input to the decision, and different attesters produce different types of evidence.
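To make the role separation concrete, here is a toy sketch of the flow. It is illustrative only: the claim names are invented, and an HMAC over a shared secret stands in for the hardware-protected signing key a real attester would use.

```python
import hashlib
import hmac
import json

# Toy RATS flow. A real attester signs with a key inside a protection
# boundary; an HMAC over a shared secret stands in for that here.
DEVICE_KEY = b"provisioned-at-manufacture"  # illustrative only

def attester_evidence() -> dict:
    # Attester: produces signed evidence about itself.
    claims = {"boot_state": "verified", "patch_level": "2025-01"}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verifier_appraise(evidence: dict) -> bool:
    # Verifier: checks the evidence and interprets its meaning.
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, evidence["sig"]):
        return False  # not produced inside the protection boundary
    return evidence["claims"]["boot_state"] == "verified"  # appraisal policy

def relying_party(appraisal_ok: bool) -> str:
    # Relying party: acts on the verifier's result, not the raw evidence.
    return "grant" if appraisal_ok else "deny"

print(relying_party(verifier_appraise(attester_evidence())))  # -> grant
```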

Two Families of Attestation

Attestation appears in many forms, but in practice it falls into two broad families.

One family answers where a key came from and whether it is protected by an appropriate security boundary. The other answers what code is running and whether it is running in an environment that matches expected security policies. They both produce signed evidence but they measure and assert different properties.

Key Management Attestation: Provenance and Protection

YubiKey PIV Attestation

YubiKeys provide a clear example of key management attestation. When you create a key in a PIV slot, the device generates an attestation certificate describing that key. The trust structure behind this is simple. Each YubiKey contains a root attestation certificate that serves as the trust anchor. Beneath that root is a device-specific issuing CA certificate whose private key lives inside the secure element and cannot be extracted. When a verifier asks the device to attest a slot, the issuing CA signs a brand-new attestation certificate for that session. The public key in the certificate is always the same if the underlying slot key has not changed, but the certificate itself is newly generated each time, with a different serial number and signature. This design allows verifiers to confirm that the key was generated on the device while keeping the blast radius small. If one token is compromised, only that device is affected.
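As a rough illustration, verifying that chain with Python’s `cryptography` package looks something like the sketch below. The file names are placeholders, and a real verifier must also pin the expected Yubico root and validate the Yubico-defined attestation extensions (slot, firmware version, touch and PIN policy).

```python
# Sketch: walk a YubiKey PIV attestation chain, leaf -> device CA -> root.
# File paths are placeholders; a real verifier also pins the Yubico root
# and validates Yubico's attestation extensions (slot, firmware, policies).
from cryptography import x509

def load_cert(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

leaf = load_cert("slot-9a-attestation.pem")     # fresh per-request statement
device_ca = load_cert("device-issuing-ca.pem")  # key lives in secure element
root = load_cert("yubico-piv-root.pem")         # pinned trust anchor

# Each call raises if the issuer/subject or signature checks fail.
leaf.verify_directly_issued_by(device_ca)
device_ca.verify_directly_issued_by(root)

print("attested public key:", leaf.public_key())
```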

Cloud HSMs and the Marvell Ecosystem

Cloud HSMs scale this idea to entire services. They produce signed statements asserting that keys were generated inside an HSM, protected under specific roots, bound to non-exportability rules, and conforming to certification regimes. Many cloud HSMs use Marvell hardware, and other commercial and open HSMs implement attestation as well; the Marvell-based examples are used here because their inconsistencies are illustrative, not because they are the only devices that support attestation. Many vendors provide their own attestation formats and trust chains. AWS CloudHSM and Google Cloud HSM share that silicon base, but their attestation formats differ because they use different firmware and integration layers.

This inconsistency creates a real challenge for anyone who needs to interpret attestation evidence reliably. Even when the underlying hardware is the same, the attestation structures are not. To make this practical to work with, we maintain an open source library that currently decodes, validates, and normalizes attestation evidence from YubiKeys and Marvell-based HSMs, and is designed to support additional attestation mechanisms over time. Normalization matters because if we want attestation to be widely adopted we cannot expect every verifier or relying party to understand every attestation format. Real systems often encounter many different kinds of attestation evidence from many sources, and a common normalization layer is essential to make verification scalable.

https://github.com/PeculiarVentures/attestation
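What normalization buys you is a single shape to write policy against. A minimal sketch of the idea follows; the field names are illustrative, not the library’s actual schema:

```python
from dataclasses import dataclass

# Illustrative normalized record; not the library's actual schema.
@dataclass
class NormalizedAttestation:
    source: str           # e.g. "yubikey-piv" or "marvell-hsm"
    key_id: str           # identifier of the attested key
    boundary: str         # "secure-element", "hsm", "tee", ...
    non_exportable: bool  # does the evidence assert the key cannot leave?
    chain_valid: bool     # did the vendor-specific trust chain verify?

def appraise(record: NormalizedAttestation) -> bool:
    # Policy is written once against the normalized form,
    # not once per vendor-specific attestation format.
    return (
        record.chain_valid
        and record.non_exportable
        and record.boundary in {"secure-element", "hsm"}
    )
```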

Hardware alone does not define the attestation model. The actual evidence produced by the device does.

Mobile Key Attestation: Android and iOS

Mobile devices are the largest deployment of secure hardware anywhere. Their attestation mechanisms reflect years of lessons about device identity, OS integrity, and tamper resistance.

Android Keymaster and StrongBox

Android attestation provides information about the secure element or TEE, OS version, patch level, verified boot state, device identity, downgrade protection, and key properties. It anchors keys to both hardware and system state. This attestation is used for payments, enterprise identity, FIDO authentication, and fraud reduction.

Apple Secure Enclave Attestation

Apple takes a similar approach using a different chain. Secure Enclave attestation asserts device identity, OS trust chain, enclave identity, and key provenance. It supports Apple Pay, iCloud Keychain, MDM enrollment, and per app cryptographic isolation.

Confidential Computing Attestation: Proving Execution Integrity

Confidential computing attestation solves a different problem. Instead of proving where a key came from, it proves what code is running and whether it is running in an environment that meets expected security constraints.

Intel SGX provides enclave reports that describe enclave measurements. AMD SEV-SNP provides VM measurement reports. AWS Nitro Enclaves use signed Nitro documents. Google Confidential VMs combine SEV-SNP with Google’s verification policies.

This evidence asserts which measurements the hardware recorded, whether memory is isolated, and whether the platform is genuine.

Why the Distinction Matters

Key management attestation cannot answer questions about code execution. Confidential computing attestation cannot answer questions about where keys were created. The evidence is different, the claims are different, and the trust chains are different.

If you do not understand which form of attestation you are dealing with, you cannot interpret its meaning correctly.

Regulatory and Policy Pressure

Attestation is becoming important because the bar for trust has been raised. The clearest example is the CA/Browser Forum Code Signing Baseline Requirements, which mandate hardware-protected private keys and increasingly rely on attestation as the evidence of compliance.

Secure development frameworks including the EU Cyber Resilience Act push vendors toward demonstrating that firmware and update signing keys were generated and protected in secure environments. Enterprise procurement policies frequently require the same assurances. These rules do not always use the word attestation, but the outcomes they demand can only be met with attestation evidence.

The Lesson

Attestation is evidence. It is not truth. It is stronger than a claim because it is anchored in a protection boundary, but the strength of that boundary varies across systems and architectures. The meaning of the evidence depends on the attester, the verifier, and the assumptions of the relying party.

There are two major forms of attestation. Key management attestation tells you where a key came from and how it is protected. Confidential computing attestation tells you what code is running and where it is running.

As computing continues to move onto systems we do not control and becomes more and more distributed, attestation will become the foundation of trust. Secure systems will rely on verifiable evidence instead of assumptions, and attestation will be the language used to express that evidence.

This past week I spent more concentrated time with the newest generation of AI models than I have in months. What struck me was not just that they are better, but where they are better. They now handle routine engineering tasks with a competence that would have seemed impossible a year ago. The more I watched them work, the more obvious it became that the tasks they excel at are the same tasks that used to form the on-ramp for new engineers. This is the visible surface layer of software development, the part above the waterline in MIT’s Iceberg Index.

What these systems still cannot reach is everything beneath that waterline. That submerged world contains the tacit knowledge, constraint navigation, history, intention, and human forces that quietly shape every real system. It holds the scars and the institutional memory that never appear in documentation but govern how things actually work.

Developers have always been described mostly by skills. You could point to languages, frameworks, and tools and build an easy mental model of who someone was. These signals were simple to compare, which is why the industry relied on them. But skills alone do not explain why certain developers become the ones the entire organization depends on. The difference has always been context.

What the models can and cannot do

The models thrive in environments that are routine, self-contained, and free of history. They write small functions. They assemble glue code. They clean up configuration. They do the kind of work that once filled the first two years of an engineering career. In this territory they operate like a competent junior developer with perfect memory.

The challenges begin where real systems live. The deeper you go, the more you find decision spaces shaped by old outages, partial migrations, forgotten constraints, shifting incentives, and compromises that were never recorded. Production systems contain interactions and path dependencies that have evolved for years. These patterns are not present in training data. They exist only in the experiences of the people who worked in the system long enough to understand it.

There is also a human operating layer that quietly directs everything. Customers influence it. Compliance obligations shape it. Old political negotiations echo through it. Even incidents from years ago leave marks in code and behavior that no documentation captures. None of this is visible to a model.

The vanishing on-ramp

As AI absorbs more of the low-context work, the early career pathway narrows. New engineers still need time inside real systems to build judgment, but the tasks that once provided this exposure are being completed before a human ever sees them. The set of small, safe tasks that helped beginners form a mental map of how systems behave is slowly disappearing.

This creates a subtle but significant problem. AI takes on the easy work. Humans are asked to handle the hard work. Yet new humans have fewer opportunities to learn the hard work, because the simple tasks that once served as scaffolding are no longer available. The distance from beginner to meaningful contributor grows longer just as the ladder is being pulled up.

AI can help with simulated practice. A motivated learner can now ask a model to recreate plausible outages, messy migrations, ambiguous requirements, or conflicting constraints. These simulations resemble real scenarios closely enough to be useful. For people with curiosity and drive, this is a powerful supplement to traditional experience.

But a simulation is not the same as lived exposure. It does not restore the proving ground. It does not give someone the slow accumulation of judgment that comes from touching a system over time. The skill curve can accelerate, yet the opportunities to prove mastery shrink. We will need more developers, not fewer, but the pathway into the profession is becoming more difficult to follow.

What remains human

As skills become easier to acquire and easier to automate, the importance of context grows. Contextual judgment allows someone to understand why an architecture looks the way it does, how decisions ripple through a system, where the hidden dependencies live, and how history explains the odd behaviors that would otherwise be dismissed as bugs. These insights develop slowly. They come from exposure to the real thing.

There is also a form of entrepreneurial capability that stands out among strong engineers. It is the ability to make decisions that span technical concerns, organizational dynamics, customer needs, and long-term consequences, often without complete information. It is the ability to reason across constraints and understand how tradeoffs echo through time. This capability is uniquely high-context and uniquely human.

At the more granular level, some work is inherently easier to automate. Common patterns with clear boundaries are natural territory for models. Rare or historically shaped tasks are not. Anything requiring whole-system awareness remains stubbornly human. This aligns with predictions from economic and AI research: visible tasks are automated first, while invisible tasks persist.

The vanishing on-ramp sits directly at this intersection. AI is consuming the visible work while the invisible work becomes more important and harder for new engineers to access.

What we must build next

If the future is going to function, we need new mechanisms for developing context. That may mean rethinking apprenticeships, creating ways for beginners to interact with real systems earlier, or designing workflows that preserve learning opportunities rather than eliminating them. Senior engineers will not only need to solve difficult problems but will also need to create the conditions for others to eventually do the same.

AI is changing the shape of engineering. It is not eliminating developers, but it is transforming how people become developers. It removes the visible tasks and leaves behind the invisible ones. The work that remains is the work that depends on context, judgment, and the slow accumulation of lived understanding.

Those qualities have always been the real source of engineering wisdom. The difference now is that we can no longer pretend otherwise.

This shift requires us to change how we evaluate talent. We can no longer define engineers by the visible stack they use. We must define them by the invisible context they carry.

I have been working on a framework to map this shift. It attempts to distinguish between the skills AI can replicate today (common domains, low complexity) and the judgment it cannot (entrepreneurial capability, systems awareness).

At breakfast the other day, I was thinking about those old analogy questions: “Hot is to cold as light is to ___?” My kids would roll their eyes. They feel like relics from standardized tests.

But those questions were really metacognitive exercises. You had to recognize the relationship between the first pair (opposites) and apply that pattern to find the answer (dark). You had to think about how you were thinking.

I was thinking about what changes when reasoning becomes abundant and cheap. It hit me that this skill, thinking about how you think, becomes the scarcest resource.

Learning From Nature

A few years ago, we moved near a lake. Once we moved in, we noticed deer visiting an empty lot next to us that had turned into a field of wildflowers. A doe would bring her fawn and, with patient movements, teach it where to find clover, when to freeze at a scent, and where to drink. It was wordless instruction: demonstration and imitation. Watch, try, fail, try again. The air would still, the morning light just breaking over the field. Over time, that fawn grew up and brought its own young to the same spot. The cycle continued until the lot was finally developed and they stopped coming.

That made me think about how humans externalized learning in ways no other species has. The deer’s knowledge would die with her or pass only to her offspring. Humans figured out how to make knowledge persist and spread beyond direct contact and beyond a single lifetime.

We started with opposable thumbs. That physical adaptation let us manipulate tools precisely enough to mark surfaces, to write. Writing captured thought outside of memory. For the first time, an idea could outlive the person who had it. Knowledge became persistent across time and transferable without physical proximity. But writing had limits. Each copy required a scribe and hours of work, so knowledge stayed localized.

Then came printing. Gutenberg’s press changed the economics. What took months by hand took hours on a press. The cost of reproducing knowledge collapsed, and books became locally abundant. Shipping and trade moved that knowledge farther, and the internet eventually collapsed distance altogether. Local knowledge became globally accessible.

Now we have LLMs. They do not just expose knowledge. They translate it across levels of understanding. The same information can meet a five-year-old asking about photosynthesis, a graduate student studying chlorophyll, and a biochemist examining reaction pathways. Each explanation is tuned to the learner’s mental model. They also make knowledge discoverable in new ways, so you can ask questions you did not know how to ask and build bridges from what you understand to what you want to learn.

Each step in this progression unlocked something new. Each one looked dangerous at first. The fear is familiar. It repeats with every new medium.

The Pattern of Panic

Socrates worried that writing would erode memory and encourage shallow thinking (Plato’s Phaedrus). He was partly right about the trade-offs. We lost some oral tradition, but gained ideas that traveled beyond the people who thought them.

Centuries later, monks who spent lifetimes hand-copying texts saw printing as a threat. Mass production, they feared, would cheapen reading and unleash dangerous ideas. They were right about the chaos. The press spread science and superstition alike, fueled religious conflict, and disrupted authority. It took centuries to build institutions of trust: printers’ guilds, editors, publishers, peer review, and universities.

But the press did not make people stupid. It democratized access to knowledge. It expanded who could participate in learning and debate.

We hear the same fears about AI. LLMs will kill reasoning. Students will stop writing. Professionals will outsource thinking. I understand the worry. I have felt it.

History suggests something more nuanced.

AI as Our New Gutenberg

Gutenberg collapsed the cost of copying. AI collapses the cost of reasoning.

The press did not replace reading. It changed who could read and how widely ideas spread. It forced literacy at scale because there were finally enough books to warrant it.

AI does not replace thinking. It changes the economics of cognitive work the same way printing changed knowledge reproduction. Both lower barriers, expand access, and demand new norms of verification. Both spread misinformation before society learns to regulate them. The press forced literacy. AI forces metacognitive literacy: the ability to evaluate reasoning, not just consume conclusions.

We are in the messy adjustment period. We lack stable institutions around AI and settled norms about what counts as trustworthy machine-generated information. We do not yet teach universal AI fluency. The equivalents of editors and peer review for synthetic reasoning are still forming. It will take time, and we will figure it out.

What This Expansion Means

I have three kids: 30, 20, and 10. Each is entering a different world.

My 30-year-old launched before AI accelerated and built a foundation in the old knowledge economy.

My 20-year-old is in university, learning to work with these tools while developing core skills. He stands at the inflection point: old enough to have formed critical thinking without AI, young enough to fully leverage it.

My 10-year-old will not remember a time before you could converse with a machine that reasons. AI will be ambient for her. It is different, and it changes the skills she needs.

This is not just about instant answers. It is about who gets to participate in knowledge work. Traditional systems reward verbal fluency, math reasoning, quick recall, and social confidence. They undervalue spatial intuition, pattern recognition across domains, emotional insight, and systems thinking. Many brilliant minds do not fit the template.

Used well, AI can correct that imbalance. It acts as a cognitive prosthesis that extends abilities that once limited participation. Someone who struggles with structure can collaborate with a system that scaffolds it while preserving original insight. Someone with dyslexia can translate thoughts to text fluidly. Visual thinkers can generate diagrams that communicate what words cannot.

Barriers to entry drop and the diversity of participants increases. This is equity of potential, not equality of outcome.

But access without reflection is noise.

We are not producing too many answers. We are producing too few people who know how to evaluate them. The danger is not that AI makes thinking obsolete. It is that we fail to teach people to think about their thinking while using powerful tools.

When plausible explanations are cheap and fast, the premium shifts to discernment. Can you tell when something sounds right but is not? Can you evaluate the trustworthiness of a source? Can you recognize when to dig deeper versus when a surface answer suffices? Can you catch yourself when you are being intellectually lazy?

This is metacognitive literacy: awareness and regulation of your own thought process. Psychologist John Flavell first defined metacognition in the 1970s as knowing about and managing one’s own thinking: planning, monitoring, and evaluating how we learn. In the AI age, that skill becomes civic rather than academic.

The question is not whether to adopt AI. That is already happening. The question is how to adapt. How to pair acceleration with reflection so that access becomes understanding.

What I Am Doing About This

This brings me back to watching my 10-year-old think out loud and wondering what kind of world she will build with these tools.

I have been looking at how we teach gifted and twice-exceptional learners. These are kids who are intellectually advanced but may also face learning challenges like ADHD or dyslexia. Their teachers could not rely on memorization or single-path instruction. They built multimodal learning, taught metacognition explicitly, and developed evaluation skills because these kids question everything.

Those strategies are not just for gifted kids anymore. They are what all kids need when information is abundant and understanding is scarce. When AI can answer almost any factual question, value shifts to higher-order skills.

I wrote more detail here: Beyond Memorization: Preparing Kids to Thrive in a World of Endless Information

The short version: question sources rather than absorb them. Learn through multiple modes. Build something, draw how it works, explain it in your own words. Reflect on how you solved a problem, not only whether you got it right. See connections across subjects instead of treating knowledge as isolated silos. Build emotional resilience and comfort with uncertainty alongside technical skill.

We practice simple things at home. At dinner, when we discuss a news article: How do we know this claim is accurate? What makes this source trustworthy? What would we need to verify it? When my 10-year-old draws, writes, or builds things, I ask: What worked? What did not? What will you try differently next time, and why?

It is not about protecting her from AI. That is impossible and counterproductive. It is about preparing her to work with it, question it, and shape it. To be an active participant rather than a passive consumer.

I am optimistic. This is another expansion in how humans share and build knowledge. We have been here before with writing, printing, and the internet. Each time brought anxiety and trade-offs. Each time we adapted and expanded who could participate.

This time is similar, only faster. My 20-year-old gets to help harness it. My 10-year-old grows up native to it.

They will not need to memorize facts like living libraries. They will need to judge trustworthiness, connect disparate ideas, adapt as tools change, and recognize when they are thinking clearly versus fooling themselves. These are metacognitive skills, and they are learnable.

If we teach people to think about their thinking as carefully as we once taught them to read, and if we pair acceleration with reflection, this could become the most inclusive knowledge revolution in history.

That is the work. That is why I am optimistic.


For more on this thinking: AI as the New Gutenberg

Compliance is a vital sign of organizational health. When it trends the wrong way, it signals deeper problems: processes that can’t be reproduced, controls that exist only on paper, drift accumulating quietly until trust evaporates all at once.

The pattern is predictable. Gradual decay, ignored signals, sudden collapse. Different industries, different frameworks, same structural outcome. (I wrote about this pattern here.)

But something changed. AI is rewriting how software gets built, and compliance hasn’t kept up.

Satya Nadella recently said that as much as 30% of Microsoft’s production code is now written by AI. Sundar Pichai put Google’s number in the same range. These aren’t marketing exaggerations; they mark a structural change in how software gets built.

Developers no longer spend their days typing every line. They spend them steering, reviewing, and debugging. AI fills in the patterns, and the humans decide what matters. The baseline of productivity has shifted.

Compliance has not. Its rhythms remain tied to quarterly reviews, annual audits, static documents, and ritualized fire drills. Software races forward at machine speed while compliance plods at audit speed. That mismatch isn’t just inefficient. It guarantees drift, brittleness, and the illusion that collapse comes without warning.

If compliance is the vital sign, how do you measure it at the speed of code?

What follows is not a description of today’s compliance tools. It’s a vision for where compliance infrastructure needs to go. The technology exists. The patterns are proven in adjacent domains. What’s missing is integration. This is the system compliance needs to become.

The Velocity Mismatch

The old world of software was already hard on compliance. Humans writing code line by line could outpace annual audits easily enough. The new world makes the mismatch terminal.

If a third of all production code at the largest software companies is now AI-written, then code volume, change velocity, and dependency churn have all exploded. Modern development operates in hours and minutes, not quarters and years.

Compliance, by contrast, still moves at the speed of filing cabinets. Controls are cross-referenced manually. Policies live in static documents. Audits happen long after the fact, by which point the patient has either recovered or died. By the time anyone checks, the system has already changed again.

Drift follows. Exceptions pile up quietly. Compensating controls are scribbled into risk registers. Documentation diverges from practice. On paper, everything looks fine. In reality, the brakes don’t match the car.

It’s like running a Formula 1 car with horse cart brakes. You might get a few laps in. The car will move, and at first nothing looks wrong. But eventually the brakes fail, and when they do the crash looks sudden. The truth is that failure was inevitable from the moment someone strapped cart parts onto a race car.

Compliance today is a system designed for the pace of yesterday, now yoked to the speed of code. Drift isn’t a bug. It’s baked into the mismatch.

The Integration Gap

Compliance breaks at the integration point. When policies live in Confluence and code lives in version control, drift isn’t a defect. It’s physics. Disconnected systems diverge.

The gap between documentation and reality is where compliance becomes theater. PDFs can claim controls exist while repos tell a different story.

Annual audits sample: pull some code, check some logs, verify some procedures. Sampling only tells you what was true that instant, not whether controls remain in place tomorrow or were there yesterday before auditors arrived.

Eliminate the gap entirely.

Policies as Code

Version control becomes the shared foundation for both code and compliance.

Policies, procedures, runbooks, and playbooks become versioned artifacts in the same system where code lives. Not PDFs stored in SharePoint. Not wiki pages anyone can edit without review. Markdown files in repositories, reviewed through pull requests, with approval workflows and change history. Governance without version control is theater.

When a policy changes, you see the diff. When someone proposes an exception (a documented deviation from policy), it’s a commit with a reviewer. When an auditor asks for the access control policy that was in effect six months ago, you check it out from the repo. The audit trail is the git history. Reproducibility by construction.

Governance artifacts get the same discipline as code. Policies go through PR review. Changes require approvals from designated owners. Every modification is logged, attributed, and traceable. You can’t silently edit the past.

Once policies live in version control, compliance checks run against them automatically. Code and configuration changes get checked against the current policy state as they happen. Not quarterly, not at audit time, but at pull request time.

When policy changes, you immediately see what’s now out of compliance. New PCI requirement lands? The system diffs the old policy against the new one, scans your infrastructure, and surfaces what needs updating. Gap analysis becomes continuous, not an annual fire drill that takes two months and produces a 60-page spreadsheet no one reads.

Risk acceptance becomes explicit and tracked. Not every violation is blocking, but every violation is visible. “We’re accepting this S3 bucket configuration until Q3 migration” becomes a tracked decision in the repo with an owner, an expiration date, and compensating controls. The weighted risk model has teeth because the risk decisions themselves are versioned and auditable.
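A sketch of what that might look like on disk and in CI, with a hypothetical exception format; the point is that the acceptance has an owner and an expiry the machine can enforce:

```python
# exceptions/s3-bucket-q3.yml (hypothetical format):
#
#   id: s3-bucket-q3
#   policy: policies/data-protection.md
#   owner: [email protected]
#   expires: 2026-09-30
#   compensating_controls:
#     - bucket access is logged and alerted on
#
from datetime import date
from pathlib import Path
import sys

import yaml  # PyYAML parses the unquoted ISO date into a datetime.date

expired = []
for f in Path("exceptions").glob("*.yml"):
    exc = yaml.safe_load(f.read_text())
    if exc["expires"] < date.today():
        expired.append(f"{exc['id']} (owner {exc['owner']}, expired {exc['expires']})")

if expired:
    print("Lapsed risk acceptances; re-review or remediate:")
    print("\n".join(f"  - {e}" for e in expired))
    sys.exit(1)
```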

Monitoring Both Sides of the Gap

Governance requirements evolve. Frameworks update. If you’re not watching, surprises arrive weeks before an audit.

Organizations treat this as inevitable, scrambling when SOC 2 adds trust service criteria or PCI-DSS publishes a new version. The fire drill becomes routine.

But these changes are public. Machines can monitor for updates, parse the diffs, and surface what shifted. No requirement change should reach you first as an audit finding.

Combine external monitoring with internal monitoring and you close the loop. When a new requirement lands, you immediately see its impact on your actual code and configuration.

SOC 2 adds a requirement for encryption key rotation every 90 days? The system scans your infrastructure, identifies 12 services that rotate keys annually, and surfaces the gap months ahead. You have time to plan, size the effort, build it into the roadmap.
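The gap scan itself doesn’t need to be clever. A sketch, assuming a hypothetical service inventory that records when each service last rotated its keys:

```python
from datetime import date, timedelta

MAX_KEY_AGE = timedelta(days=90)  # the new requirement

# In practice, pulled from a CMDB or cloud provider APIs.
inventory = [
    {"service": "billing", "last_rotated": date(2025, 1, 15)},
    {"service": "auth", "last_rotated": date(2025, 9, 1)},
]

for svc in inventory:
    age = date.today() - svc["last_rotated"]
    if age > MAX_KEY_AGE:
        print(f"{svc['service']}: keys last rotated {svc['last_rotated']} "
              f"({age.days} days ago), exceeds the 90-day requirement")
```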

This transforms compliance from reactive to predictive. You see requirements as they emerge and measure their impact before they become mandatory. The planning horizon extends from weeks to quarters.

From Vibe Coding to Vibe Compliance

Developers have already adapted to AI-augmented work. They call it “vibe coding.” The AI fills in the routine structures and syntax while humans focus on steering, debugging edge cases, and deciding what matters. The job shifted from writing every line to shaping direction. The work moved from typing to choosing.

Compliance will follow the same curve. The rote work gets automated. Mapping requirements across frameworks, checklist validations, evidence collection. AI reads the policy docs, scans the codebase, flags the gaps, suggests remediations. What remains for humans is judgment: Is this evidence meaningful? Is this control reproducible? Is this risk acceptable given these compensating controls?

This doesn’t eliminate compliance professionals any more than AI eliminated engineers. It makes them more valuable. Freed from clerical box-checking, they become what they should have been all along: stewards of resilience rather than producers of audit artifacts.

The output changes too. The goal is no longer just producing an audit report to wave at procurement. The goal is producing telemetry showing whether the organization is actually healthy, whether controls are reproducible, whether drift is accumulating.

Continuous Verification

What does compliance infrastructure look like when it matches the speed of code?

A bot comments on pull requests. A developer changes an AWS IAM policy. Before the PR merges, an automated check runs: does this comply with the principle of least privilege defined in access-control.md? Does it match the approved exception for the analytics service? If not, the PR is flagged. The feedback is immediate, contextual, and actionable.
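The core of that check is small. A sketch of one least-privilege rule (flag IAM statements that grant wildcard actions); the rule and the policy reference are illustrative:

```python
import json

def least_privilege_violations(iam_policy_json):
    """Flag IAM statements that grant wildcard actions."""
    findings = []
    for stmt in json.loads(iam_policy_json).get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if action == "*" or action.endswith(":*"):
                findings.append(
                    f"wildcard action {action!r} violates the least-privilege "
                    "rule in access-control.md"
                )
    return findings

# Example: a PR that broadens a role to all of S3.
print(least_privilege_violations(
    '{"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}'
))
```

In practice the bot would consult the exceptions directory before flagging (the analytics service’s approved exception would suppress this finding) and post results through the code host’s comment API.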

Deployment gates check compliance before code ships. A service tries to deploy without the required logging configuration. The pipeline fails with a clear message: “This deployment violates audit-logging-policy.md section 3.1. Either add structured logging or file an exception in exceptions/logging-exception-2025-q4.md.”
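The gate is the same check moved to the pipeline, with a hard stop instead of a comment. A sketch, assuming services declare logging in a config the pipeline can read:

```python
import sys

def logging_gate(config, service):
    """Fail the deployment if required logging is missing."""
    if not config.get("structured_logging", False):
        sys.exit(
            f"{service}: deployment violates audit-logging-policy.md section 3.1. "
            "Either add structured logging or file an exception in "
            "exceptions/logging-exception-2025-q4.md."
        )

logging_gate({"structured_logging": False}, "payments")  # exits nonzero
```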

Dashboards update in real time, not once per quarter. Compliance posture is visible continuously. When drift occurs (when someone disables MFA on a privileged account, or when a certificate approaches expiration without renewal) it shows up immediately, not six months later during an audit.

Weighted risk with explicit compensating controls. Not binary red/green status, but a spectrum: fully compliant, compliant with approved exceptions, non-compliant with compensating controls and documented risk acceptance, non-compliant without mitigation. Boards see the shades of fragility. Practitioners see the specifics. Everyone works from the same signal, rendered at the right level of abstraction.
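In code, that spectrum is just a richer status type than a boolean. An illustrative model (the names are not a standard):

```python
from enum import Enum

class Posture(Enum):
    COMPLIANT = "fully compliant"
    APPROVED_EXCEPTIONS = "compliant with approved exceptions"
    MITIGATED = "non-compliant, compensated and risk-accepted"
    UNMITIGATED = "non-compliant without mitigation"
```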

The Maturity Path

Organizations don’t arrive at this state overnight. Most are still at Stage 1 or earlier, treating governance as static documents disconnected from their systems. The path forward has clear stages:

Stage 1: Baseline. Get policies, procedures, and runbooks into version-controlled repositories. Establish them as ground truth. Stop treating governance as static PDFs. This is where most organizations need to start.

Stage 2: Drift Detection. Automated checks flag when code and configuration diverge from policy. The checks run on-demand or on a schedule. Dashboards show gaps in real time. Compliance teams can see drift as it happens instead of discovering it during an audit. The feedback loop shrinks from months to days. Some organizations have built parts of this, but comprehensive drift detection remains rare.

Stage 3: Integration. Compliance checks move into the developer workflow. Bots comment on pull requests. Deployment pipelines run policy checks before shipping. The feedback loop shrinks from days to minutes. Developers see policy violations in context, in their tools, while changes are still cheap to fix. This is where the technology exists but adoption is still emerging.

Stage 4: Regulatory Watch. The system monitors upstream changes: new SOC 2 criteria, updated PCI-DSS requirements, revised GDPR guidance. When frameworks change, the system diffs the old version against the new, identifies affected controls, maps them to your current policies and infrastructure, and calculates impact. You see the size of the work, the affected systems, and the timeline before it becomes mandatory. Organizations stop firefighting and start planning quarters ahead. This capability is largely aspirational today.

Stage 5: Enforcement. Policies tie directly to what can deploy. Non-compliant changes require explicit exception approval. Risk acceptance decisions are versioned, tracked, and time-bound. The system makes the right path the easy path. Doing the wrong thing is still possible (you can always override) but the override itself becomes evidence, logged and auditable. Few organizations operate at this level today.

This isn’t about replacing human judgment with automation. It’s about making judgment cheaper to exercise. At Stage 1, compliance professionals spend most of their time hunting down evidence. At Stage 5, evidence collection is automatic, and professionals spend their time on the judgment calls: should we accept this risk? Is this compensating control sufficient? Is this policy still appropriate given how the system evolved?

The Objections

There are objections. The most common is that AI hallucinates, so how can you trust it with compliance?

Fair question. Naive AI hallucinates. But humans do too. They misread policies, miss violations, get tired, and skip steps. The compliance professional who has spent eight hours mapping requirements across frameworks makes mistakes in hour nine.

Structured AI with proper constraints works differently. Give it explicit sources, defined schemas, and clear validation rules, and it performs rote work more reliably than most humans. Not because it’s smarter, but because it doesn’t get tired, doesn’t take shortcuts, and checks every line the same way every time.

The bot that flags policy violations isn’t doing unconstrained text generation. It’s diffing your code against a policy document that lives in your repo, following explicit rules, and showing its work: “This violates security-policy.md line 47, committed by [email protected] on 2025-03-15.” That isn’t hallucination. That’s reproducible evidence.
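That evidence string is cheap to produce because git already knows the answer. A sketch using git blame; the file name and line number echo the example above:

```python
import subprocess
from datetime import datetime, timezone

def evidence(path, line):
    """Tie a finding to a policy line and the commit that last touched it."""
    out = subprocess.run(
        ["git", "blame", "-L", f"{line},{line}", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # The porcelain format emits "key value" header lines per blamed line.
    fields = dict(l.split(" ", 1) for l in out.splitlines() if " " in l)
    who = fields["author-mail"].strip("<>")
    when = datetime.fromtimestamp(int(fields["author-time"]), tz=timezone.utc)
    return f"This violates {path} line {line}, committed by {who} on {when:%Y-%m-%d}"

print(evidence("security-policy.md", 47))
```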

And it scales in ways humans never can. The human compliance team can review 50 pull requests a week if they’re fast. The bot reviews 500. When a new framework requirement drops, the human team takes weeks to manually map old requirements against new ones. The bot does it in minutes.

This isn’t about replacing human judgment. It’s about freeing humans from the rote work where structured AI performs better. On repetitive tasks, tired humans drift; constrained machines don’t. Let machines do what they’re good at so humans can focus on what they’re good at: the judgment calls that actually matter.

The second objection is that tools can’t fix culture. Also true. But tools can make cultural decay visible earlier. They can force uncomfortable truths into the open.

When policies live in repos and compliance checks run on every PR, leadership can’t hide behind dashboards. If the policies say one thing and the code does another, the diff is public. If exceptions are piling up faster than they’re closed, the commit history shows it. If risk acceptance decisions keep getting extended quarter after quarter, the git log is evidence.

The system doesn’t fix culture, but it makes lying harder. Drift becomes visible in real time instead of hiding until audit season. Leaders who want to ignore compliance still can, but they have to do so explicitly, in writing, with attribution. That changes the incentive structure.

Culture won’t be saved by software. But it can’t be saved without seeing what’s real. Telemetry is the prerequisite for accountability.

The Bootstrapping Problem

If organizations are already decaying, if incentives are misaligned and compliance is already theater, how do they adopt this system?

Meet people where they are. Embed compliance in the tools developers already use.

Start with a bot that comments on pull requests. Pick one high-signal policy (the one that came up in the last audit, or the one that keeps getting violated). Write it in Markdown, commit it to a repo, add a simple check that flags violations in PRs. Feedback lands in the PR, where people already work.
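That first check can be almost embarrassingly small. A sketch, assuming the policy you pick is “no hardcoded AWS access keys” (the policy path is illustrative):

```python
import re
import sys

# AWS access key IDs follow a fixed, greppable format.
SECRET = re.compile(r"AKIA[0-9A-Z]{16}")

hits = [(f, n) for f in sys.argv[1:]
        for n, line in enumerate(open(f), start=1) if SECRET.search(line)]
for f, n in hits:
    print(f"{f}:{n}: hardcoded AWS access key, see policies/secrets.md")
sys.exit(1 if hits else 0)
```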

This creates immediate value. Faster feedback. Issues caught before they ship. Less time in post-deployment remediation. The bot becomes useful, not bureaucratic overhead.

Once developers see value, expand coverage. Add more policies. Integrate more checks. Build the dashboard that shows posture in real time. Start with the point of maximum pain: the gap between what policies say and what code does.

Make the right thing easier than the wrong thing. That’s how you break equilibrium. Infrastructure change leads culture, not the other way around.

Flipping the Incentive Structure

Continuous compliance telemetry creates opportunities to flip the incentive structure.

The incentive problem is well-known. Corner-cutters get rewarded with velocity and lower costs. The people who invest in resilience pay the price in overhead and friction. By the time the bill comes due, the corner-cutters have moved on.

What if good compliance became economically advantageous in real time, not just insurance against future collapse?

Real-time, auditable telemetry makes compliance visible in ways annual reports never can. A cyber insurer can consume your compliance posture continuously instead of relying on a point-in-time questionnaire. Organizations that maintain strong controls get lower premiums. Rates adjust dynamically based on drift. Offer insurers visibility into the metrics that matter and get premium reductions in return.

Customer due diligence changes shape. Vendor risk assessments that take weeks and rely on stale SOC 2 reports become real-time visibility into current compliance posture. Procurement accelerates. Contract cycles compress. Organizations that can demonstrate continuous control have competitive advantage.

Auditors spend less time collecting evidence and more time evaluating controls. When continuous compliance is demonstrable, scope reduces, costs drop, cycles shorten.

Partner onboarding that used to require months of back-and-forth security reviews happens faster when telemetry is already available. Certifications and integrations move at the speed of verification, not documentation.

The incentive structure inverts. Organizations that build continuous compliance infrastructure get rewarded immediately: lower insurance costs, faster sales cycles, reduced audit expense, easier partnerships. The people who maintain strong controls see economic benefit now, not just avoided pain later.

This is how you fix the incentive problem at scale. Make good compliance economically rational today.

The Choice Ahead

AI has already made coding a collaboration between people and machines. Compliance is next.

The routine work will become automated, fast, and good enough for the basics. That change is inevitable. The real question is what we do with the time it frees up.

Stop there, and compliance becomes theater with better graphics. Dashboards that look impressive but still tell you little about resilience.

Go further, and compliance becomes what it should have been all along: telemetry about reproducibility. A vital sign of whether the organization can sustain discipline when it matters. An early warning system that makes collapse look gradual instead of sudden.

If compliance is the vital sign of organizational health, then this is the operating system that measures it at the speed of code.

The frameworks aren’t broken. The incentives are. The rhythms are. The integration is.

The technology to build this system exists. Version control is mature. CI/CD pipelines are ubiquitous. AI can parse policies and scan code. What’s missing is stitching the pieces together and treating compliance like production.

Compliance will change. The only question is whether it catches up to code or keeps trailing it until collapse looks sudden.

“How did you go bankrupt?” a character asks in Hemingway’s The Sun Also Rises.
“Two ways,” comes the reply. “Gradually, then suddenly.”

That is how organizations fail.

Decay builds quietly until, all at once, trust evaporates. The surprise is rarely the failure itself. The surprise is that the warning signs were ignored.

One of the clearest of those warning signs is compliance.

Compliance Isn’t Security

Security practitioners like to say, “compliance isn’t security.” They are right. Implementing a compliance framework does not make you secure.

SOC 2 shows why. It is a framework for attesting to controls, not for proving resilience. Yet many organizations treat it as a box-checking exercise: templated policies, narrow audits, point-in-time snapshots.

The result is an audit letter and seal that satisfies procurement but says little about how the company actually manages risk.

That is why security leaders often overlook compliance’s deeper value.

But doing so misses the point. Compliance is not proof of security. It is a vital sign of organizational health.

Compliance as a Vital Sign

Think of compliance like blood pressure. It does not guarantee health, but when it trends the wrong way, it signals that something deeper is wrong.

Organizational health has many dimensions. One of the most important is reproducibility, the ability to consistently do what you say you do.

That is what compliance is really about. Not proving security, but proving reproducibility.

Security outcomes flow from reproducible processes. Compliance is the discipline of showing those processes exist and can be repeated under scrutiny.

If you are not using your compliance program this way, as a vital sign of organizational health, there is a good chance you are doing it wrong.

Telemetry vs Point-in-Time Theater

Compliance only works as a vital sign if it is measured continuously.

A one-time audit is like running an EKG after the patient has died. It may capture a signal, but it tells you nothing about resilience.

If your compliance telemetry only changes at audit time, you do not have telemetry at all. You have theater.

Healthy organizations use frameworks as scaffolding for living systems. They establish meaningful policies, connect them to real procedures, and measure whether those procedures are working. Over time, this produces telemetry that shows trends, not just snapshots.

Hollow organizations optimize for paperwork. They treat audits as annual fire drills, focus on appearances, and let compliance debt pile up out of sight.

On paper they look fine. In reality they are decaying.

Distrust Looks Sudden, but Never Is

The certificate authority ecosystem makes this pattern unusually visible.

Every distrusted CA had passing audit reports. Nearly all of them showed years of compliance issues before trust was revoked. Audit failures, unremediated findings, vague documentation, repeat exceptions. All accumulating gradually, all while auditors continued to issue clean opinions.

When the final decision came, it looked sudden. But in reality it was the inevitable climax of a long decline.

The frameworks were there: WebTrust, ETSI, CA/Browser Forum requirements. What failed was not the frameworks, but the way those CAs engaged with them.

Independent Verification, Aligned Incentives

The auditor problem mirrors the organizational one, and it appears across every regulated industry.

Auditors get paid by the organizations they audit. Clean reports retain clients. Reports full of findings create friction. The rational economic behavior is to be “reasonable” about what constitutes a violation.

Audits are scoped and priced competitively. Deep investigation is expensive. Surface verification of documented controls is cheaper. When clients optimize for cost and auditors work within fixed budgets, depth loses.

Auditors are often competent in frameworks and attestation but lack deep technical or domain expertise. They can verify a policy exists and that sampled evidence shows it was followed. They are less equipped to evaluate whether the control actually works, whether it can be bypassed, or whether the process remains reproducible under stress.

In the WebPKI, WebTrust auditors issued clean opinions while CA violations accumulated. In finance, auditors at Wirecard and Enron missed or downplayed systemic issues for years. In healthcare, device manufacturers pass ISO audits while quality processes degrade. The pattern repeats because the incentive structure is the same.

The audit becomes another layer of theater. Independent verification that optimizes for the same outcomes as the organization it is verifying.

The Pattern Repeats Everywhere

This dynamic is not limited to the WebPKI. The same pattern plays out everywhere.

Banks fined for AML or KYC failures rarely collapse overnight. Small violations and ignored remediation build up until regulators impose billion-dollar penalties or revoke licenses.

FDA warning letters and ISO 13485 or IEC 62304 violations accumulate quietly in healthcare and medical devices. Then, suddenly, a product is recalled, approval is delayed for a year, or market access is lost.

Utilities cited for NERC CIP non-compliance often show the same gaps for years. Then a blackout, a safety incident, or a regulatory penalty makes the cost undeniable.

SOC 2 and ISO 27001 in technology are often reduced to checklists. Weak practices are hidden until a breach forces disclosure, the SEC steps in, or customers walk away.

For years, journalists and short sellers flagged accounting irregularities and opaque subsidiaries at Wirecard. The warnings were dismissed. Then suddenly €1.9 billion was missing and the company collapsed.

Enron perfected compliance theater, using complex structures and manipulated audits to look healthy. The gradual phase was tolerated exceptions and “creative” accounting. The sudden phase was exposure, bankruptcy, and a collapse of trust.

In security, the same pattern shows up when breaches happen at firms with repeat compliance findings around patching or access control. To outsiders the breach looks like bad luck. To insiders, the vital signs had been flashing red for years.

Different industries. Different frameworks. Same structural pattern: gradual non-conformance, ignored signals, sudden collapse.

Floor or Facade

The difference comes down to how organizations engage with frameworks.

Healthy compliance treats frameworks as minimums. Organizations design business-appropriate and system-appropriate security controls on top. Compliance provides evidence of real practices. It is reproducible.

Hollow compliance treats frameworks as the ceiling. Controls are mapped to audit templates. Documentation is produced to satisfy the letter of the requirement, not to reflect reality. It is performative.

Healthy compliance is a floor. Hollow compliance is a facade.

Which one are you building on?

Why Theater Wins

Compliance theater is not a knowledge problem. It is an incentive problem with a structural enforcement mechanism.

The people who bear the cost of real compliance (engineering time, operational friction, headcount) rarely bear the cost of compliance failure. By the time collapse happens, they have often moved on: promoted, departed, or insulated by organizational buffers.

Meanwhile, the people who face immediate consequences for not having an audit letter and seal (sales cannot close deals, partnerships stall, procurement rejects you) have every incentive to optimize for the artifact, not the reality.

The rational individual behavior at every level produces collectively irrational outcomes.

Sales needs SOC 2 by Q3 or loses the enterprise deal. Finance treats compliance as overhead to minimize. Engineering sees security theater while facing pressure to ship. The compliance team, caught between impossible demands, optimizes for passing the audit. Executives get rewarded for revenue growth and cost control, not for resilience that may only matter after they are gone.

Even when individuals want to do it right, organizational structure fights them.

Ownership fragments across the organization. Security owns controls, IT owns implementation, Legal owns policy, Compliance owns audits, Business owns risk acceptance. No one owns the system. Everyone optimizes their piece.

Organizations compound this with contradictory approaches to security and compliance. Security gets diffused under the banner that “security is everyone’s responsibility,” which sounds collaborative but becomes an excuse to avoid investing in specialists, dedicated teams, or proper career paths. When security is everyone’s job, it becomes no one’s priority.

Compliance suffers the opposite problem. Organizations try to isolate it, contain the overhead, keep it from interfering with velocity. The compliance team becomes a service function that produces audit artifacts but has no authority over the processes it attests to. It documents what should happen while having no power to ensure it does.

Both patterns distribute responsibility without authority, then act surprised when accountability evaporates.

Time horizons misalign. Boards and executives operate on quarterly cycles. Compliance decay compounds over three-to-five-year horizons. By the time the bill comes due, the people who made the decisions have harvested their rewards and moved on.

At the top, executives rarely see true compliance health. Success is presented as green dashboards and completed audits. In the middle, compliance leaders want to be seen as delivering, so success is redefined as passing audits and collecting audit letters and seals. At the ground level, practitioners know the processes are brittle, but surfacing that truth conflicts with how success is measured. Everyone looks successful on their own terms, but the system as a whole decays.

Accountability diffuses. When collapse happens, it is framed as a “perfect storm” rather than the predictable outcome of accumulated decisions. Causation is plausibly deniable, so the individuals who created the conditions face no consequences.

The CA distrust pattern reveals this clearly. WebTrust audits happen annually. CA/B Forum violations accumulate gradually. But the CA’s business model rewards sales, not security or compliance.

The compliance team knows there are issues but lacks authority to halt issuance. Engineering knows the processes are brittle but gets rewarded for features. Leadership knows there are findings but faces pressure to maintain market share.

Everyone is locally rational. The system is globally fragile.

What Compliance Actually Predicts

Compliance failures do not directly cause security failures. But persistent compliance decay strongly correlates with organizational brittleness.

The specifics change: financial reporting, PKI audits, safety inspections. The pattern does not.

Gradual decay. Ignored signals. Then sudden collapse.

Compliance does not predict the exact failure you will face. But it does predict whether the organization has the culture and systems to sustain discipline when it matters.

That is why it is such a reliable leading indicator.

Organizations that suffer “sudden” compliance collapse are not unlucky. They are optimally designed for that outcome. The incentives reward short-term performance. The structure diffuses accountability. The measurement systems hide decay.

The surprising thing is not that it happens. It is that we keep pretending it is surprising.

Building Systems That See

Ignore your blood pressure long enough and the heart attack looks sudden. The same is true for organizations.

Compliance frameworks should not be dismissed as paperwork. They should be treated as telemetry, imperfect on their own but invaluable when tracked over time.

They are not the whole diagnosis, but they are the early warning system.

At its best, compliance is not about passing an audit. It is about showing you can consistently reproduce the controls and practices that keep the organization healthy.

If compliance is a vital sign, then what matters is not the paperwork but the telemetry. Organizations need systems that make compliance observable in real time, that prove reproducibility instead of just certifying it once a year, and that reveal patterns of decay before they turn into collapse.

Until we build those kinds of systems, most compliance programs will remain theater. Until compliance is treated as reproducibility rather than paperwork, incentives and structure will always win out.

The frameworks are fine. What is missing is the ability to see, continuously, whether the organization is living up to them.

Ignore the vital signs, and collapse will always look sudden.