The Learning Firm under Poverty of Stimulus


Under sparse, noisy, and delayed feedback, firms must learn. Drawing on the Chomskyan competence–performance split, I advance a novel theory—the I/E Learning Architecture (IELA): a gate-separated, cadence-governed design that keeps a versioned rulebook (competence) distinct from the examined stream (performance). I show that IELA is Turing-complete; hence no universal “learning-done” oracle exists and cadence becomes a governance necessity. This expressivity yields carbon–silicon universality: when the licensing layer and gates are shared, human and AI enactments instantiate one learning loop. Carbon–silicon universality thus stands as a novel organizational theory: it makes the learning loop the primary object, renders its levers carrier-agnostic, and pins accountability to gate design and inscription rather than to the choice of human or machine. To render the architecture operable and falsifiable, I specify a compact kernel (D1–D6, L1–L3, P1–P9) with auditable measures and tests. The result is a carrier-agnostic, unified, auditable theory that returns management to the firm’s fundamental learning loop and recovers classic results as special cases. This paper establishes the governance architecture. A companion essay introduces the strategic doctrine base and compositional sentence grammar that supply the I-language content IELA governs.

I start from a simple claim: firms must learn to compete. By “learning” I mean the continual revision of attention and decision rules. Under poverty of stimulus (thin, noisy, delayed feedback), competitive fit drifts, so metrics and incentives alone cannot explain learning. Classic management theories (knowledge integration, routines, dynamic capabilities, and the like) map where learning appears. I take a second-order step: if firms are learning entities, the proper object of management theory is learning about that learning, i.e., meta-learning.

To ground that move, I turn to linguistics, which confronted the same problem: how a rule-governed capacity is learned from impoverished input. Chomsky (1965) separated an internal rule system (competence) from observed behavior (performance) to explain how speakers acquire rule-governed generative capacity from limited input. Keeping the layers distinct is essential: competence licenses novelty; performance supplies graded evidence; learning is revising the former in light of the latter under explicit constraints (e.g., Universal Grammar, UG). This paper adopts this lineage as foundational and applies it to firms.

In firms, the same split holds. Most firms already operate with a scattered rule system: policies, standards, decision rights, operating procedures, approval rubrics, legacy scripts, and tacit heuristics. Read together, these fragments form a shadow grammar that licenses action (competence), while what shows up in the world (decisions, transactions, audit trails, post-mortems) is the record of those rules in use (performance).

These fragments constitute a shadow grammar — already operative, already licensing and blocking action, but unversioned, unaudited, and often internally inconsistent. IELA does not impose a grammar on a grammar-free organization. It makes the existing grammar visible, versioned, and governable. The political act is formalization, not creation.

I take an axiom-first view of the learning firm. First, C1: an internal language (grammar) and its rulebook (versioned corpus) that licenses what may be proposed before any run. The firm’s shadow grammar, made explicit and versioned, functions as a UG-like prior that constrains what can be said and done. Second, C2: split gates at the boundary: every proposal is judged twice, first for form at a syntax gate (is it well-formed under the rulebook?) and then for worth at a semantics gate (does it satisfy the admissible evaluation predicates?), with the two judgments separated and auditable so the grading cannot rewrite the rules arbitrarily. Third, C3: cadence from exposure to write-back: proposals cycle through constrain, expose, minimal edit, and write back, so examined evidence is converted into versioned edits of the rulebook. Absent any one of C1–C3, metrics may still be tuned, but the meta-learning (the rules that revise the rules) remains unobservable and uneditable.

Literal, Not Metaphorical Constructs

I-language (competence). The firm’s licensing grammar (schema) of types, invariants, interfaces, and operators, which determines well-formedness before exposure. A rulebook is the versioned store of clauses expressed in that schema.

E-language (performance). The examined stream in which licensed moves are realized and judged.

Gates: auditability and placement. The S-gate (syntax) checks that a proposed action is well-formed. The M-gate (semantics) sits at the I/E boundary, then routes and judges proposals using the predicate menu, the whitelisted set of admissible predicates. A proposal is examined once the M-gate applies an admissible predicate set and logs a decision token with predicates used, examiner, and timestamp; the examined stream is the canonical record of those tokens and associated outcomes. Use I-side and E-side only to locate mechanisms: the predicate menu on the I-side; examiners, routings, and logs on the E-side.
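To fix ideas, here is a minimal sketch in Python of a decision token and the examined stream; the field names and the append-only log are my illustration, not a format the architecture prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionToken:
    """One M-gate decision: the atomic unit of the examined stream."""
    proposal_id: str
    predicates_used: tuple[str, ...]  # admissible predicates actually applied
    examiner: str                     # role/agent authorized at the M-gate
    outcome: str                      # e.g. "approve", "reject", "route:risk"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# The examined stream is the canonical, append-only record of decisions.
examined_stream: list[DecisionToken] = []

def examine(proposal_id: str, predicates: tuple[str, ...],
            examiner: str, outcome: str) -> DecisionToken:
    """Apply an admissible predicate set and log the decision token."""
    token = DecisionToken(proposal_id, predicates, examiner, outcome)
    examined_stream.append(token)
    return token

examine("prop-001", ("roi_floor", "risk_limit"), "credit_committee", "approve")
```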

Cadence. Constrain, expose, minimal edit, and write back. Constrain: apply the I-language (competence) so the proposal is well-typed and checkable. Expose: present the well-typed proposal to the M-gate for evaluation. Minimal edit: the smallest scoped change that revises only the intended types or invariants, carries explicit versioning or deprecation, satisfies the newly adopted predicates, and targets a single addressed seam with a migration path. These conditions keep recomputation local, preserve rollback and traceability, and still permit stepwise paradigm shifts via a few high-leverage edits. Write-back: after S-gate and M-gate evaluation, commit the versioned edit to its rulebook address. Competence changes only through such commits.

Constraint block. The structured header a proposal carries to gates: the declared types, invariants, addressed seam, admissible predicate set (for M-gate), stop rules/clock, and rollback path, i.e., the compile-time contract S-gate verifies and M-gate evaluates.
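One way to render the constraint block operable, as a sketch with hypothetical field names (the paper fixes the contents, not a serialization): a typed header the S-gate can verify mechanically before any M-gate evaluation.

```python
from dataclasses import dataclass

# Illustrative whitelisted predicate menu the M-gate may draw from.
PREDICATE_MENU = {"roi_floor", "risk_limit", "safety", "quality", "cost"}

@dataclass(frozen=True)
class ConstraintBlock:
    """Compile-time contract: verified at the S-gate, evaluated at the M-gate."""
    declared_types: tuple[str, ...]  # types the proposal touches
    invariants: tuple[str, ...]      # invariants it promises to preserve
    addressed_seam: str              # the single seam the edit targets
    predicate_set: frozenset[str]    # predicates requested for M-gate evaluation
    stop_rule: str                   # clock/kill condition, e.g. "90 days or 3 trials"
    rollback_path: str               # how to revert the edit if it fails

def s_gate(block: ConstraintBlock) -> list[str]:
    """Form check only: report violations, never judge worth."""
    violations = []
    if not block.declared_types:
        violations.append("no declared types: nothing to compile against")
    if not block.addressed_seam:
        violations.append("no addressed seam: the edit would not be local")
    if not block.predicate_set <= PREDICATE_MENU:
        violations.append("requests predicates outside the admissible menu")
    if not (block.stop_rule and block.rollback_path):
        violations.append("missing stop rule or rollback path")
    return violations  # empty list == well-formed
```

Note how the S-gate never sees outcomes: it checks form only, so worth tests cannot rewrite licensing.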

Examiner networks. An examiner is a role/agent authorized to apply predicates at the M-gate. The examiner network is the set of examiners plus their routing rules. Compile: to compile a clause is to map it into the shared licensing grammar. A compiled clause may be exposed; it becomes examined only once the M-gate logs a decision token. Compilation succeeds if and only if the clause is well-typed at the S-gate and its evaluation predicates are admissible at the M-gate.
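The compilation condition is a conjunction and can be stated in one predicate; a minimal sketch, reusing hypothetical schema and menu names:

```python
# A clause compiles iff it is well-typed at the S-gate AND its evaluation
# predicates are admissible at the M-gate (all names here are illustrative).
SCHEMA_TYPES = {"price_tier", "discount", "risk_limit"}
PREDICATE_MENU = {"roi_floor", "churn_cap"}

def compiles(clause_types: set[str], eval_predicates: set[str]) -> bool:
    well_typed = clause_types <= SCHEMA_TYPES        # S-gate: form
    admissible = eval_predicates <= PREDICATE_MENU   # M-gate: menu membership
    return well_typed and admissible

assert compiles({"price_tier"}, {"roi_floor"})
assert not compiles({"price_tier", "nft_drop"}, {"roi_floor"})  # untyped part
assert not compiles({"discount"}, {"vibes"})  # inadmissible predicate
```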

Canonization. To canonize a clause is to commit a minimal I-edit into the versioned rulebook at its unique address after it has passed the S-gate (well-typed) and the M-gate (evaluated by admissible predicates). Canonization is complete at the rulebook commit; only canonized edits count as competence change. Canonization rate: Share of examined moves in a window that produce an I-edit committed to the rulebook. Canonization latency: Elapsed time from the decision token (M-gate outcome) to the rulebook commit.

Reopen (rate). A subsequent change to the same rulebook address within a defined window; the reopen rate is the share of canonized clauses that are revised again. Implications: higher canonization and lower latency lead to fewer reopens.
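These three measures reduce to counting over two logs. A sketch, assuming hypothetical event data (a decision-token log and a rulebook commit log):

```python
from datetime import datetime, timedelta

# Hypothetical logs: proposal_id -> M-gate decision time, and
# (proposal_id, rulebook_address, commit_time) for canonized I-edits.
decisions = {
    "prop-001": datetime(2024, 1, 3),
    "prop-002": datetime(2024, 1, 5),
    "prop-003": datetime(2024, 1, 9),
}
commits = [
    ("prop-001", "pricing/tiers", datetime(2024, 1, 4)),
    ("prop-003", "pricing/tiers", datetime(2024, 1, 20)),
]

def canonization_rate() -> float:
    """Share of examined moves that produced a committed I-edit."""
    committed = {pid for pid, _, _ in commits}
    return len(committed & decisions.keys()) / len(decisions)

def canonization_latency() -> list[timedelta]:
    """Elapsed time from decision token to rulebook commit."""
    return [t - decisions[pid] for pid, _, t in commits if pid in decisions]

def reopen_rate(window: timedelta = timedelta(days=30)) -> float:
    """Share of canonized addresses revised again within the window."""
    by_address: dict[str, list[datetime]] = {}
    for _, addr, t in commits:
        by_address.setdefault(addr, []).append(t)
    reopened = sum(1 for ts in by_address.values()
                   if len(ts) > 1 and max(ts) - min(ts) <= window)
    return reopened / len(by_address)

print(canonization_rate())     # 2/3 of examined moves canonized
print(canonization_latency())  # [1 day, 11 days]
print(reopen_rate())           # pricing/tiers was revised within the window
```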

A firm maintains an explicit I-language (competence), a licensing grammar stored as a versioned rulebook. Proposals must clear a split S-gate for well-formedness and an M-gate for evaluation against an admissible predicate menu to enter the E-language, the examined stream of decisions and outcomes. Cadence runs constrain, expose, minimal edit, write-back; only a canonized, addressed edit to the rulebook changes competence. E-side records supply evidence; meters and narration do not alter competence unless they result in an addressed write-back. I call this the I/E Learning Architecture (IELA); an IELA system is any concrete instantiation of IELA.

Carbon–Silicon Firm

Modern firms act through two carriers, human and AI, and algorithmic management now treats governance as a system of rules, routings, and logged decisions across those carriers (Keegan & Meijerink, 2025). However, discussions often blur (i) form checks (what is well-formed) with worth judgments (what is valuable), and (ii) exam logs vs. rulebook inscriptions. In IELA terms, the stream often conflates syntax and semantics and logs decisions without tying them to rulebook edits. IELA unblurs this by making three axioms explicit.

I/E illustrations by carrier:

• Carbon (human): I-language—capital-allocation policy, investment criteria, risk limits, proposal templates. E-language—submitted proposals, committee minutes, spend logs.

• Silicon (AI): I-language—API/schema types, interface obligations, service standards. E-language—execution traces, deployments, experiment outcomes.

Keegan and Meijerink (2025) call for transparent, auditable decision criteria and clear accountabilities across human–AI workflows. Under IELA, this requires a carbon–silicon firm governed at the licensing layer with one shared grammar and one examined stream, reducing translation-debt: the extra rework and delays caused by a mismatch between grammar and examined stream.

Contribution and Outline

1. Meta-learning for management. I axiomatize the licensing layer, the split S-gate/M-gate, and cadence as the causal mechanism that converts examined performance into competence. Management theory then becomes learning about the learning firm. I introduce IELA as a meta-architecture (C1–C3): an explicit rulebook, split judgments of form and worth, and a cadence that converts examined decisions into versioned edits. This reframes classic theories (knowledge integration, routines, dynamic capabilities) as phases of one learning loop. (See Literature Re-reads for how each is grounded in the IELA loop and audited.)

2. Carbon–silicon universality. With IELA in place, I ask what it takes for human and AI enactments to participate in one learning loop. The claim is architectural, defined at the licensing layer and the gates. Formal entailments and boundary conditions appear in Claim A; implications are developed in The Carbon–Silicon Frontier.

IELA is a meta-learning and carrier-agnostic architecture. It recenters management on one learning loop and sets a concrete governance agenda: run the enterprise as a carbon–silicon learning firm. The paper invites IELA-based theory building and supplies a lean field program with portable falsification procedures, converting the agenda into testable, revisable practice (see Implications and Field Program).

THE GENERATIVE KERNEL

  • Claim A1 (Expressive power). Under C1–C3, IELA can simulate a standard register/WHILE machine and is therefore Turing-complete in expressive power. This expressivity is necessary for the enterprise learning problem: under poverty of stimulus, the hypothesis space must be open-ended. Turing completeness is the minimal condition that guarantees this. (A toy simulation sketch follows this list.)

  • Claim A2 (No closure oracle). At Turing-complete expressivity, no universal “learning-done” test exists — any rule certifying completion from arbitrary states would decide halting. Therefore continuous cadence is a governance necessity, not a process preference. In the finite-controller boundary (fixed gates, bounded loops), termination reduces to reachability and reinforcement with explicit stop rules suffices (Skinner, 1953).

  • Claim A3 (Carrier independence). With Turing-complete expressivity and C1–C3, expressive power is carrier-independent: human and AI enactments satisfying C1–C3 are architecturally equivalent for learning. Practical limits stem from governance caps (budgets, policies, regulation) and examiner routing, not the architecture.
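To make Claim A1 concrete, here is a toy illustration, not a proof: a register/WHILE machine whose registers live as clauses in a versioned rulebook, where each register change is a minimal, addressed write-back and the step budget is an explicit stop rule (per Claim A2, no oracle certifies completion). All names are mine.

```python
# Toy register machine: the state is a rulebook of clauses (register -> value);
# every register change is committed as a versioned, addressed edit.
def run(program, registers, max_steps=10_000):
    rulebook = dict(registers)   # competence: the versioned clause store
    history = []                 # commit log: (version, pc, register, value)
    pc = 0                       # program counter = exposure schedule
    for _ in range(max_steps):   # explicit stop rule, not a halting oracle
        if pc >= len(program):
            return rulebook, history           # halted within budget
        op, reg, *target = program[pc]
        if op == "INC":
            rulebook[reg] += 1
            history.append((len(history), pc, reg, rulebook[reg]))
        elif op == "DEC" and rulebook[reg] > 0:
            rulebook[reg] -= 1
            history.append((len(history), pc, reg, rulebook[reg]))
        elif op == "JZ" and rulebook[reg] == 0:
            pc = target[0]       # control move: no clause changes, no commit
            continue
        pc += 1
    return None, history         # budget exhausted: keep the loop open

# WHILE-style addition, r1 += r0: while r0 != 0 { r0 -= 1; r1 += 1 }
prog = [("JZ", "r0", 4), ("DEC", "r0"), ("INC", "r1"), ("JZ", "r2", 0)]
state, log = run(prog, {"r0": 3, "r1": 4, "r2": 0})
assert state == {"r0": 0, "r1": 7, "r2": 0} and len(log) == 6
```

The point is the shape, not the machinery: open-ended programs fit inside the loop, so no gate can certify in general that learning is finished.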

Joint sufficiency for realizing the learning loop. C1 (typed, versioned rulebook), C2 (separate S-gate/M-gate with an admissible predicate menu), C3 (continuous cadence with versioned write-backs) together (i) give a determinate target for competence change, (ii) firewall evaluation from licensing so meters cannot rewrite form, and (iii) convert examined decisions into canonized edits that update competence. This is not incidental: under poverty of stimulus, enterprise learning requires an open-ended hypothesis space. Turing completeness is the minimal necessary condition; sufficiency comes from C1–C3, which make the expressive power operative as learning rather than mere computation.

I next formalize the kernel: I state the axioms (D1–D3) and their deductive consequences (D4–D6), state the environmental pressures under thin evidence (L1–L3), and derive propositions tied to observable traces (P1–P9). I use this kernel because it supports literature re-reads and testable propositions; see Propositions, Traces, and Tests for the full statements and tests. This separation keeps constructs clear and predictions auditable.

The kernel is organized in three layers. Definitions D1–D3 are the axioms — they establish the objects (I-language, E-language, cadence) and are jointly necessary and sufficient for the IELA loop. D4–D6 are deductive consequences: parsimony, composability, and corrigibility follow from D1–D3 and make the loop operable under real conditions. L1–L3 are environmental lemmas that state how poverty of stimulus creates specific pressures (constraint priority, mirroring requirements, M-gate drift). P1–P9 are testable propositions derived from D+L, each with expected patterns, observable traces, and disconfirmation rules. The reader can treat D1–D3 as the architecture, D4–D6 as its engineering requirements, L1–L3 as the threat model, and P1–P9 as the audit program.

Definitions and Deductive Consequences

D1 - I-language (competence). What I require: a compact licensing grammar (types, invariants, interfaces, operators) that licenses well-formedness before exposure; carrier-agnostic; finite at a given time, extensible via write-back. What follows: the S-gate has a compile target to verify; licensed novelty is portable; the rulebook is auditable and versioned. If I omit it: the S-gate has no compile target; meters overwrite rules; learning collapses to results control. P4 (Constraint-first advantage), P5 (Structure beats surfaces), and P9 (Internal loop effectiveness) lose footing.

D2 - E-language (performance). What I require: a designated examined stream where licensed clauses are realized and judged. Multiple operational streams may exist; only those mapped to the canonical examined stream count as evidence. Gate outcomes log a decision token into the examined stream. Poverty of stimulus lives here: observation is examined tokens; only split gates and disciplined write-backs prevent meters from rewriting form. What follows: the M-gate routes and evaluates on a distinct evidence base; predicates and routing are observable; disconfirmation is possible. If I omit it: form (syntax) and worth (semantics) collapse; misrouting is untraceable; unmapped logs don’t produce learning; P7–P9 (Fluency as aligned throughput; Gate-semantics lock-in; Internal loop effectiveness) are untestable.

D3 – Cadence. What I require: a continuous loop (constrain, expose, minimal edit, write-back) that constitutes the exposure-to-write-back cycle. Write-backs target a unique rulebook address. Logged decision tokens make the next state well-defined. What follows: examined performance becomes competence; decision-to-rulebook latency and canonization are measurable; keep-open discipline prevents premature closure; iteration stays safe. If I omit it: enactments don’t inscribe; drift accumulates; closure is arbitrary; reinforcement-only boundary cases dominate; P9 (Internal loop effectiveness) fails.

D4 – Parsimony. What I require: keep the live rulebook only as large as scope demands so recomputation stays local. Thus parsimony is required for C3 to be executable under C1. What follows: edits stay local; diagnostics remain identifiable; recomputation cost is bounded; minimal edits remain feasible. If I omit it: edits propagate across layers; attribution breaks; results-only control creeps back in. The small-shock to small-edit signature in P1 (Parsimony wins until expressiveness binds) fails.

D5 – Composability. What I require: a seam, a named boundary where two licensed clauses meet and exchange obligations, specified by an interface and its invariants. A typed seam makes the interface, invariants, and allowed handoffs explicit, so boundary conformance is auditable at the S-gate/M-gate and clauses compose without re-parsing internals. What follows: clauses and components can be reused and transported across rulebook revisions; effects can be tested in isolation at seams. If I omit it: every inscription starts from scratch, reuse is fragile, and effects are unmeasurable. P2 (Composability enables scope) and P5 (Structure beats surfaces) can’t be cleanly tested; L2 (Near-decomposability and mirroring) locality breaks.

D6 – Corrigibility. What I require: engineer local repair behind versioned interfaces; rollback, waivers, deprecation windows; and make targeted forgetting a standard, documented operation. Therefore, corrigibility is required to preserve C3 given error, drift, and exogenous shocks. What follows: cadence stays safe and continuous; blast radius (count of downstream activities affected) is bounded; decision-to-rulebook latency and canonization improve; inscription remains reliable across carriers. If I omit it: errors escalate into global rewrites; cadence stalls; inscription gaps persist. The resilience and write-back signatures in P3 (Corrigibility predicts resilience) and P9 (Internal loop effectiveness) collapse.

Environmental Lemmas under Poverty of Stimulus

L1 - Constraint priority. What I state: under poverty of stimulus, outcomes alone cannot identify good rules unless ex-ante constraints are in place. Without prior constraints, the most recent payoff steers the grammar and meters bleed into licensing. What follows: publish the constraint block before exposure, and prevent worth tests from directly rewriting form; any change to constraints must be proposed and inscribed as its own examined I-edit (L3). Traces: signs of results-only control. If I omit it: overfitting to recent outcomes becomes the default; competence erodes; P4 (Constraint-first advantage) cannot hold.

L2 - Near-decomposability and mirroring. What I state: When the representation mirrors operative dependencies (per the alignment and validity checks above), edits stay local; when it does not, changes propagate non-locally. What follows: enforce typed seams so edits remain contained and measurable. Traces: edit span; blast radius; time-to-first-fix; readiness intervals for cross-unit work; alignment (required vs actual coordination). If I omit it: edits cascade unpredictably; diagnostics fail.

L3 - M-gate drift after success. What I state: success biases the M-gate toward incumbent predicates of worth. Syntactically valid novelties are misrouted or rejected. What follows: diversify examiners via routing; unify predicate menus across carriers; log predicates used at approval so drift is visible and correctable. Traces: predicate diversity at approvals; routing share for licensed novelty; translation-debt indicators; decision-to-rulebook latency. If I omit it: incumbency lock-in hardens; novelty stalls at the gate; translation-debt mounts.

Propositions, Traces, and Tests

For each proposition I specify an expected pattern, traces that bind it to data, and a disconfirmation rule that narrows scope or forces revision. This keeps the kernel testable. I derive nine testable propositions (P1–P9) from the axioms (D1–D3), their deductive consequences (D4–D6), and the environmental pressures (L1–L3), grouped as: parsimony and composability under accurate mirroring yield recovery and scope (P1–P3); explicit constraint and structure support novelty (P4–P6); cadence plus alignment and drift control govern throughput and inscription (P7–P9). Two simple maps used below (common management artifacts): (i) Dependency map: who/what depends on whom/what (units, roles, processes, artifacts, suppliers, contracts). (ii) Communication map: who actually coordinates with whom (messages, ticket flows, approvals).

P1 - Parsimony wins until expressiveness binds. Built on: D4 (Parsimony) and D5 (Composability), under the pressures in L1 (Constraint priority) and L2 (Near-decomposability and mirroring). What I expect: when the parsimonious rulebook mirrors real interdependence, small shocks lead to small edits. As interdependence rises, a compact licensing grammar eventually hits an expressiveness threshold, and performance declines until the grammar is extended. Traces I watch: track edit span and edit latency against a rulebook compactness index; expect a rise-then-fall once the expressiveness threshold is reached. Test and disconfirmation: Pre-register comparable shocks and show a non-monotone pattern with a clear expressiveness threshold. If, after shock control and validated mapping, the curve is strictly monotone increasing, refine mapping measures and expand expressiveness (add types/seams/operators) and re-test. Why it matters: Confirms that economy of rules preserves locality until scope outruns expressiveness.

P2 - Composability enables scope. Built on: D5 (Composability) and the locality logic in L2 (Near-decomposability and mirroring). What I expect: Typed seams license recombination without spreading change when mirroring holds. Traces I watch: seam-violation rate (violations at typed seams). Test and disconfirmation: If typed seams do not reduce violation rates and edit spans relative to comparable untyped boundaries, treat P2 as failed for that context and revisit the typed-schema, mirroring, or gate enforcement. Why it matters: Shows that typed composition, not ad-hoc integration, is the scalable path.

P3 - Corrigibility predicts resilience. Built on: D6 (Corrigibility) operating within continuous D3 (Cadence), with locality supplied by L2 (Near-decomposability and mirroring). What I expect: Versioned seams, rollbacks, and waivers shrink blast radius and speed the fix when dependencies are mirrored. Reliability shows up as detect, contain, reconfigure. Traces I watch: time to fix; blast radius from the dependency map. Test and disconfirmation: Show that smaller blast radius predicts faster time to fix. If blast radius is small yet fixes are slow, I examine detection and routing; if blast radius is large despite versioning, mirroring is wrong. Why it matters: Makes recovery a property of learning.

P4 - Constraint-first advantage. Built on: D1 (I-language) and D3 (Cadence) with a keep-open discipline (clocks/stop rules prevent premature closure), under L1 (Constraint priority). What I expect: With a constraint block, exposure is informative: either canonize, or return a seam-addressed diagnostic. Rework falls; out-of-sample performance on novelties rises. Traces I watch: presence of a constraint block; rework rates; out-of-sample results on novel cases. Test and disconfirmation: A/B the presence and quality of constraint blocks. If nothing moves, I re-examine constraint quality or how keep-open is enforced. Why it matters: Protects competence from being rewritten by meters under thin evidence.

P5 - Structure beats surfaces. Built on: D1 (I-language) and D5 (Composability), guided by L2 (Near-decomposability and mirroring). What I expect: Policies tied to types and interfaces travel across structural change. KPI rescaling without structural grounding fails when the underlying relations shift. Traces I watch: Transport success of well-typed clauses across process, role, or standard changes versus KPI tweaks. Test and disconfirmation: Compare transport rates. If KPI rescaling transports better than well-typed clauses under structural change, I revisit how types and seams are defined. Why it matters: Keeps rules aligned with architecture rather than surface meters.

P6 - Creativity under constraint. Built on: D1 (I-language) and D5 (Composability), read against L2 (Mirroring). Expectation: Impact peaks at middle combinational distance—where licensed recombinations (S-gate-valid, M-gate-admissible) mix components whose type-sets are neither too similar nor too disjoint. Measurement: Relate impact-weighted novelty to combinational distance (type-set dissimilarity across typed seams), conditioning on type coverage (share of parts/handoffs under the shared schema). Test and disconfirmation: Fit the inverted-U; the peak should widen as type coverage rises. If no peak after conditioning, revisit distance operationalization, seam typing, or routing. Why it matters: Guides search toward high-value mixes without chaos. P1 is about locality vs. rulebook, whereas P6 is about impact vs. combinational distance.
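One plausible operationalization of combinational distance, my construction rather than the paper's: Jaccard dissimilarity between the type-sets of the components being recombined, with impact-weighted novelty then regressed on it to fit the inverted-U.

```python
def combinational_distance(types_a: set[str], types_b: set[str]) -> float:
    """Jaccard dissimilarity of type-sets: 0 = identical, 1 = fully disjoint."""
    union = types_a | types_b
    if not union:
        return 0.0
    return 1.0 - len(types_a & types_b) / len(union)

# P6 expects impact to peak at middle distance: neither near-duplicates
# (~0) nor type-disjoint mixes (~1). Component type-sets are illustrative.
gpu = {"tensor_op", "memory_iface", "driver_abi"}
nic = {"memory_iface", "driver_abi", "packet_sched"}
erp = {"ledger", "approval_flow"}

print(combinational_distance(gpu, nic))  # 0.5 -> candidate high-impact mix
print(combinational_distance(gpu, gpu))  # 0.0 -> too similar to matter
print(combinational_distance(gpu, erp))  # 1.0 -> too disjoint to compose
```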

P7 - Fluency as aligned I/E throughput. Built on: D3 (Cadence) and D2 (E-language), read against L2 (Mirroring). Expectation: As dependency alignment (required vs actual coordination) rises, cycle time variance falls, and cadence sustains the gain. Under a shared grammar (same gates, predicate menu) the signature should match across carriers; residual timing gaps reflect compute or policy, not the inscription loop. Traces: alignment score (dependency vs communication maps); cycle time mean and variance by stage. Test and disconfirmation: Track before/after cadence stabilization. If variance doesn’t fall with higher alignment—or carbon and silicon diverge under a shared grammar—inspect examiner routing, seam typing, and the dependency map. Why it matters: Separates real throughput from activity spikes.

P8 - Gate-semantics lock-in. Built on: D1 (I-language) and D3 (Cadence); identifies L3 (M-gate drift). Expectation: Without split gates and diversified examiner routing, incumbent predicates colonize approvals. Unifying predicate menus across carriers reduces translation-debt and improves the routing of licensed novelty. Traces: predicate diversity at approvals; share of licensed novelties routed to non-incumbent examiners; decision-to-rulebook latency. Test and disconfirmation: After examiner diversification and menu unification, expect higher predicate diversity and novelty routing with lower latency; otherwise revisit routing rules or predicate definitions. Why it matters: Prevents success from hardening into systematic misgrading.

P9 - Internal loop effectiveness. Built on: D3 (Cadence), D6 (Corrigibility), D1 (rulebook as write-back target). What I expect: Under a shared grammar, lower decision-to-rulebook latency and higher canonization rate predict fewer reopens; persistent timing gaps indicate inscription failure rather than speed or policy differences. Traces: latency; canonization rate; reopen rate. Test and disconfirmation: Estimate the latency, canonization, and reopens relationship controlling for routing and resourcing. If it fails, inspect examiner routing, the predicate menu, and write-back ownership/versioning. Why it matters: Shows performance becomes competence only via timely inscription, not narration.
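As a minimal check of P9's direction, a sketch with hypothetical per-clause observations (the real test would control for routing and resourcing, as stated above):

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Hypothetical per-clause data: decision-to-rulebook latency in days,
# and the count of reopens at the same address in the next quarter.
latency = [1, 2, 3, 10, 15, 30, 45, 60]
reopens = [0, 0, 1, 1, 1, 2, 3, 4]

# P9 predicts a positive association: slow inscription leaves stale rules
# in force, so the same address is revised again after canonization.
r = correlation(latency, reopens)
print(f"latency-reopen correlation: {r:+.2f}")  # expect r > 0 if P9 holds
assert r > 0
```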

The Carbon–Silicon Frontier

Architectural equivalence is a learning result: it follows from C1–C3, not from attributing human cognition to AI. The management case for collaborative intelligence argued that humans and AI would jointly outperform by redesigning workflows, reassigning roles, and building complementary strengths (Wilson and Daugherty, 2018). The lever was coordination and complementarity. Yet the accumulated human–AI teaming evidence is mixed: a preregistered meta-analysis finds that human–AI pairs typically do not outperform the best solo agent, with gains concentrated in creative tasks and contingent on who is stronger at baseline (Vaccaro et al., 2024). The gap between aspiration and outcome suggests the lever is not teaming per se but the learning architecture the team sits inside. The so-called automation–augmentation “paradox” (Raisch & Krakowski, 2021) is a measurement artifact of mixed predicates, scopes, and clocks. When C1–C3 hold, there is one rulebook with split S-gate/M-gate, a shared predicate menu, and continuous cadence. The IELA loop shows clean P7–P9 signatures. Any remaining tension is about routing and seam design, not a logical contradiction.

Empirically, LLMs learn best under explicit grammatical constraints and stumble on “impossible” grammars, underscoring the value of structure for generalization (Kallini et al., 2024; Nandi et al., 2025). Theoretically, attention architectures are Turing-complete (Pérez, Barceló & Marinković, 2021), which supplies the necessary expressive power but not sufficient conditions for learning. Sufficiency comes from the IELA, which turns expressivity into auditable learning. The implication for governance at the licensing layer: algorithmic systems often hard-code rationalist assumptions that displace judgment (Lindebaum et al., 2020); therefore, publish and log the predicate menu at the M-gate so evaluation cannot rewrite syntax and biases are auditable and contestable. Two observations further support the need for a shared licensing grammar:

  • 1. Silicon-Carbon (AI/human). Even with schema-constrained generation, forcing outputs to a predefined format, reliability varies across real-world schemas; the grammar is often bolted on at the interface rather than shared as internal competence (Geng et al., 2025).

  • 2. Carbon-Silicon (human/AI). When AI provides predicate-level rationales (reasons in terms of evaluation criteria), humans align better and joint performance improves relative to black-box aids (Senoner et al., 2024).

The distinction is between a generalized language model — expressive but ungoverned, producing fluency without portable insight — and a scoped system constrained by a shared licensing grammar under IELA governance. A generalized LLM can generate plausible-sounding proposals; a scoped system compiles them at the S-gate, evaluates them at the M-gate against an admissible predicate menu, and inscribes the results as versioned write-backs. The governance is identical for both carriers because it operates at the licensing layer. Remaining carrier differences (hallucination risk and distributional shift for silicon; cognitive bias and politics for carbon) are performance-layer phenomena the gates are designed to catch. They require different examiner routing within the same architecture, not different architectures.

In short, without a shared licensing grammar across carriers, you get fluency (more activity) without portable insight. These are not teaming bugs; they are learning-architecture bugs. Govern at the licensing layer: keep one rulebook, keep one examined stream, split the S-gate/M-gate, and run continuous cadence, i.e., the IELA loop. Make organizational structure and technical architecture co-instantiate a single examiner network under the IELA loop, treating markets and regulators as external examiners. Strategy decides what enters the examined stream, which predicates are exposed to which human/AI examiners, and at what cadence. “Machines as teammates” then becomes two governance moves: (i) design examiner routing under one predicate menu at the M-gate (Seeber et al., 2020), and (ii) enforce typed human–AI seams with explicit interfaces/invariants at the S-gate so handoffs are verifiable. Hybrid-intelligence patterns (human–AI collaboration with explicit role allocation and interaction/feedback mechanisms; Dellermann et al., 2019) are thus typed interfaces (S-gate) plus examiner routing (M-gate), with canonization tests, turning collaboration design into auditable learning. Under a shared rulebook and the IELA loop, expect the alignment signature (P7–P9): stable throughput, fast canonization, bounded edit spans. Without it, expect coordination without learning: activity rises while decision-to-rulebook latency stays long, canonization stays low, and reopens stay frequent. Hence the management and AI problems converge into the same governance problem: maintaining the IELA loop. Seeing algorithms as organizational actors (Faraj et al., 2018) is compatible with architectural equivalence: cadence—not the carrier—governs learning. The governance lever is therefore at the licensing layer. I now re-read classic management theories by locating each at specific phases of this loop with shared traces and disconfirmation tests.

Recurring pattern across these literatures: each classic theory operates as a partial grammar that licenses certain strategic expressions and blocks others. Most function at the level of linear templates and single-axis diagnostics — strong within scope but unable to handle nested composition, cross-layer constraints, or recursive rewrite. IELA does not dismiss these grammars; it identifies where their expressive power runs out and supplies the missing governance: gates, seams, inscription, and cadence. The companion essay (Part Two) provides the formal classification.

These are theory-led corollaries and operational remappings, not syntheses; under C1–C3 they follow by construction. I first state what the literature claims. I then scope it into IELA by recovering the relevant D (objects/deductive consequences), L pressures, and P propositions. The examined stream is evidence; the rulebook is explanation. Because IELA is expressively complete under C1–C3, these are operational remappings, not metaphors; every mechanism lands at a specific point: S-gate (form), M-gate (worth), cadence, or rulebook (inscription).

A brief meta-IELA note explains recurring blurs you’ll see across subfields. Scholarship follows what is observable (data gravity) and instrumented (method bias), so papers often stop at exposure and outcomes and skip auditable write-backs, lack split gates (letting worth tests rewrite form), or under-specify seams (turning integration into ad-hoc recombination). Across the re-read, I therefore (i) place the mechanism in the loop, (ii) name the blurred phase (e.g., S-gate typing, M-gate predicates, inscription), and (iii) give the fix as governance with probes (P1–P9). This prepares the reader to see why each literature sits where it does in IELA, and how its usual conflations arise and are corrected by gates, seams, cadence, and inscription.

Integration and the Knowledge-Based View: Compilation, not Meters.

Under the knowledge-based view (KBV), firms exist to integrate specialized, distributed knowledge; Grant (1996) frames the firm’s role as knowledge application via rules, sequencing, routines, and group problem-solving. IELA maps Grant’s integration problem onto licensing/compile artifacts: typed seams and a versioned rulebook that accumulate through cadence. Likewise, IELA renders Kogut & Zander’s (1992) “combinative capability” as typed composition at seams: preserving interfaces and invariants so novelty arises from rule-respecting recombination (P1, P5), rather than mere stockpiling. Following March (1991), thin evidence amplifies exploitation bias, overweighting recent payoffs. IELA’s constraint-first cadence is the remedy: publish constraints before exposure so meters cannot rewrite form, keeping change local via minimal, addressed edits (P1, D4).

Absorptive capacity (ACAP) separates “potential” (acquisition, assimilation) from “realized” (transformation, exploitation) (Zahra & George, 2002). IELA reads this as: potential ACAP prepares S-gate candidates; realized ACAP runs M-gate evaluation; it becomes learning only at canonization, an addressed write-back to the rulebook. Social capital accounts for how ties, trust, and shared codes move knowledge (Nahapiet & Ghoshal, 1998). In IELA it improves routing and evaluation reliability, getting proposals to the right S-gates and stabilizing M-gate judgments, yet it does not change competence without inscription.

Socialization, Externalization, Combination, Internalization (SECI) charts motion between tacit and explicit (Nonaka & Takeuchi, 1995; Tsoukas, 2009). IELA adds the governance test: externalizations change competence only if they carry constraints, compile at the S-gate, pass admissible predicates at the M-gate, and are written back at an addressed seam. Treating tacit to explicit as conversion alone risks mistaking talk for knowledge; knowledge creation is dialogical and practice-bound, so articulation without material change in artifacts or processes is insufficient (Tsoukas, 2009). Boundary resources function as S-gate artifacts that make cross-unit composition auditable (Carlile, 2004).

Under IELA: KBV’s integration, ACAP’s movement from potential to realized, and SECI’s tacit–explicit shifts all count only when they culminate in inscription, i.e., knowledge is composed at typed seams, passes the compile check, and is written back as an addressed rulebook edit. The blurred phase: these literatures often count externalization as learning and treat integration as untyped recombination, with no explicit S-gate typing and no addressed write-backs, so performance narratives are mistaken for competence change. The governance fix is to make integration equal typed composition at seams (D5), enforce a compile test at S and admissible-predicate evaluation at M, and count learning only when proved clauses are inscribed into the D1 rulebook, then monitor P1 (edit span/locality), P2 (seam conformance), P5 (portability across seams), and P9 (decision-to-rulebook latency and externalization-to-canonization rate).

Dynamic Capabilities and Routines: Inscription, not Narration.

Organizational Learning (OL) and Dynamic Capabilities (DC) read in historical sequence and map cleanly into IELA: Fiol & Lyles (1985) distinguish lower-level vs. higher-level learning, i.e., the E-side adjustment that leaves the rulebook unchanged vs. I-side inscription that revises governing rules. Argyris & Schön (1978) echo this as single-loop vs. double-loop: action tuning vs. rule change via addressed write-back. Crossan, Lane & White (1999) chart the 4I flow: intuition/interpretation generate candidate clauses, integration operationalizes them at typed seams, and institutionalization is canonization (the I-edit that changes competence). DC then tighten the mechanism: Teece, Pisano & Shuen (1997) define higher-order abilities to integrate, build, and reconfigure competences; Eisenhardt & Martin (2000) specify identifiable routines. In IELA terms, OL explains how proposals arise and approach the gates, while DC are the repeatable gate-to-inscription sequences that reconfigure and reliably write back the rulebook under change.

What endures is the recurrent pattern (Winter, 2003). In IELA, enduring capabilities appear as recurrent I-edits that are observable as high canonization and low reopen rates (P9). The bridge from performance to competence is the routine as inscription: ostensive patterns become durable only when performative enactments are written back into rule-bearing artifacts such as standards, templates, and code (Pentland and Feldman, 2005). Within IELA, change counts only when inscribed as a typed artifact at a seam; this operationalizes Orlikowski’s practice lens as inscription rather than persuasion alone (Orlikowski, 2000). Materialization means the clause exists as a typed artifact at a seam (D5).

Nayak et al. (2020) identify a valuable source mechanism — how idiosyncratic sensitivities and predispositions arise through practice. In IELA terms, this is where proposals originate. However, the account remains on the E-side: cadence and situated adjustment refine these sensitivities but stop short of clause formation and inscription. The refinements become competence only when cadence routes and examines them to produce candidate clauses, and inscription writes those clauses back as seam-addressed I-edits that change the rulebook (D3).

OL sits on the E-side: surface candidates for the S-gate, supply evidence at the M-gate; until an addressed I-edit is canonized, it’s not learning. Dynamic capabilities live at cadence and inscription. Read through the dynamic capabilities lens, the 4I model, and single vs. double-loop learning, a capability is a recurrent, well-closed pattern of I-edits that becomes visible as typed artifacts at seams. Routines-as-ostensive/performative and the practice lens motivate the materialization requirement but do not by themselves ensure inscription. The blurred phase is to label capabilities and describe routines in action without logging the edits that change competence, which blurs gate roles and lets worth tests rewrite form while tacit sensitivities and predispositions remain E-side fluency. The governance fix is to define a capability as a canonized edit pattern and to run cadence so proposals compile at the S-gate, are evaluated against admissible predicates at the M-gate, and only proved clauses are inscribed at addressed seams with rollback paths. The audit probes are canonization rate and latency and the reopen rate (P9); if decision-to-rulebook latency stays long or reopens remain high, you have pattern recognition without inscription and must tighten gate discipline, edit ownership, and materialization at typed seams.

Attention, categories, incentives, platforms: routing and public gates.

Ocasio (1997) shows that labels and channels steer organizational attention (the attention-based view, ABV); this is pre-syntax-gate routing (D3): which proposals even reach the S-gate depends on the label set and who is listening. Zuckerman (1999) shows that off-category entries are penalized even at equal quality; the penalty appears as low predicate diversity at approvals (P8), an M-gate bias where incumbent evaluation criteria (predicates of worth) down-route syntactically valid novelties before they can be canonized. Tripsas and Gavetti (2000) show that managers’ incumbent evaluation frames, the prevailing predicate set for what counts as worth, led firms to read digital imaging through film-era criteria, misgrading otherwise well-formed digital imaging moves. Christensen and Rosenbloom (1995) show value-network predicates anchor evaluation; the fix is to expand the predicate menu and diversify examiners so well-formed moves are graded at the right semantic gate (P8).

Holmström and Milgrom (1991) show that paying on a subset of tasks reweights effort; incentives live on the performance layer, and treating them as I-language would convert evidence into licensing without canonization—an anti-IELA move. Sitkin (1992) argues for learning via small losses; small-loss trials train examiners without letting M-gate predicates rewrite S-gate form (D3, P8). Cannon and Edmondson (2005) formalize intelligent failure; failure is intelligent only when the resulting I-edit is written back (P9).

Baldwin and Woodard (2009) model platforms as stable cores with standardized slots; platform cores plus slots are the public S-gate at ecosystem seams (D5). Ghazawneh and Henfridsson (2013) show how boundary resources (specs, SDK policies) operationalize platform control; these artifacts are the gate objects third parties must compile against (D5). Adner (2017) frames ecosystems as role and interdependence structures; ecosystem role structure is examiner assignment by seam, preventing misgrading (D3).

Leavitt et al. (2021) position machine learning (ML) as a complement that can surface and test mid-range constructs. In IELA, tested constructs become worth-predicates used at the M-gate; to surface a construct is to propose a predicate that is evaluated against the examined stream. Licensing stays in the I-side rulebook; only inscription, writing a proved predicate as a clause, changes competence. Leavitt et al. (2024) link employee trust in algorithmic performance management to managerial transparency. In IELA, make the M-gate legible by publishing the predicate menu and logging examiner-by-predicate use. Transparency becomes a governance requirement that enables auditable evaluations and, when warranted, upstream inscription of validated predicates into the rulebook.

ABV, categories, incentives, and platforms govern routing and gatework in IELA: labels and channels steer what even reaches syntax; platform boundary resources are public S-gates at ecosystem seams; and incentives act on the performance side, not in licensing. The common blur is to read routing and evaluator drift as competence, to let M-gate worth tests rewrite S-gate form, and to leave public S-gates underspecified so third-party work compiles ad hoc. The fix is architectural: split the gates, publish an admissible predicate menu, and log examiner-by-predicate use at the M-gate; treat boundary resources as S-gate artifacts with explicit types, interfaces, and invariants; keep incentives E-side and require inscription for any rule change. Read progress through shared probes: P8 (predicate diversity at approvals; routing of licensed novelty to non-incumbent channels), P5 (seam conformance and portability of third-party contributions), P7 (throughput alignment of required vs actual coordination), and P9 (decision-to-rulebook latency when gate decisions warrant write-back). If predicate diversity and novelty routing do not rise after examiner diversification, or if typed resources do not improve conformance and portability, revisit routing design, gate placement, and seam typing.

Invent Fast, Break Small: Licensed Creativity vs. Engineered Reliability

Structures that people use to coordinate should be echoed in the artifacts they build: organizational communication patterns tend to imprint on system seams (Conway, 1968). This is L2 mirroring: talk tracks dependence; seams should reflect it. Making those seams safe starts with module boundaries that hide internals and carry explicit obligations—an idea born in software but equally apt for teams and roles (Parnas, 1972). Hiding internals bounds blast radius and enables local rollback; this is D6 (corrigibility) supporting containment and recovery (P3). Product architecture allocates responsibilities across components and interfaces (Ulrich, 1995); IELA renders these as typed seams (D5). High-reliability systems pre-position defenses-in-depth (Reason, 1997); in IELA, S-gate/M-gate checks play that role. Prepositioned barriers are defenses-in-depth around the S-gate/M-gate that stop semantic drift from rewriting syntax. Design rules and versioning make change reversible: in code and product platforms and in policy or standard updates (Baldwin and Clark, 2000).

Design rules and versioning operationalize reversibility; measure this via rollback activations and containment (P3). Empirically, performance stabilizes when required dependencies map to who actually talks to whom, evidence from engineering teams coordinating around real work (Cataldo et al., 2006). Compute the alignment score as the share of required-coordination pairs that actually coordinate (P7). Software studies reinforce the mirror: modular codebases co-vary with organizational structure and evolve more safely when seams are enforced (MacCormack et al., 2006). Better mirroring predicts lower edit span and faster time-to-fix (P1–P3). In IELA terms, Weick & Sutcliffe’s reliability practices become engineered corrigibility that enables detect–contain–reconfigure and consistent canonization (Weick and Sutcliffe, 2007).
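The alignment score named here has a direct computation. A sketch, treating the dependency map as required-coordination pairs and the communication map as observed pairs (all pair data hypothetical):

```python
def alignment_score(required: set[frozenset[str]],
                    actual: set[frozenset[str]]) -> float:
    """Share of required-coordination pairs that actually coordinate (P7)."""
    if not required:
        return 1.0  # nothing is required, trivially aligned
    return len(required & actual) / len(required)

def pair(a: str, b: str) -> frozenset[str]:
    return frozenset((a, b))  # undirected: who-with-whom

# Dependency map: pairs that must coordinate, given real interdependence.
required = {pair("pricing", "billing"), pair("billing", "ledger"),
            pair("pricing", "catalog")}
# Communication map: pairs observed coordinating (messages, tickets, approvals).
actual = {pair("pricing", "billing"), pair("pricing", "catalog"),
          pair("catalog", "marketing")}

print(alignment_score(required, actual))  # ~0.67: billing-ledger is the gap
```

P7 then reads low scores as mirroring failures: the required pair that never talks is where cycle time variance should concentrate.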

Creativity is licensed novelty: ideas that pass the S-gate (well-formed) and are novel and appropriate/useful to the task at the M-gate qualify as creative and may proceed toward inscription (Amabile, 1996). IELA separates channels: novelty is an I-language property (licensed at the S-gate); fluency is a performance property (cycle time level and variance), so throughput is not mistaken for invention. To turn exploration into learning, start with process discipline: write assumption logs, pre-commit kill-rules, and update the rulebook when tests resolve (McGrath and MacMillan, 1995). Assumption logs and kill-rules require write-back to count as learning (P9).

Layer the economics of timing: value depends on explicit exercise and abandon rules tied to time (Trigeorgis, 1996). Timing enters as explicit clocks attached to the clause, which I test under P4–P5. Boundary rule: options are typed and clocked; absent explicit triggers and exercise/abandon clocks they devolve into drift, precisely the misapplication Adner & Levinthal caution against, which IELA prevents by requiring closure via addressed write-backs (Adner and Levinthal, 2004).

Reliability first: mirror real interdependence (L2), enforce typed seams and versioned interfaces (D5–D6), and run rollback/waivers in cadence (D3) so shocks stay local and reversals are safe. Then layer creativity on that scaffold: license novelty at the S-gate (D1, D5), test worth at the M-gate (D2), and type every option with explicit triggers and expiration clocks (D3–D4) so exploration closes by inscription, not activity. Classic accounts blur these phases—treating culture as reliability, throughput as creativity, “options” without clocks, externalization as learning, and unsplit gates that let worth tests rewrite form, so wins don’t travel and competence doesn’t change. Read progress through P1 (small shocks to small edits), P3 (detect–contain–reconfigure), P5 (portability of typed clauses), P7 (coordination alignment with lower variance), and P9 (faster canonization, fewer reopens). If clocks don’t move closures, alignment doesn’t cut variance, or typed clauses don’t transport better than metric tweaks, you have activity without inscription—tighten gate discipline, seam typing, and write-back ownership.

Kodak - Digital Imaging, Value-Network Lock-In, and M-Gate Drift.

Placement in the loop: D1–D3 present; L3 (M-gate drift) misroutes valid novelties. Kodak saw digital coming, but incumbent predicates of worth (print margins, channel protection) colonized the M-gate. Digital proposals could pass the S-gate yet were routed as print boosters instead of competence-level business-model edits. The result was low canonization and high canonization latency for platform moves; the I-language (types and interfaces for creating and capturing value) stayed film-anchored, so exposure did not become inscription (Anthony, 2016).

Traces: D1–D3 to show that competence and cadence were present but the M-gate drifted (L3), misrouting valid novelties. P5, P8, P9: narrow predicate menus at approvals; low routing of licensed novelties to non-incumbent examiners; slow write-back of digital standards. Theory fit: Knowledge integration requires I-edits. SECI helps only when externalizations carry types, predicates, and invariants that compile. Attention and category labels act as routing code; when predicates are locked in, they misroute licensed novelty.

Blockbuster - Meters Mistaken for Grammar; Options without Clocks.

Placement in the loop: D3 and D6 matter; drift is driven by L1 (constraint-priority failure) and L3 (M-gate drift). Blockbuster maximized performance-layer meters (store turns and fee-driven economics) and then treated them as grammar at the M-gate, biasing examiners against subscription and streaming. The I-language (store-centric types, contracts, policies) remained fixed; the “go digital” option lacked typed triggers, clocks, and versioned interfaces, so cadence was reactive, not constitutive (Downes and Nunes, 2013).

Traces: P4, P7–P9: missing constraint blocks in digital proposals; misalignment between required and actual coordination as streaming dependencies arrived; long write-back lags from pilots to policy and spec updates. Theory fit: Incentives versus grammar. Meters bleed into licensing unless S-gates and M-gates are split and the predicate menu is diversified. Options must be typed and clocked; without triggers and clocks, drift persists. Dynamic capabilities—patterned I-edits—were delayed or missing.

Toyota - Kaizen’s Local Wins, Missing System-Wide Cadence

Toyota runs a complete manufacturing learning loop: standard work is competence; line results are performance; S-gate checks “does it meet the standard?” precede M-gate predicates “safety, quality, cost”; proposals are constrained, exposed, minimally edited, and written back quickly, so improvements canonize on the production line (Spear and Bowen, 1999). At enterprise scope, gates and routing lagged when predicates shifted: during the battery-EV pivot, predicates such as charging, software, and platform economics needed to be licensed as edits and routed to the right examiners, but Toyota moved later than faster BEV competitors (Taira, 2024). Similarly in the recall crisis, fixes concentrated on factory routines while governance-level gates and write-back (who examines what, on which predicates) were slower to be formalized, evidence of a production-scoped learning loop that had not yet operated as an enterprise-wide learning architecture (Camuffo and Wilhelm, 2016).

Traces: P1–P3, P5–P7, P9: small edits and fast canonization on the production line; where electrification and governance predicates dominate, expect narrower predicate menus and slower inscription until enterprise gates and routing are extended. Theory fit: routines hinge performance to competence: deviations become minimal edits and canonize rapidly. License novelty at the S-gate; keep throughput on the performance layer. When predicates shift beyond the production line, the gates and grammar must extend to the enterprise.

NVIDIA — Product to Platform to Ecosystem Cadence.

NVIDIA’s shift was less about shipping more chips than about widening the learning loop. At the product stage, it published a clear, versioned rulebook for building on its stack and standardized tooling and enterprise distributions so conformance could be checked for form at the S-gate before anything ran. At the platform stage, well-formed contributions were examined at scale across NVIDIA- and partner-operated “exam rooms” that applied common M-gate predicates.

Traces: NVIDIA governs at the licensing layer: it versions a shared rulebook (CUDA/driver ABI, packaging/certification standards) and enforces uniform S-gates so partner work compiles to that grammar before execution (D1, D5; P5). NVIDIA and partners run common exam rooms where contributions are judged at M-gates against an explicit predicate menu (technical performance, cross-industry adoption), with decisions logged (P8). Cadence then turns exams into minimal, versioned write-backs to the NVIDIA-led rulebook (D3, D6; P9). A global rulebook defines the grammar; local rulebooks are federated, but must compile to the global grammar and canonize to their own addresses. Typed seams across GPUs, high-speed networking (via Mellanox), and software keep edits local and portable, widening scope without global rewrites (NVIDIA, 2020). As scope widened, NVIDIA mirrored real interdependence, unifying compute and networking, and pushed the same rulebook into multiple exam rooms run by major partners, allowing different examiners to apply common predicates without fragmenting the grammar (P7, P8). The move from product to platform to ecosystem is an expansion of the examiner network under one shared grammar largely authored by NVIDIA; ecosystem learning accrues to NVIDIA’s competence because gates and write-backs terminate at NVIDIA (P9). Where Toyota’s cadence was factory-scoped, NVIDIA’s loop is macro-strategic: a product became a platform others must compile to, locking in rulebook sovereignty over a federated examiner network.

Theory fit: Knowledge integration shows up as boundary resources (typed seams) others must compile to; external exams resolve into patterned I-edits with fast canonization. Platform governance and ecosystem structure fix design rules and boundary resources (locking S-gates) and standardize value-network predicates (locking M-gates). Exams occur everywhere; consequential write-backs are locked to the NVIDIA-authored rulebook, with partners’ local edits federated but required to compile to that global grammar.

A compact D–L–P kernel restates classic management theory and case studies by centering the learning loop: competence licenses moves, performance supplies the examined stream, the split S-gate/M-gate keeps form distinct from evaluation, and cadence turns exposure into minimal, versioned I-edits. With just these parts, long-standing findings on knowledge integration, routines, dynamic capabilities, reliability, creativity, and disruption collapse into phases of the IELA loop, and each claim cashes out in shared, auditable traces. Read through the IELA lens, the cases differ only in where the loop lives and how edits canonize: Kodak kept learning at the product layer while legacy predicates ruled the gate; Blockbuster mistook store meters for grammar and stalled inscription; Toyota ran a complete factory loop yet slowed at the enterprise seam when predicates shifted; NVIDIA widened the loop from product to platform to ecosystem while keeping one rulebook, so small edits traveled everywhere.

NVIDIA’s spread is heterogeneous: banks, hospitals, and factories. Taking the meta-learning view, a carbon–silicon firm learns how to learn by keeping one shared grammar (the rulebook) while letting industries localize evaluation and timing. What matters at this level is not specific meters or workflows but whether licensed novelty stays portable across carriers and locally examinable in each context. When a firm governs at the licensing layer and lets sectors localize gates, it achieves what no single-sector strategy can: cross-context learning, with competence compounding across sectors without flattening domain worth. When the grammar fragments by sector, the firm accumulates translation-debt: busy adoption with little transferable insight. Management theory should therefore treat architectural coherence (shared grammar) plus contextual plurality (sector-specific gates and clocks) as the core design problem of learning in carbon–silicon enterprises.

What Is Testable (and How It Can Be Wrong)

IELA is an architectural commitment (competence vs. performance; split gates; cadence). The testable claims sit at the rendering level (e.g., the D–L–P kernel here): once a rendering specifies objects, traces, and tests, its claims are conditionally falsifiable given the listed operating conditions (constraints before exposure; split gates; typed seams; continuous cadence; logged predicate use; versioned write-backs). Failure under those conditions narrows scope or revises premises; success accumulates evidence about the architecture’s utility without claiming uniqueness of P1–P9. IELA would be abandoned, not merely narrowed, if, under full C1–C3 conditions with properly split gates and continuous cadence, inscription (decision-to-rulebook latency, canonization rate, reopen rate) showed zero predictive relationship to adaptive fitness across multiple sectors and shock types. This would mean competence changes through channels IELA does not capture. Short of this, failures narrow scope or revise premises within the architecture. Researchers and practitioners are invited to instantiate IELA with different formalisms, tailored to audience, data, and purpose, provided the renderings remain architecturally equivalent (the same C1–C3 commitments). Two such renderings follow; a sketch of the abandonment test itself appears after the field program.

  • Rendering 1: Axiomatic (logic-forward, proof-friendly). State a small set of architectural axioms (a competence-first rulebook, split S-gate/M-gate, cadence with minimal write-backs, versioned state, and logged transitions), then define the objects (rulebook, gates, examiner set), admissible operations (license, route, examine, write-back), and obligations (traceability, versioning). This yields clean derivations (e.g., no universal “learning-done” test), crisp disconfirmation clauses, and reusable lemmas portable across settings. Best for theoretical clarity, comparability across studies, and proving necessity results, with the trade-off that it stays abstract and lighter on organizational topology and day-to-day routing dynamics. (A minimal obligation checker in this spirit is sketched after this list.)

  • Rendering 2: Graph-based (structure-forward, diagnosis-ready). Model the enterprise as a typed governance graph: nodes for rules, predicates, and examiners; edges for dependencies, handoffs, and routings; events as edge/node updates. Specify dependency and communication layers, gate placements, predicate menus, and edge-attached event logs. This produces predictive signatures about locality (edit span), routing diversity, cycle-time variance, and blast radius, with direct hooks to org data. It excels at field diagnosis, intervention planning, and measuring alignment between “required” and “actual” coordination, trading some formal compactness for descriptive richness (proofs become simulations or graph-theoretic arguments). (A small graph sketch also follows the list.)
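For Rendering 1, the obligations can be stated as checks over a logged transition sequence: every examination is preceded by a license for the same proposal, every write-back references an examination, and versions increase monotonically. The event encoding below is my own illustrative choice.

```python
# Illustrative event tuples: (op, proposal_id, version)
# Ops: "license" (S-gate pass), "examine" (M-gate), "write_back" (I-edit).
def check_obligations(log):
    licensed, examined = set(), set()
    last_version = 0
    violations = []
    for op, pid, version in log:
        if op == "license":
            licensed.add(pid)
        elif op == "examine":
            if pid not in licensed:          # form before worth
                violations.append(f"{pid}: examined without license")
            examined.add(pid)
        elif op == "write_back":
            if pid not in examined:          # traceability obligation
                violations.append(f"{pid}: write-back without exam")
            if version <= last_version:      # versioning obligation
                violations.append(f"{pid}: non-monotone version {version}")
            last_version = version
    return violations

log = [("license", "p1", 3), ("examine", "p1", 3), ("write_back", "p1", 4),
       ("write_back", "p2", 4)]  # p2 violates traceability and versioning
print(check_obligations(log))
```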
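For Rendering 2, a companion sketch, assuming the networkx library; node names are invented, and “blast radius” is read as everything reachable downstream of an edited rule.

```python
import networkx as nx

# Typed governance graph: node "kind" distinguishes rules, predicates,
# and examiners; edge "kind" distinguishes dependencies from routings.
G = nx.DiGraph()
G.add_node("rule:latency-budget", kind="rule")
G.add_node("rule:rollout-policy", kind="rule")
G.add_node("pred:p99-latency", kind="predicate")
G.add_node("examiner:serving-committee", kind="examiner")
G.add_edge("rule:latency-budget", "rule:rollout-policy", kind="dependency")
G.add_edge("rule:latency-budget", "pred:p99-latency", kind="dependency")
G.add_edge("pred:p99-latency", "examiner:serving-committee", kind="routing")

def blast_radius(graph, edited_rule):
    """Everything reachable from an edited rule: a locality signature."""
    return nx.descendants(graph, edited_rule)

print(blast_radius(G, "rule:latency-budget"))
# Small radii mean edits stay local, i.e., typed seams doing their job.
```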

Most firms already log what is needed: approval texts and criteria (predicates), meeting/issue threads (routing and examiners), change logs and policy/spec updates (write-back), stage timestamps (cycle time), and basic dependency and communication maps. These routinely logged traces are sufficient to observe whether exposure becomes inscription. To convert observation into testable evidence, I organize them into a four-phase field program.
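Before any metric is computed, these heterogeneous traces can be normalized into a single event schema, so later metrics compute over one shape rather than many log formats. A minimal sketch, with field names of my own choosing:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Trace:
    kind: str                 # "approval", "thread", "change_log", "stage"
    subject: str              # proposal, rule address, or seam identifier
    at: datetime              # stage timestamp
    predicates: tuple = ()    # criteria cited in approval text, if any
    examiner: Optional[str] = None   # who routed or judged
    version: Optional[int] = None    # rulebook version, for change logs

# One decision's worth of already-logged traces, normalized:
traces = [
    Trace("approval", "p-17", datetime(2025, 3, 1), ("p99-latency",),
          examiner="serving-committee"),
    Trace("change_log", "serving/latency-budget", datetime(2025, 3, 9),
          version=4),
]
```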

Field Program

A four-phase program operationalizes IELA in the field. It moves from mapping (competence and seams) to measurement (loop behavior) to light randomization (causal tests), without collapsing competence into meters. Each phase binds to D–L–P and situates established theories (knowledge-based view, dynamic capabilities, platform governance, attention-based view, and others) at specific phases of the IELA loop.

Phase I: Map competence and seams. This phase draws on the knowledge-based view, product/system architecture, and the mirroring hypothesis. Recover the rulebook (types, invariants, interfaces, API schemas, policy standards, operators). Draw a dependency map (required coordination) and a communication map (actual coordination); typed seams render “integration” as competence and make recombination auditable. The diagnostic is translation-debt where rules lack owners and seams lack clarity. Key traces are rule-set breadth, seam quality, the alignment snapshot, and decision-to-rulebook latency.
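The alignment snapshot admits a simple operationalization. The sketch below (team names invented) computes a congruence ratio between the two maps in the spirit of Cataldo et al. (2006): treat the dependency map and the communication map as edge sets and measure their overlap.

```python
# Required coordination (dependency map) vs. actual (communication map),
# as undirected pairs; team names are invented for illustration.
required = {frozenset(p) for p in [("serving", "infra"),
                                   ("serving", "safety"),
                                   ("infra", "procurement")]}
actual = {frozenset(p) for p in [("serving", "infra"),
                                 ("serving", "growth")]}

congruence = len(required & actual) / len(required)  # covered dependencies
unmanaged = required - actual   # seams with rules but no conversation
print(f"alignment: {congruence:.2f}; unmanaged seams: {unmanaged}")
```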

Phase II: Use external exams to test seams. This phase builds on platform governance, boundary resources, and ecosystem structure. Treat launches, policy changes, regulatory notices, and deprecations as natural tests. Pre-register seams to exercise; observe readiness interval, handoff violations, and rollback activations. Public S-gates at ecosystem seams make portability and resilience observable. The diagnostic is which seams are portable and resilient versus which force rework.
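The Phase II traces likewise reduce to timestamps. A minimal sketch, with invented seam names and a one-day readiness threshold as a placeholder: given the shock timestamp and per-seam recovery events, compute the readiness interval and count rollback activations.

```python
from datetime import datetime

shock = datetime(2025, 6, 1, 9, 0)   # e.g., a deprecation notice lands
seam_events = {   # per pre-registered seam: (first_green, rollbacks)
    "api/v2-auth":    (datetime(2025, 6, 1, 14, 0), 0),
    "billing/export": (datetime(2025, 6, 3, 11, 0), 2),
}

for seam, (first_green, rollbacks) in seam_events.items():
    readiness = first_green - shock          # readiness interval
    resilient = rollbacks == 0 and readiness.days < 1
    print(f"{seam}: ready in {readiness}, rollbacks={rollbacks}, "
          f"{'portable' if resilient else 'rework-prone'}")
```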

Phase III: Measure edits and grades; read the learning loop. This phase draws on routine dynamics, dynamic capabilities, and high-reliability organizing. Code approvals for predicates used; track the routing of licensed novelties to non-incumbent evaluators; measure decision-to-rulebook latency, canonization rate, and reopens. After shocks, record edit span and time-to-first-fix; compute cycle time across stages. This treats routines as inscription, capabilities as patterned I-edits, and reliability as detect–contain–reconfigure. Diagnostics are actual learning (fast, faithful write-back), openness to novelty (predicate diversity, diversified routing), and containment (local edits, quick fixes).
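The three inscription measures (decision-to-rulebook latency, canonization rate, reopen rate) reduce to arithmetic over the write-back log. A sketch, with a per-decision record format of my own design and illustrative values:

```python
from datetime import datetime, timedelta

# Each record: decision time, write-back time (None if never canonized),
# and whether the edit was later reopened. Values are illustrative.
decisions = [
    {"decided": datetime(2025, 1, 10), "written": datetime(2025, 1, 14),
     "reopened": False},
    {"decided": datetime(2025, 1, 20), "written": None, "reopened": False},
    {"decided": datetime(2025, 2, 2), "written": datetime(2025, 2, 20),
     "reopened": True},
]

written = [d for d in decisions if d["written"]]
latency = sum((d["written"] - d["decided"] for d in written), timedelta())
print("decision-to-rulebook latency:", latency / len(written))
print("canonization rate:", len(written) / len(decisions))
print("reopen rate:", sum(d["reopened"] for d in written) / len(written))
```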

Phase IV: Randomize lightly; show cause–effect. This phase builds on the attention-based view, categorization in markets, and real options. Hold the S-gate fixed; vary M-gate predicate menus by committee or time block; diversify examiner assignments; split-test the presence and quality of constraint blocks in briefs; time-box deprecations and enforce versioned seams. Expected shifts (by P-label) are: predicate diversity and novelty routing increase (P8); out-of-sample performance improves (P4–P6); decision-to-rulebook latency decreases and canonization increases (P9); cycle time variance falls as alignment rises (P7). The discriminant prediction is that licensing-layer interventions (constraints, gates, cadence) outperform additional metering on the performance layer.
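The Phase IV manipulation is ordinary blocked randomization. A sketch, with invented committee names and predicate menus, holding the S-gate fixed while only the M-gate menu varies:

```python
import random

committees = ["credit", "claims", "onboarding", "pricing"]
menus = {
    "A": ("incumbent-fit", "cost"),                   # status quo menu
    "B": ("incumbent-fit", "cost", "novelty-upside")  # widened menu (P8)
}

rng = random.Random(42)          # seeded for a reproducible assignment
shuffled = committees[:]
rng.shuffle(shuffled)
assignment = {c: ("A" if i < len(shuffled) // 2 else "B")
              for i, c in enumerate(shuffled)}
print(assignment)
# The S-gate is untouched; only the M-gate predicate menu varies, so any
# shift in routing or out-of-sample performance is attributable to the
# evaluation layer (P4-P6, P8), not to the well-formedness rules.
```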

Program yield: By Phase IV’s end, the program establishes an auditable baseline for the firm’s learning loop: a rulebook and seam map, calibrated traces for P1–P9, evidence on predicate menus and routing, and a validated cadence with measured canonization and latency. Established theories are then positioned as traceable levers at named gates and seams, supporting cumulative comparison across carbon and silicon carriers and confirming the licensing layer as the durable locus of explanation and governance.
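Finally, the abandonment clause stated earlier is an ordinary predictive test. A sketch with numpy: the arrays below are stand-ins to fix shapes, not data (real observations would come from the field program). Regress a fitness proxy on the three inscription measures across firm-sector observations and ask whether the coefficients are distinguishable from zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200   # firm-sector-shock observations (stand-in shape, not real data)
X = rng.normal(size=(n, 3))     # latency, canonization rate, reopen rate
fitness = rng.normal(size=n)    # adaptive-fitness proxy (placeholder)

X1 = np.column_stack([np.ones(n), X])   # add intercept
beta, res, rank, sv = np.linalg.lstsq(X1, fitness, rcond=None)
print("inscription coefficients:", beta[1:])
# IELA is abandoned only if these coefficients are indistinguishable from
# zero across sectors and shock types under full C1-C3 conditions;
# anything short of that narrows scope or revises premises.
```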

Drawing on the Chomskyan competence–performance split, disciplined inscription, and learning under sparse input, I render classic mechanisms as phases of a single loop. Under poverty of stimulus, firms observe only E-language, not rules; therefore learning requires architecture: keep competence and performance distinct, place split S- and M-gates at their boundary, and run cadence that inscribes examined evidence as versioned I-edits. As computational agents permeate work, carbon–silicon universality becomes decisive: a shared licensing layer makes licensed novelty portable across human and machine enactments. Treating the Chomskyan split as architectural governance rather than metaphor, IELA offers a novel theory of management that explains how competence changes under thin evidence and supplies a meta-learning and governance architecture for carrier-agnostic learning. Carbon–silicon universality is, in turn, a novel organizational theory: it makes the learning loop the primary object, renders its levers carrier-agnostic, and pins accountability to gate design and inscription rather than to the choice of human or machine. This is the learning firm.

IELA establishes the governance architecture for the learning firm: what must be split (competence from performance, form from worth), what must run continuously (cadence), and what must be inscribed (addressed, versioned write-backs). It does not yet specify the content of the I-language: the strategic doctrines and compositional grammar that populate the rulebook. A companion essay introduces that content, a strategic doctrine base and a compositional sentence system that turn the shadow grammars of classic management theory into an explicit, auditable, rewritable strategic language. Together they complete the picture: the companion essay supplies the sentences; IELA governs their learning. The result is a complete architecture for the carbon–silicon learning firm.

REFERENCES

Adner, R. (2017). Ecosystem as structure: An actionable construct for strategy. Journal of Management, 43(1), 39–58.

Adner, R., & Levinthal, D. A. (2004). What is not a real option: Considering boundaries for the application of real options to business strategy. Academy of Management Review, 29(1), 74–85.

Amabile, T. M. (1996). Creativity in context. Westview Press.

Anthony, S. D. (2016, July). Kodak’s downfall wasn’t about technology. Harvard Business Review. https://hbr.org/2016/07/kodaks-downfall-wasnt-about-technology

Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.

Baldwin, C. Y., & Clark, K. B. (2000). Design rules: The power of modularity (Vol. 1). MIT Press.

Baldwin, C. Y., & Woodard, C. J. (2009). The architecture of platforms: A unified view. In A. Gawer (Ed.), Platforms, markets and innovation (pp. 19–44). Edward Elgar.

Camuffo, A., & Wilhelm, M. (2016). A retrospective analysis of the Toyota recall crisis. Journal of Organization Design, 5, Article 4.

Cannon, M. D., & Edmondson, A. C. (2005). Failing to learn and learning to fail (intelligently): How great organizations put failure to work to innovate and improve. Long Range Planning, 38(3), 299–319.

Carlile, P. R. (2004). Transferring, translating, and transforming: An integrative framework for managing knowledge across boundaries. Organization Science, 15(5), 555–568.

Cataldo, M., Wagstrom, P. A., Herbsleb, J. D., & Carley, K. M. (2006, November). Identification of coordination requirements: Implications for the design of collaboration and awareness tools. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (pp. 353–362). Association for Computing Machinery.

Chomsky, N. (1965). Aspects of the theory of syntax. MIT Press.

Christensen, C. M., & Rosenbloom, R. S. (1995). Explaining the attacker’s advantage: Technological paradigms, organizational dynamics, and the value network. Research Policy, 24(2), 233–257.

Conway, M. E. (1968). How do committees invent? Datamation, 14(5), 28–31.

Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: From intuition to institution. Academy of Management Review, 24(3), 522–537.

Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human–AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences. University of Hawaiʻi at Mānoa.

Downes, L., & Nunes, P. F. (2013, March). Blockbuster becomes a casualty of big bang disruption. Harvard Business Review. https://hbr.org/2013/03/blockbuster-becomes-a-casualty-of-big-bang-disruption

Eisenhardt, K. M., & Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21(10–11), 1105–1121.

Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of algorithms. Information and Organization, 28(1), 62–70.

Fiol, C. M., & Lyles, M. A. (1985). Organizational learning. Academy of Management Review, 10, 803–813.

Geng, S., Cooper, H., Moskal, M., Jenkins, S., Berman, J., Ranchin, N., West, R., Horvitz, E., & Nori, H. (2025). JSONSchemaBench: A rigorous benchmark of structured outputs for language models (arXiv:2501.10868). arXiv. https://arxiv.org/abs/2501.10868

Ghazawneh, A., & Henfridsson, O. (2013). Balancing platform control and external contribution in third‐party development: The boundary resources model. Information Systems Journal, 23(2), 173–192.

Grant, R. M. (1996). Toward a knowledge‐based theory of the firm. Strategic Management Journal, 17(Winter Special Issue: Knowledge and the Firm), 109–122.

Holmström, B., & Milgrom, P. (1991). Multitask principal–agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics, & Organization, 7(Special Issue), 24–52.

Kallini, J., Papadimitriou, I., Futrell, R., Mahowald, K., & Potts, C. (2024). Mission: Impossible language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (pp. 14691–14714). Association for Computational Linguistics. https://aclanthology.org/2024.acl-long.787/

Keegan, A., & Meijerink, J. (2025). Algorithmic management in organizations? From edge case to center stage. Annual Review of Organizational Psychology and Organizational Behavior, 12, 395–422.

Kogut, B., & Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–397.

Leavitt, K., Barnes, C. M., & Shapiro, D. L. (2024). The role of human managers within algorithmic performance management systems: A process model of employee trust in managers through reflexivity. Academy of Management Review. Advance online publication.

Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M. (2021). Ghost in the machine: On organizational theory in the age of machine learning. Academy of Management Review, 46, 750–777.

Lindebaum, D., Vesa, M., & den Hond, F. (2020). Insights from “The Machine Stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45, 247–263.

MacCormack, A., Rusnak, J., & Baldwin, C. (2006). Exploring the structure of complex software designs: An empirical study of open source and proprietary code. Management Science, 52(7), 1015–1030.

March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.

McGrath, R. G., & MacMillan, I. C. (1995). Discovery‐driven planning. Harvard Business Review, 73(4), 44–54.

Nahapiet, J., & Ghoshal, S. (1998). Social capital, intellectual capital, and the organizational advantage. Academy of Management Review, 23, 242–266.

Nandi, A., Manning, C. D., & Murty, S. (2025). Sneaking syntax into transformer language models with tree regularization. In Findings of the Association for Computational Linguistics: NAACL 2025 (pp. 8006–8024). Association for Computational Linguistics. https://aclanthology.org/2025.naacl-long.407/

Nayak, A., Chia, R., & Canales, J. I. (2020). Noncognitive microfoundations: Understanding dynamic capabilities as idiosyncratically refined sensitivities and predispositions. Academy of Management Review, 45(2), 280–303.

Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford University Press.

NVIDIA. (2020, April 27). NVIDIA completes acquisition of Mellanox, creating major force driving next-gen data centers [Press release]. NVIDIA Newsroom. https://nvidianews.nvidia.com/news/nvidia-completes-acquisition-of-mellanox-creating-major-force-driving-next-gen-data-centers

Ocasio, W. (1997). Toward an attention‐based view of the firm. Strategic Management Journal, 18(S1), 187–206.

Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428.

Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053–1058.

Pentland, B. T., & Feldman, M. S. (2005). Organizational routines as a unit of analysis. Industrial and Corporate Change, 14(5), 793–815.

Pérez, J., Barceló, P., & Marinković, J. (2021). Attention is Turing-complete. Journal of Machine Learning Research, 22(75), 1–35.

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210.

Reason, J. (1997). Managing the risks of organizational accidents. Ashgate.

Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57, 103174.

Senoner, J., Schallmoser, S., Kratzwald, B., Feuerriegel, S., & Netland, T. (2024). Explainable AI improves task performance in human–AI collaboration. Scientific Reports, 14, 31150.

Sitkin, S. B. (1992). Learning through failure: The strategy of small losses. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (Vol. 14, pp. 231–266). JAI Press.

Skinner, B. F. (1953). Science and human behavior. Macmillan.

Spear, S., & Bowen, H. K. (1999, September–October). Decoding the DNA of the Toyota production system. Harvard Business Review, 77(5), 96–106.

Taira, Y. (2024). Toyota’s future: Hydrogen- and battery-powered vehicles (Case No. W37150). Ivey Publishing.

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533.

Trigeorgis, L. (1996). Real options: Managerial flexibility and strategy in resource allocation. MIT Press.

Tripsas, M., & Gavetti, G. (2000). Capabilities, cognition, and inertia: Evidence from digital imaging. Strategic Management Journal, 21(10–11), 1147–1161.

Tsoukas, H. (2009). A dialogical approach to the creation of new knowledge in organizations. Organization Science, 20(6), 941–957.

Turing, A. M. (1938). On computable numbers, with an application to the Entscheidungsproblem: A correction. Proceedings of the London Mathematical Society, 2(43), 544–546.

Ulrich, K. (1995). The role of product architecture in the manufacturing firm. Research Policy, 24(3), 419–440.

Vaccaro, A., Almaatouq, A., & Malone, T. W. (2024). When are combinations of humans and AI useful? A preregistered meta-analysis. Nature Human Behaviour, 8, 2293–2303.

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the unexpected: Resilient performance in an age of uncertainty (2nd ed.). Jossey-Bass.

Wilson, H. J., & Daugherty, P. R. (2018, July). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces

Winter, S. G. (2003). Understanding dynamic capabilities. Strategic Management Journal, 24(10), 991–995.

Zahra, S. A., & George, G. (2002). Absorptive capacity: A review, reconceptualization, and extension. Academy of Management Review, 27, 185–203.

Zuckerman, E. W. (1999). The categorical imperative: Securities analysts and the illegitimacy discount. American Journal of Sociology, 104(5), 1398–1438.
