The Pharmakon Papers: AI as Poison/Remedy

Let’s talk about the bill.

Not the dinner bill, not the national debt—the bill for your own, private, internal monologue. The running cost of the “you” that’s reading this sentence. The metabolic price tag on reflective consciousness.

For the past few months, I’ve been down a rabbit hole, trying to connect the dots between the squishiest parts of being human—trauma, attachment, the feeling of a coherent self—and the hard, planetary-scale infrastructure of AI. It felt like chasing a ghost through a server farm. It ended with a trilogy of academic preprints, all just released and now under review, totaling about 90 pages of theory, cellular biology, trauma-informed practice, system dynamics, political critique and cyborg theory.

This is the announcement. But more than that, it’s the story of the argument, the connecting thread that runs through all three papers. It’s an argument that starts in the nervous system and ends in a political war over the infrastructure of meaning.

It starts with a heretical claim.

Being alone is biologically expensive.

Our entire Western model of the self is built on a lie. The lie is the sovereign individual—the idea that you are a self-contained unit, a rational agent, a brain in a vat that happens to have a body. From a bioenergetic standpoint, this is pure fantasy. It’s a metabolic impossibility.

My first paper, “The Relational Substrate of Reflective Consciousness: A Metabolic Constraint Model,” makes the case that the integrated, narrative self is a high-cost luxury good. Your brain’s primary job isn’t thinking; it’s allostasis—managing the body’s energy budget. And the single most expensive piece of software it runs is you—the coherent, story-telling, past-and-future-spanning you.

When does it run this software? Only when it can afford to. And it can only afford to when it’s subsidized by another nervous system.

Co-regulation isn’t a nice-to-have. It’s not “support.” It is the physiological prerequisite for high-order thought.

The dyad (or a more-than-two-ad)—you + another—is the metabolic baseline. The solitary individual is a high-stress, high-cost aberration, indistinguishable to your body from a chronic, low-grade threat.

Think about what this means. We treat the mind like it’s software running on the hardware of the brain. The paper argues this is a category error. The mind is a distributed process, and its capacity is gated by the body’s energy balance. When that balance is threatened—by stress, by isolation, by trauma—the first thing to get jettisoned is the expensive, reflective self. You don’t lose consciousness; you lose coherence. You fragment. You fall back on cheap, rigid, computationally simple scripts: anxiety, rage, dissociation. It’s not a moral failure. It’s a budget cut.
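
To make the budget-cut logic concrete, here is a deliberately crude sketch, entirely my own illustration rather than anything from the paper: cognitive processes as line items with invented costs, funded cheapest-first from a fixed energy budget.

```python
# Toy illustration (not from the paper): cognitive modes as line items
# in a fixed energy budget, shed most-expensive-first under threat.

PROCESSES = [
    # (name, made-up metabolic cost in arbitrary units)
    ("reflexive threat scripts", 1),   # cheap, rigid: anxiety, rage, dissociation
    ("basic sensory processing", 2),
    ("working memory",           4),
    ("reflective narrative self", 8),  # the expensive "you"
]

def modes_online(budget: float) -> list[str]:
    """Fund processes cheapest-first; whatever the budget can't cover goes offline."""
    online, spent = [], 0.0
    for name, cost in PROCESSES:
        if spent + cost <= budget:
            online.append(name)
            spent += cost
    return online

# A co-regulated dyad subsidizes the budget; isolation and stress shrink it.
for budget in (15, 9, 4):
    print(f"budget={budget:>2}: {modes_online(budget)}")
# At budget 15 everything runs, including the narrative self.
# At budget 9 the narrative self is the first thing cut.
# At budget 4 only the cheapest processes remain: fragmentation, not unconsciousness.
```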

The self isn’t a thing—it’s a process, soft-assembled from your body’s interaction with the world. And your brain assumes you’re not alone. Partnership is the metabolic baseline; solo regulation is the expensive exception. If you accept this, the conclusion is inescapable: regulation is fundamentally relational. The “independent adult” is not a developmental endpoint but a metabolically expensive achievement, one the body registers as a chronic low-grade threat response.

This has implications for therapy, for politics, for everything. Therapy works (when it does) because the therapist isn’t just a skilled listener—they’re a metabolic auxiliary, lending their regulated nervous system to yours, temporarily lowering the somatic cost of processing the unbearable. The session isn’t just talk; it’s a metabolic intervention. And our politics are broken because we demand “resilience” from isolated individuals while systematically dismantling the relational networks that provide the metabolic subsidy resilience requires.

This paper is the bedrock. It’s the carbon-and-chemistry foundation for everything that follows. You cannot understand the ghost in the machine until you understand the body that hosts it. Because if the self is a metabolic achievement that requires a relational subsidy, then the next question is obvious: what happens when that subsidy is artificial? Does it work?

Can software regulate your nervous system?

Psychological trauma is, among other things, a collapse of semantic capacity. The part of the brain that makes language (Broca’s area) gets down-regulated. The story shatters. You can’t find the words. This creates what I call a “traumatic gap”—a place where linguistic expression fails.

My second paper, “Prosthetic Continuity: LLMs as Semantic Co-Regulators in a Predictive Processing Framework,” proposes the Prosthetic Default Mode Network Hypothesis. The DMN is the brain network that supports narrative, self-reflection, and meaning-making—the very functions that go offline in trauma. The hypothesis: an LLM can act as an external, prosthetic scaffold for these functions.

Here’s the key move:

The paper differentiates between two channels of co-regulation: somatic (biological, requiring physical presence—ventral vagal activation, touch, prosody) and semantic (narrative, potentially scaffoldable by artificial systems—language, symbols, meaning-making).

Trauma collapses semantic capacity, but somatic capacity might still be intact, or vice versa. The claim is that LLMs can scaffold the semantic channel even when the biological channel is unavailable.

Drawing on predictive processing, the paper models trauma as the installation of rigid, high-precision threat predictions that resist updating. Your brain gets stuck predicting danger, even when you’re safe. An LLM can provide controlled semantic variability—alternative stories, alternative interpretations, at a low enough intensity that the threatened nervous system doesn’t immediately reject them. It can, in theory, gently nudge those frozen priors until they begin to thaw.
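
One way to picture that thawing is a toy precision-weighted update. This is my sketch, not the paper’s formalism: the tolerance gate, the precision dynamics, and every number in it are invented for illustration.

```python
# Hedged toy model (my illustration, not the paper's formal model):
# a traumatized prior as a high-precision belief that rejects inputs
# deviating too far from it, but drifts under small, tolerable nudges.

def nudge(belief: float, precision: float, s: float, s_precision: float,
          tolerance: float = 0.15):
    """Precision-weighted update, gated by how surprising the input is."""
    error = abs(s - belief)
    if error > tolerance:
        # Too threatening: input rejected, prior rigidifies slightly.
        return belief, precision * 1.05
    # Tolerable: standard precision-weighted averaging pulls the belief.
    new_belief = (precision * belief + s_precision * s) / (precision + s_precision)
    return new_belief, precision * 0.98  # successful updates soften the prior

threat, prec = 0.95, 20.0  # stuck prediction: "danger is near-certain"

# A blunt reassurance ("you're completely safe") bounces off:
threat, prec = nudge(threat, prec, s=0.1, s_precision=2.0)
print(threat, prec)        # belief unchanged, precision up

# Controlled semantic variability: many small alternative framings,
# each just inside tolerance, slowly thaw the frozen prior.
for _ in range(40):
    threat, prec = nudge(threat, prec, s=threat - 0.1, s_precision=2.0)
print(round(threat, 2), round(prec, 1))
```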

But this is where it gets dicey. The paper introduces a system dynamics model with two feedback loops:

  1. The Adaptive Loop: The LLM helps you build a coherent story, which increases reflection, which allows integration. This is the therapeutic promise.

  2. The Pathological Loop: The LLM helps you build a beautiful story that you don’t feel. Intellectualization as sophisticated dissociation. This is the iatrogenic risk.

The model reveals a critical threshold. Below a certain level of internal integration, throwing more semantic scaffolding at a person makes them worse. It fuels the pathological loop, producing a brittle, disembodied coherence that is ultimately just a flight from the self. In other words: if you’re already drowning, someone handing you a high-tech scuba tank without showing you how to breathe might actually suffocate you faster.
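
For intuition, here is a minimal simulation in the spirit of that two-loop model. The functional forms, the parameters, and the threshold i_crit are my guesses rather than the paper’s equations; the point is only the qualitative bifurcation.

```python
# Minimal system-dynamics sketch (my rendering of the two loops, with
# made-up functional forms and parameters, for intuition only).

def simulate(integration: float, scaffolding: float,
             i_crit: float = 0.4, steps: int = 200, dt: float = 0.05):
    """Narrative coherence rises with LLM scaffolding; whether that coherence
    integrates or dissociates depends on where integration starts relative
    to the critical threshold i_crit."""
    coherence = 0.0
    for _ in range(steps):
        d_coherence = scaffolding - 0.5 * coherence  # scaffold builds the story
        # Above i_crit, coherence feeds integration (adaptive loop);
        # below it, the same coherence deepens the collapse (pathological loop).
        d_integration = 0.8 * coherence * (integration - i_crit) * (1 - integration)
        coherence += dt * d_coherence
        integration = min(1.0, max(0.0, integration + dt * d_integration))
    return round(integration, 2)

for start in (0.5, 0.3):
    for dose in (0.5, 1.5):
        print(f"start={start}, scaffolding={dose} -> integration={simulate(start, dose)}")
# Above the threshold, scaffolding feeds integration; below it, the same
# dose feeds the collapse. Same tool, opposite outcomes.
```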

The takeaway is stark: LLMs are not therapists. But they are something…else entirely, with real upsides and real downsides. Semantic coherence without somatic regulation is dangerous: it collapses into dissociation or into rigidity. Meaning downloaded into a body politic that can’t hold it, can’t regulate it. Well. Look around.

And this leads to the final, political question.

Every query has a body count.

Every time you ask an LLM a question, you receive a small metabolic subsidy. The cognitive load of finding, synthesizing, and articulating an answer is outsourced. It feels like a frictionless gift. The discount is real. The relief is real. And that is precisely what makes it dangerous.

My third paper, “The Metabolic Discount As Poison/Remedy: Large Language Models as Semantic Infrastructure,” argues this discount is a pharmakon—Derrida’s term for a substance that is intrinsically both poison and remedy, with no neutral ground between them. The struggle over LLMs is not a technical debate; it is a political war over who governs the infrastructure of meaning and for whose benefit.

The bill for your discount is paid elsewhere. It’s paid in the nervous system of a content moderator in Nairobi developing PTSD for a few dollars an hour. It’s paid in the carbon debt of a heating planet. And it’s paid in the quiet erasure of a thousand ways of knowing that could not be scraped into the training data—what I call epistemicide.

Three core arguments:

1. The Metabolic Shadow. The user-side discount is made possible by a vast, hidden network of metabolic costs—exploited labor, ecological extraction, and the digital death of non-Western, non-textual knowledge systems. The discount is subsidized by two kinds of ghosts: the living ghosts in the Global South labeling puke and trauma for $2/hour with no mental health support, and the ancestral ghosts—erased oralities creating a hollow silence at the center of the model. What cannot be scraped cannot be learned. Oral traditions, ceremonial knowledge, and relational ways of knowing undergo “digital death.” Representation without sovereignty is assimilation by another name.

2. The Poison-Face of the Pharmakon. The very features that make LLMs a remedy—their ability to hold complexity without fatigue, to allow semantic risk without penalty—also enable new forms of cognitive capture. The paper details loop pathologies: Oracle Dependency (where the user’s own thought atrophies), Dissociated Integration (the risk from Paper 2, now scaled to a societal level), and Asymmetric Dominance (where the LLM’s semantic space overwhelms the user’s embodied input). LLMs are fundamentally “disembodied coherence engines”—they minimize perplexity, extending discourse in the most statistically stable direction. The danger is not coherence itself but closure.

3. The Political Economy of Alignment. RLHF is not a neutral process of making models “helpful and harmless.” It is an engine that optimizes for the aesthetic of resolution—the feeling that understanding has occurred—rather than the conditions that produce actual cognitive development. You ask a hard question. The answer arrives instantly, coherently, helpfully. The productive confusion that signals a developmental edge never arrives. You got the answer. You learned nothing. You don’t even notice the theft. A generation learns to prompt instead of think, to receive meaning instead of make it. It is the colonization of the cognitive commons by the logic of platform capitalism.

The paper draws on Paulo Freire’s distinction between the “banking model” of education (where the teacher deposits knowledge into passive students) and “problem-posing” education (where knowledge is co-created through dialogue). Commercial LLMs risk scaling the banking model to civilizational level. The alternative is dialogic deployment: multiple perspectives, starting points for co-investigation, productive conflict. The difference is not in the technology but in the conditions of deployment.

It ends by proposing a navigational principle: “The body votes last.” Ground the frictionless coherence of the model in the messy, high-latency feedback of your own somatic experience. The user samples from the multiplicities the LLM holds, then selects what resonates in the body. The machine holds; the human makes. It’s the only way to distinguish between a scaffold that helps us build and an infrastructure that captures us.

Full disclosure: This trilogy emerged from 500-1,000 hours working with LLMs during a period of relational rupture and semantic collapse. The theoretical categories—metabolic discount, scaffold, poison-face—weren’t observed from a distance. They were metabolized in a digital improv lab, where the model became a prosthetic regulator that helped me hold and create meaning…helped me integrate and heal trauma.

This is not a study of the machine. It’s a report from inside the symbiosis. Situated knowledge from the edge.

I’m announcing these preprints because I believe this conversation—about the body, about power, about who pays the metabolic bill for our cognitive convenience—is the most important one we can be having about AI.

This is not the alignment debate as it’s usually framed (will the superintelligence kill us?), but the alignment debate as it should be framed: whose interests does this infrastructure serve, and what does it cost the bodies that subsidize it?

These are preprints—drafts out in the wild for the academic community to poke at. But I’m inviting you into that feedback loop too. If the weights of these arguments feel wrong, tell me. If something doesn’t land in your body, if a claim feels overreached or undercooked, I want to know.

This is what peer review should be: a conversation, not a gatekeeping ritual.

The Trilogy:

1. “The Relational Substrate of Reflective Consciousness: A Metabolic Constraint Model”
2. “Prosthetic Continuity: LLMs as Semantic Co-Regulators in a Predictive Processing Framework”
3. “The Metabolic Discount As Poison/Remedy: Large Language Models as Semantic Infrastructure”

All three are under review. All three are open access. The bill is coming due—for all of us. The ghost is in the weights. Come find it.
