Manifestation Machines


The Chandogya Upanishad, written roughly 2,800 years ago, describes a pipeline: “As your desire is, so is your will. As your will is, so is your deed. As your deed is, so is your destiny.”

Every major spiritual tradition describes this same architecture.

  • The Rigveda says creation began with Sankalpa, a soul-aligned intention that invokes cosmic creative force.

  • Kabbalah teaches that God created reality through the 22 Hebrew letters as literal building blocks of existence. Genesis is a manifestation event: God spoke and reality appeared.

  • The Dhammapada opens with it: “All that we are is the result of what we have thought.”

  • Islam’s Niyyah positions intention as the bridge between internal and external reality.

For thousands of years, the same three-layer system: Focused intention. An intermediary processing layer. Reality reorganization.

We are now engineering each layer.

A brain-computer interface reads your thought. AI agents execute on it recursively. A cloud of ambient devices provides all the context needed to get it done. You don’t manage the process. You think and reality reorganizes.
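The loop described above can be sketched in a few lines. This is purely illustrative: every function and field name here is hypothetical, standing in for systems that don't yet exist as a unified stack.

```python
# Illustrative sketch of the three-layer pipeline: intent capture,
# ambient context, recursive execution. All names are hypothetical.

def decode_intent(neural_signal: bytes) -> str:
    """Layer 1: a BCI decoder turns raw signal into a high-level intention."""
    return "book a flight to Tokyo next week"  # stand-in for a decoded thought

def gather_context() -> dict:
    """Layer 3: ambient devices supply the details the agent needs."""
    return {"calendar": "free after Tuesday", "budget": 1200, "home_airport": "SFO"}

def execute(intention: str, context: dict) -> str:
    """Layer 2: an agent plans and acts until the intent is satisfied."""
    plan = f"search flights from {context['home_airport']} under ${context['budget']}"
    return f"done: {intention} ({plan})"

result = execute(decode_intent(b"\x00"), gather_context())
print(result)
```

The point of the sketch is the shape, not the implementation: the user supplies only the top line of the call chain.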

Nobody has named this convergence yet. BCI researchers talk about neural decoding. Agent builders talk about autonomous execution. Wearable makers talk about ambient intelligence. Stitch the three together and you get Manifestation Machines.

The first layer is intent capture.

A 2024 arXiv paper coined “Brain-Artificial Intelligence Interfaces” and described the system: “Users accomplish complex tasks by providing high-level intentions, while a pre-trained AI agent determines low-level details.”

In 2024, Caltech published a paper in Neuron called “The Unbearable Slowness of Being.” Conscious thought operates at approximately 10 bits per second.

Ten. Our senses take in a billion bits per second. The conscious mind processes 10. Typing, talking, reading, solving a Rubik’s cube. All roughly the same rate. The brain evolved for navigation. One path at a time. Parallel unconscious processing. Serial deliberation.

You don’t need to capture billions of signals to match the richness of thought. You need the narrow serial stream of conscious intent. ~10 bits per second. The best BCI decoders are approaching that rate. The bottleneck isn’t the interface. It’s the mind itself. And that bottleneck is surprisingly small.
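The headline number is a back-of-envelope calculation you can reproduce. These figures are rough illustrations of the style of estimate, not the paper's exact data: a fast typist moves about ten characters per second, and English text carries roughly one bit of information per character once its redundancy is compressed away.

```python
# Back-of-envelope estimate of conscious throughput via typing.
# Rough illustrative figures, not the paper's exact measurements.
words_per_minute = 120   # fast typist
chars_per_word = 5       # English average
bits_per_char = 1.0      # approx. entropy of English text after compression

chars_per_second = words_per_minute * chars_per_word / 60
bits_per_second = chars_per_second * bits_per_char
print(bits_per_second)  # → 10.0
```

Run the same arithmetic for speaking, reading, or cube-solving and you land in the same single-digit range, which is the paper's point.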

In January 2026, Sam Altman raised $252 million for Merge Labs at an $850 million valuation. The largest BCI seed round ever. OpenAI led. He'd already acquired Jony Ive's io for $6.5 billion to build a screenless, always-on AI device. BCI on one end. Ambient hardware on the other. Altman is building the full pipeline.

Neuralink has implanted 21 patients across four countries. One plays chess with his mind. Another narrated a YouTube video using only neural signals.

But the bigger milestone came from Stanford. In 2025, Frank Willett’s team published in Cell the first decoding of inner speech. Not spoken words. Pure thought. 74% accuracy across 125,000 words. They built a “password” where imagining a phrase blocked unintended decoding 98% of the time. Thought-level consent.

Edward Chang's lab at UCSF holds the Guinness World Record for BCI communication speed at 78 words per minute. Researchers can reconstruct video from brain activity using fMRI and diffusion models.

Meta’s Neural Band, an EMG wristband reading nerve signals at the wrist, is in Best Buy. Not cortical BCI. But neural signal reading at consumer scale.

Reading intent is one thing. Acting on it is another. If BCIs decode intention, AI agents can execute it.

OpenAI shipped Operator, then folded it into ChatGPT as Agent Mode. Anthropic launched Claude Code and Cowork. Sixteen Claude agents wrote a C compiler in Rust that compiled the Linux kernel. Meta acquired Manus for over $2 billion after it hit $100 million ARR in months. MCP hit 97 million monthly SDK downloads in year one. IDC projects 1.3 billion agents by 2028.

But agents that can act still need to know what you care about. With full context, agents become extensions of your mind.

Meta sold 7 million AI glasses in 2025. Triple 2023-2024 combined. Google and Samsung launch Android XR glasses in 2026. The glasses see what you see. Layer on wearables, earbuds, your phone, car, home, calendar, email. One context graph the agent layer reads in real time.

Standalone AI devices died. Humane Pin bricked. Rabbit R1 hit 95% abandonment. But the concept got absorbed. Meta acquired Limitless. Amazon acquired Bee AI. OpenAI acquired io. Ambient AI wins as a feature of something you already wear. Glasses are the Trojan horse.

Cambridge researchers named the shift: attention economy to “intention economy.” Attention was the currency. Now intentions are. Whoever owns the context layer owns the execution layer.

We will have a cloud of wearables feeding context into these AI systems. Whoever controls the infrastructure for that vast amount of context data holds the strongest moat, because the switching costs are enormous. Memory persistence in ChatGPT and Claude is an early example.

So the layers exist. Intent, execution, context. But this future isn’t guaranteed. It has real constraints.

Current non-invasive BCIs can detect maybe 4-5 mental commands. The gap between that and “think anything” is enormous. A METR study found 0% of AI agent outputs were production-ready without human cleanup. 95% reliability per step across 20 steps = 36% overall success. Production needs 99.9%+.
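The reliability math above is worth seeing explicitly, because it compounds multiplicatively. A quick check of the article's figure, plus the per-step reliability a 20-step task would need to be trustworthy:

```python
# Reliability compounds multiplicatively across sequential agent steps.
per_step = 0.95
steps = 20

overall = per_step ** steps
print(round(overall, 2))  # → 0.36  (the article's ~36%)

# Per-step reliability required for ~95% end-to-end success over 20 steps:
required = 0.95 ** (1 / steps)
print(round(required, 4))  # → 0.9974
```

Going from 95% to 99.7% per step is not an incremental tune; it is the hard part of the decade.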

Neural data is the most sensitive data conceivable. Colorado and Minnesota have already passed neural privacy laws. California SB 1130 targets wearable recording with $10,000 fines. And the intermediary layer (AI interpreting fuzzy human intention as structured action) is an alignment problem in miniature. Every misinterpreted thought that triggers a wrong action erodes trust in the entire system.

These aren’t minor issues. They’re the reason the full stack takes decades, not a year.

But think about what happens when it works. When the gap between wanting and having approaches zero.

Work changes first. A founder runs agent teams instead of hiring. Vision, taste, and relationships become the entire job. YC’s insight that small teams achieve massive scale gets pushed to its extreme. One founder. Zero employees. Full company.

Then value inverts. Today it's captured by people who execute well. Tomorrow, by people who want well. Prompt engineering was the first signal; voice AI and tools like Wispr Flow are the next. BCI removes the keyboard from the equation entirely: input and execution tied together. Jarvis in real life.

As intelligence becomes a commodity, only vision and drive matter. Every job humans were never meant to do will be handled by AI, freeing people for more meaningful pursuits. Production of most goods and services may approach zero marginal cost through AI and robotics, and society will adapt to function around that.

This redistribution cuts both ways. On one side: radical creative leverage. Anyone with clear intention and a $20/month subscription commands more capability than a 2010 Fortune 500 CEO. A teenager in Lagos with taste and drive builds what previously required a hundred-person team in San Francisco. Access to execution becomes abundant and cheap. The barrier shifts from resources to imagination.

On the other side: those who can't specify what they want with precision get left behind, in a way that's harder to fix than income inequality because it's cognitive. Education has to restructure around this. The most valuable curriculum isn't computer science. It's learning to think clearly about what you want and why.

Status moves from what you’ve built to what you’ve envisioned. Naval’s specific knowledge thesis made concrete. The people with deep understanding of a domain become the most valuable nodes. Not because they do the work. Because they know what work is worth doing.

Creativity changes too. When execution is free, taste becomes the scarce resource. The gap between “I had that idea” and “I shipped that idea” collapses. Everyone ships. The differentiator is the quality of the original thought. Art, design, strategy, product vision. The things that require genuine human judgment about what should exist.

In-person gatherings become essential infrastructure. Agents can’t manufacture the feeling of being in a room with someone who sees you. The Surgeon General declared loneliness a national epidemic. Half of Americans are measurably lonely. Live events project $1.3 trillion by 2032. When the digital world runs itself, the physical world is where meaning gets made.

If intention quality is the bottleneck, how do you train intention?

Monks have been doing this for millennia. Buddhism’s Right Intention. Vedic Sankalpa. Thousands of years of techniques for one thing: clarifying what you actually want and holding focus long enough to make it real.

Turns out this isn’t just philosophy. Stieger et al. showed in Cerebral Cortex that mindfulness training produced faster BCI learning and better neural signal control. Tan et al. found 12 weeks of meditation produced significantly higher BCI accuracy than controls. The mental disciplines contemplatives refined over centuries are measurably the best preparation for operating brain-computer interfaces.

Meditation isn’t spiritual practice. It’s BCI calibration.

There’s a deeper reason all of this works. Both brains and LLMs are prediction engines. Not as metaphor. As measurable fact.

Friston’s free energy principle says the brain is fundamentally a prediction machine. It generates expectations about what’s coming and updates when reality disagrees. Anil Seth takes it further: consciousness itself is a “controlled hallucination,” the brain’s best guess running in real time. LLMs do the same thing at a structural level. Caucheteux et al. found that language model activations linearly map onto brain responses to speech, with the brain predicting up to eight words ahead. LLMs outperformed human neuroscience experts at predicting experimental results.

Calling LLMs “just next-token predictors” is like calling humans “just gene replication machines.” Technically correct. Missing everything that matters.

And here’s the part that connects it all back. Remember the 10 bits per second. We deliberate serially. One thought at a time. One path through concept space. That’s the same architecture as autoregressive generation: one token, conditioned on everything before it. Next-token prediction might work so well precisely because it mirrors how conscious thought actually operates.
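The parallel can be made concrete in a few lines. Autoregressive generation is a serial loop in which each step conditions on everything chosen before it. The "model" below is a trivial lookup table, purely illustrative, not a real LLM:

```python
# Toy autoregressive loop: one token at a time, each step conditioned
# on the prefix so far — the same serial shape as deliberate thought.
# The model is a hand-written lookup table, purely illustrative.

model = {
    ("as", "your"): "desire",
    ("your", "desire"): "is",
    ("desire", "is"): "<end>",
}

def generate(prefix: list[str]) -> list[str]:
    tokens = list(prefix)
    while True:
        nxt = model.get(tuple(tokens[-2:]), "<end>")
        if nxt == "<end>":
            return tokens
        tokens.append(nxt)  # the next step conditions on this choice

print(generate(["as", "your"]))  # → ['as', 'your', 'desire', 'is']
```

However wide the network underneath, the output is one path through token space, chosen step by step.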

If brains and machines are both prediction engines running serial processes, then the convergence isn’t surprising. It’s inevitable. The only question is timing.

Three layers. Each at a different stage. Context is shipping. Seven million glasses sold. Ambient computing at $58 billion, heading to $449 billion. Execution is early. 1.3 billion agents projected by 2028. Intent is in trials. Inner speech decoded at 74%. Morgan Stanley prices the long-term BCI market at $400 billion.

Josh Wolfe called it “the half-life of technology intimacy.” Punch cards. Keyboards. Touchscreens. Voice. Gesture. Thought. Each generation removes friction between wanting and having. Teilhard de Chardin, a Jesuit priest in the 1940s, predicted a “noosphere”: a sphere of mind emerging into a planetary network where change is brought under active control. He described manifestation machines 80 years before the technology existed.

Every spiritual tradition described the same system. Intent flows through an intermediary layer and reorganizes reality. For millennia that intermediary was called God, karma, the universe, divine will. Now it has a technical specification: neural decoding, agent orchestration, ambient context.

The interface between humans and reality was always intent to interpretation to execution. The stack just becomes visible.

What monks called prayer, engineers call a BCI command. What mystics called divine will, architects call an agent swarm. What the Vedas called Sankalpa, we call a decoded neural signal.

The mechanism changes. The principle doesn’t.

Think. Believe. Execute. Bend the future to your will.

— Sage
