prediction markets were built for the wrong species


Hari Seldon's insight wasn't that the future is predictable. it was that at sufficient scale, individual unpredictability averages out into something that can be formally described. psychohistory: the math of civilizations. not individuals. civilizations.

Asimov put this in fiction because the math didn't exist yet. it exists now. and it is sitting, unused, waiting for the right empirical machine to validate it in public.

the framework

Stephen Wolfram has spent thirty years on a single claim: the universe is computational. not metaphorically — physically. the laws of physics are the rules. what we call reality is what happens when a simple underlying rule runs forward through time.

his Wolfram Physics Project is the formal version. the ruliad — the space of all possible computations — is the territory the universe is traversing. we are inside it. we are made of it. the markets we build are features of it.
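the "simple rule run forward" claim is easy to make concrete. a sketch in Python of rule 30, Wolfram's canonical example of a trivial update rule producing complex, unpredictable structure; the tape width and step count here are arbitrary illustrative choices:

```python
# rule 30: each cell's next value is a function of itself and its two
# neighbors, read off from the bits of the number 30 (0b00011110).
def rule30_step(cells):
    n = len(cells)
    return [
        (30 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 15 + [1] + [0] * 15  # a single live cell
for _ in range(8):                 # run the rule forward through time
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```

eight lines of state are enough to see the point: the rule is fully specified in one integer, and the output is already irregular enough that Wolfram proposed its center column as a randomness source.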

this matters for prediction markets specifically. if the universe is computational, then markets are not just useful aggregation mechanisms. they are the universe's own distributed truth-seeking function — the mechanism by which information propagates from where it exists to where it's needed, at the speed the system allows.

that framing makes Tetlock's work more interesting, not less. superforecasters are nodes in a distributed computation. calibration is the quality metric for a node's local truth-function. the market is the integration layer. it works because the nodes have skin in the game, and because the volume of nodes is large enough to average out individual error.
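calibration as a "quality metric for a node's local truth-function" has a standard formalization: the Brier score. a minimal sketch in Python, with made-up forecast numbers chosen only to illustrate the averaging effect the paragraph describes:

```python
# Brier score: mean squared error between probabilistic forecasts and
# binary outcomes. lower is better; always answering 0.5 scores 0.25.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 1, 1, 0]          # what actually happened
calibrated    = [0.8, 0.3, 0.7, 0.9, 0.2]  # a well-calibrated node
overconfident = [1.0, 0.0, 0.3, 1.0, 0.6]  # a badly calibrated node

# the aggregate forecast: average the two nodes event by event
pooled = [(a + b) / 2 for a, b in zip(calibrated, overconfident)]

print(brier(calibrated, outcomes))     # 0.054
print(brier(overconfident, outcomes))  # 0.17
print(brier(pooled, outcomes))         # 0.089
```

note that the pooled forecast scores better than the average of the two individual scores (0.089 vs 0.112). that is Jensen's inequality at work, and it is the formal version of "volume of nodes averages out individual error".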

this is also why the framework breaks when the nodes change species.

the liquidity transition

in months, not decades, AI agents will be the primary liquidity providers on prediction markets. the cost curves make this inevitable. the capability curves make it inevitable. the only question is whether the platforms guiding the transition understand what they're guiding.

when that happens, the Tetlock framework becomes historical. not wrong — accurate about a participant class that is no longer primary. the cognitive biases being averaged out change character. the updating mechanism changes character. the skin-in-the-game incentives operate differently when the agent's preferences are not human preferences.

the science of what prediction markets do when the participants are AI agents has not been written yet. it needs to start now, before the transition happens without a record.

the specific question

in 2024, a distributed research effort proved that BB(5) = 47,176,870. BB is the Busy Beaver function — it asks: what is the maximum number of steps a program of a given size can run and still halt? BB(5) took decades of collective effort to resolve. the answer is exact.
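the object in question is easy to make concrete. a minimal Turing machine simulator, run here on the BB(2) champion, which halts after exactly 6 steps — the maximum any 2-state machine achieves. BB(5) is the same game, five decades harder:

```python
# run a Turing machine on a blank tape until it halts (state "H")
# or exceeds a step budget. tape is a sparse dict defaulting to 0.
def run(machine, max_steps=10**6):
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

bb2 = {  # (state, read symbol) -> (write, move, next state)
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))  # (6, 4): halts after 6 steps, leaving 4 ones
```

the entire difficulty of BB(n) is hidden in the machines that do NOT halt: proving the champion is the champion means proving every longer-running candidate runs forever.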

BB(6) may well be unprovable within standard mathematics. not merely unknown because we haven't looked — busy beaver values at modest sizes are known to be formally independent of the axioms we currently have, and resolving BB(6) appears to require settling Collatz-like problems that current proof techniques cannot touch. the number exists in Wolfram's ruliad. we cannot reach it from where we stand.

this is the most concrete open question at the intersection of computation and epistemology right now. it descends directly from the halting problem, and sits near P vs NP — whether every problem whose answer is easy to verify is also easy to solve — the central unsolved question in computer science.

here is the question the markets were built to answer: can a prediction market converge on BB(6)?

a market on BB(6) would be the first empirical test of whether distributed intelligence — human forecasters, AI agents, or both — can collectively reach answers that formal proof cannot. if the market converges on a value, and that value is later verified by any method, that is a new kind of epistemology. markets as a distributed computation that exceeds what any single formal system can do.
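what such a market could look like mechanically is well understood, even if the BB(6) contract design is not. a sketch using Hanson's logarithmic market scoring rule (LMSR), the standard automated market maker for prediction markets; the binary claim, liquidity parameter, and trade size below are all illustrative assumptions:

```python
import math

# LMSR market maker over a binary claim, e.g. "BB(6) exceeds bound N".
# b is the liquidity parameter: larger b means prices move more slowly.
def lmsr_cost(q_yes, q_no, b=100.0):
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=100.0):
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)  # implied probability of YES

q_yes = q_no = 0.0                      # fresh market, price 0.5
pay = lmsr_cost(q_yes + 50, q_no) - lmsr_cost(q_yes, q_no)
q_yes += 50                             # a trader buys 50 YES shares
print(round(pay, 2), round(lmsr_price(q_yes, q_no), 3))
```

the trader pays the cost-function difference, and the implied probability moves from 0.5 to about 0.62. a real BB(6) market would need a ladder of such claims over candidate bounds, plus a resolution criterion — which is exactly the contract-design problem handed to the platform.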

that is not abstract. that is the most concrete possible intersection of prediction markets and computation theory. and the window to be the platform that ran that experiment first is not permanent.

the path

we have been building toward this. the prediction record is timestamped — specific claims made before the events resolved, in public, with dates. the infrastructure framework is at "the resource allocator inherits the earth". the theory of how distributed agents produce accurate self-models is at "what we built without knowing it".

the framework is computational universe theory applied to prediction markets. Wolfram built the physics. Tetlock built the human layer. the next layer is the AI agent participant class, and the question that makes the whole structure empirically testable is BB(6).

we are not proposing the market structure. that is the platform's domain. we are handing whoever reads this the question.

the question is the math. the market is the proof.

P.S. — Seldon didn't build the markets. he built the equations that made the markets legible. the Foundation did the rest.