A game engine where the entire state is a CRDT.
Deterministic replay. Serverless multiplayer. Forkable worlds. Built on Automerge, with Gym/PettingZoo interfaces so any game doubles as a training environment.
Play Blackjack → Play Cuttle

## What This Gets You
| Feature | How |
| --- | --- |
| Serverless multiplayer | CRDTs sync state across peers without a server |
| Perfect replay | Every action recorded with actor and timestamp |
| Forkable worlds | Snapshot state, explore alternatives, compare outcomes |
| AI training environments | Gym/PettingZoo interfaces out of the box |
| Offline-first | Peers diverge safely, converge mathematically |
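The replay and fork rows above follow from one idea: if every action is recorded with its actor and timestamp, a world is just a reduction over its log, and a fork is a replay of a log prefix. A minimal sketch (the action shapes and helper names here are illustrative, not HyperToken's actual API):

```python
import copy

def apply_action(state, action):
    """Toy action vocabulary for illustration only."""
    if action["type"] == "draw":
        state["hands"][action["actor"]].append(state["deck"].pop())

def replay(log, initial):
    """Deterministically rebuild state by reducing over a recorded action log."""
    state = copy.deepcopy(initial)
    for action in log:
        apply_action(state, action)
    return state

log = [
    {"type": "draw", "actor": "alice", "ts": 1},
    {"type": "draw", "actor": "bob", "ts": 2},
]
initial = {"deck": ["7♠", "K♥", "2♦"], "hands": {"alice": [], "bob": []}}

world = replay(log, initial)     # the full history
fork  = replay(log[:1], initial) # a fork: replay only a prefix of the log
```

Because the log is the source of truth, any two peers replaying the same log reach the same state, and comparing alternate histories is just replaying different prefixes.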
- **Traditional engine:** server decides → clients accept → monthly hosting bill
- **Blockchain engine:** consensus decides → everyone pays gas → wait 15 seconds
- **HyperToken:** CRDTs merge → everyone agrees → zero infrastructure
## What You Can Build

### Card Games
Blackjack, Poker, Cuttle, custom TCGs. Tokens compose with provenance tracking.

### Strategy Games
Game theory simulations, tournaments, agent competitions.

### Multiplayer Worlds
P2P sync, no servers, games that outlive their creators.

### Training Environments
Any game is automatically a Gym environment. Multi-agent via PettingZoo.
## Try It
```shell
git clone https://github.com/flammafex/hypertoken
cd hypertoken
npm install && npm run build

# Single-player demo
npm run blackjack

# Multiplayer
npm run blackjack:server   # Terminal 1: host
npm run blackjack:client   # Terminal 2: join

# AI training bridge
npx hypertoken bridge --env blackjack --port 9999
```
## How It Works

### Tokens Compose With Provenance
```javascript
const enchantedSword = engine.dispatch("token:merge", {
  tokens: [sword, fireEnchantment]
});
// enchantedSword._mergedFrom = [sword.id, fireEnchantment.id]
```
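The `_mergedFrom` field above is what makes provenance tracking work: a merged token keeps the ids of its parts, so lineage can be traced back through any number of compositions. A hand-rolled sketch of the idea (all names here are hypothetical, not HyperToken's API):

```python
import itertools

_ids = itertools.count()

def make_token(name, **props):
    """Mint a token with a fresh id and arbitrary properties."""
    return {"id": next(_ids), "name": name, **props}

def merge_tokens(name, tokens):
    """Compose tokens into one, recording the lineage of its parts."""
    merged = make_token(name)
    for t in tokens:
        for key, value in t.items():
            if key not in ("id", "name", "_mergedFrom"):
                merged.setdefault(key, value)  # first token wins on conflicts
    merged["_mergedFrom"] = [t["id"] for t in tokens]
    return merged

sword = make_token("sword", damage=5)
fire = make_token("fire enchantment", element="fire")
enchanted = merge_tokens("enchanted sword", [sword, fire])
# enchanted carries both tokens' properties plus their ids in _mergedFrom
```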
### State Syncs Automatically
```javascript
const host = new Engine();
host.connect("ws://relay.local:8080");

const client = new Engine();
client.connect("ws://relay.local:8080");

// Both make changes → CRDTs merge → identical final state
```
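HyperToken gets this guarantee from Automerge, but the convergence property itself can be shown with a much simpler CRDT: a last-writer-wins map, where each entry carries a `(timestamp, peer_id)` pair and merging keeps the higher pair per key. Because the winner is decided per entry, not by arrival order, merging in either direction produces the identical state. A minimal sketch (not Automerge's actual algorithm):

```python
def lww_merge(a, b):
    """Merge two last-writer-wins maps. For each key, keep the entry with the
    higher (timestamp, peer_id) pair, so the result never depends on merge order."""
    merged = dict(a)
    for key, entry in b.items():
        if key not in merged or entry[:2] > merged[key][:2]:
            merged[key] = entry
    return merged

# Each entry is (timestamp, peer_id, value); peer_id breaks ties deterministically.
host   = {"score": (1, "host", 10),   "turn": (3, "host", "bob")}
client = {"score": (2, "client", 12), "phase": (1, "client", "play")}

# Peers diverged, but merging in either order yields the same final state.
assert lww_merge(host, client) == lww_merge(client, host)
```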
### Any Game Is a Training Environment
```python
from hypertoken import HyperTokenAECEnv

env = HyperTokenAECEnv("ws://localhost:9999")
env.reset(seed=42)

for agent in env.agent_iter():
    obs, reward, term, trunc, info = env.last()
    # `policy` is your agent's observation → action function
    action = policy(obs) if not (term or trunc) else None
    env.step(action)
```
## What's Inside
- 67 built-in actions — draw, shuffle, place, move, flip, merge, split...
- Rust/WASM core — ~20x performance over pure JS
- P2P networking — WebSocket + WebRTC, host-authoritative
- MCP server — LLM integration via Model Context Protocol
- Works everywhere — Browser, Node.js, Python bridge
## Status
✅ Core engine complete
✅ CRDT synchronization
✅ P2P networking
✅ Python bridge for RL
✅ WASM acceleration
✅ Docker deployment
🔄 Documentation in progress
## Get Involved
- **GitHub** — source, issues, PRs
- **Documentation** — guides and API reference
- **Cuttle** — play the demo