Structured Concurrency
Every task lives inside a Region. When a Region closes, all tasks within it are cancelled, awaited, and cleaned up — automatically. No orphaned futures, no leaked goroutines.
Structured concurrency with first-class cancellation for Rust. Three-phase cancel protocol, linear obligations, capability security, and deterministic testing — all enforced by the runtime.
Asupersync replaces manual "developer discipline" with strict runtime guarantees. It forces you to write, reason about, and test async code differently, embedding safety directly into the runtime itself. The result is "cancel-correct" code: shutdown is never a silent drop but an enforced, graceful three-phase protocol in which tasks are asked to stop, given time to clean up, and finally swept away safely.
In standard async Rust, dropping a future instantly stops its execution. The resulting "silent drop" leaves database connections open and files half-written.
Asupersync requires a mandatory three-phase shutdown: Request, Drain, Finalize. Tasks aren't killed instantly; each receives a Budget, a bounded amount of cleanup time (five seconds, say) in which to gracefully drain: finish in-flight work, flush buffers, close connections, and release resources. When draining completes or the Budget expires, the runtime enforces the Finalize phase, running registered finalizers in LIFO order, similar to defer in Go but tightly integrated with the cancel protocol.
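The three phases can be sketched as a tiny state machine. This is an illustrative model only; `CancelPhase`, `Task`, and the field names here are made up for the sketch, not Asupersync's API:

```rust
// Toy model of the three-phase cancel protocol: Request -> Drain -> Finalize.
// Names are illustrative stand-ins, not Asupersync's real types.

#[derive(Debug, PartialEq)]
enum CancelPhase {
    Running,
    Requested,                   // cancellation asked for; the task sees the signal
    Draining { budget_ms: u64 }, // bounded time to flush buffers, close connections
    Finalized,                   // finalizers ran in LIFO order; the task is gone
}

struct Task {
    phase: CancelPhase,
    finalizers: Vec<&'static str>, // registered cleanup steps
    cleanup_log: Vec<&'static str>,
}

impl Task {
    fn new() -> Self {
        Task { phase: CancelPhase::Running, finalizers: vec![], cleanup_log: vec![] }
    }
    fn request_cancel(&mut self) {
        self.phase = CancelPhase::Requested;
    }
    fn drain(&mut self, budget_ms: u64) {
        // In-flight work may complete here, within the budget.
        self.phase = CancelPhase::Draining { budget_ms };
    }
    fn finalize(&mut self) {
        // Finalizers run last-registered-first, like `defer` in Go.
        while let Some(f) = self.finalizers.pop() {
            self.cleanup_log.push(f);
        }
        self.phase = CancelPhase::Finalized;
    }
}

fn main() {
    let mut t = Task::new();
    t.finalizers.push("close socket");
    t.finalizers.push("flush buffer");

    t.request_cancel();
    t.drain(5_000);
    t.finalize();

    // LIFO: the buffer is flushed before the socket closes.
    assert_eq!(t.cleanup_log, vec!["flush buffer", "close socket"]);
    assert_eq!(t.phase, CancelPhase::Finalized);
}
```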
Every time you spawn a task or borrow a resource, you get a Permit or a Lease. A Permit is a must-use lifecycle token for the task you created: you can't just throw it away. A Lease is a scoped borrow of a shared resource, such as a database connection; unlike standard Rust, where drops are passive, a Lease whose owning Region is cancelled is actively and safely returned to its pool via the cancel protocol. The Rust compiler strictly enforces their usage.
You must explicitly await, detach, or cancel these tokens. Dropping one by mistake triggers a compilation error, preventing resource leaks entirely.
Asupersync enforces structural integrity. Every task is born into a Region, a lifetime-bounded scope that owns its tasks, and regions form a strict parent-child hierarchy, nesting to form a tree.
When a parent Region finishes or gets cancelled, the cancellation cleanly cascades down the entire tree. This cascade guarantees no "zombie" tasks keep running after their parent terminates.
Race conditions are notoriously hard to test because standard runtimes behave chaotically. Asupersync provides a deterministic testing environment to solve this.
By providing a Seed, you fix the Lab Runtime's scheduling decisions: the exact same seed always produces the exact same task interleaving. On top of this, the runtime systematically explores different interleavings using DPOR (Dynamic Partial Order Reduction), which skips interleavings whose operations never actually interact and so tests only the schedules that could matter, until it finds the exact combination that triggers a bug. You can then reproduce that bug reliably, every single time.
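A minimal sketch of seed-controlled determinism, using a toy LCG-driven picker in place of the real Lab Runtime scheduler (`run_schedule` and the task ids are invented for the example):

```rust
// Seed-controlled scheduling in miniature: a tiny LCG PRNG picks which ready
// "task" runs next, so the same seed always yields the same interleaving.

struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        // Constants from Knuth's MMIX LCG.
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        self.0
    }
}

fn run_schedule(seed: u64) -> Vec<usize> {
    let mut rng = Lcg(seed);
    let mut ready: Vec<usize> = vec![0, 1, 2]; // three runnable task ids
    let mut trace = Vec::new();
    while !ready.is_empty() {
        let i = (rng.next() % ready.len() as u64) as usize;
        trace.push(ready.remove(i)); // "run" the chosen task to completion
    }
    trace
}

fn main() {
    // Replaying the same seed reproduces the exact interleaving.
    assert_eq!(run_schedule(42), run_schedule(42));
    // Different seeds are free to explore different interleavings.
    println!("seed 42 -> {:?}", run_schedule(42));
    println!("seed 7  -> {:?}", run_schedule(7));
}
```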
In most runtimes, any task can perform any action at any time. A background worker can unexpectedly open a network connection or read the file system. Asupersync restricts this by default.
Every task receives a Cx token, a capability context that explicitly grants its permissions. Spawning, I/O, timers, and other effects are all gated by capabilities on the Cx, enforcing the principle of least authority. If a buggy or hijacked task attempts an operation without the right key on its ring, the runtime instantly blocks it. You maintain absolute control over what each component of your system can actually do.
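The gating idea fits in a few lines; `Cx`, `Cap`, and `open_socket` here are illustrative stand-ins, not Asupersync's real types:

```rust
// Capability gating in miniature: operations check the Cx token before
// running. A background worker holding only a Timer capability cannot
// unexpectedly open a network connection.

#[derive(PartialEq)]
enum Cap { Spawn, Io, Timer }

struct Cx { caps: Vec<Cap> }

impl Cx {
    fn check(&self, cap: Cap) -> Result<(), &'static str> {
        if self.caps.contains(&cap) { Ok(()) } else { Err("capability denied") }
    }
}

fn open_socket(cx: &Cx) -> Result<&'static str, &'static str> {
    cx.check(Cap::Io)?; // gated: no Io capability, no network
    Ok("socket open")
}

fn main() {
    let worker = Cx { caps: vec![Cap::Timer] };          // timers only
    let server = Cx { caps: vec![Cap::Spawn, Cap::Io] }; // may spawn and do I/O

    assert_eq!(open_socket(&worker), Err("capability denied"));
    assert_eq!(open_socket(&server), Ok("socket open"));
}
```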
Traditional async code often crashes midway through an operation, leaving the system in an inconsistent state. For instance, money might be deducted from one account but never added to the other.
Asupersync introduces a Reserve/Commit pattern. Side-effects are first staged as reversible reservations. If cancellation strikes during this window, the runtime seamlessly rolls back the hold. Only when it is entirely safe do the effects permanently commit.
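A toy account transfer shows the shape of the pattern; the `Account` type and its `reserve`/`commit`/`rollback` methods are invented for this sketch, not Asupersync's Saga API:

```rust
// Reserve/commit in miniature: the side effect is staged as a reversible
// hold, and a cancellation before commit rolls the hold back untouched.

struct Account { balance: i64, held: i64 }

impl Account {
    fn reserve(&mut self, amount: i64) -> Result<(), &'static str> {
        if self.balance - self.held < amount { return Err("insufficient funds"); }
        self.held += amount; // stage the effect; nothing visible has changed yet
        Ok(())
    }
    fn commit(&mut self, amount: i64) {
        self.held -= amount;
        self.balance -= amount; // the effect becomes permanent
    }
    fn rollback(&mut self, amount: i64) {
        self.held -= amount; // release the hold; the balance is untouched
    }
}

fn main() {
    let mut a = Account { balance: 100, held: 0 };

    // Happy path: reserve, then commit.
    a.reserve(30).unwrap();
    a.commit(30);
    assert_eq!(a.balance, 70);

    // Cancelled mid-flight: reserve, then roll back.
    a.reserve(50).unwrap();
    a.rollback(50);
    assert_eq!(a.balance, 70); // no partial effect leaked
    assert_eq!(a.held, 0);
}
```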
Deadlocks usually freeze systems silently. Asupersync treats deadlock detection as an active telemetry problem, maintaining a real-time Wait-Graph of all tasks: a directed graph whose nodes are tasks and whose edges mean "task A is waiting on task B".
The scheduler calculates the Fiedler value, the second-smallest eigenvalue of the wait-graph's Laplacian matrix: a traffic-congestion score for your concurrent system. While tasks flow freely the value stays large; when it drops toward zero, the graph is about to split into disconnected components, signaling an impending deadlock. The runtime can then intervene proactively before a catastrophic freeze occurs.
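The spectral computation itself is ordinary linear algebra. A minimal sketch, computing the Fiedler value of two toy wait-graphs with plain power iteration (after deflating the Laplacian's trivial all-ones eigenvector); this is the math only, not Asupersync's scheduler:

```rust
// Fiedler value = second-smallest eigenvalue of the graph Laplacian.
// Strategy: find the largest eigenvalue lmax by power iteration, then power-
// iterate on (lmax*I - L) orthogonal to the all-ones vector; the top
// eigenvalue there is lmax - lambda2.

fn matvec(m: &[Vec<f64>], v: &[f64]) -> Vec<f64> {
    m.iter().map(|row| row.iter().zip(v).map(|(a, b)| a * b).sum()).collect()
}

fn normalize(v: &mut [f64]) {
    let n = v.iter().map(|x| x * x).sum::<f64>().sqrt();
    for x in v.iter_mut() { *x /= n; }
}

fn top_eigenvalue(m: &[Vec<f64>], deflate_ones: bool) -> f64 {
    let n = m.len();
    let mut v: Vec<f64> = (0..n).map(|i| (i as f64 + 1.0).sin()).collect();
    for _ in 0..500 {
        if deflate_ones {
            let mean = v.iter().sum::<f64>() / n as f64;
            for x in v.iter_mut() { *x -= mean; } // project out the ones vector
        }
        v = matvec(m, &v);
        normalize(&mut v);
    }
    let mv = matvec(m, &v);
    v.iter().zip(&mv).map(|(a, b)| a * b).sum() // Rayleigh quotient
}

fn fiedler_value(laplacian: &[Vec<f64>]) -> f64 {
    let n = laplacian.len();
    let lmax = top_eigenvalue(laplacian, false);
    let shifted: Vec<Vec<f64>> = (0..n).map(|i| (0..n).map(|j| {
        (if i == j { lmax } else { 0.0 }) - laplacian[i][j]
    }).collect()).collect();
    lmax - top_eigenvalue(&shifted, true)
}

fn main() {
    // Path graph 0-1-2: connected; its Fiedler value is exactly 1.
    let path = vec![
        vec![ 1.0, -1.0,  0.0],
        vec![-1.0,  2.0, -1.0],
        vec![ 0.0, -1.0,  1.0],
    ];
    assert!((fiedler_value(&path) - 1.0).abs() < 1e-6);

    // Two disconnected pairs: the Fiedler value drops to 0, the signature of
    // a graph splitting apart, i.e. a forming deadlock.
    let split = vec![
        vec![ 1.0, -1.0,  0.0,  0.0],
        vec![-1.0,  1.0,  0.0,  0.0],
        vec![ 0.0,  0.0,  1.0, -1.0],
        vec![ 0.0,  0.0, -1.0,  1.0],
    ];
    assert!(fiedler_value(&split).abs() < 1e-6);
}
```

A connected graph scores well above zero; a graph that has split into components scores zero, which is exactly the "plummet" the scheduler watches for.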
Moving data across lossy networks usually requires endless back-and-forth acknowledgments (ACKs) and retransmissions. This degrades performance significantly when connections drop.
Asupersync uses a rateless erasure code (fountain code) protocol. The sender emits a potentially infinite stream of encoded droplets, and the receiver needs only to catch a sufficient number of them, regardless of which specific droplets arrive, to perfectly reconstruct the file. This completely eliminates retransmission requests.
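A toy peeling decoder illustrates the principle: each droplet is the XOR of a known subset of source blocks, and any decodable subset of droplets recovers the data. Real fountain codes (LT/Raptor-style) draw the subsets pseudo-randomly from a degree distribution; this sketch hardcodes them:

```rust
// Peeling decoder for XOR droplets. A droplet of degree 1 reveals a block
// directly; substituting known blocks into the others shrinks their degree
// until everything is recovered.

fn peel_decode(n: usize, mut droplets: Vec<(Vec<usize>, u8)>) -> Option<Vec<u8>> {
    let mut blocks: Vec<Option<u8>> = vec![None; n];
    loop {
        // Substitute every already-known block into every droplet.
        for (idxs, val) in droplets.iter_mut() {
            idxs.retain(|&i| {
                if let Some(b) = blocks[i] { *val ^= b; false } else { true }
            });
        }
        // Find a degree-1 droplet: it pins down one block.
        match droplets.iter().position(|(idxs, _)| idxs.len() == 1) {
            Some(p) => {
                let (idxs, val) = droplets.remove(p);
                blocks[idxs[0]] = Some(val);
            }
            None => break,
        }
    }
    blocks.into_iter().collect()
}

fn main() {
    let source = [b'a', b'b', b'c'];
    // Droplets: XORs of the subsets {1,2}, {0,1}, {0}. Arrival order is
    // irrelevant, and no acknowledgment or retransmission is ever needed.
    let droplets = vec![
        (vec![1, 2], source[1] ^ source[2]),
        (vec![0, 1], source[0] ^ source[1]),
        (vec![0], source[0]),
    ];
    assert_eq!(peel_decode(3, droplets), Some(vec![b'a', b'b', b'c']));
}
```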
Erlang's OTP architecture is legendary, but it relies on developer discipline. Actors can silently crash, messages can accumulate boundlessly in mailboxes until OOM, and servers can forget to reply, leaving clients hanging forever.
Asupersync provides Spork, a portmanteau of spawn and fork that creates a new task while simultaneously branching off a child Region; the task inherits a subset of its parent's capabilities and is tightly bound to that Region's lifecycle. On top of this sit GenServer and Supervisor implementations in the Erlang/OTP mold, with strict guarantees enforced at compile time. Mailboxes apply explicit backpressure, and every client request generates a linear Reply Obligation, so a server mathematically cannot "forget" to reply. Supervisors, in turn, respect the cancel protocol: during Region cancellation they stop restarting children and let them drain gracefully.
Traditional runtimes sprinkle Mutex locks everywhere to prevent data races, destroying concurrent performance as cores contend for the same cache lines.
Asupersync uses the formal CALM theorem to eliminate locks. Operations are categorized mathematically: monotone operations, whose results don't depend on execution order (appending data, or Reserve, Send, Acquire, Renew, and CrdtMerge in the Saga engine), never require locks because they never rely on checking absences. The runtime batches them entirely coordination-free, drastically scaling multi-core throughput.
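The order-independence that makes this safe is easy to demonstrate with grow-only state, a stand-in here for monotone Reserve/Send/Acquire steps:

```rust
use std::collections::BTreeSet;

// CALM intuition in miniature: monotone (grow-only) operations commute, so
// any interleaving converges to the same state with no coordination barrier.

fn apply(ops: &[&'static str]) -> BTreeSet<&'static str> {
    let mut state = BTreeSet::new();
    for op in ops {
        state.insert(*op); // grow-only: never checks absence, never retracts
    }
    state
}

fn main() {
    // Two schedulers race the same monotone ops in different orders.
    let a = apply(&["send:x", "reserve:y", "acquire:z"]);
    let b = apply(&["acquire:z", "send:x", "reserve:y"]);
    // Order-independence: both interleavings converge to the same state.
    assert_eq!(a, b);
}
```

A non-monotone operation (one that asks "is y absent?") is order-sensitive, which is exactly why it needs a coordination barrier while these do not.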
How do you mathematically prove an async program won't just spin in infinite loops during shutdown?
Asupersync calculates a 4-component energy function in real time. This function aggregates live tasks, pending obligation age, draining regions, and deadline pressure. Because the runtime mechanics mathematically guarantee this "potential energy" always decreases over time, the system acts as a supermartingale, proving it makes strictly monotonic progress toward quiescence: the state in which every task in a Region has finished and no new task can possibly be created inside it.
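A sketch of the energy argument with invented weights and a hand-written shutdown trace; any positive weights preserve the monotonicity claim, and the fields mirror the four components named above:

```rust
// Toy Lyapunov ("energy") function over a shutdown trace: each step retires
// tasks, ages out obligations, or closes regions, and the total potential
// strictly decreases until quiescence (energy zero).

struct Snapshot {
    live_tasks: u64,
    obligation_age: u64, // summed age of unresolved obligations
    draining_regions: u64,
    deadline_pressure: u64,
}

fn energy(s: &Snapshot) -> u64 {
    // A weighted sum; the weights are arbitrary positive constants.
    3 * s.live_tasks + 2 * s.obligation_age + 2 * s.draining_regions + s.deadline_pressure
}

fn main() {
    let trace = vec![
        Snapshot { live_tasks: 4, obligation_age: 6, draining_regions: 2, deadline_pressure: 3 },
        Snapshot { live_tasks: 3, obligation_age: 4, draining_regions: 2, deadline_pressure: 2 },
        Snapshot { live_tasks: 1, obligation_age: 2, draining_regions: 1, deadline_pressure: 1 },
        Snapshot { live_tasks: 0, obligation_age: 0, draining_regions: 0, deadline_pressure: 0 },
    ];
    // Energy strictly decreases step over step...
    for w in trace.windows(2) {
        assert!(energy(&w[1]) < energy(&w[0]));
    }
    // ...and quiescence is exactly the zero-energy state.
    assert_eq!(energy(trace.last().unwrap()), 0);
}
```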
Hardcoded scheduler heuristics break under adversarial workloads. If a flood of cancellations arrives, a static scheduler might starve normal tasks entirely.
Asupersync replaces static thresholds with an online machine learning algorithm (EXP3). It maintains five "arms" representing different cancel-streak limits. By continuously measuring regret, the scheduler dynamically shifts its probability weights to adapt to the current workload—converging on the optimal strategy without any human tuning.
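A textbook EXP3 loop shows the mechanics; the five arms, the reward model, and the seed are invented for the example and are not Asupersync's actual tuning:

```rust
// EXP3 with five arms (candidate cancel-streak limits). Importance-weighted
// reward estimates shift probability mass toward whichever arm performs best
// under the current workload, with no human tuning.

struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

const K: usize = 5; // five candidate cancel-streak limits

fn run_exp3(seed: u64, rounds: usize) -> [f64; K] {
    let gamma = 0.1;
    let mut weights = [1.0f64; K];
    let mut rng = Lcg(seed);
    for _ in 0..rounds {
        let total: f64 = weights.iter().sum();
        let probs: Vec<f64> =
            weights.iter().map(|w| (1.0 - gamma) * w / total + gamma / K as f64).collect();

        // Sample an arm from the mixed distribution.
        let mut r = rng.next_f64();
        let mut arm = K - 1;
        for (i, p) in probs.iter().enumerate() {
            if r < *p { arm = i; break; }
            r -= *p;
        }

        // Illustrative workload: arm 2 happens to be the best limit right now.
        let reward = if arm == 2 { 1.0 } else { 0.2 };

        // Importance-weighted update: only the pulled arm's weight moves.
        let x_hat = reward / probs[arm];
        weights[arm] *= (gamma * x_hat / K as f64).exp();

        // Renormalize to keep weights within floating-point range.
        let max = weights.iter().cloned().fold(f64::MIN, f64::max);
        for w in weights.iter_mut() { *w /= max; }
    }
    weights
}

fn main() {
    let w = run_exp3(1, 3000);
    let total: f64 = w.iter().sum();
    // Probability mass converges onto the best-performing arm.
    assert!(w[2] / total > 0.5);
}
```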
Traditional tests run your code to completion. They cannot catch bugs that only surface when your code is cancelled halfway through execution.
Asupersync provides a deterministic Cancellation Injector. It systematically re-runs your test suite, dropping a "cancel bomb" at a different await point on every run to mathematically prove your application survives interruption without leaking resources.
Instead of relying on manual assertions, the Lab Runtime is continuously monitored by independent Oracles that act as automated auditors, each of the 17 built-in ones watching for a different class of bug: resource leaks, protocol violations, budget overruns, and more.
These Oracles use a statistical method called an E-Process (a betting martingale) to continuously evaluate runtime invariants like task leaks and quiescence. Because it is anytime-valid, the Oracle can instantly halt a test the moment evidence of a bug crosses the rejection threshold.
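The betting mechanics fit in a few lines. This sketch monitors an invented invariant ("task leaks occur in at most 10% of runs") with a fixed betting fraction; the update factor has expectation 1 under the null, so by Ville's inequality the 1/α threshold is anytime-valid:

```rust
// E-process monitor: multiply in one bet per observation and halt the test
// the moment accumulated evidence crosses 1/alpha. Returns the step index at
// which the monitor rejected, or None if it never did.

fn monitor(observations: &[u8], null_rate: f64, lambda: f64, alpha: f64) -> Option<usize> {
    let threshold = 1.0 / alpha;
    let mut e_value = 1.0f64;
    for (t, &leaked) in observations.iter().enumerate() {
        // Under the null (leak rate <= null_rate) this factor has expectation
        // at most 1, so the E-value rarely grows large unless the null is false.
        e_value *= 1.0 + lambda * (leaked as f64 - null_rate) / (1.0 - null_rate);
        if e_value >= threshold {
            return Some(t); // anytime-valid: we may stop right here
        }
    }
    None
}

fn main() {
    // Null hypothesis: a task leak appears in at most 10% of runs.
    // Observed run: leaks show up far more often than the null allows.
    let runs = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1];
    assert_eq!(monitor(&runs, 0.1, 0.5, 0.01), Some(15));

    // A stream that respects the null never accumulates enough evidence.
    assert_eq!(monitor(&[0; 30], 0.1, 0.5, 0.01), None);
}
```

The monitor can peek after every observation without inflating its error rate, which is exactly what lets an Oracle halt a Lab run mid-stream.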
When a parent task spawns an untrusted child, handing over raw capabilities (like full network access) is dangerous.
Asupersync capabilities behave like Macaroons. A parent can attach cryptographic caveats (like "expires in 50ms" or "read-only") before delegating the token. The child can use the token or add further restrictions, but the math prevents them from ever stripping the parent's constraints away.
In a deep hierarchy of tasks, how do you resolve conflicting timeouts? If a child requests 15 seconds to run, but its parent is shutting down in 5 seconds, chaos ensues.
Asupersync resolves this using a product semiring, an algebraic structure in which budgets compose element-wise via min and max, making composition associative and commutative with clean identity elements. Cancel budgets mathematically compose downwards across nested regions: the runtime clamps child deadlines to the parent's minimum while propagating maximum priorities upward. A child mathematically cannot outlive the region it was born in.
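The min/max composition can be sketched directly; the field names are illustrative, and the point is that composition is element-wise, associative, and clamping:

```rust
// Budget composition as element-wise min/max: deadlines and quotas clamp
// downward, priority propagates upward.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Budget {
    deadline_ms: u64, // how long the holder may keep running
    poll_quota: u64,  // how much work it may do while draining
    priority: u8,     // how urgently cancellation is serviced
}

fn compose(parent: Budget, child: Budget) -> Budget {
    Budget {
        deadline_ms: parent.deadline_ms.min(child.deadline_ms),
        poll_quota: parent.poll_quota.min(child.poll_quota),
        priority: parent.priority.max(child.priority),
    }
}

fn main() {
    let parent = Budget { deadline_ms: 5_000, poll_quota: 100, priority: 3 };
    let child = Budget { deadline_ms: 15_000, poll_quota: 50, priority: 7 };

    // The child asked for 15s, but it cannot outlive its 5s parent.
    let effective = compose(parent, child);
    assert_eq!(effective, Budget { deadline_ms: 5_000, poll_quota: 50, priority: 7 });

    // min/max composition is associative: the nesting order doesn't matter.
    let g = Budget { deadline_ms: 8_000, poll_quota: 80, priority: 1 };
    assert_eq!(compose(compose(g, parent), child), compose(g, compose(parent, child)));
}
```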
Exhaustively testing concurrency is impossible because thread interleavings explode factorially. However, many of those interleavings are functionally identical because the swapped operations never actually interact.
Asupersync collapses this search space by converting execution traces into a canonical Foata Normal Form. Two traces are Mazurkiewicz-equivalent when they differ only in the order of independent (non-conflicting) operations; all traces in an equivalence class map to the exact same fingerprint, allowing the DPOR algorithm to safely prune the redundant test runs.
How do you set a timeout for a task if you don't know what the normal distribution of execution times looks like? Standard deviations fail completely if the underlying data isn't a perfect bell curve.
Asupersync's Lab Runtime uses a formal statistical technique called Conformal Prediction. By analyzing historical traces, it draws a mathematically strict boundary that is guaranteed to contain 99% of future tasks—entirely distribution-free. If a task breaks this bound, the runtime knows with mathematical certainty that it represents a true anomaly, not just statistical noise.
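A split-conformal sketch of the idea: take the ⌈(n+1)(1−α)⌉-th smallest calibration latency as the cutoff, with no distributional assumption at all. The latency numbers are invented:

```rust
// Split conformal bound on task latency: for a fresh exchangeable task, the
// probability of exceeding this cutoff is at most alpha, for ANY latency
// distribution. No bell curve required.

fn conformal_bound(mut calibration_ms: Vec<u64>, alpha: f64) -> u64 {
    calibration_ms.sort_unstable();
    let n = calibration_ms.len();
    // Rank of the conservative quantile (1-indexed), clamped into range.
    let rank = ((((n as f64 + 1.0) * (1.0 - alpha)).ceil() as usize).min(n)).max(1);
    calibration_ms[rank - 1]
}

fn main() {
    // Heavily skewed historical latencies: no Gaussian in sight.
    let history = vec![3, 4, 4, 5, 5, 6, 7, 9, 12, 15, 18, 22, 30, 45, 80, 200, 350, 500, 900];
    let bound = conformal_bound(history, 0.10);

    // n = 19, rank = ceil(20 * 0.9) = 18 -> the 18th smallest value.
    assert_eq!(bound, 500);
    // A task past this bound is flagged as a true anomaly, not noise.
    assert!(1_200 > bound);
}
```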
When you initiate a shutdown across thousands of nested tasks, how do you mathematically prove that the cancellation cascade will actually finish and not get stuck in an infinite cycle?
Asupersync injects a finite amount of "Cancel Fuel" into the top of the Region Tree. Every time cancellation propagates to a sub-region or a child task, it consumes exactly one unit of fuel. Because the fuel strictly decreases and cannot drop below zero, the runtime guarantees absolute termination.
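The fuel argument is a classic structural-recursion bound. A sketch over a toy region tree (the `Region` type here is invented for the example):

```rust
// Cancel fuel in miniature: propagating cancellation burns exactly one unit
// per region, so with finite fuel the cascade provably terminates.

struct Region {
    children: Vec<Region>,
}

// Returns the remaining fuel, or None if the fuel ran out.
fn cancel(region: &Region, fuel: u64) -> Option<u64> {
    let mut fuel = fuel.checked_sub(1)?; // each visit burns exactly one unit
    for child in &region.children {
        fuel = cancel(child, fuel)?;
    }
    Some(fuel)
}

fn count(region: &Region) -> u64 {
    1 + region.children.iter().map(count).sum::<u64>()
}

fn main() {
    let tree = Region {
        children: vec![
            Region { children: vec![Region { children: vec![] }, Region { children: vec![] }] },
            Region { children: vec![] },
        ],
    };
    let nodes = count(&tree); // 5 regions in total
    // Fuel strictly decreases on every step and never goes negative, so the
    // cascade terminates, here with exactly zero left over.
    assert_eq!(cancel(&tree, nodes), Some(0));
    assert_eq!(cancel(&tree, nodes - 1), None); // too little fuel: caught, not looped
}
```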
If five background nodes crash simultaneously during a test, traditional runtimes report those crashes to the supervisor in a random, chaotic order governed by thread races. A bug caused by one specific arrival order might never reproduce locally.
Asupersync resolves concurrent races mathematically. When simultaneous events occur, the runtime applies a strict deterministic tie-breaker (sorting by virtual timestamp, then by Task ID). Replaying the exact same trace ten thousand times guarantees the exact same arrival order ten thousand times.
When a transaction spans multiple microservices, you cannot hold a single database lock. If the workflow fails on step 4, how do you undo the side-effects of steps 1, 2, and 3?
Asupersync provides a first-class Saga engine. You define a forward action and a backward (compensating) action for each step. If any step fails—or if cancellation strikes mid-flight—the runtime automatically walks backward up the tree, executing the compensations in strict LIFO order to restore the system to a clean state.
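The compensation walk is a stack discipline. A sketch with an invented four-step workflow; only the LIFO unwinding is the point:

```rust
// Saga compensations in miniature: each completed forward step registers its
// backward action; on failure the stack unwinds most-recent-first.

fn run_saga(steps: &[(&str, bool)]) -> Vec<String> {
    let mut compensations: Vec<String> = Vec::new();
    let mut failed = false;
    for &(name, ok) in steps {
        if !ok { failed = true; break; }
        // Forward action succeeded: register its backward action.
        compensations.push(format!("undo:{}", name));
    }
    let mut ran = Vec::new();
    if failed {
        // Walk backward, undoing completed steps in strict LIFO order.
        while let Some(c) = compensations.pop() {
            ran.push(c);
        }
    }
    ran
}

fn main() {
    let steps = [
        ("reserve-inventory", true),
        ("charge-card", true),
        ("book-courier", true),
        ("send-confirmation", false), // step 4 fails mid-saga
    ];
    // Steps 3, 2, 1 are undone in that order, restoring a clean state.
    assert_eq!(
        run_saga(&steps),
        vec!["undo:book-courier", "undo:charge-card", "undo:reserve-inventory"]
    );
    // A fully successful saga runs no compensations at all.
    assert!(run_saga(&[("only-step", true)]).is_empty());
}
```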
Most runtimes are built on an "it compiles, ship it" mentality. Asupersync's core behavior is strictly defined by a set of formal mathematical rules written in Lean 4.
Every evaluation step, cancellation cascade, and obligation transfer corresponds to a specific Transition Rule. These semantics mathematically prove the soundness of the runtime's guarantees before they are ever translated into Rust code.
Asupersync is a ground-up async runtime providing cancel-correctness guarantees, structured concurrency, and algorithms borrowed directly from formal verification.
Three-phase cancellation: Request → Drain → Finalize. Tasks get budgeted time to clean up connections, flush buffers, and release resources. Never a silent drop again.
Permits must be explicitly consumed — sent, completed, or aborted. The type system prevents you from forgetting to handle them. Compile-time resource leak prevention.
Tasks receive a Cx (capability context) that controls what they can do. Spawn, I/O, timers — all gated by capabilities. Principle of least authority, enforced by the runtime.
Deterministic testing with seed-controlled execution order. Replay any interleaving. Integrated DPOR (Dynamic Partial Order Reduction) finds concurrency bugs systematically.
Cancel lane (highest priority), Timed lane (deadlines), Ready lane (normal work). Cancellation always wins the race, ensuring prompt cleanup even under load.
Cancel-correct TCP, HTTP, channels, mutexes, and supervisors out of the box. Every protocol respects the cancel contract. No need to wrap Tokio primitives.
Reserve/commit pattern: side effects are staged before cancellation checkpoints, then committed atomically. If cancellation arrives mid-operation, only reserved (uncommitted) work is rolled back.
Cancel budgets compose algebraically as a product semiring. When regions nest, their budgets merge via min(deadlines) × min(quotas) × max(priority). Consistent cancel timing across arbitrarily deep hierarchies.
The Lab runtime ships with 17 built-in correctness monitors — from TaskLeak and ObligationLeak to CancelProtocol and DeadlineMonotone. Each oracle independently watches for a specific class of concurrency bug during deterministic testing.
Analyzes the wait-graph’s Laplacian eigenvalues in real time. When the Fiedler value drops near zero, the scheduler detects incipient deadlocks before they fully form.
35 transition rules define every possible async state change — from SPAWN and CANCEL-PROPAGATE to CHECKPOINT-MASKED and CLOSE-RUN-FINALIZER. These rules are mechanized in Lean 4, bridging the gap between spec-doc prose and machine-checked proofs.
Capabilities use a decentralized Macaroon model with 8 caveat predicates: time bounds, region scope, task scope, max uses, resource glob patterns, rate limits, and custom key-value constraints. Caveats can only restrict — never widen — making delegation safe without a central authority.
The Lab runtime uses E-processes (betting martingales via Ville's inequality) for anytime-valid statistical monitoring. Unlike fixed-sample tests, you can peek at any time and reject if the E-value exceeds 1/α — no p-hacking, no multiple-testing correction needed.
Distributed operations use a Saga pattern with 16 operation kinds split into monotone (coordination-free) and non-monotone (barrier-required) classes. CALM analysis batches consecutive monotone steps, inserting coordination barriers only before non-monotone operations like Commit or Release.
A 4-component Lyapunov function — live tasks, obligation age, draining regions, and deadline pressure — provides a formal energy measure that strictly decreases over time. The scheduler uses this potential to prove that the system always makes progress toward quiescence.
Asupersync bakes correctness guarantees into the runtime layer. Features that require manual discipline in other runtimes are enforced by the architecture.
A cancel-correct server in under 25 lines. Regions scope task lifetimes, Cx controls capabilities, and Permits enforce resource cleanup.
Built on formal foundations. Every phase documented, every guarantee proven.
Phase 1
Designed the Region tree model with parent-child lifetime scoping
Implemented the three-phase cancel protocol (Request → Drain → Finalize)
Built the Cx capability context with spawn/IO/timer gates
Formal proofs for cancel-safety and region closure guarantees
Phase 2
Designed the Permit/Lease linear type system
Implemented compile-time must-use enforcement
Built the three consumption paths: Send, Complete, Abort
Added Spork (spawn + fork) for structured task hierarchies
Phase 3
Implemented Cancel/Timed/Ready lane architecture
Built priority inversion prevention for cancel propagation
Designed quiescence detection for region closure
Benchmarked against Tokio's work-stealing scheduler
Add Asupersync to your Rust project with a single command. Ship async systems with cancel-correctness guarantees from day one.