Most explanations of event sourcing reduce to a single sentence. Store events instead of state. This is correct but incomplete. It does not explain why replaying a sequence of events produces the same state, or why some event-sourced systems replay cleanly and others do not.
Automata theory gives us a useful formal lens here. Replayable event sourcing depends on deterministic state evolution over an ordered event stream, and automata theory helps explain how a (state, input) pair can yield both a next state and declared outputs without entangling side effects with replay.
Consider a bank account modelled as an event-sourced entity. Replaying the full event log from the beginning reconstructs the current balance, but if the state-transition logic also triggers side effects directly, replay re-sends every email notification and every fraud alert ever issued. Separating declared outputs from side-effect execution is what makes replay safe.
Essentially, event-sourced systems run in two distinct phases.
- Live processing.
- Replaying to rebuild state.
Forming this mental model early will help you understand why event sourcing is arguably a difficult style of system to get right. Industry has always optimized for “good enough”, and most systems store data in place because that is good enough for many use cases, though not all. Reconstructing state from inputs alone is a far less common pattern in practice.
So some of what follows may be an alien way of thinking about systems, but hang in there. Done right, event sourcing unlocks systems that are genuinely durable, resilient, and observable.
If there is one single challenge in getting event sourcing right, it’s reconstructing state. Replay is the pressure point. In live execution a system may both update state and cause outside work, such as sending email, publishing a message, calling an API, or scheduling a retry. During replay, only the state update is allowed to run, and the system has to recompute state from the event log without re-performing the work that happened the first time. So which part of the machine is the replayable state transition, and which part is output that must be handled separately?
Automata theory defines two classical models for finite-state machines with output. Both move from a current state and an input to a next state, differing only in where the output comes from. Understanding these models deeply will clarify how event sourcing works, both during live processing and during replay.
Moore Machine
Edward Moore introduced the Moore machine in 1956 in his paper Gedanken-experiments on Sequential Machines.1 Moore was studying what an observer could deduce about a sequential machine’s internal state by watching only its inputs and outputs.
In Moore’s model, each state has an associated output, and the machine emits the output of whatever state it currently occupies. The transition function takes a state and an input and produces the next state, but the output is read off the state itself. Because outputs are bound to states, an outside observer can distinguish two internal states by feeding the machine an input sequence and observing the outputs it produces. Moore called this property state distinguishability.
State distinguishability and observability
State distinguishability asks whether two internal states can be told apart using only external evidence. In Moore’s formal setting, that evidence is an input-output experiment. Give the machine an input sequence and observe whether the output sequence differs.
Event sourcing gives this idea an operational form. The recorded events are the input sequence that produced the aggregate’s state. If the journal is complete and apply is deterministic, replaying those events from the initial state reconstructs the state. The event log distinguishes histories. The reconstructed state determines state-derived output. If two different histories fold to the same state, a Moore-style view intentionally forgets the path unless the path is encoded in state.
The shared state-transition function captures pure state evolution. Given the current state and an event, it returns the next state. This function is apply, written formally as:
apply : Event × State → State
apply takes an event and a state and returns the next state. Concretely:
S' = apply(E, S)
Folding an ordered stream of events through apply, starting from an initial state, produces the current aggregate state. Any derived view, such as a balance, a status, or a snapshot, is a pure function of that state. Two folds that arrive at the same state produce the same view. A door in the open state produces the “open” view. Whatever events brought the door to that state, the view is the same.
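That fold can be sketched in a few lines of plain Rust. The door types and the replay helper below are illustrative, not the obzenflow-fsm API:

```rust
// A minimal Moore-style replay fold (illustrative types, not the library API).

#[derive(Clone, Copy, Debug, PartialEq)]
enum DoorState {
    Closed,
    Open,
}

#[derive(Clone, Copy, Debug)]
enum DoorEvent {
    Opened,
    Closed,
}

// apply : Event × State → State — pure, deterministic, no side effects.
fn apply(event: DoorEvent, _state: DoorState) -> DoorState {
    match event {
        DoorEvent::Opened => DoorState::Open,
        DoorEvent::Closed => DoorState::Closed,
    }
}

// Replaying the journal is just a fold from the initial state.
fn replay(initial: DoorState, journal: &[DoorEvent]) -> DoorState {
    journal.iter().fold(initial, |state, &event| apply(event, state))
}

fn main() {
    let journal = [DoorEvent::Opened, DoorEvent::Closed, DoorEvent::Opened];
    let state = replay(DoorState::Closed, &journal);
    // Whatever events brought the door here, the state-derived view is the same.
    assert_eq!(state, DoorState::Open);
    println!("replayed state: {:?}", state);
}
```

Because apply is pure, the fold is deterministic: the same journal always reconstructs the same state.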
Mealy Machine
George Mealy introduced the Mealy machine in 1955 in his paper A Method for Synthesizing Sequential Circuits.2 Mealy was solving the synthesis problem of constructing a sequential circuit from a desired input-output behaviour.
The Mealy model shares Moore’s apply for state evolution but also adds a second output at the moment of transition. That shared foundation is why we covered Moore first, even though Mealy’s paper predates Moore’s by a year.
During live handling, a Mealy machine needs to do more than just apply to return the next state. It also needs to say what should happen because this particular event occurred in this particular state. This article models that live step as a transition that returns the next state plus zero or more actions. Those actions are data. They are not the side effects themselves. They describe work for the surrounding host to execute after the transition has been computed.
transition : Event × State → State × Action*
transition returns a next state and a list of actions for a state-event pair. Concretely:
(S', A) = transition(E, S)
If you discard A, only the replayable state evolution remains. The important Mealy distinction is that those actions can depend on the event that triggered the step, not only on the state reached. A door reaching the open state via an “unlocked” event can ring a bell. A door reaching the open state via a “broken in” event can fire an alarm. The two steps land on the same state, but they emit different actions.
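The distinction can be sketched directly: two events land on the same state but emit different actions. The types below are illustrative, not the real library API:

```rust
// Mealy output is bound to the transition, not to the state reached.

#[derive(Clone, Copy, Debug, PartialEq)]
enum DoorState {
    Closed,
    Open,
}

#[derive(Clone, Copy, Debug)]
enum DoorEvent {
    Unlocked,
    BrokenIn,
}

#[derive(Clone, Debug, PartialEq)]
enum DoorAction {
    RingBell,
    FireAlarm,
}

// transition : Event × State → State × Action*
fn transition(event: DoorEvent, _state: DoorState) -> (DoorState, Vec<DoorAction>) {
    match event {
        DoorEvent::Unlocked => (DoorState::Open, vec![DoorAction::RingBell]),
        DoorEvent::BrokenIn => (DoorState::Open, vec![DoorAction::FireAlarm]),
    }
}

fn main() {
    let (s1, a1) = transition(DoorEvent::Unlocked, DoorState::Closed);
    let (s2, a2) = transition(DoorEvent::BrokenIn, DoorState::Closed);
    // Both transitions land on the same state...
    assert_eq!(s1, s2);
    // ...but declare different actions, because the output depends on the event.
    assert_ne!(a1, a2);
    println!("{:?} vs {:?}", a1, a2);
}
```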
Action outcomes can return as events
The door example invites a question. If a “broken in” event fires an alarm, and the alarm summons the police, surely those consequences matter to future state? They do, but they reach state through the journal, not through the action itself. The alarm firing may produce an AlarmRaised event. The police arriving may produce a PoliceDispatched event. Those events are appended to the journal and folded through apply like any other input. A Mealy action is often itself a command issued back into the system. Executing it forces a fresh decision that produces new events, and those events flow through apply to update state. That separation between actions and events is what distinguishes a durable workflow from a scatter of random side effects. You’ll see this in action when we cover the Rust implementation of ObzenFlow.
In an event-sourced system, replay must suppress action execution. Re-running every notification, retry, or external API call during replay would re-send every effect ever issued. During live execution, a Mealy action might actually send the email or ring the alarm. Replay folds the recorded fact (EmailSent, AlarmRaised) through apply without re-running the side effect itself. If the outcome of an action needs to influence future state, that outcome must return to the system as a later recorded event.
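The two phases can be sketched with a hypothetical host that shares one transition function but only hands actions to the effect executor when live:

```rust
// Hypothetical host sketch: one transition function serves both phases,
// but only the live phase executes the declared actions.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Mode {
    Live,
    Replay,
}

// Every event bumps a counter and declares one notification action.
fn transition(event: &str, count: u32) -> (u32, Vec<String>) {
    (count + 1, vec![format!("notify: {event}")])
}

fn run(mode: Mode, journal: &[&str], effects: &mut Vec<String>) -> u32 {
    let mut state = 0;
    for &event in journal {
        let (next, actions) = transition(event, state);
        state = next;
        if mode == Mode::Live {
            // Live: pass the declared actions to the effect executor.
            effects.extend(actions);
        }
        // Replay: state evolution only; action execution is suppressed.
    }
    state
}

fn main() {
    let journal = ["EmailSent", "AlarmRaised"];

    let mut live_effects = Vec::new();
    let live_state = run(Mode::Live, &journal, &mut live_effects);

    let mut replay_effects = Vec::new();
    let replay_state = run(Mode::Replay, &journal, &mut replay_effects);

    // Replay reaches the same state without re-sending a single effect.
    assert_eq!(live_state, replay_state);
    assert_eq!(live_effects.len(), 2);
    assert!(replay_effects.is_empty());
}
```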
In other words, if a system only needed to reconstruct state from recorded events and never needed to declare actions on specific transitions, the replay path could be understood entirely in Moore terms. Events update state, and any output is derived from the state reached. That is exactly why the Moore view is useful for replay. Live execution usually has a hot path, though. It may send messages, schedule work, call gateways, publish notifications, or react differently depending on which event caused the transition. For that path, the Mealy model is the one to keep in mind. It treats actions as first-class output bound to specific transitions, at the expense of Moore’s state distinguishability.
Event Sourcing is the Input Tape
Event sourcing is automata theory with persistence. Mealy and Moore wrote the theory in the mid-1950s. Fifty years later, Martin Fowler observed the pattern in the wild and named it event sourcing.3 He didn’t invent it; the underpinnings were already there.
The pattern is superficially simple. Store the events, derive state from them. In practice it’s harder than the one-liner suggests, which is why the seventy years of theory we’ve been covering matters. Two questions remain. What is an event log? Where do the events come from?
An event log is an input sequence, persisted. Whatever consumes it deterministically lands in a single predictable state. That is Moore. Mealy is at work on the live path, emitting actions in the moment. Those actions never enter the journal. They run once and disappear. The events are what survive.
Mealy generates actions; it doesn't execute them
The log tells you what happened in durable terms, because it records the facts that were committed. But the log does not automatically tell you every transient action the runtime attempted.
Mealy tells you that a transition produces actions. It does not tell you how those actions get executed, how they survive a crash, or how they run exactly once. Those are separate engineering concerns.
Event Storming has terminology for how events trigger follow-up commands. A policy (sometimes called a reaction) ties an event to a command. When this event occurs, issue this command. The command flows through the model like any other and may produce its own events and Mealy actions.
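A policy can be sketched as a pure mapping from events to follow-up commands; the names below are illustrative, not taken from any real codebase:

```rust
// An Event Storming policy: "when this event occurs, issue this command."

#[derive(Debug, PartialEq)]
enum Event {
    BrokenIn,
    AlarmRaised,
}

#[derive(Debug, PartialEq)]
enum Command {
    RaiseAlarm,
    DispatchPolice,
}

// The policy is pure data-to-data: it decides nothing and executes nothing.
fn policy(event: &Event) -> Option<Command> {
    match event {
        Event::BrokenIn => Some(Command::RaiseAlarm),
        Event::AlarmRaised => Some(Command::DispatchPolice),
    }
}

fn main() {
    // The resulting command flows back through decide like any other,
    // producing new events that apply folds into state.
    assert_eq!(policy(&Event::BrokenIn), Some(Command::RaiseAlarm));
    assert_eq!(policy(&Event::AlarmRaised), Some(Command::DispatchPolice));
}
```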
Executing those actions reliably is its own engineering problem. A Mealy action is data. For the work to actually happen, and happen once, the intent has to live somewhere durable, whether that’s the journal, a transactional outbox, a command record, or a recoverable workflow step. If the outcome matters to future state, it has to come back as a recorded event.
To make sense of the role of the input tape, we need to ask where the tape comes from. That brings us to commands.
A command is a request to do something, like transferring money or opening a door. It is not yet a fact. Before any state changes, the command is evaluated against the current state to decide which events to record. That step is the decide function.
decide : Command × State → Event*
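A hypothetical bank-account decide might look like this; the names and events are illustrative, not drawn from the article's codebase:

```rust
// decide : Command × State → Event* — validation happens here, before
// any fact is recorded. No state is mutated; the function only declares
// which facts to append to the journal.

#[derive(Debug, PartialEq)]
enum Event {
    Withdrawn(u64),
    WithdrawalRejected(u64),
}

enum Command {
    Withdraw(u64),
}

fn decide(command: Command, balance: u64) -> Vec<Event> {
    match command {
        // Sufficient funds: record the fact that money was withdrawn.
        Command::Withdraw(amount) if amount <= balance => vec![Event::Withdrawn(amount)],
        // Insufficient funds: the rejection is itself a recorded fact.
        Command::Withdraw(amount) => vec![Event::WithdrawalRejected(amount)],
    }
}

fn main() {
    assert_eq!(decide(Command::Withdraw(40), 100), vec![Event::Withdrawn(40)]);
    assert_eq!(
        decide(Command::Withdraw(200), 100),
        vec![Event::WithdrawalRejected(200)]
    );
}
```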
These event-sourcing functions divide the work cleanly.
decide : Command × State → Event*
apply : Event × State → State
transition : Event × State → State × Action*
Let’s walk through this step by step.
decide evaluates a command against the current state and returns zero or more events. The events are facts. They are appended to the journal, and once recorded, they cannot be re-decided.

apply folds recorded events into state. This is the replay path. Given the same prior state, the same event, and the same apply logic, the fold reaches the same next state.

transition returns the next state and the Mealy-style output for a state-event pair. Those outputs are declared actions, not side effects performed directly by the state machine. During replay, the host rebuilds state through apply and suppresses action execution. If an action outcome needs to influence future state, that outcome has to return as a later recorded event. (ObzenFlow’s own supervisors follow this pattern by mapping action failures back into domain events and feeding them through the machine in their host loops, which we will cover shortly.)
Pay close attention to one invariant: for replay to reconstruct the same state that live execution produced, apply and transition must agree on state evolution.
apply(event, state) == transition(event, state).next_state
The action list is the only thing that distinguishes them. We’ll see .next_state as a real Rust field in the door example shortly.
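One way to hold that invariant, sketched here with illustrative types, is to let transition delegate state evolution to apply so the two can never drift:

```rust
// The invariant as a checkable property:
// apply(event, state) == transition(event, state).next_state

#[derive(Clone, Copy, Debug, PartialEq)]
enum DoorState {
    Closed,
    Open,
}

#[derive(Clone, Copy, Debug)]
enum DoorEvent {
    Opened,
    Closed,
}

struct Transition {
    next_state: DoorState,
    actions: Vec<&'static str>,
}

fn apply(event: DoorEvent, _state: DoorState) -> DoorState {
    match event {
        DoorEvent::Opened => DoorState::Open,
        DoorEvent::Closed => DoorState::Closed,
    }
}

fn transition(event: DoorEvent, state: DoorState) -> Transition {
    Transition {
        // Delegate state evolution to apply; only the actions differ.
        next_state: apply(event, state),
        actions: match event {
            DoorEvent::Opened => vec!["ring"],
            DoorEvent::Closed => vec![],
        },
    }
}

fn main() {
    // Check the invariant over every (event, state) pair.
    for state in [DoorState::Closed, DoorState::Open] {
        for event in [DoorEvent::Opened, DoorEvent::Closed] {
            assert_eq!(apply(event, state), transition(event, state).next_state);
        }
    }
    println!("apply and transition agree for every (event, state) pair");
}
```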
The Implementation in Rust
Everything above is theory; now let’s see it in practice. The obzenflow-fsm library implements the Mealy side in Rust. Given a (state, event) pair, it computes the next state and returns a set of actions for the host to execute. The FSM never performs those effects itself; it makes them explicit as data, and the host decides when and how to run them.
The shared apply part lives in ObzenFlow’s journal-and-replay path, not in journal writing by itself.
- obzenflow_runtime provides the supervised execution engine and replay interfaces.
- obzenflow_infra provides durable journal backends and the FlowApplication runner.
In live execution, supervisors drive stages forward, append outputs to per-stage journals, and execute returned actions under host control. During replay, archived journal data can be fed back through the deterministic state-evolution path with live effects suppressed.
The code is open and runnable. The FSM library, runtime, infrastructure, and top-level ObzenFlow framework are all public, and the published docs include a real end-to-end quickstart and examples on GitHub.
Let’s walk through a small example of opening and closing a door to understand all of the terminology we’ve covered so far.
A Small Door Example
A door FSM demonstrates the pattern at minimal scale.
#[derive(Clone, Debug, PartialEq, StateVariant)]
enum DoorState {
    Closed,
    Open,
}

#[derive(Clone, Debug, EventVariant)]
enum DoorEvent {
    Opened,
    Closed,
}

#[derive(Clone, Debug, PartialEq)]
enum DoorAction {
    Ring,
    Log(String),
}
The transition from Closed to Open on an Opened event produces two actions. One rings a bell. The other logs the transition.
// State: Closed × Event: Opened → State: Open, Actions: [Ring, Log]
on DoorEvent::Opened => |_s, _e, _ctx| {
    Box::pin(async move {
        Ok(Transition {
            next_state: DoorState::Open,
            actions: vec![
                DoorAction::Ring,
                DoorAction::Log("Door opened".into()),
            ],
        })
    })
};
In a Moore machine, the bell would ring on entry to the Open state regardless of how the machine got there. In the Mealy model, the bell rings because of the specific transition from Closed to Open. A different transition into Open (if one existed) could produce different actions. The output is a property of the transition, not the state.
Modelling Transitions
The previous section showed a Mealy transition producing state and actions together. The question is how to preserve that separation structurally so that effects remain explicit and replay stays safe. The obzenflow-fsm library makes that split visible in the API; transition handlers return state and actions separately, and the host executes those actions explicitly.
The core type is Transition:
#[derive(Clone, Debug)]
pub struct Transition<S, A> {
    pub next_state: S,
    pub actions: Vec<A>,
}
A transition handler receives the current state, the event, and a mutable context. It returns the next state and a list of actions. This is the Mealy signature expressed as a Rust function:
async fn handle(
    state: &S,
    event: &E,
    ctx: &mut C,
) -> Result<Transition<S, A>>
Actions implement an explicit trait with an async execute method:
#[async_trait]
pub trait FsmAction: Clone + Debug + Send + Sync + 'static {
    type Context: FsmContext;

    async fn execute(&self, ctx: &mut Self::Context) -> FsmResult<()>;
}
The separation is structural. The transition handler computes the next state and declares what actions should happen. The host loop executes those actions after the state transition is complete.
In replay mode, the host skips action execution entirely. For replayable flows, state evolution must remain deterministic for the same logical (state, event) pair, while actions stay explicit and can be suppressed during replay. That separation is what makes replay possible.
Determinism is a discipline imposed on the handler. Any nondeterministic input that affects state, such as wall-clock readings, random values, or external service responses, must be captured in the event itself rather than pulled in fresh during the transition.
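A sketch of that discipline, using a hypothetical deposit event that freezes the timestamp at decide time rather than reading the clock during apply:

```rust
// Nondeterministic inputs are captured in the event, never read fresh in apply.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Deposited {
    amount: u64,
    // Wall-clock reading, frozen into the event when it was recorded.
    recorded_at_unix: u64,
}

#[derive(Clone, Copy, Debug, PartialEq, Default)]
struct Account {
    balance: u64,
    last_activity_unix: u64,
}

// apply reads the timestamp from the event, not from SystemTime::now(),
// so replaying the same journal always rebuilds the same state.
fn apply(event: Deposited, state: Account) -> Account {
    Account {
        balance: state.balance + event.amount,
        last_activity_unix: event.recorded_at_unix,
    }
}

fn main() {
    let journal = [
        Deposited { amount: 50, recorded_at_unix: 1_700_000_000 },
        Deposited { amount: 25, recorded_at_unix: 1_700_000_060 },
    ];
    let fold = |j: &[Deposited]| j.iter().fold(Account::default(), |s, &e| apply(e, s));

    // Two replays of the same journal are identical, no matter when they run.
    assert_eq!(fold(&journal), fold(&journal));
    assert_eq!(fold(&journal).balance, 75);
}
```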
Capturing and executing effects
The door example shows both steps in sequence. The caller invokes handle to compute the transition, then execute_actions to run the returned effects against the context.
Executing both transitions in sequence, then checking the accumulated log:
let actions = door.handle(DoorEvent::Opened, &mut ctx).await?;
door.execute_actions(actions, &mut ctx).await?;
assert_eq!(door.state(), &DoorState::Open);

let actions = door.handle(DoorEvent::Closed, &mut ctx).await?;
door.execute_actions(actions, &mut ctx).await?;
assert_eq!(door.state(), &DoorState::Closed);

assert_eq!(
    ctx.log,
    vec!["Ring!", "Door opened", "Door closed"]
);
The context accumulates the effects. The state transitions are deterministic. Given the same event sequence, the machine always arrives at the same state. The actions are always the same. This is what makes replay possible.
The Host Loop
A finite state machine is a declarative object. It computes transitions when asked, but it does not drive itself. It has no event loop, no I/O, no failure recovery.
In actor-based systems like Erlang/OTP, this problem is solved by supervision. A supervisor process owns a worker, monitors it, feeds it messages, and restarts it on failure. The worker defines behaviour. The supervisor keeps it running.
The same pattern applies here. The FSM returns actions as data to a host loop that decides when and how to execute them. In Obzenflow, each runtime supervisor owns a single FSM and drives it through the following cycle:
- Call dispatch_state for the current state to get a directive.
- On Continue, yield and loop again.
- On Transition(event), feed the event into the FSM via handle and execute the returned actions.
- If an action fails, map the error to a domain event and feed it back into the FSM.
- On Terminate, write a completion event and exit.
This is the actual SelfSupervised run loop from the Obzenflow runtime, with tracing removed for clarity:
let mut machine = self.build_state_machine(initial_state);

loop {
    let current_state = machine.state().clone();

    // Ask the supervisor what to do in this state.
    let directive = match self.dispatch_state(&current_state, &mut context).await {
        Ok(d) => d,
        Err(e) => {
            // Drive the FSM through its failure path.
            let failure_event = self.event_for_action_error(
                format!("dispatch_state error in {}: {e}", current_state.variant_name())
            );
            let failure_actions = machine.handle(failure_event, &mut context).await?;
            for failure_action in failure_actions {
                failure_action.execute(&mut context).await?;
            }
            continue;
        }
    };

    match directive {
        EventLoopDirective::Continue => {
            tokio::task::yield_now().await;
            continue;
        }
        EventLoopDirective::Transition(event) => {
            let actions = machine.handle(event, &mut context).await?;
            for action in actions {
                if let Err(e) = action.execute(&mut context).await {
                    // Action failed. Feed the error back into the FSM
                    // so it can transition to a Failed state explicitly.
                    let failure_event = self.event_for_action_error(format!("{e}"));
                    let failure_actions =
                        machine.handle(failure_event, &mut context).await?;
                    for failure_action in failure_actions {
                        failure_action.execute(&mut context).await?;
                    }
                    break;
                }
            }
        }
        EventLoopDirective::Terminate => {
            self.write_completion_event().await?;
            break;
        }
    }
}
The entire loop is governed by a single enum, EventLoopDirective. Each iteration, the supervisor inspects the current state, runs whatever logic that state requires, and returns one of three directives. The run loop never decides what to do. It only executes what the directive says.
1. dispatch_state

This is where user code runs. Each supervisor defines what happens in each state. A source stage in Running state might poll an API. A transform stage might read from an upstream journal and apply a function. The supervisor examines the current state and returns a directive that tells the run loop what happened.

2. Continue

Nothing actionable occurred. The loop yields back to the Tokio runtime and tries again. This is the non-blocking wait that prevents busy-spinning while the supervisor waits for external data.

3. Transition

Something happened. The run loop feeds the event into the FSM, which computes the next state and returns a list of actions. The loop executes each action in sequence. If any action fails, the error is mapped to a domain event and fed back into the FSM so it can transition to a Failed state through the same mechanism it uses for any other transition. There is no implicit error path.

4. Terminate

The FSM has reached a terminal state. The supervisor writes a completion event to the system journal and exits.
This is generic framework code, so the run loop is identical for every supervisor. What changes per supervisor is the FSM definition (which states and events exist), the dispatch_state implementation (what to do in each state), and the actions (what effects to execute).
How this works in practice:

- A user building a source stage writes their own logic, like the API polling logic in a user-defined source.
- The framework wraps that logic in supervision, journaling, error recovery, and replay.
- The Mealy machine handles state transitions and returns explicit actions.
- The journal records events, and replay folds those events back into state through the shared apply path.
- The host loop ties them together and keeps everything both durable and replayable.
When a transition occurs, actions are returned in a defined order:
- Exit-handler actions for the old state.
- Entry-handler actions for the new state.
- Transition actions from the handler itself.
This ordering is deterministic. The library guarantees it. Entry and exit hooks execute even on self-transitions, which keeps timeout and hook behaviour consistent across all transition paths.
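That ordering can be sketched as simple list concatenation; the hook helpers below are hypothetical, not the library API:

```rust
// Action ordering on a transition: exit hooks, then entry hooks, then
// the handler's own actions.

fn exit_actions(state: &str) -> Vec<String> {
    vec![format!("exit:{state}")]
}

fn entry_actions(state: &str) -> Vec<String> {
    vec![format!("enter:{state}")]
}

fn transition_with_hooks(from: &str, to: &str, own: Vec<String>) -> Vec<String> {
    let mut actions = exit_actions(from); // 1. old state's exit hooks
    actions.extend(entry_actions(to));    // 2. new state's entry hooks
    actions.extend(own);                  // 3. the handler's own actions
    actions
}

fn main() {
    let actions = transition_with_hooks("Closed", "Open", vec!["ring".into()]);
    assert_eq!(actions, vec!["exit:Closed", "enter:Open", "ring"]);

    // Hooks also run on self-transitions, keeping timeout behaviour uniform.
    let self_t = transition_with_hooks("Open", "Open", vec![]);
    assert_eq!(self_t, vec!["exit:Open", "enter:Open"]);
}
```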
Everything described so far assumes a single machine processing events in order, exactly once. Production systems do not get that guarantee.
I cover what breaks under at-least-once delivery, and the algebraic properties your state-transition functions need to survive it, in The Unholy Trinity of Distributed Systems Hell.
Why this model works
For event sourcing to work in practice, it needs to incorporate both models to explain where output comes from and how state transitions should occur. State-derived views follow the Moore shape. Transition-time actions follow the Mealy shape. The two models are not competing; they describe different outputs of the same deterministic state machine.
The host loop is essential for event sourcing but entirely separate from the automata theory. It is the runtime that makes the model operational. It loads state, invokes decide, journals events, folds those events through apply, executes declared actions, and records meaningful outcomes as new domain events. During replay, the host loop folds the same events through apply and suppresses action execution.
Replay only works because these concerns stay separate. State evolution is deterministic and runs every time the journal is folded. Action execution is bound to specific transitions and runs only during live handling. Durable consequences return to the log as facts. Transient work is never mistaken for history.
The same separation gives event sourcing more than replay. The journal records facts that have already been decided, making it durable audit evidence. New projections can fold the same events into views that did not exist when the events were written. Temporal queries reduce to folding up to a point. Compatible corrections to interpretation can replay against the original events without rewriting the event log. The journal explains not only what state the system is in, but how it got there.
That is the model underneath event sourcing. Facts are recorded. State is reconstructed. Views are derived from state. Transition-time effects stay outside replay.
This is essentially what Martin Fowler described in his original blog post on the event sourcing pattern. I saw the apply path firsthand as a mainframe developer at the start of my career, well before I ever heard the term event sourcing. The decide and transition pieces took me longer to find. That’s why I’ve been promoting this for a decade and a half. If there’s one takeaway, it’s that theory from the 1950s is more practical today than ever, at a time when systems are badly in need of durability, resilience, and observability.