We start with a grid of cells that update in discrete steps.
All cells are computed in parallel, but we'll look at the progression from the perspective of a single focal cell (index 1). Each cell interacts with its 8 immediate neighbors, forming a group of 9, indexed as in the image to the left.
At each step the focal cell has a non-negative energy scalar $e$, a transfer matrix $W$, and a bias vector $b$. Both $W$ and $b$ flow with the energy and are updated by incoming energy.
To compute how energy spreads, stack the 9 neighboring energies into a vector $x \in \mathbb{R}^9$. We use an affine transformation followed by a nonlinearity to produce a non-negative spread:

$$s = \mathrm{ReLU}(Wx + b) + \epsilon$$
We use rectified linearity, $\mathrm{ReLU}(z) = \max(z, 0)$, so the spread is non-negative without constraining $W$ or $b$. The small $\epsilon > 0$ prevents the all-zero case.
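The per-cell spread can be sketched in NumPy. The shapes ($9 \times 9$ transfer matrix, length-9 bias) and the normalization of the spread into energy fractions are assumptions consistent with the description above, not details confirmed by the article:

```python
import numpy as np

EPS = 1e-6  # assumed value for the article's small epsilon

def spread(W, b, x, eps=EPS):
    """Non-negative spread of a focal cell over its 9-cell neighborhood.

    W : (9, 9) transfer matrix, b : (9,) bias, x : (9,) neighboring energies.
    """
    s = np.maximum(W @ x + b, 0.0) + eps  # ReLU keeps the spread non-negative
    return s / s.sum()                    # normalize into energy fractions

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 9))
b = rng.normal(size=9)
x = rng.uniform(size=9)

frac = spread(W, b, x)
# the focal cell's energy e would be split as e * frac across the 9 cells
```

Because of the added $\epsilon$, every entry of `frac` is strictly positive, so a trickle of energy always reaches each neighbor.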
This is done in parallel for all cells, accounting for walls or blocked cells, and the incoming contributions are summed to get the energy grid at the next step.
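The parallel update can be sketched with NumPy rolls. This sketch assumes wrap-around boundaries (no walls), a particular ordering of the 9 neighborhood offsets, and that each cell distributes its energy in proportion to its normalized spread:

```python
import numpy as np

EPS = 1e-6  # assumed small constant
# neighborhood offsets, index 0..8 (the ordering is an assumption; the
# article indexes the 3x3 group as in its figure)
OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def energy_step(E, Wm, b, eps=EPS):
    """One parallel energy update on a torus.

    E : (H, W) energies, Wm : (H, W, 9, 9) transfer matrices, b : (H, W, 9).
    """
    # gather each cell's 9 neighboring energies: x[..., i] = E at offset i
    x = np.stack([np.roll(E, (-dy, -dx), axis=(0, 1)) for dy, dx in OFFSETS],
                 axis=-1)
    s = np.maximum(np.einsum('hwij,hwj->hwi', Wm, x) + b, 0.0) + eps
    frac = s / s.sum(axis=-1, keepdims=True)  # energy fractions per direction
    out = E[..., None] * frac                 # energy sent in each direction
    # scatter: energy sent toward offset i arrives at the cell at that offset
    E_next = np.zeros_like(E)
    for i, (dy, dx) in enumerate(OFFSETS):
        E_next += np.roll(out[..., i], (dy, dx), axis=(0, 1))
    return E_next

rng = np.random.default_rng(1)
E0 = rng.uniform(size=(8, 8))
Wm = rng.normal(size=(8, 8, 9, 9))
bias = rng.normal(size=(8, 8, 9))
E1 = energy_step(E0, Wm, bias)  # redistributes energy; total is conserved
```

Because each cell only redistributes the energy it holds, the grid total is conserved under this scheme; walls would enter as masks on the gather/scatter steps.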
The mechanism moves with the energy: the transfer matrix update is proportional to incoming energy flow. Let $f_i$ denote the energy that arrives at the focal cell from neighbor $i$. We form a weighted average of the neighboring matrices:

$$\hat{W} = \frac{\sum_{i=1}^{9} f_i^2\, W_i}{\epsilon + \sum_{i=1}^{9} f_i^2}$$
We square the energies when weighting, which emphasizes directions with higher energy flow. The bias vector $b$ is updated in the same way. The same $\epsilon$ keeps the denominator non-zero.
Finally, we inject randomness when energy is low. The idea is that empty or low-energy cells should mutate their mechanisms, while high-energy cells preserve theirs:

$$W' = \alpha\,\hat{W} + (1 - \alpha)\,\sigma N, \qquad \alpha = \frac{e'}{e' + \epsilon}$$

where $e'$ is the cell's updated energy and $N$ is a matrix of standard normal samples.
Here $\sigma$ sets the overall noise scale. When $e'$ is small, $\alpha$ is near zero and randomness dominates; as energy grows, $\alpha$ approaches one and the incoming mechanisms take over. The same blend is applied to $b$.
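The low-energy mutation step can be sketched as a gated blend. The concrete gate `alpha = e / (e + eps)` is an assumed form consistent with the description (near zero at low energy, near one at high energy), not necessarily the article's exact choice:

```python
import numpy as np

EPS = 1e-6  # assumed small constant

def blend_noise(W_hat, b_hat, e_new, sigma, rng):
    """Blend the inherited mechanism with Gaussian noise, gated by energy.

    At low energy alpha -> 0 and noise dominates (mutation); at high
    energy alpha -> 1 and the inherited mechanism is preserved.
    """
    alpha = e_new / (e_new + EPS)  # assumed gating form
    W = alpha * W_hat + (1 - alpha) * sigma * rng.normal(size=W_hat.shape)
    b = alpha * b_hat + (1 - alpha) * sigma * rng.normal(size=b_hat.shape)
    return W, b

rng = np.random.default_rng(4)
W_hat = rng.normal(size=(9, 9))
b_hat = rng.normal(size=9)
# at high energy the inherited mechanism passes through almost unchanged
W_hi, b_hi = blend_noise(W_hat, b_hat, e_new=10.0, sigma=1.0, rng=rng)
```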
To initialize the system, we sample $W$ and $b$ at random and place a chunk of energy in the middle. In the live simulation above, randomness is injected in a controllable region, and mechanisms spread and compete as energy flows.
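A minimal initialization sketch; the grid size, noise scale, and the size of the central energy chunk are assumptions:

```python
import numpy as np

def init_state(H=64, W_grid=64, sigma=1.0, seed=0):
    """Random mechanisms everywhere, plus a chunk of energy in the middle.

    Returns E : (H, W_grid) energies, Wm : (H, W_grid, 9, 9) transfer
    matrices, b : (H, W_grid, 9) bias vectors.
    """
    rng = np.random.default_rng(seed)
    Wm = sigma * rng.normal(size=(H, W_grid, 9, 9))
    b = sigma * rng.normal(size=(H, W_grid, 9))
    E = np.zeros((H, W_grid))
    cy, cx = H // 2, W_grid // 2
    E[cy - 2:cy + 2, cx - 2:cx + 2] = 1.0  # 4x4 central energy chunk (assumed)
    return E, Wm, b

E, Wm, b = init_state()
```

From this state, the energy step and mechanism updates above would be iterated; everything outside the chunk starts at zero energy, so those cells mutate freely until energy reaches them.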