A Vertically Integrated Future with Supra
Modular blockchains enable permissionless innovation on top of base layers. The ability to split up settlement and execution allows the creation of products that can bypass the constraints of the base layer itself. But this has presented us with a severe case of liquidity fragmentation.
Users must constantly bridge in search of new assets, better UX, and superior security guarantees. This problem will worsen as more apps become rollups/appchains (Zora, Fraxchain, Base, etc.). Liquidity aside, information also needs to flow seamlessly across blockchains: secure cross-chain oracles are required to exchange information on wallet transactions and balances. The most popular solutions, like Chainlink and LayerZero, are out-of-protocol oracles, as they run on networks separate from the ones running the consensus of the chains they serve. This leads to additional costs that end users must bear, along with security guarantees weaker than the base chain's. Liquidity fragmentation also extends beyond the EVM ecosystem: Solana, Cosmos, and, more recently, Move-based ecosystems such as Sui and Aptos have accumulated significant liquidity. Chains in these ecosystems differ in their VMs, their respective DSLs, and how their state is managed.
The idea behind Supra is a natively designed, fully vertically integrated blockchain stack composed of a Layer 1, in-protocol oracles, VRFs, bridges, MultiVM support, and automation. These components run on the same nodes as consensus, simplifying cross-chain liquidity and reducing unnecessary latency. By integrating these features into the base layer, Supra aims to streamline information flow across blockchains, enabling seamless interaction between protocols. With MultiVM support, Supra can execute transactions in parallel with execution sharding across different VMs.
However, running all these services on the same set of nodes in a vertically integrated stack may overwhelm the network nodes, leading to lower throughput and higher latency if not carefully designed. Adding more nodes to the network or using higher-capability hardware may help. Still, you can add only so many nodes before performance and coordination in the network become a bottleneck, and nodes requiring higher-caliber hardware impose higher entry costs.
Tribe-Clan Architecture
To address this, Supra has developed a node management and coordination mechanism called the Tribe & Clan architecture. Here, a Tribe is a large set of nodes, and Clans are smaller, randomized subsets of nodes within a Tribe. Each of Supra's components can be run on these Clans, with node resources dynamically allocated to each component. Even so, the resource requirements of running several components on top of a blockchain would still be high, so each component must be held to just enough resources.
Leveraging Moonshot for Simple, not Super Majority
Most blockchains using classical BFT consensus mechanisms require a supermajority (67%) of honest nodes to function as desired. They typically order transactions before delivering them to validators, but transaction delivery, ordering, and execution need not happen in that strict sequence. Only execution must come last to avoid inconsistencies in the blockchain; delivery and ordering need not be sequential.
Supra expects to reduce the required percentage of honest nodes significantly by allowing transaction delivery and ordering to happen concurrently. Supra utilizes a formally verified consensus protocol called Moonshot. Supra’s Moonshot assumes a fully connected network; each consensus node forms a direct point-to-point connection with every other consensus node. This allows each consensus message to be broadcast to all peers in one go without hops in between peers, greatly speeding up the time to finality. This is unlike Tendermint, which assumes a gossip protocol where nodes are loosely connected.
Supra’s Moonshot consensus protocol leverages optimistic proposals, allowing nodes to proceed with their consensus-related operations without waiting for the whole network’s confirmation at every step by assuming a positive outcome. This reduces validator idle time and maximizes network throughput, as more transactions are finalized in a shorter duration. A PBFT variant, Moonshot then uses a mechanism called round pipelining to organize these optimistic operations into a continuous workflow, allowing multiple validators to work on multiple rounds of consensus concurrently.
This reduces latency and increases throughput by allowing multiple blocks to be concurrently finalized instead of a sequential flow. While Moonshot would still require a 67% super majority, Supra’s components based on Supra Chain only need a simple majority of 51%, reducing the number of total honest nodes required across the whole Supra stack.
By doing so, Supra aims to offer a differentiated value proposition to applications and developers and shift the power structure of the current crypto infrastructure landscape across oracles, bridges, and other components. This report will go through Supra’s tech stack and break down its components.
1) DORA-backed oracle network with increased liveness, safety guarantees, and scalability
2) A distributed VRF solution with unbiased on-chain randomness
3) Hypernova, a Light client-based bridge and HyperLoop with its rational bridge design
4) MultiVM support
5) Supra’s Cross-chain Automation service
6) The Tribe & Clan architecture that enables Supra’s vertical integration
DORA – Distributed Oracle Agreement
Blockchains and smart contracts cannot inherently access off-chain data or data from other blockchains. Oracles help by enabling smart contracts to access this data. Whenever a smart contract’s logic relies on off-chain information such as web2 data, price feeds, or a certain event occurring on a different chain, an oracle is needed to help deliver that data to smart contracts.
DeFi protocols that rely on oracles are mainly lending and derivatives protocols. Lending protocols rely on oracles to feed accurate price data to value collateral put up by users, give out loans, trigger liquidations, and determine interest rates. Similarly, derivatives protocols rely on oracles for valuing underlying collateral, triggering settlements based on market events, and calculating funding rates for perpetuals. This has created a major dependency on third-party facilitators: according to DefiLlama, DeFi protocols across chains collectively hold $150 Bn in TVL today, with nearly $48 Bn of it resting on oracles’ shoulders.
Chainlink, the market leader, holds roughly 55% of the oracle market share, but other players like Pyth, API3, UMA, and Supra have been making waves. Supra is relatively new in the space but is positioning itself with a fully vertically integrated infrastructure stack that enables its services to maintain Layer 1 security and lower latencies.
Supra’s oracles run on DORA (Distributed Oracle Agreement), a protocol designed to improve data accuracy and latency. DORA dictates how oracle nodes gather and aggregate off-chain data from several first-party and third-party sources to produce a single representative value (S-value) arrived at through statistical computations. This final value can then be sent directly to other chains or protocols. We will now review how Supra’s oracles handle certain challenges in the oracle space at large.
Tribe-Clan Architecture
Supra’s oracle network is structured based on the Tribe-Clan architecture. A Tribe is a set of nodes that run many of Supra’s components. A Tribe can function as expected even if less than 1/3 of its nodes are Byzantine (malicious or offline). A Tribe is divided into smaller subsets called “Clans”, drawn randomly from the Tribe in a manner that ensures the probability of any one Clan having 50% or more Byzantine nodes is negligible. This makes the oracle network resilient to collusion and liveness issues.

Supra’s Tribes and Clans run the DORA protocol, which helps the oracle network manage high volumes of data requests and data sources. Each Clan is responsible for calculating and reaching consensus on S-values (Single Representative Values) used by smart contracts. If the oracle network had to work on 1,000 different S-values with a setup of 4 Clans per Tribe, each Clan could work on 250 S-values. This is similar to a sharding mechanism: instead of the whole oracle network being responsible for sourcing, aggregating, and propagating all data, Clans work on different data sets in parallel, giving the network capacity to take on more workload.
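The workload split above can be sketched as a simple partition. The round-robin policy and pair names below are illustrative assumptions, not Supra’s actual assignment scheme:

```python
# Sketch of how DORA-style sharding might partition data-pair workload
# across Clans. The round-robin policy is an illustrative assumption.

def assign_pairs_to_clans(pairs, num_clans):
    """Partition data pairs across clans round-robin so each clan
    sources and aggregates only its own shard, in parallel."""
    shards = [[] for _ in range(num_clans)]
    for i, pair in enumerate(pairs):
        shards[i % num_clans].append(pair)
    return shards

pairs = [f"PAIR_{i}" for i in range(1000)]  # e.g. BTC/USD, ETH/USD, ...
shards = assign_pairs_to_clans(pairs, num_clans=4)
print([len(s) for s in shards])  # each of the 4 clans handles 250 pairs
```

With 1,000 pairs and 4 Clans, each shard holds 250 pairs, matching the example in the text.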
Gathering
Oracles receive information from multiple data sources at different intervals. Coming up with a single absolute value that smart contracts can rely on for consistent and accurate execution is key: protocols expect a single representative value, formed from multiple sources, that their contracts can depend on. For instance, you might need BTC/ETH prices. Oracles aggregate data from numerous sources such as CEXs, DEXs, market makers, and data providers to provide a single certified value on-chain. This matters especially when certain data sources go offline or begin to propagate incorrect data, intentionally or unintentionally; smart contracts must still execute only on accurate data.

Shuffling The Deck
To ensure that all data sources are used effectively by all the nodes in a Clan, and that no single data source is ignored or used unfairly, a cryptographic primitive called a VRF (Verifiable Random Function) is used to assign data sources to nodes. The VRF adds an extra layer of randomness and fairness to the distribution of data sources, so the oracle network can withstand potential adversarial scenarios and still continue to push out reliable data. Other oracle service providers like Chainlink and Pyth have no such randomization or shuffling mechanisms for their nodes or committees. More on Supra’s VRF will be discussed later.
Aggregation
Once all the nodes in a Clan gather price values from various sources, they calculate a median of those values. The median is mathematically guaranteed to fall within honest sources’ highest and lowest values. Clan nodes then send the median values to nodes in the oracle network called aggregators. The selection of aggregators is randomized so that at least one honest node will be selected. The role of an aggregator is to determine a single value based on all the inputs from a Clan. Once all the nodes in a Clan sign their median values and send them over to aggregators, aggregators collect them and form a “coherent cluster” (CC). A CC is a set of values where each value is at most distance D away from other values in CC.
We’ll see why it is important to form a coherent cluster, as they can make or break protocols that integrate with oracles. On Sep 20th, 2021, when 2 out of 11 publishers on Pyth made a decimal point error on the BTC/USD price pair, the aggregation logic overweighted their contributions, resulting in a price significantly lower than the actual market price of Bitcoin, $43,500. Bitcoin’s price on Pyth crashed to $5,402 for a 2-minute period, with super wide confidence intervals. You could imagine this led to several positions being liquidated for no good reason. (More on this in our Pyth Report)
Byzantine fault-tolerant nodes and coherent clusters can go a long way toward combating faulty prices, and DORA has a way of handling such instances. For example, suppose a set of 5 nodes, A, B, C, D, and E, report the price of BTC as follows, with an agreement distance of $200:
A – $43,800
B – $43,850
C – $43,700
D – $45,000
E – $43,900
BTC prices from (A, B, C, E) would form a coherent cluster, as their differences are within the agreement distance of $200. The BTC price from node D is excluded from this cluster to avoid skewing the reported price. Once a coherent cluster is formed, the aggregators compute the arithmetic mean of the cluster to calculate the Single Representative Value, or S-value. The mean is used because, even in an extreme situation where 4 out of the 5 nodes turn Byzantine, a single honest value anchors the price: the agreement distance D limits how far above or below it the Byzantine values in the cluster can be.
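The worked example above can be reproduced with a small sketch. The brute-force `largest_coherent_cluster` helper is our illustration of the definition, not DORA’s actual clustering algorithm:

```python
# Illustrative sketch of coherent-cluster aggregation (not Supra's code).
# A cluster is coherent if all values lie within agreement distance D of
# each other; the S-value is the arithmetic mean of the cluster.
from itertools import combinations

def largest_coherent_cluster(values, d):
    """Return a largest subset whose spread (max - min) is <= d."""
    for size in range(len(values), 0, -1):
        for subset in combinations(values, size):
            if max(subset) - min(subset) <= d:
                return list(subset)
    return []

reports = {"A": 43_800, "B": 43_850, "C": 43_700, "D": 45_000, "E": 43_900}
cluster = largest_coherent_cluster(list(reports.values()), d=200)
s_value = sum(cluster) / len(cluster)
print(cluster, s_value)  # D's outlier is excluded; S-value = 43812.5
```

Node D’s $45,000 report falls outside the $200 agreement distance and is dropped; the S-value is the mean of the remaining four reports, $43,812.50.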



Once an S-value is computed, both the cluster and the S-value are sent out to all the nodes in a Clan for validation and approval in the form of a vote; they go to the nodes that initially signed the values as well as to the other nodes in the Clan, in order to receive sufficient votes. Once more than half of the nodes in a Clan reach a quorum by voting that the S-value and the cluster are valid, a Quorum Certificate is formed, and the aggregator ships the S-value along with the Quorum Certificate to the destination chain. To reduce latency and improve data freshness, Supra’s DORA protocol can send the requested data payload to various destination chains directly, without necessarily processing it through the Supra Chain, which is used only for ordering. Supra Clans can verify the data themselves and deliver it directly to various L1s and L2s.
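The simple-majority vote described above can be sketched as follows; the node names and the `QuorumCertificate` layout are our assumptions for illustration:

```python
# Minimal sketch of quorum-certificate formation under a simple-majority
# rule (illustrative; not Supra's actual data structures).
from dataclasses import dataclass

@dataclass
class QuorumCertificate:
    s_value: float
    signers: tuple

def form_qc(s_value, votes, clan_size):
    """A QC forms once strictly more than half the clan approves
    the (cluster, S-value) pair."""
    approvals = tuple(node for node, approved in votes.items() if approved)
    if len(approvals) * 2 > clan_size:
        return QuorumCertificate(s_value, approvals)
    return None  # not enough votes; nothing is shipped

votes = {"n1": True, "n2": True, "n3": True, "n4": False, "n5": False}
qc = form_qc(43_812.5, votes, clan_size=5)
print(qc is not None)  # 3 of 5 is more than half, so the QC forms
```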
Fall Back Protocol
There may also be instances where even values from honest nodes are not close enough to form a coherent cluster (values not within an agreement distance). This could be due to extreme market conditions, market manipulations, data source failures, Byzantine node behavior, or networking issues.
Once a node sends its median value to an aggregator, it keeps track of whether an aggregator has posted the S-value and a Quorum Certificate to the SMR, i.e., Supra’s blockchain. If no S-value has been posted to the SMR within a certain duration, the nodes propose a vote to run the Fall Back Protocol. When more than half the nodes in a Clan also vote for Fall Back, the aggregator posts the Fall Back message to Supra’s blockchain, where all the other nodes can see it. This effectively moves the responsibility of calculating the S-value from the Clan to the whole Tribe. Each Tribe node gathers data, computes its median, and sends it to the aggregators. The aggregators then wait for values from at least 2/3 of the Tribe nodes and calculate the median of these values, ensuring that the S-value is still reliable even in unusual conditions. This broader participation helps anchor the price during extreme market fluctuations, though such instances carry an inherent trade-off between latency and security/accuracy.

Anomaly Detection
Mostly, oracles do their job as expected: they report the prices made available to them. But in several incidents, oracles have continued to report abnormal prices that proved detrimental to a protocol and its users. Several unwarranted liquidations could have been prevented if the use of erroneous data had simply been paused, as in the Pyth BTC/USD case. The application is partly responsible too: using confidence interval bands would ensure a more stable price input instead of directly taking the median price. DeFi protocols could use DORA’s Circuit Breakers during such events to halt lending and margin calls and prevent further cascading effects.

Circuit Breakers take note of the current and previous S-values; the Circuit Breaker is triggered when the absolute change between them exceeds a certain threshold. Once a series of S-values has been posted on Supra’s blockchain, any function could be set in place as a Circuit Breaker to account for uncommon price trends using the history of those data feeds. Anomaly detection could also be tailored to assess correlated price deviations, such as between BTC/USD and ETH/USD. It should be noted that the algorithms used to detect anomalies must be continually improved to differentiate between ordinary market volatility and an actual case of oracle manipulation or colluding nodes.
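A minimal sketch of the threshold check described above, assuming a hypothetical 10% relative-change threshold (Supra’s actual thresholds and trigger functions may differ):

```python
# Hedged sketch of a circuit breaker over consecutive S-values.
# The 10% threshold is an illustrative assumption.
def circuit_breaker_tripped(prev_s, curr_s, max_rel_change=0.10):
    """Trip when the relative change between consecutive S-values
    exceeds the configured threshold."""
    return abs(curr_s - prev_s) / prev_s > max_rel_change

# Pyth-style decimal error: BTC dropping from ~$43,500 to ~$5,402
print(circuit_breaker_tripped(43_500, 5_402))   # True -> halt liquidations
print(circuit_breaker_tripped(43_500, 43_800))  # False -> normal volatility
```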
When an S-value is successfully posted to Supra’s blockchain, it concludes one round. But there is no guarantee an S-value will be found and agreed upon successfully. If an S-value is pending but doesn’t get published, it may be because a coherent cluster is taking longer than expected to form, or because of Byzantine nodes or data sources. In such cases, a new round can be initiated to determine an S-value independently of the previous, ongoing round. If the newer round agrees on an S-value sooner than the previous round, the previous round is terminated to conserve computational resources. This DORA tick-start component ensures that prices are still posted successfully even when the first round of price agreement is delayed.

Time To Finality
Speed is key for oracles. The faster a set of oracle nodes gather, aggregate, and reach a consensus that a certain value is accurate, the faster protocols can use the value. To this end, SupraChain uses the BFT Moonshot consensus.
As we previously mentioned, Moonshot uses optimistic proposals and round pipelining to concurrently finalize multiple blocks instead of finalizing a single block at a time in sequence. It also broadcasts transactions to all nodes simultaneously instead of following a traditional BFT gossip mechanism. This way, Supra’s oracles reach finality faster than other oracles and aim for 600-900ms data freshness with upcoming updates to DORA. DORA can also skip posting and validating on the Supra Chain and send the S-value directly from a Clan to the destination, unlike Pyth, which has to first post data to its own blockchain and then relay it through Wormhole. Fewer hops reduce latency, which suits data requiring a high degree of freshness.
Supra runs both pull- and push-based oracles. Pull-based oracles function on demand, i.e., data is pulled whenever a smart contract requests it from the oracle. This approach is cost-efficient, as data updates or transactions are less frequent. Push-based oracles, on the other hand, can get gas-heavy on the EVM, as updates are pushed at regular intervals. But on blockchains such as Sui and Aptos, verifying 500+ data pairs with a single threshold signature can help achieve economies of scale. Given that price feed data is posted to and verified by Supra’s consensus mechanism, smart contracts on Supra can use this data directly for free without relying on external pull-based oracles.
Determinism and Randomness – DVRF
Blockchains are designed to be transparent, deterministic systems. They execute in a manner that for any starting state and a given sequence of inputs, all parties will agree on the final state. So, it is quite challenging to come up with true randomness in such a system without miners or validators compromising the nature of randomness. We also need to verify publicly that randomness is free from tampering. What is possible on deterministic systems such as blockchains is pseudo-randomness, which is produced using algorithms that can be verified. While produced deterministically, these values are provably indistinguishable from uniform random values. For simplicity, we refer to these pseudorandom numbers as random numbers.
VRFs (Verifiable Random Functions) help produce these random numbers and supply them to smart contracts and protocols on-chain. Randomness can be useful for a variety of use cases.
- VRFs are quite popular with on-chain games and NFTs. They generate unpredictable and fair loot-box outcomes and random attributes for NFTs. Game events such as critical hits or similar triggers are also randomized, though most off-chain systems rely on their own servers to compute randomness.
- Proposer elections on certain Proof-of-Stake blockchains, such as Polkadot, Secret Network, Algorand, and Cardano, use random values generated by VRFs to elect proposers randomly for every block. A function of the validator’s stake, reputation, and the random value is used to select a proposer for every block.

In existing VRFs like Chainlink’s VRF, users interact with a smart contract to submit a VRF request. The request is sent to a node that holds a secret key. Using its key, the node computes the VRF and generates a random number off-chain along with a cryptographic proof, which is then returned to the smart contract for on-chain verification. While the proof ensures no invalid random value can be used, each node operating with its secret key implies a single point of failure for liveness.

With Supra’s dVRF (distributed VRF), a VRF request is sent to multiple VRF nodes, each managing a secret key share. When a user interaction with a smart contract results in a VRF request, a unique “input” or “INP” for each user is created and sent to an aggregator node that acts as a relay by forwarding the input to a VRF Clan, i.e., a set of VRF nodes. To ensure liveness among these nodes in situations where any of them may be compromised, nodes in a Clan operate on a simple majority basis, where a Clan can tolerate up to 50% of malicious nodes and continue operating successfully. These nodes share a secret key using Shamir’s secret-sharing scheme. Each node with a part of the key now partially computes the random number and, correspondingly, the parts of a proof of correctness for the random number. This ensures that no single node knows the random number or the proof. Once each node has its partial outputs ready, they are sent to the aggregator node, where all parts of the output are put together into a single random output.
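To illustrate the key-sharing idea above, here is a toy Shamir scheme over a prime field, where any simple majority of a 5-node clan can reconstruct the secret while fewer shares reveal nothing. This is our sketch of the underlying mathematics; production dVRFs use threshold signatures over elliptic-curve groups, not this construction:

```python
# Toy Shamir secret sharing (illustrative; not Supra's implementation).
import random

P = 2**61 - 1  # a Mersenne prime field modulus

def split_secret(secret, n, t):
    """Create n shares of `secret`; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split_secret(123456789, n=5, t=3)   # clan of 5, majority of 3
print(reconstruct(shares[:3]) == 123456789)  # any 3 shares suffice
```

Any three of the five shares reconstruct the key, mirroring the simple-majority liveness property described above; two or fewer shares are information-theoretically useless.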

A public VRF output is of limited use on its own: since outputs are published publicly, the VRF must constantly be queried for new random numbers to prevent players from adjusting their actions to gain an edge. This brings us to private VRF (PVRF).
Private VRF & Batching
PVRFs use what’s known to be a blinding factor. A blinding factor masks the user input, making it indecipherable to any party except those with the blinding factor. When VRF nodes compute VRF over blinded inputs, they produce partially blinded outputs sent to the aggregator node, which compiles the pieces into one blinded output. The same process is repeated for blinded proofs. The blinded output and proof are publicly verified. At a later point in time, the user can publish their unblinded randomness and verify it with respect to the blinded randomness.
Batch verification of VRFs can save costs and serve low-latency use cases that demand numerous random values. A single Private VRF can act as a seed for several thousand VRFs, created as permutations of the original PVRF. To cut gas costs for low-latency use cases such as GameFi/casinos, only the original seed PVRF needs to be verified in order to verify the whole batch of VRFs created from its permutations.
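The batching idea can be sketched as deriving many pseudorandom values from one verified seed. The counter-hash derivation below is an illustrative assumption, not Supra’s actual permutation scheme:

```python
# Sketch of batching: expand one verified VRF output into many values
# by hashing it with a counter; only the seed's proof needs checking.
import hashlib

def derive_batch(seed: bytes, count: int):
    """Derive `count` pseudorandom integers from a single seed."""
    return [
        int.from_bytes(hashlib.sha256(seed + i.to_bytes(8, "big")).digest(), "big")
        for i in range(count)
    ]

seed = hashlib.sha256(b"verified-pvrf-output").digest()
batch = derive_batch(seed, 1000)  # thousands of draws, one verification
print(len(batch))  # 1000 deterministic, verifiable values
```

Because the derivation is deterministic, anyone holding the verified seed can recompute and audit the whole batch without any further on-chain verification.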

True randomness on-chain is still a quest. API3’s QRNG uses quantum mechanics to generate random numbers. One of QRNG’s principles is based on superposition, where a quantum object (like a photon) can exist in multiple states simultaneously until it is measured. When such a quantum system is measured, it ‘collapses’ into one of its possible states, and this process is inherently random.
HyperNova’s Star Topology Approach To Cross-Chain Bridging
Trust in decentralized systems has mostly been achieved through consensus-running nodes backed by token stakes instead of centralized entities. Over 1 million validators participate in Ethereum’s consensus protocol, voting to maintain a single agreed-upon view of the evolving state. A substantial amount of staked value backs PoS L1s such as Ethereum, Solana, and others. Validators are incentivized to act honestly with rewards, and any mistakes or malicious acts can lead to their stake being slashed.
This security mechanism works well when a decentralized system is backed with materially large amounts of stake. However, other infrastructure pieces, such as bridges, typically rely on a smaller set of operators with smaller value at stake. Most bridges are designed as multi-signature (multi-sig) bridges, whose security relies on the trustworthiness of the staked bridge nodes, which generally is not as secure as the source chain’s Layer 1 security.
A useful solution here is to use the validators of an L1 for bridging (e.g., IBC). The problem in Ethereum’s case is overhead: verifying signatures and attestation data from over 1 million Ethereum validators leads to infeasible gas costs and latency.
However, there is a way around these limitations. Ethereum core contributors introduced the Sync Committee: a subset of 512 Ethereum validators selected at random for each sync committee period (256 epochs, roughly 27 hours). This reduction in size makes the consensus verification process more manageable and less resource-intensive on the destination chain. You can now have a light client that verifies the state of the blockchain without downloading all of it, because the Sync Committee signs beacon block headers, which light clients can use to verify the chain’s state. Rotating the subset every ~27 hours prevents the same set of validators from colluding with each other, and validators that are part of the sync committee receive higher rewards.

Supra’s cross-chain solution, HyperNova, will leverage these sync committee-backed light clients.
HyperNova’s Architecture
- Relay Nodes / Light Clients – They run Ethereum Beacon and execution full clients. Relay nodes look for Sync Committee handovers every 256 epochs (27 hours). This is how the light clients keep updated with the chain’s state. Relay nodes also monitor bridge requests from source chains and deliver bridge responses to destination chains. After detecting events, the Relay nodes package them with their inclusion proofs and Beacon block header into a transaction and submit it to the Supra chain.
- Supra Chain: Supra receives and validates cross-chain requests from relay nodes. These requests include information such as events on the source chain and Sync Committee handovers. Once this data is provided, Supra independently verifies the consensus of the source chain and produces its own supermajority L1 proofs. These can then be sent to multiple destination chains and verified there to facilitate bridge responses, which may result in smart contract calls or the release of funds. Supra Chain uses BLS signatures to verify blocks efficiently on destination chains, and it maintains awareness of other chains’ active validator sets to ensure they’ve reached supermajority consensus. Whenever the active validator set changes, the previously active set must sign off on the new one.

Latency & Security considerations:
Supra Chain’s block time is close to 2.5 seconds. So to bridge from Ethereum using HyperNova, it would take Ethereum’s 12-second block time + Supra’s 2.5-second block time + the destination chain’s block latency. This would also carry the Ethereum Sync Committee’s security guarantees, though the reliability of those guarantees is discussed below.
The use of light clients for bridging is not novel. We’ve seen light clients being used in IBC in the Cosmos Hub, Beefy in the Polkadot ecosystem, and the upcoming Tinydancer for the Solana ecosystem.
When it comes to light clients on Ethereum, inheriting or mimicking the security of Ethereum PoS, with its 1M+ validators, through a Sync Committee of 512 validators is not free of trade-offs and security considerations.
A Sync Committee, by design, has no slashing conditions: validators could get away with collusion attacks or misreporting consensus data without punishment, and a subset of 512 validators is far easier to corrupt than Ethereum’s full 1M+ set. While collusion is possible, the probability of a sync committee being dishonest is extremely low, practically zero. Even if up to 1/3 of all Ethereum validators were Byzantine, the probability of the committee colluding would still be negligible, because the 512 validators are randomly selected and rotated every ~27 hours.
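To make that claim concrete, here is a back-of-the-envelope calculation (our own, not Supra’s or Ethereum’s): model the committee as a binomial sample of 512 validators from a population in which 1/3 are Byzantine, and compute the chance that a majority of the sample is Byzantine:

```python
# Tail probability that a random 512-member committee is majority-
# Byzantine, given 1/3 Byzantine validators overall. The binomial
# model is an approximation (the real draw is without replacement
# from ~1M validators, which makes the tail even thinner).
from math import comb

def committee_tail(n=512, p=1/3, threshold=257):
    """P(at least `threshold` of n sampled validators are Byzantine)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

prob = committee_tail()
print(prob < 1e-12)  # majority collusion is astronomically unlikely
```

The result is on the order of 10^-15, which is why random selection plus frequent rotation makes a dishonest committee a practical non-concern even without slashing.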
The Ethereum Foundation’s rationale for not slashing the Sync Committee was that it would further complicate the protocol. Also, the economic security offered by 32 ETH × 512 validators (~16,384 ETH) would be too low a cost of corruption; an entity such as Chorus One or Figment may have more stake than that. A validator is more likely to suffer legal, social, and reputational damage by colluding: while they may not get slashed, they will likely miss out on future revenue and see a decrease in delegated stake.
Traditional BFT protocols require a 67% supermajority, but Supra can raise the consensus threshold it requires of the sync committee to higher levels, such as 75%, for additional assurance that the committee is aligned in consensus.
Collusion – Sync Committees & Relay Nodes
There is a possibility that a sync committee can collude with a relay node to push an incorrect block on the Supra Chain. However, given that Ethereum takes about 12 minutes to finalize a block, another honest relayer can submit a correct block. When Supra Chain sees two conflicting blocks of the same slot, Supra Chain will refrain from going forward. During this time, Supra’s DAO or any governing entity can authenticate the correct block so Supra Chain can continue processing valid bridge requests.
When all is said and done, a Sync Committee cannot promise all the security guarantees of full Ethereum PoS, but it is a practical solution whose parameters Ethereum itself has set. Supra, Polyhedra, and a few other teams are actively working on trustless bridging solutions that use ZK-SNARKs to verify the full Ethereum consensus.
Supra’s HyperLoop – A “Game Theoretically Secure” Bridge Design
Supra’s HyperLoop is a multi-sig bridge that follows a simple-majority (51%) signature threshold. The idea behind HyperLoop is that it is designed game-theoretically so that ‘rational’ bridge nodes would find no collusion that results in a net profit: the cost of corruption exceeds the profit from corruption, and the punishment or losses that malicious nodes suffer will exceed the monetary value they can exploit. Supra’s HyperLoop is thus a collusion-resistant bridge design with game-theoretically proven safety. Let’s look at how this is possible with HyperLoop’s design.
HyperLoop Architecture
- Bridge Nodes – These are staked nodes that run clients of both the source and destination chain to follow events. These nodes are responsible for conveying bridge requests and delivering bridge responses. Bridge Nodes can batch multiple requests together for economic efficiencies and revert transactions based on user requests.
- Whistleblower Nodes – They run clients of the source and destination chains to monitor and vet the actions of the bridge nodes. If the Whistleblower nodes find discrepancies in the operation of the bridge nodes, they highlight them and raise complaints with the AuditDAO. If any dishonesty by the bridge nodes is found, the bridge nodes are slashed, and a portion of the slashed stake is rewarded to the whistleblowers. Whistleblower nodes must put up collateral when raising a complaint with the AuditDAO, as a safeguard against false complaints. To incentivize Whistleblower nodes to constantly monitor for dishonesty or collusion, they are also periodically rewarded for keeping an eye out.
- AuditDAO – This is a trusted governing entity that validates and resolves disputes reported by the Whistleblower nodes and can pause the bridge. The AuditDAO consists of multiple audit firms with access to the state of both the source and destination chains. When a Whistleblower node reports a discrepancy, the AuditDAO can ask the accused Bridge Nodes to respond with proof; if they do not respond within a set duration, their stake is slashed.
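The complaint flow above can be sketched as a tiny payoff model. All class names, the 50% slash-reward split, and the stake and collateral amounts below are illustrative assumptions, not Supra's published parameters:

```python
from dataclasses import dataclass


@dataclass
class BridgeNode:
    stake: float
    responded_with_proof: bool = False  # can the node justify its relay?


@dataclass
class AuditDAO:
    slash_reward_fraction: float = 0.5  # share of slashed stake paid out (assumed)

    def resolve(self, node: BridgeNode, collateral: float) -> tuple[float, float]:
        """Return (whistleblower payout, node's remaining stake)."""
        if node.responded_with_proof:
            # False complaint: the whistleblower forfeits its collateral.
            return (-collateral, node.stake)
        # No valid proof within the response window: slash the node and
        # award part of the slashed stake to the whistleblower.
        return (node.stake * self.slash_reward_fraction, 0.0)


dao = AuditDAO()
valid = dao.resolve(BridgeNode(stake=1000.0), collateral=50.0)
false_alarm = dao.resolve(BridgeNode(stake=1000.0, responded_with_proof=True),
                          collateral=50.0)
print(valid)        # (500.0, 0.0)   -- node slashed, whistleblower rewarded
print(false_alarm)  # (-50.0, 1000.0) -- collateral lost, node untouched
```

The collateral requirement makes spamming the AuditDAO with false complaints costly, while the slash reward makes honest monitoring worthwhile.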

Most bridge security models assume their bridge nodes may be honest or malicious. When malicious nodes collude, user funds in the bridge can leak to any address the colluding nodes choose. LayerZero assumes that an Oracle and a Relayer will not collude, and Wormhole operates on a 13/19 multi-sig model. HyperLoop, by contrast, tolerates collusion and operates on a simple majority threshold rather than a supermajority: fewer nodes must be honest and live for the bridge to keep operating. This also means fewer nodes are paid bridge rewards, so the fee users pay is lower, and the total stake required of all the bridge nodes is cumulatively lower. Users’ bridging fees drop further when many bridge requests are batched together, at the cost of added latency.
HyperLoop makes it so that even if malicious nodes decide to collude, their collusion will not be economically profitable. This follows from HyperLoop’s economic incentive structure. A sliding window mechanism limits how much value can be transferred through HyperLoop during a given period: within each window (say, an hour), transfers are capped at 51% of the total stake owned by the bridge nodes. Over a 24-hour span, the aggregate transfer volume is therefore limited to 24 times the 51% window threshold. On the other side, bridge nodes must stake to be part of the network, and each node’s stake is set to be greater than its fractional share of the value being transferred through the bridge.
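A quick back-of-the-envelope sketch of the sliding window cap, using a hypothetical total bridge stake of 10,000,000 (the figure and hourly window are made up for illustration):

```python
def window_cap(total_bridge_stake: int, threshold_pct: int = 51) -> int:
    """Max value transferable in one sliding-window period (e.g. one hour)."""
    return total_bridge_stake * threshold_pct // 100


total_stake = 10_000_000          # hypothetical total stake of all bridge nodes
cap = window_cap(total_stake)     # 51% of total stake per window
daily_limit = 24 * cap            # aggregate cap over 24 hourly windows
print(cap, daily_limit)           # 5100000 122400000
```

So at any point, the value in flight during one window never exceeds what the bridge nodes collectively have at stake behind it.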

Whistleblower nodes can raise complaints with the AuditDAO if they see a Bridge Node relaying bridge requests that do not align with what they observe on HyperLoop’s smart contract endpoints on the source chain. Based on its evaluation, the AuditDAO can withhold the stake of malicious nodes. But how does this make collusion a non-profitable endeavor?
Because each node’s stake is designed to exceed its fractional share of the value being transferred through the bridge, the rational choice for bridge nodes is not to collude, which is the Nash equilibrium of the game induced by running the HyperLoop protocol.
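This stake condition can be checked with a toy per-node payoff comparison (the numbers are invented, and the honest payoff is normalized to zero):

```python
def node_payoff(collude: bool, stake: int, cap: int, colluders: int) -> int:
    """Per-node payoff in stake units. Colluders split the maximum divertible
    value (the window cap) but forfeit their slashed stake; honest nodes
    keep their stake, so their payoff is normalized to 0."""
    if collude:
        return cap // colluders - stake
    return 0


# Stake condition: each node's stake (120) exceeds its fractional share of
# the transferable value (cap / colluders = 5_100 / 51 = 100), so colluding
# yields a strictly negative payoff and honesty is the equilibrium.
stake, cap, colluders = 120, 5_100, 51
print(node_payoff(True, stake, cap, colluders))   # -20
print(node_payoff(False, stake, cap, colluders))  # 0
```

As long as the stake-per-node stays above the loot-per-node, defecting to collusion strictly loses value, matching the Nash equilibrium argument above.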
While HyperLoop may be game-theoretically secure, collusion between Whistleblower nodes and Bridge Nodes remains possible. In this scenario, the attack relies on no alerts being submitted to the AuditDAO and assumes that no honest Whistleblower nodes are present.
Supra as a Multi VM Smart Contract Platform
The EVM has considerable adoption across chains: 87% of multi-chain developers work on at least one EVM chain. Even when it comes to smart contract innovation, Electric Capital’s developer report states that 71% of new contract logic is initially deployed on Ethereum and then redeployed across EVM-based chains such as Arbitrum, Polygon, Optimism, BNB, Base, and Avalanche. Another key point is that 75% of on-chain liquidity resides within EVM chains.

We’ve also seen a handful of Move-based networks spring up since 2020. Sui and Aptos run Move VMs, hold over $900M in TVL, and had more than 1,700 new developers write code on their platforms in 2023, contrasting with some long-standing issues and inconveniences of the EVM. Move was designed specifically for on-chain smart contracts that can handle asset transfers efficiently, and it is better optimized for memory usage. Let’s save the conversation on how Move and the EVM are positioned among execution environments for another report.
But what we see happening here is liquidity fragmentation across different ecosystems. Each VM may have its own way of managing its state, executing transactions, handling assets, etc.
Supra is developing its own approach to this problem with a MultiVM that extends support to a few different VMs to allow different ecosystems to be sufficiently atomic in their operations.
State Management:
Let’s look at how Supra’s MultiVM design could manage state across different VMs. The state of each VM can be partitioned into its own segment across Supra’s Tribes and Clans of nodes. Nodes maintain each VM’s state in Merkleized state trees or sub-trees. This lets Supra make cross-VM calls with state proofs without dealing with the entire state of other VMs, which would add computational overhead. With states partitioned, Supra can run different VMs independently, without transactions needlessly interfering with each other’s state, enabling multiple VMs to work in parallel.
Sufficiently Atomic Execution:
Generic composability across VMs would mean that applications running on different VMs could seamlessly interact and call functions on each other as if they were running on the same VM. While this sounds ideal, it could introduce significant complexity, such as in resource allocation, and result in poor developer experience and UX. Instead, Supra’s MultiVM strategy takes a sufficiently atomic approach to cross-VM execution where asset transfers can be supported. Let’s see what that would look like:
Alice wants to transfer 100 USDC from her EVM account to Bob’s MoveVM account. The first part of this transaction (T1) is explicit: Alice’s account is debited 100 USDC, and a pending transfer to Bob on MoveVM is recorded. The second part (T2) is implicit and executed in a deterministic context (such as the end of a block), meaning the credit transaction to Bob’s account does NOT originate and pass through a mempool to be executed. Instead, the pending transaction triggers on a predetermined condition, for example when T1 completes or at a particular time. When the condition is met, Supra’s nodes running MoveVM directly execute the second part of the transaction and update the state, without T2 incurring the latency of originating and passing through a mempool.
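Alice's two-phase transfer might be modeled with a toy in-memory ledger like this (class and method names are assumptions for illustration, not Supra's interfaces):

```python
from dataclasses import dataclass, field


@dataclass
class MultiVMLedger:
    evm: dict = field(default_factory=dict)
    move: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)  # (destination, amount) credits

    def transfer_evm_to_move(self, src: str, dst: str, amount: int) -> None:
        # T1 (explicit): debit on the EVM side and record the pending credit.
        assert self.evm.get(src, 0) >= amount, "insufficient balance"
        self.evm[src] -= amount
        self.pending.append((dst, amount))

    def end_of_block(self) -> None:
        # T2 (implicit): applied deterministically at block close; the credit
        # never originates as its own transaction or passes through a mempool.
        for dst, amount in self.pending:
            self.move[dst] = self.move.get(dst, 0) + amount
        self.pending.clear()


ledger = MultiVMLedger(evm={"alice": 250})
ledger.transfer_evm_to_move("alice", "bob", 100)  # T1, explicit
ledger.end_of_block()                             # T2, implicit
print(ledger.evm["alice"], ledger.move["bob"])    # 150 100
```

Because T2 is derived deterministically from T1, every node applies the same credit at the same point, so the two halves stay consistent without a second round of ordering.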
Execution Sharding Across Tribes & Clans
With Supra’s Tribe-Clan architecture, not every MultiVM node has to run all VMs and maintain the state of several VMs. Instead, MultiVM nodes can be sharded into smaller networks of Clans, each running different VMs that process and execute transactions in parallel. With execution sharded across multiple Clans (which require 51% agreement), transactions targeting different VMs can be processed simultaneously rather than sequentially, leading to lower latency for smart contract execution.
The above-mentioned approach of execution sharding helps with horizontal scaling by distributing VMs across multiple Clans. MultiVM can also take a vertical scaling approach by having multiple VMs run on the same Clan. The nodes in a Clan can observe the volume of transactions for each VM and allocate computational resources to different VMs based on their incoming traffic. So when one VM has more traffic coming in than the others, the nodes can dynamically adjust their computational resources in real time. In cases where the incoming volume for one VM grows persistently higher, execution sharding can be used to move that VM to its own Clan.
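The routing and rebalancing described above could look roughly like this sketch (the traffic threshold, Clan numbering, and function names are invented for illustration):

```python
from collections import Counter


def route(txs: list[tuple[str, str]], vm_to_clan: dict[str, int]) -> dict[int, list[str]]:
    """Group transactions by the Clan running their target VM, so each
    Clan's batch can execute in parallel with the others."""
    batches: dict[int, list[str]] = {}
    for vm, payload in txs:
        batches.setdefault(vm_to_clan[vm], []).append(payload)
    return batches


def rebalance(traffic: Counter, vm_to_clan: dict[str, int], next_clan: int,
              share: float = 0.5) -> dict[str, int]:
    """If one VM dominates incoming traffic, shard it out to its own Clan."""
    total = sum(traffic.values())
    out = dict(vm_to_clan)
    for vm, n in traffic.items():
        if total and n / total > share:
            out[vm] = next_clan
            next_clan += 1
    return out


mapping = {"evm": 0, "move": 0}       # both VMs start on the same Clan
traffic = Counter(evm=90, move=10)    # EVM dominates this observation window
mapping = rebalance(traffic, mapping, next_clan=1)
print(mapping)                        # {'evm': 1, 'move': 0}
```

Horizontal scaling corresponds to the `rebalance` step (more Clans), while vertical scaling corresponds to the per-Clan resource split that `route`'s batches would feed into.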
MultiVM would initially be a cluster of disjoint VMs with separate databases, which could require a highly efficient resource allocation system. Inter-VM communication would not be straightforward, as each VM has different execution-related constraints, which may include a particular network’s time to finality. Supra will focus on a more efficient database layer that can account for multiple VMs.
Now that we’ve established some context around the functionalities of the MultiVM, let’s look at how Supra’s Automation Network can leverage this.
Supra Automation Network
As discussed in the DORA oracles section, smart contracts have no inherent access to external data; they trigger or execute based on price data given to them by oracles. The trigger does not have to be price data, though. It can be any kind of data, such as time, or a combination of triggers, such as when X’s price goes up and Y’s price goes down at block time Z.
Existing solutions like Chainlink, Clockwork, and Gelato run network nodes that record automation requests and monitor blockchains like Ethereum and Solana for relevant transactions. Supra is also launching its own smart contract automation service, the Platinum Automation Network (PAN). Given that Supra integrates multiple services such as oracles, VRFs, and bridges, through which PAN can directly access blockchain states, oracle data, and cross-chain communication, it could enable more efficient on-chain capabilities that do not rely on middleware secured by trusted parties.
Let’s look at how PAN could process an automation request. Alice wants to swap 100 X tokens for 50 Y tokens after 15 days, when the price of the Z token is below 0.5. This conditional transaction is first registered with the nodes running the automation service; Supra stores these conditions in its Automation Registry. dApps using Supra’s automation services can register users’ automation requests on this contract. Supra nodes then observe blockchain transactions and evaluate whether Alice’s conditions have been met. When her condition of the Z price being below 0.5 after 15 days is met, Supra automatically executes the swap of 100 X for 50 Y tokens.
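A toy version of such a registry, with Alice's request encoded as a predicate evaluated against each observed block (all names and the state shape are illustrative, not Supra's actual contract interface):

```python
class AutomationRegistry:
    """Toy condition-triggered task registry."""

    def __init__(self):
        self.tasks = []

    def register(self, condition, action):
        """Store a (predicate, action) pair for later evaluation."""
        self.tasks.append((condition, action))

    def on_new_block(self, state: dict) -> list:
        """Evaluate every registered condition against the observed chain
        state; fire and remove the tasks whose condition is met."""
        fired, remaining = [], []
        for condition, action in self.tasks:
            if condition(state):
                fired.append(action(state))
            else:
                remaining.append((condition, action))
        self.tasks = remaining
        return fired


registry = AutomationRegistry()
# Alice's request: after day 15, when Z is below 0.5, swap 100 X for 50 Y.
registry.register(
    condition=lambda s: s["day"] >= 15 and s["price_Z"] < 0.5,
    action=lambda s: "swap 100 X -> 50 Y",
)
print(registry.on_new_block({"day": 10, "price_Z": 0.40}))  # [] (too early)
print(registry.on_new_block({"day": 16, "price_Z": 0.45}))  # ['swap 100 X -> 50 Y']
```

Because conditions are re-evaluated on every observed block rather than submitted as standing transactions, unmet requests consume no block space until they actually fire.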

If Alice’s 100 X tokens happened to be on Ethereum and the 50 Y tokens on Sui, Platinum Automation would still be able to execute the transaction: PAN leverages Supra’s cross-chain solutions, HyperNova and HyperLoop, to help Alice swap X tokens on Ethereum for Y tokens on Sui.

Vertical Integration – Bringing it all together on Supra’s IntraLayer
The idea behind building a vertically integrated blockchain stack like Supra is that it can potentially solve liquidity fragmentation across different L1s and L2s. This could be achieved by deploying smart contracts once and operating across chains while remaining backward compatible with legacy chains. Let’s look at how each component we’ve gone through contributes its strengths and helps Supra’s IntraLayer come to fruition, with Supra’s components working together.
Supra’s services, such as oracles, VRFs, and bridges, run on the same set of nodes backed by Moonshot consensus, which asynchronously enables these different components to work together with reduced latency and computational overhead.
Supra’s services do not rely on a fixed time for message delivery and can adjust to the destination blockchain’s block times. While this model works well with asynchronous blockchains such as Sui, Supra can also flexibly integrate with partially synchronous blockchains such as Aptos, where message delivery is guaranteed within some duration but with room for delays.

With the help of Supra’s DVRF, nodes are randomly sampled and shuffled for the DORA Oracle Network, HyperNova, HyperLoop, the VRF service, and the consensus protocol itself. DKG protocols make frequent shuffling and randomization of these networks’ nodes possible.
Shuffling and randomized selection of nodes prevent collusion among oracle nodes and keep the same nodes from working on the same set of prices for prolonged durations. This also improves the liveness of the oracle network, with consistent and more frequent updates. With Suprachain running Moonshot consensus, oracles can gather, aggregate, and publish prices faster thanks to Supra’s optimistic mode of operation, which lets oracles work on newer price versions even before the current update is published. This helps Supra’s oracles reach higher throughput and lower-latency updates. Newer versions of DORA will offer the option to bypass the SupraChain SMR and post data directly to smart contracts, reducing latency even further.
HyperNova is well suited for L1-to-L1 communication, as it mostly inherits security from the underlying L1s. HyperLoop, on the other hand, is better suited for L2-to-L2 communication. Suprachain and its Moonshot consensus protocol back both HyperNova and HyperLoop. As both bridges run full clients of the source and destination chains, it may be taxing on Supra nodes to run full clients of 20+ L1s/L2s in a more drawn-out multi-chain future.

Platinum Automation Network – As Supra integrates multiple infrastructure services, it is positioned to offer smart contract automation leveraging its stack of oracles, VRFs, and bridges, with SupraChain running on the same set of nodes. Those same nodes can run automation tasks, with triggers placed on oracle prices, effectively reducing latency as no separate transactions need to be ordered from the mempool for execution.
Cross-chain automation is also possible because of Supra’s vertically integrated stack. Supra’s Automation Network can submit a cross-chain transaction to destination chains with a proof verified on Suprachain through HyperNova. Supra’s smart contracts deployed on the destination chain can verify the proof and execute the cross-chain logic. As Supra uses a single-shot trigger mechanism, from a user’s perspective a single transaction with logic on the source chain leads to all subsequent actions across destination chains. This logic can include asset conversion rates, timing, predicates for scheduling tasks on Supra, consuming verifiable random functions, and even function calls to multiple other chains.
MultiVM leverages Supra’s Tribe-Clan architecture for execution sharding, allowing different VMs to run in parallel, although they are not fully composable with each other. HyperNova and HyperLoop support cross-chain asset transfer requests that originate from MultiVM and span L1s and L2s. MultiVM employs DORA oracles to tap into price feed data through push and pull models, and it also supports private and verifiable random functions relayed to smart contracts running on MultiVM. The Platinum Automation Network can record and execute any automation request, even cross-chain, with the help of HyperNova and HyperLoop.
Note that all Supra services run on and are distributed across staked nodes. Certain nodes will run more than one of Supra’s services, but no node will run all services simultaneously. These nodes, divided into Tribes and Clans, run Moonshot consensus and require a 67% supermajority of honest nodes for Tribes and a simple majority of 51% for services run on Clans. A single Clan can be dedicated to DORA oracles, another to VRFs, another to bridges, and for the MultiVM, each dedicated Clan could run a separate VM. With more Clans, the Tribe grows larger, and the speed at which Moonshot consensus is reached could slow. With several Supra components running on each node, hardware requirements will likely fall on the higher end, so node runners should be proportionately incentivized with block rewards relative to the number of components they run. Effectively decentralizing each of Supra’s components and establishing a robust governance structure will be key.