Unlocked by Sei Development Foundation

    This research report has been funded by Sei Development Foundation. By providing this disclosure, we aim to ensure that the research reported in this document is conducted with objectivity and transparency. Blockworks Research makes the following disclosures: 1) Research Funding: The research reported in this document has been funded by Sei Development Foundation. The sponsor may have input on the content of the report, but Blockworks Research maintains editorial control over the final report to retain data accuracy and objectivity. All published reports by Blockworks Research are reviewed by internal independent parties to prevent bias. 2) Researchers submit financial conflict of interest (FCOI) disclosures on a monthly basis that are reviewed by appropriate internal parties. Readers are advised to conduct their own independent research and seek advice of qualified financial advisor before making investment decisions.

    Inside Sei Giga: Autobahn Architecture and Ecosystem Outlook

    Loso

    Key Takeaways

    • Sei Giga introduces Autobahn, a multi-proposer consensus that brings Narwhal-style parallelism to an EVM-compatible environment.
    • With a target of roughly 200,000 transactions per second and finality under 400 milliseconds, Sei Giga is positioned to become the fastest EVM-compatible blockchain in existence upon successful execution.
    • By strategically decoupling execution from consensus and leveraging parallel transaction processing, Sei Giga achieves scalability pushing the upper bounds of what’s been demonstrated in other high performance chains.
    • Comparative analysis with leading chains such as Base and Avalanche indicates that Sei’s upcoming gas limit expansion could accelerate network adoption, increasing transaction counts and significantly enhancing user activity.
    • Scaling gas limits historically leads to reduced gas fees. While this increases accessibility, it can suppress short-term protocol revenue, a critical dynamic to monitor post-upgrade.
    • Thoughtful incremental deployment, targeted developer incentives, and advanced network analytics represent substantial opportunities to rapidly grow Sei’s ecosystem and maximize its transformative potential.


    Introduction

    Sei Giga is a major upgrade that rearchitects the Sei blockchain to achieve high levels of performance and scalability. It introduces Autobahn, a consensus protocol enabling a multi-proposer network design alongside a revamped execution engine and storage system. In essence, Sei Giga transforms Sei into the first high-throughput EVM-compatible Layer-1 to combine multi-validator parallel block proposals with a 1.5-round BFT pipeline—bringing Narwhal-style consensus and sub-400ms finality to the Solidity developer stack. By decoupling transaction execution from consensus and overhauling the throughput bottlenecks of the original Tendermint-based design, Sei Giga targets 5 gigagas of processing capacity (roughly 200,000 transactions per second) with finality under 400 milliseconds – a leap from the roughly 12,500 TPS and ~0.5 second finality of the current Sei network. The upgrade achieves this through multi-lane block production, pipelined BFT consensus, a custom-built deterministic EVM execution client, and an asynchronous state commitment mechanism. All of these improvements, each explored in detail in later sections, aim to deliver web2-level throughput and low latency on a decentralized, permissionless proof-of-stake network without compromising security or EVM compatibility.

    Sei Giga remains fully EVM-compatible, running standard Solidity/Vyper smart contracts with near parity to Ethereum’s functionality (with only minor differences such as the absence of EIP-4844 and certain fee mechanism changes). Validators in Sei Giga continue to secure the network via staking and are subject to slashing for malicious behavior, as in the previous Sei design. Gas fees and block rewards are paid in the native SEI token. The critical difference in Giga is how the blockchain processes transactions: instead of one block producer at a time processing transactions sequentially, Sei Giga’s Autobahn consensus and execution pipeline allow many block producers to work in parallel and transactions to be executed asynchronously. This design lets Sei Giga handle the demands of high-frequency trading and other intensive DeFi use-cases on a public ledger, while maintaining robust Byzantine fault tolerance and decentralization given its validator hardware requirements and data availability assumptions.

    Current State of Sei

    Sei v2 is the current mainnet version of the Sei blockchain, optimized for high-throughput, low-latency execution through Twin-Turbo consensus and optimistic parallelization. With 500ms block times and median transaction fees of $0.000025, Sei is currently the fastest and most cost-efficient EVM-compatible Layer 1 chain. While positioned as a general-purpose Layer 1, adoption and developer activity remain in the early growth phase relative to more established ecosystems.

    Sei currently ranks 15th among all blockchains by total value locked (TVL), with $535 million—representing 160% growth year-to-date. This growth is primarily driven by Yei Finance, the largest lending protocol on Sei. It accounts for $332 million of TVL, fueled by its USDC market that comprises over 50% of the protocol’s deposits. Incentives from a recent partnership with Binance Wallet have contributed to increased yield and user inflows. On the DEX side, Sei has posted $2.53 billion in year-to-date volume, on track to surpass its total 2024 volume by the end of May. Seilor Finance, which launched in late January, now accounts for 60% of daily DEX volume through its concentrated liquidity market maker (CLMM) pools.

    In terms of daily active addresses (DAAs), Sei currently ranks 8th among all blockchains. Gaming dominated activity on Sei in April, accounting for approximately 80% of total usage; leading contributors included European Fantasy League and Hotspring, both among the top applications by address engagement over that period. The chain’s next iteration, Sei Giga, is designed to significantly increase network throughput, enabling the chain to support the next wave of trading, gaming, and AI applications at scale.

    Sei Giga Explainer

    Sei Giga fundamentally redesigns the chain’s core architecture along three pillars – Autobahn multi-proposer consensus, asynchronous execution, and parallel transaction processing – complemented by a new storage engine. Together, these components allow Sei to process far more transactions in parallel and finalize blocks faster, all while maintaining consistency and security. Below, we explore Sei Giga’s architecture and compare it to other next-generation blockchains, demonstrating why its multi-proposer EVM design is gamechanging.

    Autobahn Consensus: Multi-Proposer Lanes and Tip Cuts

    At the heart of Sei Giga is the Autobahn consensus protocol. This replaces Tendermint’s single-proposer sequential round with a multi-proposer architecture. In Autobahn, every validator in the network can propose blocks concurrently in their own “lane.” A lane is essentially a sequence of batched blocks that a validator publishes, one after another, independent of the other validators’ lanes. Instead of waiting for one leader to finish proposing a block before the next begins, all active validators are continually producing proposals in parallel. This design immediately multiplies the potential throughput: if 100 validators are online, 100 or more proposals can be “in flight” at the same time, vastly increasing the rate of block production.
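    The lane abstraction can be sketched in a few lines of code. This is an illustrative model only, not Sei's implementation; the class and field names (Proposal, Lane, tx_batch) are hypothetical. It shows the two properties the text describes: each validator appends proposals to its own lane independently, and each proposal chains to its predecessor by digest.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """One batched block (a 'car') in a validator's lane."""
    position: int        # index within the lane
    parent_digest: str   # digest of the previous proposal in this lane
    tx_batch: list       # the batched transactions

    def digest(self) -> str:
        payload = f"{self.position}:{self.parent_digest}:{self.tx_batch}"
        return hashlib.sha256(payload.encode()).hexdigest()

@dataclass
class Lane:
    """Each validator appends proposals to its own lane, independent of others."""
    validator: str
    proposals: list = field(default_factory=list)

    def propose(self, tx_batch: list) -> Proposal:
        parent = self.proposals[-1].digest() if self.proposals else "GENESIS"
        p = Proposal(len(self.proposals), parent, tx_batch)
        self.proposals.append(p)
        return p

    def tip(self) -> Proposal:
        return self.proposals[-1]

# All validators produce proposals concurrently, each in its own lane:
lanes = {v: Lane(v) for v in ("val-A", "val-B", "val-C")}
for lane in lanes.values():
    lane.propose(["tx1", "tx2"])
    lane.propose(["tx3"])
# Each lane now holds two chained proposals; a tip's parent_digest links
# it to the proposal before it, forming a per-validator hash chain.
```

    Because every lane advances independently, a slow validator only delays its own lane; the other lanes keep producing proposals at full speed.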


    Autobahn’s multi-proposer pipeline: Each car represents a block proposed by a validator on a separate lane. The Tip Cut <2,2,2,3> (top) selects the latest block from each lane to finalize together in one round. This parallel lane approach allows all validators to contribute blocks concurrently, significantly increasing throughput and preventing any single validator from slowing down the network.

    To make this work safely, Sei Giga decouples data availability from the ordering consensus using Proof of Availability (PoA). Whenever a validator creates a new proposal (often analogized as a “car” in that validator’s lane), it immediately broadcasts a small metadata message to other validators containing the proposal’s ID (position and parent reference in the lane) and a cryptographic digest of its contents. The other validators do not need to download the full batch of transactions right away; they first vote on the availability of this proposal. If at least f+1 validators signal that they have received and stored the proposal data, the proposal obtains a PoA certificate. A PoA for a proposal means “we have cryptographic assurance that the data for this batch is available and retrievable from at least one honest node.” Importantly, this lightweight certification happens quickly and off the critical path of global consensus – validators are effectively saying “I’ve got the data, you can count this batch as ready.”
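    The certificate logic reduces to a simple threshold check. The sketch below is a conceptual stand-in (the PoATracker name and the exact threshold are assumptions, not Sei's code): with at most f faulty validators, f+1 availability votes guarantee that at least one honest validator stores the data.

```python
from dataclasses import dataclass, field

@dataclass
class PoATracker:
    """Collects availability votes for one proposal digest.
    Sketch only; assumes an f+1 vote threshold, which is the minimum
    that guarantees at least one honest holder of the data."""
    f: int                               # max faulty validators tolerated
    votes: set = field(default_factory=set)

    def vote(self, validator_id: str) -> None:
        self.votes.add(validator_id)     # dedupe: one vote per validator

    def has_certificate(self) -> bool:
        # f+1 votes => at least one voter is honest and stores the batch.
        return len(self.votes) >= self.f + 1

tracker = PoATracker(f=1)                # e.g. a 4-validator network, f = 1
tracker.vote("val-A")
assert not tracker.has_certificate()     # 1 vote < f+1
tracker.vote("val-B")
assert tracker.has_certificate()         # 2 votes >= f+1: PoA formed
```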

    Each validator’s lane produces a chain of such proposals, each referencing the hash of the previous proposal in that lane. By chaining them, a new tip of a lane implicitly attests that all prior proposals in that lane are also available (this is a form of transitive data availability guarantee: if you have the latest, you can get the earlier ones from it or from others). The Autobahn design continuously disseminates these proposal batches in the background, using the network bandwidth of all validators in parallel. This is in stark contrast to Tendermint, where a single block proposer had to send the entire block to everyone synchronously. Here, data propagation is spread out and pipelined: multiple proposals from different lanes are being gossiped and buffered at any given time.


    The consensus layer of Autobahn operates by periodically taking a “tip cut” of all lanes. A tip cut is essentially a snapshot of the highest proposal (latest tip) from each validator’s lane that has a PoA. In practical terms, a designated leader for the current round (elected in a manner similar to Tendermint’s rotating proposer or via stake-weighted selection) will collect the set of all lane tips and bundle these references into a single consensus proposal. This bundle – the cut of tips – represents a composite block containing, conceptually, one batch from every validator. The leader then proposes this cut to the whole network. Validators don’t need the contents of all those batches immediately (because of the PoA guarantees); they just need to agree on the set of batch identifiers (the tips) that form the cut.
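    Assembling a tip cut is conceptually just taking the maximum certified position per lane. The sketch below uses hypothetical names (take_tip_cut, poa_certified) and reproduces the <2,2,2,3> cut from the figure above:

```python
def take_tip_cut(validators, poa_certified):
    """Leader assembles a cut: the highest PoA-certified proposal position
    from each validator's lane. Sketch only; `poa_certified` maps a
    validator id to the set of lane positions holding a PoA certificate."""
    cut = {}
    for validator in validators:
        certified = poa_certified.get(validator, set())
        if certified:
            cut[validator] = max(certified)   # latest certified tip
    return cut

validators = ["val-A", "val-B", "val-C", "val-D"]
poa = {
    "val-A": {0, 1, 2},
    "val-B": {0, 1, 2},
    "val-C": {0, 1, 2},
    "val-D": {0, 1, 2, 3},
}
cut = take_tip_cut(validators, poa)
# cut == {"val-A": 2, "val-B": 2, "val-C": 2, "val-D": 3}
# i.e. the tip cut <2,2,2,3>: one latest batch reference per lane.
```

    Note that the cut contains only batch identifiers, not transaction data; the PoA certificates are what let validators agree on the cut without downloading the batches first.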

    Autobahn consensus runs through a streamlined two-phase commit for each cut. Upon receiving the leader’s proposal of a tip cut, validators enter a Prepare phase: they validate that all proposed tips carry valid PoAs (ensuring availability), then cast votes to approve the cut. Once the leader gathers supermajority votes, it forms a PrepareQC (quorum certificate) and moves to the Commit phase, where validators then vote to finalize the cut. When enough commit votes are collected (potentially with an additional short Confirm phase if needed to gather any remaining acknowledgments), a CommitQC is formed and the cut is officially finalized into the blockchain. Finalizing a tip cut means that all the included lane proposals (the latest from each validator) are now irreversibly ordered and committed as part of the global chain.
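    The two-phase flow can be modeled as a small state machine over vote quorums. This sketch assumes a standard 2f+1 supermajority quorum and hypothetical names (CutConsensus, on_prepare_vote); it is not Sei's implementation.

```python
class CutConsensus:
    """Sketch of the Prepare/Commit phases for one tip cut,
    assuming a 2f+1 supermajority quorum."""
    def __init__(self, n: int, f: int):
        self.n, self.f = n, f
        self.prepare_votes: set = set()
        self.commit_votes: set = set()
        self.prepare_qc = False   # quorum certificate for Prepare
        self.commit_qc = False    # quorum certificate for Commit (finality)

    def _quorum(self, votes: set) -> bool:
        return len(votes) >= 2 * self.f + 1

    def on_prepare_vote(self, validator: str) -> None:
        self.prepare_votes.add(validator)
        if self._quorum(self.prepare_votes):
            self.prepare_qc = True    # leader forms PrepareQC, enters Commit

    def on_commit_vote(self, validator: str) -> None:
        if not self.prepare_qc:
            return                    # commit votes only count after PrepareQC
        self.commit_votes.add(validator)
        if self._quorum(self.commit_votes):
            self.commit_qc = True     # CommitQC: the cut is finalized

c = CutConsensus(n=4, f=1)
for v in ("val-A", "val-B", "val-C"):  # 3 prepare votes = 2f+1 quorum
    c.on_prepare_vote(v)
# PrepareQC formed; while commit votes arrive, the next cut's Prepare
# phase can already begin under a new leader (pipelining).
for v in ("val-A", "val-B", "val-D"):
    c.on_commit_vote(v)
# c.commit_qc is now True: every lane tip in the cut is irreversibly ordered.
```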

    One of Autobahn’s optimizations is pipelining of consensus rounds. While a cut is in its commit phase, the network can already start on the next cut’s Prepare phase with a new leader. This pipelining, combined with reducing Tendermint’s normal 3-phase (pre-vote, pre-commit, commit) BFT cycle into effectively 1.5 roundtrips, allows Sei Giga to significantly cut down block-to-block latency. In effect, Autobahn can finalize blocks in about 1.5 network round-trips instead of Tendermint’s ~2.5–3, roughly halving the time to finality. The result is that consensus can keep up with the barrage of proposals coming from all lanes without becoming a bottleneck. If a leader fails to propose a cut or is slow (e.g., crashes or is malicious), the protocol simply triggers a view-change (just like Tendermint’s timeout mechanism) to switch to a new leader, ensuring liveness.

    Crucially, the decoupling of data availability from consensus means that ordering decisions are made based on PoA certificates, not raw block contents. Validators will fetch any missing batch data asynchronously (on a separate thread or after consensus) from the holders of the PoA if they don’t already have it. This ensures that large blocks or slow data propagation do not stall the agreement on order – the chain can decide “which batches go next” without having downloaded everything first. Any node that is missing some transactions from a committed cut can retrieve them from peers after the fact, before executing them. This approach massively improves throughput by amortizing the cost of data dissemination across many proposals and keeping it off the critical path of reaching consensus. In summary, Autobahn’s multi-proposer, cut-based consensus allows dozens or hundreds of blocks to be finalized in one round, whereas the old model could only finalize one. This underpins the 50× or more throughput gain observed from Sei v2 to Sei Giga.

    Beyond performance, the multi-proposer model also improves censorship resistance. With many validators proposing in parallel, no single leader can easily censor a transaction globally – if one validator ignores or refuses a transaction, another can include it in its own lane. Transactions effectively have many paths to inclusion. This pluralism, combined with the random assignment of incoming transactions to different validators (Sei Giga’s design eliminates the global public mempool; instead, transactions submitted via RPC are randomly routed to a validator for immediate inclusion), makes it much harder for any coalition to systematically exclude a transaction from the chain. And because multiple proposals are merged, there is no single “MEV auction winner” deciding all ordering; the ordering emerges from the cut of many proposals, diluting the control any one proposer has over the entire block’s contents. In effect, Autobahn brings a decentralized block building process into the base layer protocol itself.

    Asynchronous Execution and State Commitment

    A key innovation in Sei Giga is the decoupling of transaction execution from the consensus ordering. Classic blockchain designs (including Cosmos/Tendermint and Ethereum) intertwine ordering and execution: each block’s transactions are executed by validators as part of deciding the block, which slows down consensus if the execution is heavy. Sei Giga instead takes advantage of the fact that EVM execution is deterministic – given a fixed ordered list of transactions and a starting state, all honest nodes will compute the same resulting state. This means that the network can first agree on the sequence of transactions (the blocks) and defer the actual computation of their effects to a slightly later stage, without risking divergence.


    When Autobahn finalizes a tip cut containing many batched transactions, all nodes commit to that ordering (ensuring consensus on what transactions and in what order). At this point, those transactions are not yet executed; they are simply agreed upon. Each validator (or a subset of dedicated executor nodes) then proceeds to execute the transactions asynchronously, in parallel with the consensus continuing to finalize subsequent cuts. This is possible because the order won’t change – it’s fixed by consensus – so execution can be done independently by each node off the critical path.

    Concretely, once block (or cut) number N is finalized by the consensus, the network will not stall to process it. Instead, validators move on to propose and finalize block N+1, N+2, and so on, even if some nodes haven’t finished executing N yet. The execution of block N is performed in the background. After execution, the resulting state (typically summarized by a state root or similar hash of the new state) needs to be agreed upon by the network to ensure everyone indeed computed the same outcome. Sei Giga implements a state consensus mechanism whereby the state root of block N (or some cryptographic commitment to the executed state) is gossiped and eventually included in a later block (say block N+x) once enough validators attest to it. In practice, when a validator finishes executing block N, it will include an attestation of the new state in its next proposal. When a supermajority of validators have executed N and agree on the state, that consensus is reflected on-chain, finalizing the state of N with absolute certainty.
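    The state-consensus step reduces to counting matching state-root attestations. The sketch below is a simplified stand-in (the function name and dict shape are assumptions): a state root is final once a 2f+1 supermajority attests to the same value.

```python
from collections import Counter

def state_consensus(attestations: dict, f: int):
    """Sketch: `attestations` maps validator id -> the state root that
    validator computed for block N. The state is finalized once 2f+1
    validators agree on one root; otherwise it stays pending."""
    counts = Counter(attestations.values())
    root, votes = counts.most_common(1)[0]
    return root if votes >= 2 * f + 1 else None

# 4 validators, f = 1: three matching attestations finalize the state.
atts = {"val-A": "0xabc", "val-B": "0xabc", "val-C": "0xabc", "val-D": "0xbad"}
assert state_consensus(atts, f=1) == "0xabc"
# val-D's divergent root is detectable: it can re-execute and resync.

# Without a supermajority, the state for this block is not yet final.
split = {"val-A": "0xabc", "val-B": "0xabc", "val-C": "0xdef", "val-D": "0xghi"}
assert state_consensus(split, f=1) is None
```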

    This asynchronous state commitment typically occurs a few blocks after consensus finalizes a block, though the timing can vary. The primary benefit is that execution is moved off the critical path, improving throughput regardless of execution speed. The delayed commitment also provides a safety net: if a validator computed a different state (e.g. due to a bug or missing some transactions), this will be noticed since it won’t be able to produce a matching state attestation. In normal operation, as long as fewer than one-third of validators are faulty, the honest majority will agree on the correct state and include that commitment, and any stragglers or divergent nodes will realize they’re out of sync and can rectify their state (by re-executing with the correct data or simply obtaining the correct state). If more than one-third of the network somehow diverged on state (which would imply a serious fault or attack, since by assumption at most f are Byzantine), the discrepancy would be detected and the chain could halt or initiate recovery, since a 2/3 consensus on state wouldn’t be reached for that block. In essence, the asynchronous execution model turns execution into another phase that eventually achieves consensus (state consensus), but this phase is decoupled in time from the ordering consensus.

    The benefit of this approach is that block production never waits for execution. Validators can keep finalizing new blocks at high speed, confident that execution can catch up and any issues will be detected via the state attestation process. This is analogous to how some Layer-2 rollups or optimistic systems work – they assume execution is correct and later verify – but here it is all happening within one chain and within a short time window (just a few blocks delay, not weeks). The result is a much smoother pipeline: ordering happens continuously at network speed, and execution is a parallel process that feeds results back into the chain once ready. This contributes to reduced latency per block and also removes the execution workload from the critical path, allowing consensus to tolerate even very compute-heavy transactions or full blocks without slowing down ordering finality.

    To support this, Sei Labs built a new EVM execution client from scratch for Sei Giga. This execution engine is optimized for determinism and performance, ensuring that all validators can run transactions identically and quickly. It adheres to the Ethereum Yellow Paper semantics (minus a few differences like how fees are handled, as noted), so any smart contract that runs on Ethereum can run on Sei Giga with the same results. The difference is that this engine is designed to run out-of-band from consensus and report results asynchronously. In practice, each validator runs two logical components: a consensus process (handling Autobahn protocol messages and block ordering) and an execution process (applying transactions and maintaining state). These might be two threads or even separate services that communicate within the node. Depending on the node architecture, some implementations may run consensus and execution as isolated processes for fault tolerance and modularity, while others may opt for a multi-threaded setup within a single binary for performance and simplicity. Full nodes (non-validators) can also run the execution client to serve RPC queries and stay in sync with state, even if they don’t participate in consensus. There are also lightweight node roles in Sei Giga: for example, “light nodes” that only follow consensus and state commitments (without executing everything) and rely on proofs, or specialized data nodes that retain recent state data to serve network requests. By splitting responsibilities, the network allows different participants to contribute where they are needed – but validators themselves typically run the full stack to secure the chain.

    An important consequence of asynchronous execution is that transaction finality comes in two stages: first the transaction is finalized in the ordering sense (once the cut is committed, the transaction is guaranteed to be included in that block and will not be reverted), and then a short time later the transaction’s effects are finalized in the state (once the state root is confirmed). For end-users, this process is usually seamless – the delay between these two is small – but it adds an extra layer of security. By the time a user sees the block confirmed, they can be confident it will execute deterministically, and soon after the network will collectively sign off on the resulting state. This design gives Sei Giga both speed and eventual absolute finality on state consistency.

    Parallel Transaction Execution (Block-STM Style)

    Sei Giga not only separates execution from consensus, it also accelerates execution itself through parallel processing of transactions. Traditional EVM execution on a block is single-threaded: transactions run one after the other in the order they appear, which means even on hardware with many CPU cores, only one core is really busy at a time applying transactions. This can be a major throughput bottleneck for blocks with many complex transactions. To address this, Sei Giga adopts an approach inspired by Block-STM (Software Transactional Memory) as seen in newer Move-based chains, employing optimistic concurrency control (OCC) for transaction execution.

    When a block (or a batch of transactions) is ready to be executed, the execution engine will speculatively run many transactions in parallel across multiple threads or cores. The idea is to assume that most transactions do not conflict with each other, and therefore can be processed simultaneously without issue. Each transaction, when executed in parallel, operates on an isolated workspace: it reads the necessary state (account balances, contract storage, etc.) and writes any changes to a private buffer, not immediately to the global state. Because the final ordering of transactions is known (from consensus), we can label them t_1, t_2, …, t_n in sequence. The engine will distribute these transactions across, say, many worker threads and begin executing them, but it must ensure that the outcome is equivalent to having run them in order.

    To maintain correctness, Sei Giga’s OCC execution defines a dependency rule: if a later transaction t_j in the block accesses (reads or writes) some state that an earlier transaction t_i wrote, then t_j is said to depend on t_i and must see t_i’s effects. In sequential execution, this is naturally enforced because t_i runs first. In parallel execution, however, t_i and t_j might be running at the same time on different threads. The system needs to detect if such a conflict occurred. During parallel execution, each transaction keeps track of the set of state keys it read (R_i) and wrote (W_i). After a transaction finishes its tentative execution, it enters a validation phase where the engine checks for conflicts: specifically, for a transaction t_i, it will look at each earlier transaction t_k (k < i in the ordering) that has already committed, and check whether t_k wrote to any address that t_i read or wrote. If such an overlap is found, it means t_i may have executed with stale or inconsistent data, violating sequential equivalence.

    In the event of a conflict, the affected transaction (t_i) is rolled back – its tentative changes are discarded – and it will be re-executed, usually in a more isolated way (for example, the system might then run it solo or put it in a queue to run after the conflicting earlier transactions). If no conflict is detected, the transaction’s results are committed: its buffered state changes are merged into the global state. By processing and then validating, the execution engine ensures that parallelism does not introduce nondeterministic results; any execution that would differ from the sequential model is caught and corrected by re-running the transaction under the proper conditions.
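    The speculate-validate-commit loop can be shown end to end with toy transfer transactions. This is a minimal single-threaded sketch of the idea, not Block-STM itself (all names here are hypothetical, and the real engine runs the speculative phase across worker threads with per-key versioning):

```python
def make_transfer(src, dst, amount):
    """A toy transaction: given a state view, returns (read_set, write_buffer)."""
    def tx(state):
        reads = {src, dst}
        writes = {src: state[src] - amount, dst: state[dst] + amount}
        return reads, writes
    return tx

def parallel_execute(txs, state):
    """Sketch of Block-STM-style OCC over one block (txs in consensus order)."""
    # Phase 1: speculative execution of every tx against the pre-block
    # snapshot (conceptually in parallel across worker threads).
    results = [tx(dict(state)) for tx in txs]
    # Phase 2: validate and commit in consensus order.
    for i, (reads, writes) in enumerate(results):
        earlier_writes = set()
        for j in range(i):
            earlier_writes |= set(results[j][1])
        if (reads | set(writes)) & earlier_writes:
            # Conflict: an earlier tx wrote a key this tx touched, so its
            # speculative run used stale data. Discard and re-execute
            # against the state that now includes all earlier commits.
            results[i] = txs[i](dict(state))
        state.update(results[i][1])   # commit this tx's buffered writes
    return state

state = {"alice": 100, "bob": 50, "carol": 10, "dave": 0}
txs = [
    make_transfer("alice", "bob", 30),   # touches alice/bob
    make_transfer("carol", "dave", 5),   # independent: commits without rework
    make_transfer("bob", "carol", 20),   # reads bob -> conflicts with tx 0
]
final = parallel_execute(txs, state)
# final == {"alice": 70, "bob": 60, "carol": 25, "dave": 5},
# identical to running the three transfers strictly in order.
```

    Only the third transfer is re-executed; the first two commit straight from their speculative runs, which is exactly where the parallel speedup comes from.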


    In practice, this optimistic strategy pays off because in many blocks the majority of transactions do not touch the same state. For instance, DeFi trades by different users on different markets, or interactions with independent smart contracts, often don’t interfere with each other’s state. Thus, they can be executed truly in parallel and committed without rollback. Only when two transactions contend for the same resource (like two swaps on the exact same liquidity pool, or two operations on the same user account balance) does the system need to serialize one after the other. The Block-STM-style OCC ensures that any such contention is resolved by automatic retries, and ultimately it guarantees that the final state will be exactly as if the transactions were executed one by one in the given order. However, the total time to execute the whole block is greatly reduced in the common case, because dozens of transactions might be processed concurrently. The end result is that block execution throughput scales with available CPU cores and modern multi-core servers can handle a much higher TPS rate.

    If a block happens to have many conflicting transactions (for example, a surge of trades on the same asset causing everyone to hit the same contract state), the optimistic execution may lead to several rollbacks, and the benefit of parallelism diminishes. In extreme cases, the Sei Giga engine can detect if a particular block’s optimistic execution is thrashing (too many conflicts) and fall back to sequential execution for the remainder of that block. This ensures that progress will be made and avoids livelock in pathological scenarios. In the worst case, the performance would be similar to a normal single-threaded chain for that block, but in the typical case, Sei Giga will leverage parallel execution to finish processing far faster. By combining asynchronous consensus (so there’s no rush from the consensus side) with parallel execution (to maximize throughput on the execution side), Sei Giga’s design can maintain very high throughput even under complex workloads.


    Storage Engine and Flat State Accumulator

    Another less visible but critical part of Sei Giga’s architecture is its custom state storage engine, which was re-engineered to handle high write throughput and large state size without becoming a new bottleneck. In traditional blockchains like Ethereum, the state (accounts, contract storage) is maintained in a Merkle Patricia tree – a cryptographic tree structure that allows generation of proofs for light clients but is expensive to update (every transaction changes the tree and requires hashing many nodes). Cosmos chains use variants of Merkle trees (IAVL or ICS-23 proofs) for state as well. These structures impose significant overhead per transaction and can slow down block processing as the state grows. In Sei’s case, while SeiDB still uses a Merkle tree for the state-commitment layer, it mitigates performance bottlenecks by memory-mapping the active tree (via MemIAVL) and aggressively pruning it—keeping hashing largely off the critical write path. Meanwhile, the historical state is offloaded to a log-structured key-value store, allowing bulk I/O to occur asynchronously.

    Sei Giga takes a different approach by using a flat key-value store backed by a fast Log-Structured Merge (LSM) database (RocksDB), combined with an append-only write-ahead log (WAL) and an asynchronous cryptographic accumulator for state commitments. In this design, when transactions execute and modify the state, the changes are written as key-value pairs to a RocksDB store (which efficiently handles sequential writes and batched updates). The update is first recorded in the WAL – a linear log of changes – to ensure durability (so if a node crashes, it can replay the log to recover state up to the point of failure). Writing to the WAL and then the LSM store is an efficient process that avoids a lot of random disk I/O, and batch commits make it even faster under heavy throughput.
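    The write path described above is, at its core, "WAL first, then batched flat writes." The sketch below uses in-memory stand-ins for the log file and the RocksDB store (class and method names are hypothetical), to show the ordering that makes crash recovery possible:

```python
class StateStore:
    """Sketch of the Sei Giga write path: append every change to a
    write-ahead log first (durability), then apply the whole batch to a
    flat key-value store. In-memory stand-ins replace the WAL file and
    the RocksDB-backed store; no Merkle hashing happens on this path."""
    def __init__(self):
        self.wal = []   # append-only log of (block_height, key, value)
        self.kv = {}    # flat key -> value state

    def apply_block(self, block_height: int, changes: dict) -> None:
        # 1. Durability: record changes in the WAL before touching state,
        #    so a crash mid-write can always be replayed.
        for key, value in changes.items():
            self.wal.append((block_height, key, value))
        # 2. One batched update to the flat store (sequential-friendly I/O;
        #    cryptographic commitments are produced asynchronously later).
        self.kv.update(changes)

    def recover(self) -> dict:
        """After a crash, replay the WAL in order to rebuild the flat state."""
        rebuilt = {}
        for _, key, value in self.wal:
            rebuilt[key] = value
        return rebuilt

store = StateStore()
store.apply_block(1, {"alice": 100, "bob": 50})
store.apply_block(2, {"alice": 70, "bob": 80})
# Replaying the log reproduces the exact flat state:
assert store.recover() == store.kv == {"alice": 70, "bob": 80}
```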

    Instead of recomputing a Merkle root for the entire state after every block (which would be infeasible at 200k TPS), Sei Giga uses a pairing-based cryptographic accumulator to produce state proofs and commitments. This accumulator can be thought of as a compact mathematical object that can absorb state updates and later provide proofs of inclusion or exclusion for elements, without needing to store a giant tree. It supports batch updates and constant-size proofs, leveraging advanced cryptography (likely based on techniques referenced in research, possibly something like a vector commitment or polynomial commitment scheme as hinted by “[VB20]” in the whitepaper). The accumulator is updated asynchronously alongside the state: after each block or after a set of blocks, the accumulator is recomputed or incrementally updated to reflect the new state. Because it’s not tied to each transaction synchronously, it can lag a bit and catch up in batches. This means validators can commit state changes quickly to their database and defer the overhead of cryptographic summarization.
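    The key property here is deferral: writes land immediately, while the commitment catches up in batches. The toy sketch below illustrates only that scheduling pattern; a single hash over the state stands in for the real pairing-based accumulator (which additionally supports constant-size inclusion proofs), and all names are hypothetical.

```python
import hashlib

def commit(state: dict) -> str:
    """Toy stand-in for the accumulator value: one digest over the sorted
    state. Unlike the real accumulator, this supports no membership proofs;
    it only illustrates deferred, batched commitment."""
    h = hashlib.sha256()
    for key in sorted(state):
        h.update(f"{key}={state[key]};".encode())
    return h.hexdigest()

class DeferredCommitter:
    """Apply writes immediately; refresh the commitment every k blocks."""
    def __init__(self, k: int = 4):
        self.k = k
        self.state = {}
        self.pending_blocks = 0
        self.commitment = commit(self.state)
        self.committed_height = 0

    def apply_block(self, height: int, changes: dict) -> None:
        self.state.update(changes)        # fast path: plain KV writes
        self.pending_blocks += 1
        if self.pending_blocks >= self.k:
            # Background summarization catches up in one batch.
            self.commitment = commit(self.state)
            self.committed_height = height
            self.pending_blocks = 0

dc = DeferredCommitter(k=4)
for h in range(1, 5):
    dc.apply_block(h, {f"key{h}": h})
# After 4 blocks the commitment has caught up to height 4 in one batch,
# instead of being recomputed synchronously after every transaction.
```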


    The result is a hybrid approach: fast immediate writes to the state (for throughput) and deferred, batched cryptographic proof generation (for security and verifiability). Any node or light client that wants to verify the state at a certain block can obtain a state commitment (for example, the accumulator’s value) and a proof for the specific keys of interest, and can check that against the accumulator without every node having to recompute a full Merkle path. Because the accumulator yields succinct proofs, even light clients or nodes that didn’t store the whole state can trust that a given account or storage slot has a certain value at a particular block, as long as they trust the accumulator commitment published by the network.

    Additionally, Sei Giga’s storage design incorporates tiered storage strategies. Not every full node needs to hold all historical state forever. For scalability, recent state (the last X days or weeks of data) can be kept on fast storage for quick access by dApps and users, while older state data can be moved to slower archival storage or even pruned if the accumulator can provide proof for it. Data nodes in the network may opt to store full history and serve archived queries, while most validators might only keep the current working set and rely on the ability to reconstruct or validate older state via the accumulator proofs when needed. This ensures that as the chain grows (potentially terabytes of data per year with the high throughput), the storage burden doesn’t all land on every participant in a way that would centralize the network. Instead, one can participate with a focus on current state and consensus, and leave deep history to specialist archive nodes, all while the system still provides cryptographic guarantees for that history.

    In summary, the storage subsystem avoids the typical performance penalty of maintaining a Merkle tree by switching to a flat data store for raw reads/writes and an asynchronous accumulator for state commitment. This means applying thousands of transactions is primarily a matter of database writes (which RocksDB handles efficiently) plus occasional background cryptographic ops, rather than thousands of hash computations blocking each transaction. The approach preserves security — because the accumulator and proofs ensure any node can verify state correctness — and it improves decentralization by enabling light clients and partial storage nodes to exist. As throughput scales up, this storage design should gracefully handle the load, whereas a naive approach might choke on the sheer volume of state updates.
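    The hybrid write path described above can be illustrated with a toy sketch. This is not Sei Giga's actual accumulator (which is pairing-based with constant-size proofs); a plain hash over the sorted state stands in for the commitment, purely to show the split between fast immediate writes and deferred, batched summarization:

```python
import hashlib

class DeferredCommitmentStore:
    """Toy model of the hybrid storage idea (illustrative only): writes
    hit a flat key-value store immediately, while the state commitment
    is refreshed lazily over batches. A SHA-256 over sorted entries
    stands in for the real pairing-based accumulator."""

    def __init__(self):
        self.kv = {}          # flat store: fast reads/writes, no trie
        self.dirty = set()    # keys changed since the last refresh
        self.commitment = ""

    def put(self, key, value):
        # Hot path: a plain database write, no hashing per transaction.
        self.kv[key] = value
        self.dirty.add(key)

    def refresh_commitment(self):
        # Background/batched path: summarize the state once per batch
        # of blocks instead of once per transaction.
        h = hashlib.sha256()
        for k in sorted(self.kv):
            h.update(k.encode())
            h.update(self.kv[k].encode())
        self.commitment = h.hexdigest()
        self.dirty.clear()
        return self.commitment

store = DeferredCommitmentStore()
for i in range(1000):
    store.put(f"acct{i}", str(i))   # thousands of cheap writes...
root = store.refresh_commitment()   # ...then one batched summarization
```

    Note the hot path does no cryptography at all; in the real design the deferred step would also emit succinct per-key proofs against the accumulator, which this toy hash cannot do.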

    Sei Giga Performance Metrics and Improvements  

    Benchmarks show Sei Giga reaching ~200,000 TPS with under 400 ms finality—placing it in the top tier of Layer-1 performance. If these figures hold on mainnet, Sei could become the fastest EVM-compatible Layer 1 and one of the quickest blockchains overall, rivaling platforms like Solana, Sui, and Aptos. All performance metrics stated below were achieved on an internal devnet using simple token transfers.

    Throughput: Internal devnet tests show Sei Giga sustaining ~5 gigagas per second—roughly 200,000 to 250,000 simple token transfers per second on a 40-validator network, assuming ~21k gas per transfer and zero contention. This represents a 16× leap over Sei v2’s ~12.5k TPS ceiling. If similar performance holds on mainnet, Sei Giga’s bandwidth would approach Web2-scale throughput, unlocking latency-sensitive use cases like high-frequency trading and real-time gaming. These figures are based on synthetic traffic in a controlled devnet environment; actual mainnet throughput will depend on factors such as real-world contract complexity, network latency, and validator hardware. Real applications with higher gas-per-transaction will naturally achieve lower TPS.
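    As a back-of-the-envelope check using the report's own figures, the TPS ceiling falls directly out of the gas budget; the 150k-gas swap below is an illustrative number, not a measured one:

```python
gigagas_per_second = 5_000_000_000   # ~5 gigagas/s from devnet tests
gas_per_transfer = 21_000            # standard simple token transfer

max_tps = gigagas_per_second / gas_per_transfer
print(round(max_tps))                # 238095, i.e. the ~200k-250k range

# Richer contract calls burn more gas per transaction, so realized
# TPS drops accordingly (150k gas is a hypothetical DeFi-style call):
gas_per_swap = 150_000
print(round(gigagas_per_second / gas_per_swap))  # 33333
```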

    Latency (Finality):  Sei Giga finalizes blocks in under 400 milliseconds under normal conditions—an incremental improvement over Sei v2, which already achieved ~390 ms finality using its Twin-Turbo consensus. While the raw latency gain is modest (<1.5×), Giga’s architecture introduces more consistent performance at scale and under load, helping sustain low-latency finality even as throughput increases.

    Comparing Sei v2 vs Giga: Overall, the Giga upgrade brings an order-of-magnitude performance boost across the board. By decoupling execution from consensus, enabling multi-proposer parallelism, committing multiple blocks per round, reducing consensus communication steps, and upgrading data availability, Sei Giga achieves:

    • 16x higher execution performance – ~200k TPS vs 12.5k TPS on Sei v2.
    • 2x quicker consensus finality – roughly 1.5 rounds vs 3 rounds per decision (via cut-based finalization).
    • 70x increase in block production rate – up to 180 blocks vs 2.5 blocks per interval previously (due to multiple lanes and batched commits).

    These improvements underscore that Sei Giga is not a linear upgrade but a ground-up redesign for scale.

    Comparison with Other High Performance Blockchains

    Sei Giga’s ambitious combination of multi-proposer consensus and asynchronous, parallel execution places it at the cutting edge of Layer-1 design. It’s informative to compare how it stands relative to other projects and proposals aiming to scale blockchain performance, especially those involving parallelization or multi-block-producer ideas:

    Solana Pre-Firedancer (Jito and Multi-Leader Research): Solana is often cited for its high throughput and parallel execution model via the Sealevel runtime. Solana achieves parallelism by requiring transactions to specify which state accounts they will read/write, enabling non-overlapping transactions to run concurrently on its single leader. However, Solana still uses a single-proposer (single leader) at any given slot – transactions are all sequenced by one leader (which rotates over time). This means Solana does not natively have multiple block proposers working at the same time; the network is fast (400ms slots) but still one block at a time. Projects like Jito on Solana are focused on MEV and block building; Jito introduces an off-chain auction where multiple builders can compete to produce optimized blocks for the single leader. This introduces the concept of multiple parallel block builders, but ultimately only one builder’s block wins each slot. Solana’s community (including input from Anatoly Yakovenko and others) has discussed concepts akin to Multiple Concurrent Proposers (MCP) to further increase throughput and censorship-resistance, where more than one leader could produce blocks in the same slot. Ideas like application-specific serialization (ASS) have been floated, where different applications (like specific DEXs) could have their own ordering rules even under a multi-proposer regime, to allow things like prioritized order cancellations. These ideas, however, remain at the research or early proposal stage for Solana. In contrast, Sei Giga is implementing a form of multi-leader consensus now. Solana’s Turbine propagation and proof of history give it a unique approach to fast block dissemination, but if Solana were to move to MCP, it would face similar challenges of merging multiple sub-blocks – something Sei is tackling head-on with Autobahn. 
In essence, Sei Giga can be seen as taking some of Solana’s strengths (parallel processing, high throughput) and pushing them further by removing the single leader bottleneck entirely, all while maintaining EVM compatibility (Solana uses its own VM). The trade-off is additional complexity in consensus, but if successful, Sei Giga would achieve a level of decentralization in block production that Solana’s current design doesn’t have (Solana relies on economic incentives like stake-weighted leaders, and it is addressing builder centralization with Jito and potentially stake re-weighting).
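    The access-list scheduling idea described above can be sketched with a toy greedy batcher. This is illustrative only; it is not Solana's actual runtime or Sei's scheduler, and the transaction names and account sets are made up:

```python
def parallel_batches(txs):
    """Greedy sketch of access-list scheduling: transactions that
    declare disjoint accounts share a batch and can run concurrently;
    a conflicting transaction starts a new batch."""
    batches = []  # each entry: (names_in_batch, locked_account_set)
    for name, accounts in txs:
        for batch, locked in batches:
            if not (accounts & locked):   # no overlap -> parallel-safe
                batch.append(name)
                locked |= accounts
                break
        else:
            batches.append(([name], set(accounts)))
    return [batch for batch, _ in batches]

txs = [("swap_ab",  {"pool_ab", "alice"}),
       ("swap_cd",  {"pool_cd", "bob"}),    # disjoint -> same batch
       ("swap_ab2", {"pool_ab", "carol"})]  # conflicts -> new batch
print(parallel_batches(txs))  # [['swap_ab', 'swap_cd'], ['swap_ab2']]
```

    Real schedulers additionally distinguish reads from writes (many readers can share an account), but the core insight is the same: declared access sets make safe parallelism a cheap set-intersection check.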

    Ethereum (MEV-Boost and Layer 2s): Ethereum mainnet has chosen a different path to scaling, focusing on roll-ups (Layer 2 networks) and danksharding for throughput, and Proposer/Builder Separation (PBS) to address MEV and proposer monopolies. In Ethereum’s PBS via MEV-Boost, there are indeed many block builders who can propose blocks to a validator, but in each slot only one block is ultimately chosen by the validator to be the canonical one. This is not multi-proposer in the protocol sense – it’s still one chain of single blocks, just that block contents might come from a competitive marketplace. Ethereum’s consensus (now Proof-of-Stake via the Beacon Chain) remains single-proposer and sequential, and every block’s transactions are executed by all validators in lockstep as part of reaching consensus (although execution and consensus client are separated processes, they still operate serially per slot). By contrast, Sei Giga internalizes parallel block proposals into the L1 protocol and doesn’t rely on external builders or auctions to increase throughput – it directly orders multiple blocks per round. Where Ethereum pushes scale to L2, with each L2 doing its own execution and posting compressed results to L1, Sei Giga attempts to scale the L1 itself using parallelism and asynchronous processing. One can liken Sei’s design to a single chain mimicking some advantages of danksharding or L2: multiple lanes (like mini-blockchains) that periodically reconcile into one chain (similar to how shards would converge or how many L2 proofs feed into L1). Ethereum researchers have theorized about higher throughput via concepts like DAG-based BFT or pipelined hotstuff, and even multi-proposer slots to improve censorship resistance (e.g., having two proposals per slot to reduce the impact of one censoring validator), but these are far from implementation on Ethereum mainnet due to their complexity and the need for extreme caution. 
In the meantime, Ethereum throughput is being increased by moving execution load off-chain (rollups) rather than accelerating on-chain execution. Sei Giga’s approach can be seen as a more monolithic scaling: keep the execution on L1 but make that L1 so efficient (via concurrency and multi-proposal) that it can match what a bunch of L2s might collectively achieve, at least for certain use cases. This could attract developers who prefer a single unified environment (no cross-rollup latency, bridging issues, and capital fragmentation) but still need high performance.

    Monad: Monad is a new Layer-1 project that also aims to scale EVM performance dramatically. Monad's strategy, however, differs in key respects. It focuses heavily on parallel execution on a single-proposer chain. Monad's architecture uses optimistic concurrency (much like Sei Giga's Block-STM approach) to run many transactions in parallel, and it claims extremely high throughput by optimizing the EVM execution engine and transaction scheduling. Monad has also introduced a degree of asynchronous execution – separating consensus from execution – so that consensus can lock in an order quickly while execution proceeds in parallel. This aligns closely with what Sei Giga does on the execution side. The key difference is on the consensus side: Monad, as far as public information suggests, does not implement a multi-proposer consensus. It likely uses a fast single-proposer consensus (perhaps a variation of HotStuff or Tendermint with pipelining) to achieve roughly one-second blocks. Thus, all of Monad's throughput gains come from executing a single block's transactions faster (via parallelism and perhaps larger blocks), but it is still one block per round. Sei Giga, on the other hand, gains additional throughput by committing many blocks per round. Both approaches have merits: Monad takes a single-leader, one-second HotStuff-style chain and wrings more throughput out of it by running every block's transactions in optimistic parallel; unlike Sei Giga's multi-proposer Autobahn, Monad still produces one block per slot, so its scaling ceiling is the size of that block rather than the number of concurrent leaders. In any case, both Monad and Sei Giga represent a new wave of chains that are not satisfied with the “one CPU, one block at a time” paradigm of previous EVM chains.

    MegaETH: MegaETH is another project often cited in the high-performance EVM conversation, but it follows a distinct approach focused on vertical scaling and client-side optimizations. Rather than redesigning consensus or aggressively pursuing parallelism, MegaETH aims to push the boundaries of the traditional EVM by optimizing every layer of sequential execution—using aggressive caching, state management improvements, and potentially even specialized hardware. While sometimes described as “single-threaded,” this is a simplification: MegaETH still leverages internal parallelism for validation and runs on multi-core servers. Its philosophy is to “scale up” rather than “scale out,” reducing inefficiencies in EVM processing paths and trie structures to maximize throughput on a per-core basis.

    MegaETH claims performance in the range of 100,000 TPS with block times as low as sub-millisecond to 10 milliseconds—well beyond what most Layer-2s advertise. It does not substantially alter Ethereum’s consensus model and instead operates as a Layer-2 or sidechain that squeezes maximal efficiency from EVM’s current architecture. The upside is simplicity, determinism, and compatibility. The trade-off, however, is a natural ceiling: vertical scaling alone may not suffice for workloads demanding sustained throughput in the hundreds of thousands or more. In contrast, Sei Giga embraces horizontal scaling—combining multi-proposer consensus and parallel execution to achieve orders-of-magnitude gains. In the long run, hybrid approaches that combine MegaETH-style client optimizations with architectures like Sei Giga’s could define the frontier—especially as GPU or FPGA acceleration enters the conversation.

    In addition to these, it's worth mentioning that other high-throughput chains like Aptos and Sui (which are not EVM-compatible) have implemented parallel execution (Block-STM in Aptos, and a similar mechanism in Sui) and very fast consensus (DAG-based Narwhal and Bullshark, among others). They prove that parallel execution can work in practice and yield significant performance gains. Sei Giga is taking those lessons and bringing them to an EVM chain, while also adding multi-proposer consensus, which neither Aptos nor Sui does (their consensus still has one leader at a time, albeit with quick finality). Sei is thus positioned at an interesting intersection of ideas proven in different contexts.

    Predictive Impact Analysis

    We attempt to forecast the potential impact of a major gas limit increase on Sei's network activity by drawing on comparative case studies from Base and Avalanche—two chains that have recently raised their gas limits substantially. From these, we establish three forward-looking scenarios for Sei's post-Giga performance and evaluate implications for transactions, addresses, contracts deployed, fees, and ecosystem scalability.

    Methodology

    Historical Case Studies
    We use two distinct throughput expansion models for comparison:

    • Base: Multiple incremental gas limit increases across 2024–2025, allowing us to evaluate marginal changes in network activity using rolling 14-day windows before and after each increase.
    • Avalanche: A single-event upgrade (Cortina on April 25, 2023) which increased gas limits from 8M to 15M. Uplift was evaluated over a 60-day period pre- and post-upgrade.
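    The pre/post window comparison can be sketched as follows. This is a minimal stand-in for the methodology, using a synthetic activity series rather than real Base or Avalanche data; the window lengths mirror the ones named above (14 days for Base's rolling windows, 60 for Avalanche's Cortina analysis):

```python
def window_uplift(daily_values, event_index, window):
    """Percentage change between the mean of a daily metric in the
    `window` days before an upgrade and the `window` days after it.
    The upgrade day itself is excluded from both windows."""
    pre = daily_values[event_index - window:event_index]
    post = daily_values[event_index + 1:event_index + 1 + window]
    pre_avg = sum(pre) / len(pre)
    post_avg = sum(post) / len(post)
    return (post_avg - pre_avg) / pre_avg * 100

# Synthetic series: flat activity, an upgrade on day 14, then a step up.
daily_txs = [100] * 14 + [105] + [150] * 14
print(round(window_uplift(daily_txs, 14, 14), 1))  # 50.0
```

    In the actual study each metric is additionally normalized against BTC price to strip out broader market moves before the uplift is computed.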

    For both chains, we extracted and normalized daily:

    • Successful transactions
    • Unique active addresses
    • Contracts deployed
    • Average fees
    • BTC price (to adjust for broader market conditions)

    Sei Baseline
    Baseline values for Sei were computed using the trailing 90-day average prior to the Giga upgrade announcement:

    • Transactions: ~672,000
    • Active Addresses: ~311,000
    • Contracts Deployed: ~4,854
    • Average Fee: ~0.00055 SEI

    Uplift Modeling
    To model Sei's post-Giga performance, we established three scenarios:

    • Conservative: Reflects Base's typical low-end uplift per upgrade
    • Moderate: A blended average of Base and Avalanche behavior
    • Aggressive: Mirrors Avalanche's full impact and Base’s upper-quartile spikes
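    Putting the baseline and scenarios together, the projection takes the shape below. The multipliers are illustrative placeholders, not the report's fitted values, except that the aggressive pair is chosen to reproduce the "~2% revenue on +20% volume" relationship noted later in this report:

```python
# Trailing-90-day Sei baseline from the report.
baseline = {
    "transactions": 672_000,
    "active_addresses": 311_000,
    "contracts_deployed": 4_854,
    "avg_fee_sei": 0.00055,
}

# Hypothetical uplift multipliers: activity expands (>1.0) while
# average fees compress (<1.0), mirroring the Base/Avalanche pattern.
scenarios = {
    "conservative": {"transactions": 1.05, "active_addresses": 1.03,
                     "contracts_deployed": 1.01, "avg_fee_sei": 0.95},
    "moderate":     {"transactions": 1.12, "active_addresses": 1.08,
                     "contracts_deployed": 1.05, "avg_fee_sei": 0.90},
    "aggressive":   {"transactions": 1.20, "active_addresses": 1.15,
                     "contracts_deployed": 1.10, "avg_fee_sei": 0.85},
}

def project(name):
    """Scale each baseline metric by its scenario multiplier."""
    return {k: baseline[k] * scenarios[name][k] for k in baseline}

agg = project("aggressive")
revenue_multiplier = (scenarios["aggressive"]["transactions"]
                      * scenarios["aggressive"]["avg_fee_sei"])
print(round(agg["transactions"]))   # 806400
print(f"{revenue_multiplier:.2f}")  # 1.02: +20% volume, ~+2% revenue
```

    The revenue multiplier (volume uplift times fee multiplier) is the quantity to watch post-upgrade: fee compression can nearly cancel the topline gain from higher throughput.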

    Comparative Insights

    Base (Incremental Increases)
    The relationship between throughput and network activity was best observed through Base's series of incremental gas limit increases across 2024 and 2025. Each increase served as a natural experiment, providing insight into throughput elasticity and sector responsiveness. Notably, while transaction counts often spiked immediately post-upgrade, active address growth followed with a slight delay, typically 3-5 days later. This suggests that while lower transaction costs and capacity relief prompt activity, user base expansion is contingent on ecosystem response and usability.

    When viewed through the lens of 7-day rolling averages, the most consistent and substantial gains in daily transactions were observed in EOAs and Finance, which rose by over 600,000 and 267,000 transactions respectively following gas capacity upgrades. AI-related activity also grew meaningfully, suggesting deeper engagement from automated agents and data-centric use cases. In contrast, Bot and Social transactions declined sharply on a smoothed basis—down 767,000 and 491,000 respectively—indicating a shift in behavioral patterns rather than short-term noise. While memecoins and gaming saw raw volume surges in isolated cases, their 7-day averages declined, pointing to transient activity rather than structural growth. These surges exhibited a logarithmic pattern, with successive increases yielding diminishing marginal returns in some sectors—highlighting the importance of pacing and developer readiness.

    Avalanche (Single-Event Increase)
    Avalanche’s Cortina upgrade on April 25, 2023, triggered a clear and sustained increase in network activity. In the 60 days following the upgrade, successful transactions nearly doubled (+99.7%) and daily active addresses surged by over 81%, highlighting strong user engagement and throughput utilization. 

    However, contract deployments rose only ~2% overall, despite a notable spike leading into and just after the upgrade, suggesting that while the upgrade catalyzed usage from existing applications and users, it did not drive a corresponding wave of new development activity. This pattern underscores that while throughput expansions can unlock latent demand, deeper ecosystem growth may require more than infrastructure upgrades alone.

    Key Findings and Ecosystem Implications  

    Giga Enables Elastic Growth, But Not Linearly: While Sei's architecture can theoretically process hundreds of millions of gas per second, actual network activity will only scale if complementary applications and developer support are in place. The uplift in Base was incremental and marginal per upgrade, while AVAX saw sharp but non-compounding gains. Sei must manage expectations accordingly.

    Fee Compression Enhances Competitiveness While Tapering Revenue: All three scenarios show a decline in average fees, even with volume increases—driven by increased capacity and fee market elasticity. This positions Sei as a favorable platform for high-frequency or microtransaction-intensive sectors (DeFi, EOAs, and gaming specifically). While aggressive growth in transaction count helps lift topline revenue, the average fee reduction caps the magnitude of those gains. For instance, under the aggressive scenario, revenue rises only ~2% despite a 20% increase in transaction volume. This highlights the delicate balance between scaling throughput for user adoption and preserving sustainable revenue models for validators and ecosystem stakeholders.

    Contracts Are the Bottleneck or Breakthrough: Across all cases, contract deployment remained low-growth unless proactively incentivized. The aggressive scenario assumes targeted growth initiatives (e.g., grant programs, gas credits). Without these, gains in dev activity may lag user growth.

    Observability Infrastructure Is Critical: A 500x jump in capacity demands robust monitoring. Tools for mempool tracking, sector-level dashboards, and liquidity flow analysis will be necessary to understand and steer usage effectively.

    Areas of Opportunity

    With Sei Giga delivering a 500x capacity leap, there is a strong case for sequencing the deployment gradually. This would replicate the success of Base's incremental model, allowing time for application-level scaling, reducing the risk of network shocks, and maximizing sustained adoption. A gradual rollout may also cushion short-term revenue declines caused by fee compression: as the gas limit scales, a proportionate increase in activity and gas usage will be needed to offset the shortfall.

    Given the high sensitivity of contract deployments to throughput relief, Sei can foster development by offering targeted grants, fee discounts, or accelerator programs. Prioritizing segments like DeFi, EOAs, and gaming—where responsiveness was highest—may yield immediate network effects.

    The dramatic jump in transaction capacity will necessitate strong observability tools. There is a clear opportunity to invest in analytics, mempool tracking, and liquidity monitoring systems to ensure real-time insights into network behavior and emergent sectoral trends.

    Conclusion  

    Sei Giga is more than a throughput upgrade—it is a full re-architecture of the Sei blockchain that unlocks performance previously out of reach for EVM-compatible chains. With Autobahn consensus, asynchronous execution, and a multi-lane block production pipeline, Sei Giga transforms the network into the first EVM L1 to natively support parallel block proposals and execution at scale.

    The predictive modeling outlined in this report, grounded in empirical data from Base and Avalanche, reinforces both the opportunity and the caution required for such a leap. While Sei's projected metrics—more than doubling transactions and multiplying address and contract activity—suggest massive upside, the path to that scale must be intentional. Incremental models like Base's demonstrated the value of pacing, ecosystem coordination, and feedback loops. Avalanche's one-time upgrade, while positive, did not yield the same compound benefits.

    Ultimately, Giga's architecture makes these outcomes possible, but not inevitable. Realizing Sei's full potential will depend on how thoughtfully capacity is deployed, how actively developer ecosystems are supported, and how quickly the network can adapt to a radically different scale of throughput.

    The next phase of Sei's evolution requires pairing breakthrough engineering with coordinated ecosystem activation. If done right, Sei Giga may set new standards not only for throughput, but for blockchain scaling at large.
