On The Future Of Ethereum
Charles Guillemet, CTO at Ledger, reflects on the future of Ethereum and the protocol’s potential to support Web3’s mainstream adoption.
Things to know:
- Ethereum, the world's second-largest blockchain by market cap, recently underwent a significant software upgrade called The Merge, shifting from Proof-of-Work to Proof-of-Stake consensus.
- While a successful transition, questions remain about Ethereum's scalability and readiness for the next stage of Web3's development.
- This article delves into Ethereum's scaling challenges and evaluates its ability to meet mainstream adoption needs. It argues that, while a flawless scaling solution does not exist yet, Layer 2s including Optimistic and Validity Rollups hold the most potential for increased scalability with a good tradeoff within the blockchain trilemma.
- More precisely, Optimistic and Validity Rollups, using ZKP technology, will be key in shaping the future of Ethereum by enabling trustless, complex, and permissionless transactions at scale.
Scaling Ethereum: The Quest For A Solution
Ethereum, like many blockchains, currently faces limited transaction processing capacity. Despite supporting ETH transfers and thousands of DApps, increased usage has resulted in slower and more expensive transactions.
This situation drove insecure design decisions to mitigate high fees, such as off-chain centralized services for NFT marketplaces. The introduction of EIP-1559 improved fee estimation and incentivization, but hasn't significantly improved scalability. The scalability challenge is well captured by the popular blockchain trilemma of Scalability, Decentralization, and Security.
The blockchain trilemma asserts that it's not possible to simultaneously achieve three properties: decentralization, security, and scalability. Sacrifice decentralization, and it's much easier to build a scalable and secure system, as Web2 has already proven. Prioritize scalability by weakening your consensus mechanism, and you get a pointless, unsafe blockchain. Solving the blockchain trilemma is incredibly complex and has been an ongoing challenge for the last decade.
Boosting Throughput: Multiple Approaches
Over the years, many solutions have been proposed to address the Ethereum blockchain trilemma. A popular suggestion is to build bigger blocks or produce more blocks per second. While it may seem like a good idea, it intensifies demands on blockchain nodes and validators/miners for consensus, leading to increased centralization. It also slows block propagation, increasing the risk of reorgs and hence security risks.
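To see why bigger or faster blocks only go so far, a rough back-of-envelope calculation helps. The figures below are approximate mainnet values (gas limit, transfer cost, slot time) and the workload of pure ETH transfers is an idealized assumption, so the result is an upper bound, not a measured throughput:

```python
# Back-of-envelope throughput estimate for a transfers-only workload.
# Figures are approximate mainnet values; real mixed workloads are lower.
GAS_LIMIT_PER_BLOCK = 30_000_000   # approximate block gas limit
GAS_PER_TRANSFER = 21_000          # cost of a plain ETH transfer
SECONDS_PER_BLOCK = 12             # post-Merge slot time

transfers_per_block = GAS_LIMIT_PER_BLOCK // GAS_PER_TRANSFER
tps = transfers_per_block / SECONDS_PER_BLOCK
print(f"{transfers_per_block} transfers/block, ~{tps:.0f} TPS upper bound")

# Doubling the block size doubles this ceiling -- but also doubles the
# bandwidth, storage, and validation burden on every full node.
tps_2x = (2 * GAS_LIMIT_PER_BLOCK // GAS_PER_TRANSFER) / SECONDS_PER_BLOCK
print(f"with 2x blocks: ~{tps_2x:.0f} TPS, still far from Visa-scale")
```

Even doubling the block size leaves throughput orders of magnitude below a Visa-like system, while every node pays the extra cost.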
An alternative is creating a side-chain to reduce the load on the main chain, as seen with the Polygon network. This system involves security tradeoffs, as it relies on a weaker consensus than Ethereum (backed by a much smaller market cap). While it may suit specific use cases, it often leads to centralization and does not fully address Ethereum's scalability issues. And in any case, it's still far from the tens of thousands of transactions per second needed to run a Visa-like system.
Layer 2s & Sharding: Solutions To Ethereum’s Scalability Challenges?
Sharding and Layer 2s are widely seen as the best options for Ethereum to scale while preserving the blockchain trilemma.
On the one hand, blockchain sharding has long been considered the key to scalability in the blockchain world. It was the main feature of Eth2.0 in 2019, alongside the move to the BLS signature scheme, the PoS consensus mechanism, and the implementation of eWASM. On the other hand, Layer 2s have seen rapid advancement through ongoing research in rollup mechanisms. Let's explore the current state of these two competing approaches and what their future may hold.
How Does Blockchain Sharding Work?
The term sharding stems from database science, where a database is partitioned horizontally into smaller, manageable pieces called shards. Each shard is a separate database that contains a data subset. Sharding is used to scale databases by distributing data and queries across multiple servers, allowing the database to handle a larger volume of data without needing a single, powerful server.
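The database technique above can be sketched in a few lines. This is a minimal toy illustration (the hash function and shard count are arbitrary choices, not from any real system): each key is routed to exactly one shard, so reads and writes touch a single partition.

```python
# Minimal horizontal-partitioning (sharding) sketch: rows are routed to one
# of N shards by hashing their key, so each query touches a single shard.
NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # A stable, deterministic hash keeps the key->shard mapping fixed.
    return sum(key.encode()) % NUM_SHARDS

def put(key: str, value):
    shards[shard_for(key)][key] = value

def get(key: str):
    return shards[shard_for(key)].get(key)

put("alice", 10)
put("bob", 20)
assert get("alice") == 10  # only the owning shard is consulted
```

Because shards are independent, queries for different keys can be served in parallel by different servers, which is exactly the property blockchain sharding tries to import.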
This idea of leveraging sharding on blockchains quickly became popular among developers. Blockchain sharding divides the network into smaller sub-networks called shards, which enable processing transactions in parallel. In a sharded blockchain, each shard is a separate chain that operates independently. This means that each node (miner/validator) can focus on a given shard to create a local consensus. This brings two benefits: first, transactions can be processed in parallel; second, each shard has fewer transactions to manage. Sounds perfect, so what's the catch?
Sharding Challenges: Consensus, Cross-Shard Communication & Security
With blockchain sharding, it's not easy to define the overall consensus. What is the network's global consensus? Is it the union of each local consensus? How and where do you anchor these local consensuses to create a global one that anyone can trust? Such questions are not easy to answer.
Another significant challenge to implementing sharding is cross-shard communication. When it comes to databases, you don’t have this issue since data is split over different shards, enabling you to read or write them independently without real problems. When it comes to blockchain shards executing code, this is much more complex. Each shard must be able to run its own code, consult the state of a different shard, and execute code on another. This is not trivial.
This sharding difficulty also relates to the problem of security. This issue has been studied by experts, and different sharding schemes have been shown to be prone to many new forms of attack. First of all, sharding challenges the consensus mechanism itself. If you have 10 shards and miners are distributed evenly per shard, taking over one shard is 10 times less costly than taking over the overall blockchain. Schematically, the 51% attack translates into a 5.1% attack. One solution to this is to change the consensus mechanism from Proof-of-Work to Proof-of-Stake. This was the primary motivation for Ethereum's transition to Proof-of-Stake.
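The 51% → 5.1% arithmetic above can be made explicit. A minimal sketch, assuming attack power must only match the honest power inside a single shard and that honest power is split uniformly across shards:

```python
# Cost of attacking one shard, assuming honest hash power (or stake) is
# distributed uniformly across shards: the global fraction an attacker
# needs shrinks linearly with the shard count.
def attack_threshold(total_attack_fraction: float, num_shards: int) -> float:
    """Fraction of *global* power needed to control a single shard."""
    return total_attack_fraction / num_shards

# The article's example: a 51% attack on a 10-shard chain costs ~5.1%.
print(f"{attack_threshold(0.51, 10):.3f}")
```

Real sharding designs mitigate this with random validator assignment and frequent reshuffling, which is far easier under Proof-of-Stake than Proof-of-Work.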
On the security front, the effects of The Merge have been widely debated. On the decentralization front, the updated Ethereum consensus favored centralization, given that token ownership determines network control.
Regarding Ethereum’s new consensus, several parameters incentivized centralization:
- Running your own Ethereum node is not straightforward, requiring significant resources and uptime. This prevents your Ethereum wallet from embedding a node and running it on your laptop, let alone your mobile.
- The 32 ETH threshold, and the fact that it was not possible to unstake until an unknown date, created a pooling and liquid staking market where Lido and the exchanges took most of the share. Today, four actors control more than 55% of the coins staked on the Ethereum blockchain (Lido 29.2%, Coinbase 13.1%, Kraken 7.6%, and Binance 6.2%).
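The concentration figures quoted above add up quickly:

```python
# Summing the staking shares quoted in the article (percentages of staked ETH).
staked_share = {"Lido": 29.2, "Coinbase": 13.1, "Kraken": 7.6, "Binance": 6.2}
total = sum(staked_share.values())
print(f"Top 4 actors: {total:.1f}% of staked ETH")
assert total > 55  # more than the 55% threshold cited above
```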
All in all, blockchain sharding is an interesting idea for increasing scalability, but it requires a complex architecture, specifically when it comes to defining the overall consensus and implementing an efficient cross-shard protocol. A lot of work has been done toward these goals, but we're still far from implementing them and assessing their impact on the blockchain trilemma.
Rollups to the rescue
Rollups compress multiple transactions into a single transaction for Ethereum to execute, enabling off-chain execution of many transactions with Ethereum’s security for settlements. There are two main implementations of this idea:
- Optimistic Rollups, allowing users to issue fraud proofs in case of dispute
- ZK-Rollups where the L2 network issues validity proofs.
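The compression idea common to both rollup types can be sketched as follows. This is a toy model, not any real protocol: the state, the transaction format, and the use of a plain SHA-256 hash in place of a Merkle root are all simplifying assumptions.

```python
import hashlib
import json

# Toy rollup sketch: many transfers are executed off-chain, then settled
# on L1 as a single small object carrying the new state commitment.
state = {"alice": 100, "bob": 50}

def apply_tx(state: dict, tx: tuple) -> None:
    sender, receiver, amount = tx
    assert state.get(sender, 0) >= amount, "insufficient balance"
    state[sender] -= amount
    state[receiver] = state.get(receiver, 0) + amount

def state_root(state: dict) -> str:
    # Stand-in for a Merkle root: hash of the canonically serialized state.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

batch = [("alice", "bob", 30), ("bob", "alice", 5)]
for tx in batch:
    apply_tx(state, tx)

# Only this small settlement object needs to hit L1, however large the batch.
l1_settlement = {"txs": len(batch), "new_root": state_root(state)}
```

The settlement object stays the same size whether the batch holds two transactions or two thousand, which is where the throughput gain comes from; the two rollup families differ only in how the claimed `new_root` is defended (fraud proofs vs. validity proofs).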
Optimistic Rollups and finality issue:
Optimistic Rollups were designed to be the most EVM-compatible rollups. They are "optimistic" because they assume users are not submitting fraudulent transactions, allowing state to be written directly.
There is a mechanism using fraud proofs that L2 validators can initiate to challenge off-chain transactions within a dispute window (7 days on Optimism). A valid fraud proof identifies fraudulent steps in the transaction process, leading to the reversal of the transaction and a penalty for the approving validator. This improves transaction throughput while preserving Ethereum's mainchain security.
However, Optimistic Rollups bring a new challenge: finality. With blockchains, confirmed transactions are considered permanent and irreversible, but this depends on the consensus mechanism. For instance, PoW chains consider transactions final when the probability of a reorg is low; Bitcoin transactions are typically considered final after 6 confirmations. With Optimistic Rollups, transactions can still be reversed for several days after submission, which creates a finality challenge and a different tradeoff.
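The dispute-window mechanics above can be modeled in a few lines. This is an illustrative sketch only (the day-granularity clock and the `Batch` class are inventions for the example, not Optimism's actual design):

```python
# Toy model of optimistic-rollup finality: a batch is accepted immediately,
# but only becomes final once the challenge window elapses unchallenged.
CHALLENGE_WINDOW_DAYS = 7  # e.g. Optimism's dispute period

class Batch:
    def __init__(self, posted_day: int):
        self.posted_day = posted_day
        self.reverted = False

    def challenge(self, current_day: int, fraud_proof_valid: bool) -> None:
        # Fraud proofs are only accepted inside the dispute window.
        if current_day - self.posted_day <= CHALLENGE_WINDOW_DAYS and fraud_proof_valid:
            self.reverted = True  # batch rolled back, approving validator penalized

    def is_final(self, current_day: int) -> bool:
        return not self.reverted and current_day - self.posted_day > CHALLENGE_WINDOW_DAYS

b = Batch(posted_day=0)
assert not b.is_final(current_day=3)  # still inside the dispute window
assert b.is_final(current_day=8)      # window elapsed: the batch is final
```

The practical consequence is that withdrawing funds from an optimistic L2 back to L1 must wait out the full window, whereas a validity proof settles as soon as it is verified.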
Another kind of rollup: ZK-Rollups
ZK-Rollups, named for their use of Zero-Knowledge Proof (ZKP) technology such as SNARKs or STARKs, are another type of Rollup. Since the zero-knowledge property is not actually used here, calling them Validity Rollups would be more accurate.
The Rollup executes a batch of transactions and produces a validity proof, verified by a smart contract on Ethereum blockchain, that confirms the final outcome of the transactions. The cryptographic proof is generated using Zero Knowledge cryptographic primitives.
More broadly, zero-knowledge proofs allow one party (the prover) to demonstrate possession of certain information to another party (the verifier) without revealing the actual information. The verifier can be confident in the truth of the prover's statement without learning its content.
Originally designed for confidentiality, ZK-Rollups use zero-knowledge proofs for a very different purpose: compression and trusted computing. The two leading zero-knowledge technologies are zk-STARKs (short for zero-knowledge scalable transparent argument of knowledge) and zk-SNARKs (short for zero-knowledge succinct non-interactive argument of knowledge).
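The prove/verify flow of a validity rollup can be sketched schematically. Important caveat: a real SNARK/STARK lets the L1 contract check the proof in constant time *without* re-executing the batch; the hash commitment below is only a stand-in so the prover and verifier roles are visible, and the counter "state machine" is an invented toy.

```python
import hashlib

# Illustrative validity-proof flow (NOT real ZK cryptography): the L2
# operator proves a claimed state transition; the L1 verifier checks it.
def execute_batch(start_state: int, txs: list) -> int:
    # Toy state transition: the state is a counter, txs are increments.
    state = start_state
    for tx in txs:
        state += tx
    return state

def prove(start_state: int, txs: list, claimed_end: int) -> str:
    # Prover (L2 operator) binds the claim (start, txs, end) into a "proof".
    blob = f"{start_state}|{txs}|{claimed_end}".encode()
    return hashlib.sha256(blob).hexdigest()

def verify(start_state: int, txs: list, claimed_end: int, proof: str) -> bool:
    # Stand-in verifier: here we re-execute, whereas a real SNARK/STARK
    # verifier accepts or rejects in constant time without re-execution.
    return (prove(start_state, txs, claimed_end) == proof
            and execute_batch(start_state, txs) == claimed_end)

end = execute_batch(0, [1, 2, 3])
proof = prove(0, [1, 2, 3], end)
assert verify(0, [1, 2, 3], end, proof)
assert not verify(0, [1, 2, 3], end + 1, proof)  # a false claim is rejected
```

The succinctness of real proofs is the whole point: thousands of transactions collapse into one small proof that the Ethereum smart contract checks cheaply.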
Data availability problem for L2:
As we’ve seen, ZKP technologies ensure the validity of the L2 state, but the proof alone doesn’t provide access to the state. To increase throughput, execution is moved off-chain, but data must still be readily accessible for reconstruction. To achieve this, transactional data is submitted as calldata on Ethereum to ensure that the data is available for future reconstruction. This data could also be stored in trusted decentralized storage such as IPFS or Arweave allowing anyone to reconstruct the L2 and leveraging the inner incentives of decentralized storage.
It would be even better to have the capability to store this data on-chain, but the data only serves to reconstruct the state/truth of the L2 and isn’t executed, making it an inefficient and expensive use of blockchain capacity.
To address this hurdle, Ethereum devs proposed two EIPs: EIP-4488 and EIP-4844 (good luck avoiding confusion). The first lowers the gas cost for calldata while the second creates a new transaction type for L2 data storage. This data is immutable and read-only, and cannot be accessed by the EVM and therefore cannot be executed.
These EIPs are exactly where the ZKRollup roadmap meets the Execution Sharding roadmap, both proposing the same concept for different purposes. EIP-4488 aims to store essential L2 data while EIP-4844, also known as Proto-Danksharding, is a step towards implementing Danksharding and execution sharding.
Danksharding:
Danksharding involves dividing large datasets into smaller parts that can be stored and processed separately, often in parallel. Similar techniques are used in the big data and AI fields, where training sets can be very large.
Proto-danksharding (EIP-4844) does not implement sharding but offers cheaper calldata storage that could be sharded. This cheaper calldata storage will greatly improve scalability for Ethereum on L2, potentially making sharding redundant.
Proto-danksharding:
With Proto-danksharding, the Ethereum blockchain will have non-scalable computation and scalable data. And ZkRollups essentially convert this scalable data and non-scalable but trusted computation into scalable computation.
ZKRollups in the blockchain trilemma:
ZK-Rollups provide strong scalability benefits without changing the underlying blockchain's properties. The main on-chain requirement is verifying the zero-knowledge proof, while data availability can be implemented off-chain. In the long run, one can expect Layer 1s to become simple, secure, and hopefully decentralized, while Layer 2s provide scalability.
Where’s the catch?
L2s can indeed scale a lot. Nonetheless, to settle on-chain (on L1), someone must produce a validity proof for the overall state of the L2, which raises centralization issues. Current L2 designs have only one prover, meaning they can censor your transactions. They can't really freeze your L1 assets, thanks to how the native bridges are built. Research is ongoing to tackle this challenge by allowing other parties to emit proofs, but difficult questions remain about arbitrating between these proofs. In any case, this is an important problem to solve for the future.
Starknet has identified this as an important topic on the roadmap, while Arbitrum divides responsibility between the sequencer inbox and the delayed inbox to ensure funds can be retrieved in case of censorship.
Closing Thoughts
As we’ve examined, scalability can come at a cost to security and decentralization, while Layer 2 solutions are seen as the most promising ways to increase scalability without compromising the other aspects of the blockchain trilemma.
Optimistic and Validity Rollups, using ZKP technology, will be vital in shaping the future of Ethereum by enabling trustless, complex, and permissionless transactions at scale. Validity Rollups have a significant advantage over Optimistic Rollups: short finality. Ethereum’s roadmap has recently shifted to support these rollups at the blockchain level.
The future of blockchain scalability includes complex DApps running on Layer 2s (or recursive rollups), allowing virtually infinite scalability, with a decentralized and secure Layer 1 as a given. In the long term, Layer 1s could become settlement layers, with the complexity of DApps moved to Layer 2s.