Are you curious about consensus algorithms and how they affect the blockchain trilemma? Great! You've come to the right place. To learn blockchain development and get certified, I recommend visiting Ivan on Tech Academy.
Blockchain is currently the #1-ranked skill on LinkedIn, so you should definitely learn more about Ethereum if you want a full-time position in crypto during 2020.
In my first and second pieces, I discussed Ethereum 2.0 and the best tools for developers. In my third and fourth articles, I discussed quadratic voting and open governance models. Then, in my fifth piece, I looked into Swarm's infrastructure, and in my latest one I dove deep into consensus algorithms.
Now to the topic at hand: what is the blockchain trilemma? How are security, decentralization and scalability entangled? Is it possible to scale a public blockchain? If so, how?
An intro to the blockchain trilemma
The blockchain trilemma refers to the generally accepted idea that it is not possible to scale a public blockchain network without compromising either security, decentralization, or both.
The same problem is common in Computer Science (CS) and has been widely discussed for decades. It's known as the CAP theorem, and it states that a distributed data store cannot simultaneously provide more than two of the three properties listed below:
Consistency: Every read receives the most recent write or an error
Availability: Every request receives a (non-error) response, without the guarantee that it contains the most recent write
Partition tolerance: The system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes
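The trade-off is easiest to see in miniature. The toy Python sketch below (all names are illustrative, not a real database API) models two replicas of a single value: when a network partition drops a replication message, a read from the stale replica must either return an error (choosing consistency) or return possibly outdated data (choosing availability).

```python
class Replica:
    """Toy single-key replica used to illustrate the CAP trade-off."""
    def __init__(self):
        self.value = None
        self.synced = True  # False while a partition hides a newer write


def write(primary, secondary, value, partitioned):
    """Write to the primary; replicate unless the link is partitioned."""
    primary.value = value
    if partitioned:
        secondary.synced = False  # replication message was dropped
    else:
        secondary.value = value


def read(replica, prefer_consistency):
    """Read from a replica, picking a side of the CAP trade-off."""
    if not replica.synced and prefer_consistency:
        # CP choice: refuse to answer rather than risk a stale read.
        raise ConnectionError("replica may be stale; refusing to answer")
    # AP choice: always answer, even if the value may be outdated.
    return replica.value


a, b = Replica(), Replica()
write(a, b, "v1", partitioned=False)   # normal replication
write(a, b, "v2", partitioned=True)    # network drops the update to b

print(read(b, prefer_consistency=False))  # availability: stale "v1"
try:
    read(b, prefer_consistency=True)      # consistency: error instead
except ConnectionError as err:
    print(err)
```

During the partition, no read policy can give both a fresh answer and a guaranteed answer; that is the CAP trade-off in two dozen lines.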
What networking, infrastructure, and other CS experts have found is that some problems are technically impossible to solve. In a decentralized, peer-to-peer network, due to the constraints of liveness, routing, and fault tolerance, one cannot simply scale a network by adding more nodes or speeding up communication between them.
The problem at hand is somewhat straightforward. Since public networks should be asynchronous and fault-tolerant, there is an upper bound on communication speed if the network is to maintain its decentralization and security.
Decentralization reflects the degree to which transactions between agents are possible and effective, without the control or authorization of a particular group of individuals.
In other words, decentralization guarantees censorship-resistance.
Therefore, in a decentralized governance system, members retain control over decision-making and delegation; there is no central authority or privileged group of individuals/nodes.
To achieve a high degree of decentralization, different roles need to work together within a crypto protocol. The three roles are users, who want to conduct transactions; validators, who record and verify transactions according to a specific consensus mechanism; and programmers, who propose and vote on amendments to the code, defining the future direction of the project.
The more decentralized the network is, the harder it is for a single party to control it or take it down.
Plus, higher decentralization means more participants. And the higher the number of participants, the higher the overall system fault-tolerance and security.
Speaking of which, let me dive into the second dimension: security.
To be secure, a crypto protocol needs to be resilient in the short term and immutable in the long term.
In other words, the protocol needs to be able to prevent and/or recover from short-term attacks (resilience), without making changes to previous states of the distributed ledger (immutability).
The protocol throughput (the speed of transactions, which dictates the number of transactions per second, or TPS) plays a major role in defining how resilient a protocol is against spam and TPS-based attacks. Such attacks are possible because nodes in a decentralized network tend to be asynchronous: the higher the TPS, the more data each node must relay, and the longer it takes information to travel from one node to another.
This time lapse, or propagation delay (aka network latency), increases the probability of orphaned blocks, which opens the door to attacks that alter the value of timestamped transactions. Therefore, the higher the protocol throughput, the higher the propagation delay, and the higher the probability of attack.
As resilience falls, so does the protocol's overall security.
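This trade-off can be sketched numerically. A common simplification (in the spirit of Decker and Wattenhofer's Bitcoin measurement study) treats block discovery as a Poisson process, so the chance that a competing block appears within the propagation window is roughly 1 − e^(−delay/interval). The parameter values below are illustrative assumptions, not measured figures:

```python
import math


def orphan_rate(prop_delay_s: float, block_interval_s: float) -> float:
    """Approximate probability that a block is orphaned.

    Simplified Poisson model: a rival block found during the
    propagation window creates a potential fork, giving
    P(orphan) ~= 1 - exp(-delay / interval).
    """
    return 1 - math.exp(-prop_delay_s / block_interval_s)


# Bitcoin-like assumptions: ~10 s propagation, 600 s block interval.
print(f"{orphan_rate(10, 600):.2%}")   # → 1.65%
# Speed the chain up 60x (10 s blocks) with the same propagation delay:
print(f"{orphan_rate(10, 10):.2%}")    # → 63.21%
```

Shrinking the block interval without shrinking the propagation delay sends the fork probability soaring, which is exactly why raw throughput erodes resilience.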
For example, Bitcoin – the most secure and decentralized cryptocurrency at the time of writing – has an incredibly low throughput, being capable of processing only 7 TPS, on average. In comparison, PayPal handles 500 TPS and Visa around 4,000 TPS.
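The oft-cited ~7 TPS figure for Bitcoin can be reproduced with back-of-envelope arithmetic, assuming the commonly quoted ~1 MB block limit, a ~250-byte average transaction, and the 10-minute target block interval:

```python
# Rough Bitcoin throughput estimate from commonly cited parameters.
BLOCK_SIZE_BYTES = 1_000_000   # ~1 MB block limit
AVG_TX_SIZE_BYTES = 250        # assumed average transaction size
BLOCK_INTERVAL_S = 600         # 10-minute target block interval

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES  # ~4,000 txs
tps = txs_per_block / BLOCK_INTERVAL_S
print(f"~{tps:.1f} TPS")  # → ~6.7 TPS, in line with the cited ~7 TPS
```

The same arithmetic shows the obvious levers for scaling: bigger blocks, smaller transactions, or shorter intervals, each with the costs discussed below.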
Of course, Bitcoin is not only a payment layer. It’s also a store-of-value layer, which I’ll discuss shortly.
Now, let me discuss the final point, scalability and why it matters in the long-term.
Scalability represents the capability of a network to handle a growing amount of work, or its potential to be enlarged to accommodate said growth.
For example, a system is scalable if it’s capable of increasing its total output under an increased load, as additional resources are added.
An analogous meaning applies in an economic context, where a company's scalability implies that the underlying business model offers the potential for growth within the company. A model scales poorly, for example, if growth requires hiring extra employees or taking on heavier investment burdens.
TPS, block size (the number of transactions each block can hold), and transaction size are the determining dimensions for scalability, alongside governance decentralization and security.
These dimensions are important because the higher the TPS and/or the bigger the blocks, the higher the potential of the protocol to accommodate a larger amount of transactions quickly.
On the other hand, faster block times mean nodes are less fault-tolerant: higher throughput generally means a higher probability of failures between nodes. Hence the usual need to make faster networks more centralized.
The example above was created by the author and is open to interpretation.
My goal is to show what projects tend to focus on. For instance, Bitcoin and Ethereum forego scalability to make their networks as secure and decentralized as possible. As a result, neither is as scalable as Ripple, Stellar, or Neo, projects that forego some decentralization to add scaling capabilities.
On the other hand, we have Ardor, Dash, and Nem, which are projects that focus mainly on a decentralized solution that is scalable. Of course, they are less secure (long-term) than Bitcoin, even though there is much room for discussion.
My point is simple: we cannot have a project that fulfills all needs – not at the moment, anyway. That does not mean cryptocurrencies are not invaluable (hint: they are).
Cryptocurrencies have opened the door to completely new distributed consensus algorithms that might one day crack the blockchain trilemma.
For the time being, I believe second-layer solutions are the answer. It’s the simplest way to keep the main layer as secure and decentralized as possible.
Resources:
- Ivan On Tech Academy
- Ivan On Tech ETH 2.0 code review
- Build a blockchain in SECONDS
- Functional programming in blockchain
- ETH 2.0 discussion
- Ethereum projects analysis
- Role of consensus algorithms
This article is not financial advice. Changes may have occurred that the author is unaware of. Always check the resources provided!