Original post by Vitalik Buterin, May 9th, 2016
The question of settlement finality has recently been a major battleground between public chains and permissioned chains. Centralized systems appear to have at least one advantage, called "finality": once an action is taken, it is done forever, and the system can never go back and undo it. Decentralized systems, depending on their design, may have this property, may provide only probabilistic finality within some range of incentives, or may provide no finality at all, and public and permissioned chains differ greatly on this point.
The concept of finality is particularly important in finance, where institutions need to confirm as quickly as possible whether assets are legally "theirs". Even if an asset currently appears to be theirs, a randomly generated blockchain fork could undo the transfer and strip them of that ownership again.
As Tim Swanson wrote in a recent post:
Entrepreneurs, investors and blockchain proponents claim that public chains can serve as a settlement layer for financial instruments. But public chains were not designed to guarantee settlement finality, so they are not currently a reliable choice for clearing and settling financial instruments.
Is that true? Are public chains really incapable of providing any form of settlement finality? Or is it, as some proof-of-work theologians claim, that only proof of work is truly final, and the finality of permissioned chains is an illusion? Or is the picture more nuanced? To fully understand the finality offered by different blockchain architectures, we need help from mathematics, computer science, and game theory: in other words, cryptoeconomics.
All finality is probabilistic
First of all, it is important to note that no system in the world provides 100% strict settlement finality. If ownership records are kept on paper, the records might burn, or some bad actor might break into the registry one day and alter the digits, say by adding a stroke that turns a 1 into a 9. Even with no bad actors, there is some chance that everyone who knows where the records are kept is struck by lightning at the same moment. Centralized computerized registries have the same problems, and it can even be argued that attacks on them are easier to carry out; the recent Bangladesh central bank debacle may be a clue.
Ownership of digital assets that exist entirely on-chain is determined by the blockchain itself, and so can only be remedied through community-driven hard forks. If a blockchain is used simply as a registry for legally recognized assets (land, shares, and so on), then it is the judicial system that has the final say on asset ownership. If the registry is corrupted, the courts face two situations. First, if the attacker succeeds in getting the judicial system to honor the altered registry, then the recorded asset totals and the real-world asset totals become inconsistent: there will always be someone who, on paper, owns x units of an asset but must face the reality that only y units actually exist, with y < x.
The courts have another option: they can simply refuse to take the corrupted registry at its literal word. What a court should do is reconstruct the context and treat "take an eraser to the forged stroke and restore the 9 to a 1" as the correct response, rather than throwing up its hands and declaring the new state binding. Here we see, once again, that finality is not absolute, only this time finality is overridden for the good of society. These ideas apply to every other way of maintaining and attacking registries, including 51% attacks on public and consortium chains.
Bitcoin's experience suggests that the theory that "all registries can fail" may be more realistic than you think. Three times in Bitcoin's history, transactions have been reverted after a fairly long delay:
- In 2010, an attacker exploited an integer overflow bug to create roughly 184 billion bitcoins for themselves. The problem was eventually fixed at the cost of reverting about half a day's worth of transactions.
- In 2013, Bitcoin forked because of a bug present in only one client version, causing half the network to reject the chain the other half considered correct. The split lasted six hours.
- In 2015, about six blocks were reverted because mining pools were building on top of an invalid block without validating it.
Of these three events, only the root cause of the third is unique to public chains: the pools' anomalous behavior stemmed from a problem in the design of economic incentives (essentially an instance of the Verifier's Dilemma). In the other two cases, the culprit was a software bug, a problem that could just as easily occur on a consortium chain. It is widely believed that consistency-preferring consensus algorithms such as PBFT can prevent the second kind of event, but even so, the first kind, caused by an overflow bug shared by all node software, remains unsolved.
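The 2010 incident turned on 64-bit integer overflow: two outputs of just under 2^63 satoshis each summed, as a signed 64-bit value, to a small negative number, slipping past a naive "outputs must not exceed inputs" check. A minimal Python sketch of the mechanism follows; the output values are the historical ones, while the input amount and the check itself are simplifications for illustration:

```python
# Simulate signed 64-bit arithmetic, which Python's unbounded ints lack.
def to_int64(x: int) -> int:
    x &= 2**64 - 1
    return x - 2**64 if x & 2**63 else x

# The 2010 transaction: two outputs of 92,233,720,368.54277039 BTC each,
# denominated in satoshis (1 BTC = 1e8 satoshis).
outputs = [9223372036854277039, 9223372036854277039]

total = to_int64(sum(outputs))
print(total)  # -997538: the sum wrapped around to a small negative number

# A naive validity check that only tests "sum of outputs <= inputs"
# passes, because the overflowed sum is negative.
inputs = 5000000000  # an illustrative 50 BTC input
print(total <= inputs)  # True -- the bug
```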
So if you really want to reduce your system's failure rate, a more valuable piece of advice than "move from public chains to consortium chains" is: run multiple implementations of the consensus protocol, and treat a transaction as finalized if and only if every implementation accepts it (this is what we advise exchanges and other projects on Ethereum to do). "Public versus consortium" is a false dichotomy here: if you want true robustness, and if you agree with consortium-chain proponents that the consortium trust model is safer, you should use both.
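The multiple-implementation rule is a simple conjunction: a transaction counts as finalized only when every independently implemented client reports it accepted. A sketch of the idea, where the `Client` class and its `confirms` method are hypothetical stand-ins rather than a real client API:

```python
from dataclasses import dataclass, field

@dataclass
class Client:
    """Hypothetical wrapper around one consensus-protocol implementation."""
    name: str
    accepted: set = field(default_factory=set)  # tx hashes this client accepts

    def confirms(self, tx_hash: str) -> bool:
        return tx_hash in self.accepted

def is_final(tx_hash: str, clients: list) -> bool:
    # Finalized if and only if *all* implementations accept the transaction,
    # so a bug in any single implementation cannot finalize a bad transaction.
    return all(c.confirms(tx_hash) for c in clients)

geth = Client("geth", {"0xaaa", "0xbbb"})
parity = Client("parity", {"0xaaa"})

print(is_final("0xaaa", [geth, parity]))  # True: both implementations accept
print(is_final("0xbbb", [geth, parity]))  # False: one implementation rejects
```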
Finality in proof of work
From a technical standpoint, transactions on a proof-of-work blockchain are never truly finalized: there is always the possibility of a longer fork that starts from a transaction's parent block and excludes the transaction. In practice, however, financial services on public chains have evolved a very practical way to decide when a transaction is "final enough": wait for six confirmations.
The probability calculation here is simple. If the attacker controls less than 25% of the hash power, we can model a double-spend attack as a random walk starting at -6 (meaning the attacker's chain is six blocks behind the original chain). Using the formula (0.25/0.75)^6 ≈ 0.00137, the probability of this random walk ever reaching zero (i.e., of the attacker's chain overtaking the original chain) is lower than the fee rate charged by almost any exchange. If you want more confidence, you can wait for 13 confirmations to reduce the attacker's chance of success to under one in a million, or 162 confirmations to make it less likely than guessing your private key outright. So even proof-of-work blockchains offer a degree of finality.
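These figures can be reproduced directly. With attacker hash-power share q and honest share p = 1 - q, the gambler's-ruin bound gives (q/p)^z as the chance of ever erasing a z-block deficit:

```python
def overtake_probability(q: float, z: int) -> float:
    """Chance that an attacker with hash-power share q < 0.5 ever
    overtakes a chain that is z blocks ahead (gambler's-ruin bound)."""
    p = 1.0 - q
    return (q / p) ** z

print(overtake_probability(0.25, 6))    # ~0.00137
print(overtake_probability(0.25, 13))   # ~6.3e-7, under one in a million
print(overtake_probability(0.25, 162))  # ~5e-78, on the order of 2**-256,
                                        # i.e. guessing a 256-bit private key
```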
The problem is that this calculation assumes 75% of the hash power is honest (for lower percentages, say 60%, similar approximations hold but more confirmations are needed). Does that premise hold under our incentive model? An attacker can bribe miners to mine on the attacker's chain (a more practical form of bribery is to run a mining pool with negative fees, or nominally zero fees plus an under-the-table subsidy to avoid suspicion); the P + epsilon attack is one such idea. Attackers can also try to hack the pools or sabotage pool infrastructure, which has a decent chance of success in an environment where proof-of-work security is weakly incentivized (if miners are hacked, they lose only a period's worth of rewards; their principal is safe). Finally, there is what Swanson calls the "Maginot Line" attack: spending enormous sums to simply overwhelm the entire network with more computing power.
Finality in Casper
Casper tries to provide stronger finality than proof of work. First, Casper has a standard definition of "total economic finality": a block or state is finalized when more than 2/3 of validators make maximum-odds bets that it will be finalized. Under this definition, validators have a very strong incentive never to collude in reverting the block: once a validator makes a maximum-odds bet, that validator loses their entire deposit on any fork that does not include the block. As Vlad Zamfir puts it, you can think of Casper as a variant of proof of work in which taking part in a 51% attack causes your mining hardware to burn down.
Second, because becoming a validator requires registering in advance, no hidden group of validators can quietly build another, longer chain. If you see 2/3 of validators stake their entire deposits on one block, and then 2/3 of validators do the same on a contradictory block, it can only mean that the intersection of the two groups (at least 1/3 of validators) will lose their entire deposits. This is what "economic finality" means: we cannot guarantee "X will never be reverted", but we can guarantee the weaker statement "either X will never be reverted, or a large group of validators will voluntarily destroy millions of dollars of their own capital".
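The 1/3 overlap follows from simple counting: two groups that each contain at least 2/3 of the validator set must, by inclusion-exclusion, share at least 2/3 + 2/3 - 1 = 1/3 of it. A quick check over arbitrary set sizes:

```python
import math

def min_slashed(n: int) -> int:
    """Minimum number of validators (out of n) that must have signed
    both of two conflicting finalized blocks, each backed by >= 2n/3."""
    quorum = math.ceil(2 * n / 3)
    return 2 * quorum - n  # inclusion-exclusion lower bound on the overlap

for n in (3, 90, 100):
    print(n, min_slashed(n))
# Every pair of 2/3 quorums shares at least ~n/3 validators,
# and all of them forfeit their deposits.
```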
Finally, even if a double-finality event does occur, users are not forced to accept the fork backed by more bets. Instead, users can decide which fork to follow, and a perfectly simple strategy is to accept whichever finalized first. A successful attack in Casper therefore looks more like a hard fork than a rollback, and the community of users who hold assets on the chain is free to use common sense to choose the fork that was not made by the attacker and that contains the transactions that really should be recognized.
Law and Economics
Unfortunately, these strong guarantees are still only economic. As Swanson puts it in his next point:
Thus, if the market value of a native token such as bitcoin or ether rises or falls, so does the amount of work miners must do to compete for minting rights and fees, and so do the associated costs. This leaves open the possibility that, under certain economic circumstances, malicious nodes can successfully facilitate a reorganization of blocks on the chain.
There are two versions of this argument. One comes from the "extreme legalists", who believe that "mere economic guarantees" have little value beyond theory, and that legal guarantees are the only valid kind. That is clearly wrong: in many cases, the main or only punishment the law can impose for wrongdoing is a fine, which is itself a "mere economic incentive". If mere economic incentives are good enough for the law, then they are also, at least in some cases, good enough for settlement systems.
The second version is much simpler and more pragmatic. Say the total value of all ether today is $700 million, you estimate that a successful 51% attack requires $30 million worth of computing power, and once Casper launches you expect a staking participation rate of 30%. Then the minimum cost of reverting a finalized transaction is $700 million * 30% * 1/3 = $70 million (if you are willing to lower your fault tolerance to 1/4, you get a 3/4 finality threshold, which raises the intersection of misbehaving capital to 1/2 and brings the minimum attack cost to $105 million). If you are settling $100 million worth of securities over two months, this is not a big deal: the public chain's economic incentives are strong enough to deter malicious behavior, and no attack would be cost-effective.
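The back-of-the-envelope arithmetic works out as follows; the figures are the hypothetical ones from the text, not live market data:

```python
def min_attack_cost(market_cap: float, stake_rate: float,
                    slashed_fraction: float) -> float:
    """Minimum deposit an attacker must forfeit to revert finality:
    total staked value times the fraction of stake that gets slashed."""
    return market_cap * stake_rate * slashed_fraction

MARKET_CAP = 700e6   # assumed total value of all ether, in dollars
STAKE_RATE = 0.30    # assumed staking participation once Casper launches

print(min_attack_cost(MARKET_CAP, STAKE_RATE, 1/3))  # ~$70 million (2/3 threshold)
print(min_attack_cost(MARKET_CAP, STAKE_RATE, 1/2))  # ~$105 million (3/4 threshold)
```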
Now suppose you are settling the same $100 million worth of securities, but using the Ethereum public chain as the base layer for five years. You have much less certainty. The price of ether could stay flat, rise, or go to zero. Casper's staking participation rate could rise to 50% or fall to 10%. The cost of mounting a 51% attack could therefore easily drop below, say, a million dollars, at which point it would be perfectly feasible to profit from a 51% attack combined with some market manipulation.
Here is an even simpler scenario: what if you are settling $100 billion worth of securities? In that case, the cost of attacking the public chain is negligible compared to the profits available from market manipulation, so the public chain is simply not suitable.
Note that real attack-cost calculations are more complex than the example above. The figures above are accurate if you plan to attack by bribing existing validators. In practice, you are more likely to need to buy tokens to mount the attack, which could cost $150 million or $210 million depending on the finality threshold. The very act of buying the tokens drives up their price, and a poorly planned attack will forfeit more than the theoretical minimum of 1/3 or 1/2 of the deposits, shrinking the payoff well below expectations. But the basic principle stands.
So a weakened version of this idea, namely that the economic security margin of public chains is too low for very high-value assets, is entirely correct, and it is entirely reasonable for financial institutions to explore private and consortium chains for certain scenarios.
Censorship and other considerations
Another concern about public chains is their censorship resistance: anyone can initiate a transaction, while financial institutions will always need to restrict who can participate in the system and how. That is absolutely true. One pertinent counterargument is that public chains, especially highly general-purpose ones like Ethereum, can serve as the base layer for systems that implement such restrictions: for example, you can write a token contract that allows only whitelisted accounts to participate, or grants administrative rights only to accounts representing an institution. The retort is that such a design is needlessly complex, and that you bear the public chain's costs while giving up its benefits of censorship resistance and independence, instead of implementing the same mechanisms directly on a permissioned chain. That is a fair point, but note that it is an argument about efficiency, not about possibility. So if the benefits unrelated to censorship (lower coordination costs, network effects, and so on) are large enough, the argument becomes questionable.
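A permissioned token of this kind can be sketched as a ledger that checks a whitelist before every transfer. This Python model only illustrates the logic; all names (the `WhitelistedToken` class, the `bank` admin, the initial supply) are hypothetical, and a real deployment would be a smart contract with the whitelist managed by an administrator account:

```python
class WhitelistedToken:
    """Toy ledger: only whitelisted accounts may hold or move tokens."""

    def __init__(self, admin: str):
        self.admin = admin
        self.whitelist = {admin}
        self.balances = {admin: 1_000_000}  # illustrative initial supply

    def add_to_whitelist(self, caller: str, account: str) -> None:
        if caller != self.admin:
            raise PermissionError("only the admin may whitelist accounts")
        self.whitelist.add(account)
        self.balances.setdefault(account, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if sender not in self.whitelist or recipient not in self.whitelist:
            raise PermissionError("both parties must be whitelisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] += amount

token = WhitelistedToken(admin="bank")
token.add_to_whitelist("bank", "alice")
token.transfer("bank", "alice", 100)
print(token.balances["alice"])  # 100
```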
There are other efficiency considerations as well. Because a public chain must remain highly decentralized, its node software must be able to run on ordinary consumer hardware. This imposes a limit on transaction throughput that does not exist on permissioned chains, where we can simply require every node to run on a 64-core server connected to the others over a high-speed network. In the future, innovations such as sharding should alleviate this problem for public chains; if all goes well, public-chain throughput could scale essentially without limit within about five years, provided the workload is parallelizable enough and the network has enough nodes. But even then, some difference in efficiency and cost between public and permissioned chains will inevitably remain.
The last technical consideration is latency. A public chain runs on thousands of consumer machines connected over the open internet, while a permissioned chain runs on a much smaller set of nodes connected by high-speed links, possibly even physically co-located. The transaction latency of a permissioned chain, that is, its time to finality, will therefore always be lower than that of a public chain. Unlike efficiency, this is a gap that technological progress can never close: much as we might wish otherwise, Moore's Law does not double the speed of light every two years. No matter how much we optimize, there will always be a non-negligible difference between a network of nodes scattered across the world and a network of adjacent nodes.
At the same time, public chains certainly have their advantages, and there may be many scenarios in which the legal, business, and trust costs of running a consortium chain are high enough to justify a public chain. A big part of the value of public chains is that anyone, regardless of social resources, can build applications on them: a 14-year-old can implement and deploy a decentralized exchange to the blockchain, and others can evaluate the application and use it as they see fit. Many developers have no way to convene a consortium, and for them the public chain is critical. Another important advantage of public chains is how easily applications on them can interoperate. Ultimately, both kinds of system will evolve over time to serve different audiences, and they face common challenges including scalability, security, and privacy, so both can benefit from collaboration.
Translator: jan
This article is from EthFans. Original link: https://ethfans.org/posts/vitalik-on-settlement-finality