We’ve heard for a long time that it is possible to achieve consensus with 50% fault tolerance in a synchronous network, where messages broadcast by any honest node are guaranteed to be received by all other honest nodes within some known time period (if the attacker controls more than 50%, they can perform a “51% attack”, and there is an analogue of this attack for any algorithm of this type). We’ve also heard for a long time that if you want to relax the synchrony assumption and have an algorithm that is “safe under asynchrony”, the maximum achievable fault tolerance drops to 33% (PBFT, Casper FFG, etc. all fall into this category).

But did you know that if you add even more assumptions (specifically, you require observers, i.e. users who are not actively participating in the consensus but care about its output, to also be actively watching the consensus, and not just downloading its output after the fact), you can increase fault tolerance all the way up to 99%?

In fact, this has been known for a long time; Leslie Lamport’s famous 1982 paper “The Byzantine Generals Problem” contains a description of the algorithm. What follows is my attempt to describe and reformulate the algorithm in simplified form.

Suppose that there are N consensus-participating nodes, and everyone agrees ahead of time on who those nodes are (depending on context, they could have been selected by a trusted party or, if stronger decentralization is desired, by some proof-of-work or proof-of-stake scheme). We label these nodes 0 ... N-1. Suppose also that there is a known bound D on network latency plus clock disparity (e.g. D = 8 seconds). Each node has the ability to publish a value at time T (a malicious node can of course propose values earlier or later than T). All nodes wait (N-1) * D seconds, running the following process. Define x : i as “the value x signed by node i”, x : i : j as “the value x signed by i, and that value-plus-signature signed by j”, and so on. The proposals published in the first stage will be of the form v : i, containing the signature of the node that proposed the value.

If validator i receives some message v : i[1] : ... : i[k], where i[1] ... i[k] is the list of indices that have (sequentially) signed the message already (v by itself counts as k = 0, and v : i as k = 1), then the validator checks that (i) the current time is less than T + k * D, and (ii) they have not yet seen a valid message containing v; if both checks pass, they publish v : i[1] : ... : i[k] : i.
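The two checks above can be sketched in Python. The function name `on_message`, the `seen_values` set, and the flat signer list are illustrative names of mine, not part of the original protocol description:

```python
D = 8   # assumed bound on network latency plus clock disparity (seconds)
T = 0   # agreed-upon proposal time (taken as the time origin for illustration)

def on_message(validator, v, signers, now, seen_values):
    """Sketch of validator i's logic: `signers` is the ordered list
    i[1] ... i[k] that have already signed value v (k may be 0)."""
    k = len(signers)
    # Check (i): the message must arrive before time T + k * D
    if now >= T + k * D:
        return None
    # Check (ii): ignore v if a valid message containing it was already seen
    if v in seen_values:
        return None
    seen_values.add(v)
    # Publish v : i[1] : ... : i[k] : i (append our own signature)
    return (v, signers + [validator])
```

A message that passes both checks is rebroadcast with one more signature; a late or duplicate message is simply dropped.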

At time T + (N-1) * D, nodes stop listening. At this point, there is a guarantee that all honest nodes have “validly seen” the same set of values.

Node 1 (red) is malicious, and nodes 0 and 2 (grey) are honest. At the start, the two honest nodes make their proposals y and x, while the attacker proposes both w and z late. w reaches node 0 on time but not node 2, and z reaches neither node on time. At time T + D, nodes 0 and 2 rebroadcast all the values they have seen that they have not yet broadcast, adding their signatures (x and w for node 0, y for node 2). Both honest nodes validly see {x, y, w}.

If the problem demands choosing a single value, the nodes can use some “choice” function to pick one value out of the set they have seen (e.g. taking the value with the lowest hash). The nodes will then agree on that value.
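As a sketch, the lowest-hash rule could look like this (SHA-256 here is an arbitrary stand-in for whatever hash function the system uses):

```python
import hashlib

def choose(values):
    """Pick one value deterministically out of the validly-seen set:
    the value with the lowest hash."""
    return min(values, key=lambda v: hashlib.sha256(v.encode()).digest())
```

Because every honest node runs the same function over the same set, they all arrive at the same value.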

Now, let’s explore why this works. What we need to prove is that if one honest node has validly seen a particular value, then every other honest node has also validly seen that value (if we prove this, then we know that all honest nodes have seen the same set of values, so if all honest nodes run the same choice function, they will choose the same value). Suppose that some honest node receives a message v : i[1] : ... : i[k] that they consider valid (i.e. it arrives before time T + k * D). Let x be the index of any other honest node. Either x is part of {i[1] ... i[k]} or it is not.

In the first case (say x = i[j] for this message), we know that the honest node x has already broadcast the message, and that they did so in response to a message with j-1 signatures that they received before time T + (j-1) * D. They therefore broadcast their own message at that time, and so it must have been received by all honest nodes before time T + j * D.

In the second case, since the honest node sees the message before time T + k * D, they will broadcast the message with their own signature attached, guaranteeing that everyone, including x, will see it before time T + (k+1) * D.
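The timing arithmetic in both cases can be checked directly; the only assumption is that D bounds latency plus clock disparity, so a message sent at time t arrives everywhere before t + D:

```python
D = 8  # bound on network latency plus clock disparity (seconds)
T = 0

def arrival_bound(send_time):
    # any message sent at `send_time` is received by every honest
    # node before send_time + D
    return send_time + D

# Case 1: honest node x = i[j] broadcast before T + (j-1) * D, so its
# j-signature message reaches everyone before T + j * D -- on time.
j = 3
assert arrival_bound(T + (j - 1) * D) <= T + j * D

# Case 2: an honest node that saw v with k signatures before T + k * D
# re-signs it at once; the (k+1)-signature message arrives before
# T + (k + 1) * D -- again on time.
k = 4
assert arrival_bound(T + k * D) <= T + (k + 1) * D
```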

Notice that the algorithm uses the act of adding one’s own signature as a kind of “bump” on the message’s timeout, and it is this ability that guarantees that if one honest node sees a message on time, they can ensure that everyone else sees it on time as well, since the definition of “on time” increments by more than the network latency with every signature added.

In the case where one node is honest, can we guarantee that passive observers (i.e. non-consensus-participating nodes that care about knowing the outcome) can also see the outcome, even if we require them to be watching the process the whole time? With the scheme as written, there is a problem. Suppose that a commander and some subset of k (malicious) validators produce a message v : i[1] : ... : i[k] and broadcast it directly to some “victims” just before time T + k * D. The victims see the message as being “on time”, but when they rebroadcast it, it reaches all honest consensus-participating nodes only after T + k * D, and so all honest consensus-participating nodes reject it.

But we can plug this hole. We require D to be a bound on two times the network latency plus clock disparity. We then put a different timeout on observers: an observer accepts v : i[1] : ... : i[k] before time T + (k - 0.5) * D. Now, suppose an observer sees a message and accepts it. They will be able to broadcast it to an honest node before time T + k * D, and that honest node will issue the message with their signature attached, which will reach all other observers before time T + (k + 0.5) * D, the timeout for messages with k+1 signatures.
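A small sketch of the two deadline schedules (the function names are mine). Since D now bounds twice the latency plus clock disparity, one network hop takes at most D/2:

```python
D = 8   # now assumed to bound *twice* the network latency plus clock disparity
T = 0

def validator_deadline(k):
    # a message with k signatures is on time for validators before this
    return T + k * D

def observer_deadline(k):
    # observers use a stricter cutoff, half a D earlier
    return T + (k - 0.5) * D

# If an observer accepts a k-signature message, one hop (at most D/2)
# still delivers it to an honest validator before the validator deadline...
assert observer_deadline(3) + D / 2 == validator_deadline(3)
# ...and the validator's re-signed (k+1)-signature message reaches all
# other observers within another D/2, before their k+1 deadline.
assert validator_deadline(3) + D / 2 == observer_deadline(4)
```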

Retrofitting onto other consensus algorithms

In theory, the above could be used as a standalone consensus algorithm, and could even be used to run a proof-of-stake blockchain. The validator set of round N+1 of the consensus could itself be decided during round N (e.g. each round of consensus could also accept “deposit” and “withdraw” transactions, which, if accepted and correctly signed, would add or remove validators in the next round). **The main additional ingredient that would need to be added is a mechanism for deciding who is allowed to propose blocks (e.g. each round could have one designated proposer).** The algorithm could also be modified to work as a proof-of-work blockchain, by making it possible for consensus-participating nodes to “declare themselves” in real time, publishing a proof-of-work solution on top of their public key at the same time as signing a message with it.

However, the synchrony assumption is very strong, and so we would like to be able to work without it whenever we do not need to go above 33% or 50% fault tolerance. There is a way to accomplish this. Suppose we have some other consensus algorithm (e.g. PBFT, Casper FFG, chain-based PoS) whose output can be seen by occasionally-online observers (we will call this the threshold-dependent consensus algorithm, as opposed to the algorithm above, which we will call the latency-dependent consensus algorithm). Suppose the threshold-dependent algorithm runs continuously, in a mode where it is constantly “finalizing” new blocks onto a chain (i.e. each finalized value points to some previous finalized value as a “parent”; if there is a sequence of pointers A -> ... -> B, we call A a descendant of B).
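The parent-pointer structure makes the descendant relation easy to sketch; here `parent` is an illustrative map from each finalized value to its parent (absent at the root):

```python
def is_descendant(block, ancestor, parent):
    """Return True if `ancestor` is reachable from `block` by
    following parent pointers (A -> ... -> B)."""
    cur = block
    while cur is not None:
        if cur == ancestor:
            return True
        cur = parent.get(cur)  # None once we fall off the root
    return False
```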

We can retrofit the latency-dependent algorithm onto this structure, giving always-online observers a kind of “strong finality” on checkpoints, with roughly 95% fault tolerance (and you can push this arbitrarily close to 100% by adding more validators and letting the process take longer).

Every time the clock reaches a multiple of 4096 seconds, we run the latency-dependent algorithm, choosing 512 random nodes to participate. A valid proposal is any valid chain of values that were finalized by the threshold-dependent algorithm. If a node sees some finalized value with k signatures before time T + k * D (with D = 8 seconds), it accepts the chain into its set of known chains and rebroadcasts it with its own signature added; observers use a threshold of T + (k - 0.5) * D, as before.

The “choice” function used at the end of the round is simple:

  • Finalized values that are not descendants of what was already agreed to be finalized in the previous round are ignored

  • Finalized values that are invalid are ignored

  • To choose between two valid finalized values, pick the one with the lower hash
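The three rules combine into a short choice function. Here `is_valid` and `is_descendant` are assumed helper predicates, and falling back to the previous finalized value when nothing qualifies is my own illustrative choice:

```python
import hashlib

def choose_checkpoint(candidates, prev_final, is_valid, is_descendant):
    """Sketch of the end-of-round choice rule over finalized values."""
    eligible = [
        c for c in candidates
        if is_descendant(c, prev_final)   # rule 1: must extend prior finality
        and is_valid(c)                   # rule 2: must itself be valid
    ]
    if not eligible:
        return prev_final
    # rule 3: break ties by lowest hash
    return min(eligible, key=lambda c: hashlib.sha256(c.encode()).digest())
```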

If at least 5% of validators are honest, there is only a roughly one-in-a-trillion chance that none of the 512 randomly selected nodes will be honest. So as long as the network latency plus clock disparity is less than D/2, the above algorithm will work, correctly coordinating nodes on a single finalized value, even if multiple conflicting finalized values are presented because the fault tolerance of the threshold-dependent algorithm is broken.
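The one-in-a-trillion figure is a back-of-the-envelope bound, treating the 512 selections as independent draws with a 5% chance each of landing on an honest node:

```python
# probability that *none* of 512 independently sampled nodes is honest,
# assuming only 5% of validators are honest
p_all_dishonest = 0.95 ** 512
assert p_all_dishonest < 1e-11   # on the order of one in a trillion
```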

If the fault tolerance of the threshold-dependent consensus algorithm is met (usually 50% or 67% honest), then the threshold-dependent algorithm will either not finalize any new checkpoints, or finalize new checkpoints that are compatible with each other (e.g. a series of checkpoints, each pointing to the previous one as a parent). So even if network latency exceeds D/2 (or even D), and as a result the nodes participating in the latency-dependent algorithm disagree about which values they accept, the values they do accept are still guaranteed to be part of the same chain, and so there is no actual disagreement. Once latency recovers in some future round, the latency-dependent consensus will get back “in sync”.

**If the assumptions of both the threshold-dependent and latency-dependent consensus algorithms are broken at the same time (or in consecutive rounds), then the algorithm can break down.** For example, suppose in one round the threshold-dependent consensus finalizes Z -> Y -> X and the latency-dependent consensus disagrees between Y and X, and in the next round the threshold-dependent consensus finalizes a descendant W of X which is not a descendant of Y; in the latency-dependent consensus, the nodes that agreed on Y will not accept W, but the nodes that agreed on X will. However, this is unavoidable: the impossibility of safe-under-asynchrony consensus with more than 1/3 fault tolerance is a well-known result in Byzantine fault tolerance theory, as is the impossibility of more than 1/2 fault tolerance even when allowing synchrony assumptions but assuming offline observers.

Source: Blockchain bros

Author: Vitalik Buterin

Original link: t.cn/RD93GS2