Consensus in a distributed system can be described as the act of reaching agreement among a set of processes that cooperate to solve a problem. With the rise of open-source distributed computing and storage platforms, consensus algorithms have become basic tools for replication. Among them, Paxos and Raft are the most popular consensus algorithms for improving a system's resilience by eliminating single points of failure. While Paxos dominates the academic and commercial discourse on distributed consensus, the protocol family is hard to reason about, which created demand for algorithms that are easier to understand. Paxos has been studied extensively by researchers, while Raft is very popular among engineers. Raft's popularity comes from the fact that, although researchers are interested in Paxos, engineers still need to read several papers to understand it well enough to build solutions that solve real problems with good performance in terms of communication steps. Even then, they must fill in gaps with custom implementations, which can become fragile. Raft is a newer consensus algorithm designed to be easier to understand and to provide a better foundation for building practical systems than Paxos. While Raft brings some new blood to the complex world of distributed systems, it still has a lot in common with Paxos; for example, both elect a leader responsible for driving consensus decisions. In this blog post we take a look at the similarities and differences between Paxos and Raft. First, we describe what a consensus algorithm is. Second, we describe how to build replication solutions from instances of a consensus algorithm. Then we describe how each algorithm elects a leader, and some of their safety and liveness properties.

2 Consensus
Distributed systems are characterized by a set of safety and liveness properties, or a mixture of both. Simply put, safety is a property stipulating that nothing bad will happen during the execution of the algorithm. Liveness, on the other hand, says that something good will eventually happen. The goal of consensus is to get a group of servers to agree on a value, so the liveness property is that eventually every server can decide on a value, and the safety property is that no two servers decide on different values. Unfortunately, some servers may take longer than others to execute the algorithm's steps, and servers may crash and stop processing the consensus algorithm. Messages may be delayed, delivered out of order, or lost. These aspects make consensus algorithms very difficult to implement and force them to give up liveness, while remaining safe, during periods of "instability". Precisely when the system becomes "stable" is unknown, but eventually it will remain "stable" long enough for the consensus algorithm to finish and reach a decision. In stable operation, the system requires two communication steps: leader –(1)→ servers –(2)→ leader. The leader sends the value it wants agreement on to all servers, and each server replies to the leader informing it that the request has been accepted. Agreement is reached once the leader receives replies from a quorum of servers (n/2+1 nodes).
[Figure: the two communication steps between the leader and the servers.]
Notice that we have omitted two messages from this analysis: the message that forwards the value a server wants to agree on to the leader, and the message that notifies the servers that agreement has been reached on a value. The latter message may not be required if each server sends its accept message to all servers, or if the information is piggybacked on the next message the leader sends to the servers.
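To make the message flow concrete, here is a minimal sketch in Go of the stable-case exchange described above. It is an illustration under assumed names (Ack and proposeValue are hypothetical, not from any real library): the leader counts acknowledgements and declares agreement once a quorum of n/2+1 servers, counting itself, has accepted.

```go
package main

import "fmt"

// Ack is a hypothetical reply from a server to the leader's proposal.
type Ack struct {
	From     int
	Accepted bool
}

// proposeValue counts acknowledgements until a quorum (n/2+1 servers,
// counting the leader itself) has accepted, then reports agreement.
func proposeValue(n int, acks <-chan Ack) bool {
	quorum := n/2 + 1
	accepted := 1 // the leader accepts its own proposal
	for i := 0; i < n-1; i++ {
		if ack := <-acks; ack.Accepted {
			accepted++
		}
		if accepted >= quorum {
			return true // agreement reached after two communication steps
		}
	}
	return false
}

func main() {
	acks := make(chan Ack, 4)
	for i := 1; i <= 4; i++ {
		acks <- Ack{From: i, Accepted: true}
	}
	fmt.Println(proposeValue(5, acks)) // true: quorum of 3 out of 5
}
```

With five servers, agreement is declared as soon as the leader hears from two others, matching the two-step, quorum-of-three behavior described above.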

3 Replication
To achieve replication, several instances of the consensus algorithm are run, and each instance is restricted to one slot (entry) in a replicated log, which may be persisted on disk. A leader can improve performance by running multiple instances in parallel to fill different slots, although the degree of parallelism depends heavily on the hardware, the network, and the application. Each leader is uniquely responsible for the round, or term, created when it is elected.
[Figure: the replicated log, with one consensus instance per slot and each slot owned by the leader of some term.]

4 Leader Election
Both Paxos and Raft assume that eventually there will be a single leader that all non-faulty servers trust, and that a leader is responsible for one cycle (a term). If there is any suspicion of a problem with the current leader, a new leader proposes a new term, which must be larger than the previous one. In Raft, a candidate server sends vote requests to the other servers and considers itself leader once a majority of them reply. If it does not hear back from a majority, or learns that another server has become leader, it times out and starts a new election. A server may vote for at most one candidate per term. Paxos, however, does not really define how servers become leaders. For simplicity, researchers have used a ranking among processes, such as server ids (integers), so that the highest- or lowest-ranked server unambiguously becomes the new leader. While this is a simple and intuitive solution, it requires dividing the term space among the servers: new term = old term + N, where N is the maximum number of servers. Raft imposes a restriction on the leader election process: only the most up-to-date server can become leader. Essentially, this guarantees that the new leader already holds all entries committed in previous terms and never needs to learn about old entries in the replicated log that it is missing. Thus, after becoming leader, a server can simply "impose" its "wishes" on the other servers. Paxos, by contrast, allows any server to become leader, so a new leader must learn about the past before it starts "imposing" its "wishes" on the others; this increases flexibility, but at the cost of additional complexity.
[Figure: example logs. In Raft, only server 1 or server 2 could become leader; in Paxos, any server could.]
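The single-vote-per-term rule is easy to sketch. The following Go fragment is illustrative only (the type and method names are hypothetical, not the real etcd/raft API): a server steps up to any newer term it sees and grants at most one vote per term.

```go
package main

import "fmt"

// Server holds the two pieces of state the voting rule needs.
type Server struct {
	currentTerm int
	votedFor    int // -1 means no vote cast in the current term
}

// HandleRequestVote grants a vote if the candidate's term is current
// and this server has not yet voted for someone else in that term.
// (Raft's log up-to-date check, covered in section 5, is omitted.)
func (s *Server) HandleRequestVote(candidateID, term int) bool {
	if term < s.currentTerm {
		return false // stale candidate from an older term
	}
	if term > s.currentTerm {
		s.currentTerm = term // a newer term begins: forget the old vote
		s.votedFor = -1
	}
	if s.votedFor == -1 || s.votedFor == candidateID {
		s.votedFor = candidateID
		return true
	}
	return false // already voted for another candidate this term
}

func main() {
	s := &Server{votedFor: -1}
	fmt.Println(s.HandleRequestVote(1, 1)) // true: first vote in term 1
	fmt.Println(s.HandleRequestVote(2, 1)) // false: already voted in term 1
	fmt.Println(s.HandleRequestVote(2, 2)) // true: new term, new vote
}
```

Under Paxos's term-partitioning scheme, by contrast, server i in a five-server cluster would use only terms i, i+5, i+10, and so on, so no two servers can ever propose the same term.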

5 Safety
Due to the asynchronous nature of the system, servers may perceive failures and elections at different times. This means that servers may temporarily operate in different terms, but eventually all servers converge to the same term. In any case, if a server gets a message from a term earlier than its current one, the sender is either the leader of, or trying to become leader in, an older term, and the receiver must reject the message and notify the sender. If a server gets a message from a term larger than its current one, there is a new term and a new leader, and the receiver must start accepting the new leader's "wishes". However, both algorithms must be careful not to overwrite decisions made by an old leader, which would violate safety. This is where Raft and Paxos diverge, and we can see that Raft uses a simple and elegant approach. As mentioned above, Raft restricts the leader election so that only the most up-to-date server can become leader: Raft determines which of two logs is more up-to-date by comparing the index and term of their last entries. If the logs' last entries have different terms, the log whose last entry has the later term is more up-to-date. If the logs end with the same term, the longer log is more up-to-date. The leader then only has to ensure that the servers' replicated logs eventually converge, which it does by imposing the following restriction: a server cannot accept a value for slot n if it has not previously accepted a value for slot n−1. The leader includes the term of the previous log entry in the current request, and a server accepts the request only if the term of its previous entry matches the term sent by the leader. Otherwise, it asks the leader to first send the missing earlier request, repeating this for n−2, n−3, and so on. In Paxos, any server can become leader, so the task of ensuring old decisions are not overwritten becomes a bit more complicated, as a new leader must find out what the other servers have already processed before it starts "imposing" its "wishes" on them. This is the prepare phase of the Paxos algorithm, which must run once whenever a new leader is elected. The prepare message contains the new term and a slot number n such that agreement has been reached on all entries before it. The servers reply with information about slots higher than n, which is used to restrict the values the new leader may propose for those slots.
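The two Raft-side checks just described fit in a few lines of Go. As before, this is an illustrative sketch with hypothetical names, not code from a real Raft library: isUpToDate is the election restriction, and acceptEntry is the log-convergence check a follower applies to each replication request.

```go
package main

import "fmt"

// LogEntry is a hypothetical replicated-log entry: the term in which
// it was created and its position (slot) in the log.
type LogEntry struct {
	Term  int
	Index int
}

// isUpToDate implements Raft's election restriction: a later last term
// wins; with equal last terms, the longer log is more up-to-date.
func isUpToDate(candidateLast, voterLast LogEntry) bool {
	if candidateLast.Term != voterLast.Term {
		return candidateLast.Term > voterLast.Term
	}
	return candidateLast.Index >= voterLast.Index
}

// acceptEntry is the consistency check for slot n: the follower accepts
// only if it already holds the leader's previous entry (slot n-1) with
// a matching term; otherwise the leader must back up and resend.
func acceptEntry(log []LogEntry, prevIndex, prevTerm int) bool {
	if prevIndex >= len(log) {
		return false // slot n-1 is missing: send earlier entries first
	}
	return log[prevIndex].Term == prevTerm
}

func main() {
	fmt.Println(isUpToDate(LogEntry{Term: 3, Index: 4}, LogEntry{Term: 2, Index: 9})) // true: later term wins
	log := []LogEntry{{Term: 1, Index: 0}, {Term: 1, Index: 1}}
	fmt.Println(acceptEntry(log, 1, 1)) // true: previous entry matches
	fmt.Println(acceptEntry(log, 2, 1)) // false: gap in the log
}
```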

6 Liveness
Both algorithms can keep providing service as long as a majority of servers (n/2+1 nodes) are alive and able to communicate. For example, a five-server cluster keeps making progress with two servers down.
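The fault-tolerance bound follows directly from the quorum size; a one-line Go helper (a hypothetical name, for illustration) makes it explicit.

```go
package main

import "fmt"

// maxFailures returns how many crashed servers a cluster of n servers
// can tolerate while a quorum of n/2+1 live servers can still form.
func maxFailures(n int) int { return (n - 1) / 2 }

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("n=%d tolerates %d failures\n", n, maxFailures(n))
	}
}
```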

We've shown the similarities between Raft and Paxos; the key differences lie in how the leader is elected and how safety is preserved. In Raft, only the most up-to-date server can become leader, whereas Paxos allows any server to become leader. That flexibility, however, comes with additional complexity.

II. Adjacency matrix and source code
The MATLAB script below implements a leader-follower consensus iteration over 29 nodes: 10 generators and 19 loads exchange their incremental cost/benefit values (the consensus variables lambda) through a row-stochastic adjacency matrix, while the leader nodes 1 and 11 add a feedback term proportional to the power mismatch dp. The entries of the 29×29 adjacency matrix did not survive extraction intact and are elided below; the plot's x variable was likewise missing and is assumed to be the iteration index.

```matlab
%% adjacency matrix %%
% 29x29 row-stochastic weight matrix over 10 generators + 19 loads.
% Its entries (values such as 0, 1/2, 1/3, 1/4, 1/5) were garbled in
% the source and are left elided here.
a = [ ... ];

%% generators %%
be = [13.065 5.295 11.37 3.36 12.795 11.775 3.375 9.435 6.45 12.39];
ga = [0.0046 0.0111 0.0099 0.0095 0.0104 0.0029 0.0021 0.0062 0.0077 0.0048];
pg = [135.88 214.92 108.04 127.69 232.56 240.00 44.628 234.48 74.600 172.09];
spg = zeros(1,301); spg(1) = sum(pg);   % total generation per iteration
pgmax = 2.5*pg;                         % capacity limits as they appear
pgmin = 2.5*pg;                         % in the source (both factors 2.5)
li = be + 2*ga.*pg;                     % initial incremental costs

%% loads %%
b = [25.755 18.42 27.63 10.59 16.275 28.365 28.14 23.55 21.42 15.225 28.56 10.305 23.94 22.05 26.25 16.455 24.375 26.295 14.76];
c = [14.02 6.25 15.1 8.42 8.1 21.21 11.9 15.96 12.75 6.9 9.75 8.23 9.28 9.49 9.1 34.08 18.36 12.39 13.04]./(-100);
pd = [110.15 176.75 109.69 75.55 120.64 80.26 142.02 88.57 100.8 132.38 175.75 75.13 154.69 139.3 172.85 28.98 79.67 127.37 67.92];
spd = zeros(1,301); spd(1) = sum(pd);   % total demand per iteration
pdmax = 2.5*pd;
pdmin = 2.5*pd;
lj = b + 2*c.*pd;                       % initial incremental benefits

%% iteration %%
l = [li lj];                            % the 29 consensus variables
ll = zeros(301,29); ll(1,:) = l;
dp = zeros(1,301); dp(1) = spd(1) - spg(1);   % power mismatch

pgg = zeros(301,10); pgg(1,:) = pg;
pdd = zeros(301,19); pdd(1,:) = pd;

for t = 1:1:300
    for n = 1:1:29                      % update the lambda values
        if n == 1                       % leader node, generator side
            ll(t+1,n) = sum(a(n,:).*ll(t,:)) + 0.005*dp(t);
            % sum(a(n,:).*ll(t,:)) equals the product a(n,:)*ll(t,:)'
        elseif n == 11                  % leader node, load side
            ll(t+1,n) = sum(a(n,:).*ll(t,:)) + 0.005*dp(t);
        else
            ll(t+1,n) = sum(a(n,:).*ll(t,:));
        end
    end

    for i = 1:1:10   % check whether pg hits a limit, then assign
        if (ll(t,i)-be(i))/(2*ga(i)) >= pgmax(i)
            pgg(t+1,i) = pgmax(i);
        elseif (ll(t,i)-be(i))/(2*ga(i)) <= pgmin(i)
            pgg(t+1,i) = pgmin(i);
        else
            pgg(t+1,i) = (ll(t,i)-be(i))/(2*ga(i));
        end
    end

    for j = 1:1:19   % check whether pd hits a limit, then assign
        if (ll(t,j+10)-b(j))/(2*c(j)) >= pdmax(j)
            pdd(t+1,j) = pdmax(j);
        elseif (ll(t,j+10)-b(j))/(2*c(j)) <= pdmin(j)
            pdd(t+1,j) = pdmin(j);
        else
            pdd(t+1,j) = (ll(t,j+10)-b(j))/(2*c(j));
        end
    end
    spg(t+1) = sum(pgg(t+1,:));
    spd(t+1) = sum(pdd(t+1,:));
    dp(t+1) = sum(pdd(t+1,:)) - sum(pgg(t+1,:));
end

%% draw %%
figure(1);
x = 0:300;   % iteration index (assumed; x was undefined in the source)
for j = 1:1:29
    plot(x, ll(:,j));
    hold on;
end
title('generator incremental cost and load incremental benefit');
xlabel('time/s');
ylabel('consensus variable value');
```

[Figures: convergence of the 29 consensus variables over the iteration.]
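For reference, here is the update rule the loops above implement, written out as equations; the symbols mirror the variable names in the script, with $\varepsilon = 0.005$ as the leaders' feedback gain.

$$
\lambda_i(t+1) =
\begin{cases}
\sum_{k=1}^{29} a_{ik}\,\lambda_k(t) + \varepsilon\,\Delta P(t), & i \in \{1, 11\} \text{ (leader nodes)}\\
\sum_{k=1}^{29} a_{ik}\,\lambda_k(t), & \text{otherwise}
\end{cases}
$$

$$
P_{g_i}(t+1) = \min\!\left(\max\!\left(\frac{\lambda_i(t) - \beta_i}{2\gamma_i},\; P_{g_i}^{\min}\right),\; P_{g_i}^{\max}\right),
\qquad
\Delta P(t+1) = \sum_j P_{d_j}(t+1) - \sum_i P_{g_i}(t+1)
$$

The loads are updated the same way, using $P_{d_j} = (\lambda_{j+10} - b_j)/(2c_j)$ clamped to $[P_{d_j}^{\min}, P_{d_j}^{\max}]$.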

IV. Remarks
MATLAB version: 2014a