In Search of an Understandable Consensus Algorithm (Extended Version)
6 Cluster Membership Changes
So far, we have assumed that the cluster configuration (the set of servers participating in the consensus algorithm) is fixed. In practice, however, it is occasionally necessary to change the configuration, for example to replace failed machines or to change the replication level. This could be done by taking the entire cluster offline, updating the configuration, and restarting the cluster, but the cluster would be unavailable for the duration of the change, and any manual steps risk operator error. To avoid these problems, we decided to automate configuration changes and incorporate them into the Raft consensus algorithm.
For the configuration change mechanism to be safe, there must be no point during the transition at which two leaders can be elected for the same term. Unfortunately, any approach in which servers switch directly from the old configuration to the new one is unsafe: it is impossible to switch all servers atomically, so the cluster could split into two independent majorities during the transition (see Figure 10).
Figure 10: Switching directly from one configuration to another is unsafe because different servers switch at different times. In this example, the cluster grows from 3 servers to 5. Unfortunately, there is a point in time at which two different leaders can be elected for the same term, one with a majority of the old configuration and one with a majority of the new configuration.
To ensure safety, configuration changes must use a two-phase approach. There are many ways to implement the two phases. For example, some systems use the first phase to disable the old configuration so the cluster cannot process client requests, and then enable the new configuration in the second phase. In Raft, the cluster first switches to a transitional configuration that we call joint consensus; once the joint consensus has been committed, the system switches to the new configuration. The joint consensus combines the old and new configurations:
- Log entries are replicated to all servers in both configurations.
- Any server from either configuration may serve as leader.
- Agreement (for elections and entry commitment) requires separate majorities from both the old and the new configuration.
Joint consensus allows individual servers to transition between configurations at different times without compromising safety. In addition, joint consensus allows the cluster to continue servicing client requests throughout the configuration change.
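The majority rule above is the key mechanical change. A minimal Go sketch of a quorum check under joint consensus (not taken from any particular implementation; the type and field names are assumptions) might look like this:

```go
// Quorum check under joint consensus (C_old,new): agreement requires
// separate majorities from both the old and the new configuration.
// Type and field names here are illustrative assumptions.
package raft

type Configuration struct {
	OldServers []string // members of C_old
	NewServers []string // members of C_new; empty when not in joint consensus
}

// countGranted returns how many servers in the set appear as true in votes
// (votes may represent granted votes or acknowledged replications).
func countGranted(servers []string, votes map[string]bool) int {
	n := 0
	for _, s := range servers {
		if votes[s] {
			n++
		}
	}
	return n
}

// quorumReached reports whether the votes satisfy the configuration.
func quorumReached(c Configuration, votes map[string]bool) bool {
	oldOK := countGranted(c.OldServers, votes) > len(c.OldServers)/2
	if len(c.NewServers) == 0 {
		return oldOK // not in joint consensus: a single majority suffices
	}
	newOK := countGranted(c.NewServers, votes) > len(c.NewServers)/2
	return oldOK && newOK // joint consensus: both majorities required
}
```

The same check covers an ordinary (non-transitional) configuration by simply leaving the new server set empty.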
Cluster configurations are stored and communicated using special entries in the replicated log; Figure 11 illustrates the configuration change process. When the leader receives a request to change the configuration from C_old to C_new, it stores the configuration for joint consensus (C_old,new in the figure) as a log entry and replicates it as described previously. Once a server adds the new configuration entry to its log, it uses that configuration for all future decisions (a server always uses the latest configuration in its log, whether or not it has been committed). This means the leader uses the rules of C_old,new to determine when the C_old,new entry is committed. If the leader crashes, the new leader may be elected under either C_old or C_old,new, depending on whether the winning candidate has received the C_old,new entry. In no case can C_new make unilateral decisions during this period.
Once C_old,new has been committed, neither C_old nor C_new can make decisions without the other's approval, and the Leader Completeness Property ensures that only servers with the C_old,new entry in their logs can be elected leader. It is now safe for the leader to create a log entry describing C_new and replicate it to the cluster. Again, each server starts using a new configuration as soon as it reaches that server's log. When C_new has been committed under the rules of C_new, the old configuration is irrelevant and servers not in the new configuration can be shut down. As Figure 11 shows, there is no point at which C_old and C_new can both make unilateral decisions; this guarantees safety.
Figure 11: Timeline for a configuration change. Dashed lines show configuration entries that have been created but not yet committed, and solid lines show the latest committed configuration entry. The leader first creates the C_old,new configuration entry in its own log and commits it under C_old,new (a majority of C_old and a majority of C_new). It then creates the C_new entry and commits it under a majority of C_new. There is no point in time at which C_old and C_new can both make decisions independently.
There are three more issues to address for reconfiguration. The first is that new servers may initially store no log entries. If they are added to the cluster in this state, it could take quite a while for them to catch up, during which time it might not be possible to commit new log entries. To avoid this availability gap, Raft introduces an additional phase before the configuration change, in which the new servers join the cluster as non-voting members (the leader replicates log entries to them, but they are not counted toward majorities). Once the new servers have caught up with the rest of the cluster, the reconfiguration can proceed as described above.
The second issue is that the cluster leader may not be part of the new configuration. In this case, the leader steps down (returns to follower state) once it has committed the C_new log entry. This means there is a period during which the leader manages a cluster that does not include itself: it replicates log entries but does not count itself in majorities. The leader transition happens when C_new is committed, because this is the first point at which the new configuration can operate independently (it will always be possible to elect a leader from within C_new). Before this point, it may be the case that only a server from C_old can be elected leader.
The third issue is that removed servers (those not in C_new) can disrupt the cluster. These servers will no longer receive heartbeats, so they will time out and start new elections. They will then send RequestVote RPCs with new term numbers, causing the current leader to revert to follower state. A new leader will eventually be elected, but the removed servers will time out again and the process repeats, resulting in poor availability.
To prevent this problem, servers disregard RequestVote RPCs when they believe a current leader exists. Specifically, if a server receives a RequestVote RPC within the minimum election timeout of hearing from the current leader, it does not update its term or grant its vote. This does not affect normal elections, where each server waits at least a minimum election timeout before starting an election. However, it helps avoid disruptions from removed servers: if a leader is able to get heartbeats to its cluster, it will not be deposed by larger term numbers.
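As an illustration, this rule amounts to a simple guard in the RequestVote handler. The following Go sketch uses assumed type and field names; it is not a specific library's API:

```go
// Guard described above: ignore a RequestVote if a current leader was heard
// from within the minimum election timeout. Names are illustrative.
package raft

import "time"

type RequestVoteArgs struct {
	Term        uint64 // candidate's term
	CandidateID string
}

type Server struct {
	lastLeaderContact  time.Time     // when the last valid heartbeat arrived
	minElectionTimeout time.Duration // e.g. 150 * time.Millisecond
}

// shouldIgnoreVote returns true when the request arrives while this server
// still believes a leader exists; in that case it neither updates its term
// nor grants its vote.
func (s *Server) shouldIgnoreVote(args RequestVoteArgs, now time.Time) bool {
	return now.Sub(s.lastLeaderContact) < s.minElectionTimeout
}
```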
7 Log Compaction
Raft's log grows during normal operation to incorporate more client requests, but in a practical system it cannot be allowed to grow without bound. As the log grows longer, it occupies more space and takes more time to replay. Without some mechanism to discard obsolete information that accumulates in the log, this eventually causes availability problems.
Snapshotting is the simplest approach to compaction. In snapshotting, the entire current system state is written as a snapshot to stable storage, and the log up to that point is discarded. Snapshotting is used in Chubby and ZooKeeper, and the remainder of this section describes snapshotting in Raft.
Incremental approaches to compaction, such as log cleaning and log-structured merge trees (LSM trees), are also possible. These operate on a fraction of the data at a time, spreading the load of compaction over time. They first select a region that has accumulated many deleted or overwritten objects, then rewrite the live objects from that region more compactly, and finally free the region. This requires significant additional mechanism compared with snapshotting, which simplifies the problem by always operating on the entire data set. A state machine can implement an LSM tree using the same interface as snapshotting, but log cleaning would require modifications to Raft.
Figure 12: A server replaces the committed entries in its log (indexes 1 through 5) with a new snapshot, which stores only the current state. The snapshot also records the last included index and term.
Figure 12 shows the basic idea of snapshotting in Raft. Each server takes snapshots independently, covering only the committed entries in its log. Most of the work consists of the state machine writing its current state to the snapshot. Raft also includes a small amount of metadata in the snapshot: the last included index is the index of the last log entry the snapshot replaces (the last entry the state machine had applied), and the last included term is the term of that entry. These are preserved to support the AppendEntries consistency check for the first log entry following the snapshot, since that check needs the previous entry's index and term. To support cluster membership changes (Section 6), the snapshot also includes the latest configuration as of the last included index. Once a server completes writing a snapshot, it may discard all log entries up to the last included index, as well as any earlier snapshots.
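For concreteness, the snapshot metadata described above could be represented roughly as in the following sketch; the field names and layout are illustrative, not prescribed by Raft:

```go
// Illustrative layout of a Raft snapshot with the metadata described above.
package raft

type Snapshot struct {
	LastIncludedIndex uint64   // index of the last log entry the snapshot replaces
	LastIncludedTerm  uint64   // term of that entry
	LatestConfig      []string // latest cluster membership, kept for Section 6 changes
	StateMachineData  []byte   // serialized state machine state
}
```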
Although servers normally take snapshots independently, the leader must occasionally send snapshots to followers that lag behind. This happens when the leader has already discarded the next log entry that it needs to send to a follower. Fortunately, this situation is unlikely in normal operation: a follower that has kept up with the leader will already have that entry. However, an exceptionally slow follower or a server newly added to the cluster (Section 6) will not. The way to bring such a follower up to date is for the leader to send it a snapshot over the network.
InstallSnapshot RPC:
Invoked by the leader to send chunks of a snapshot to a follower. Leaders always send chunks in order.
Parameter | Description |
---|---|
term | leader's term |
leaderId | so the follower can redirect clients |
lastIncludedIndex | the snapshot replaces all entries up through and including this index |
lastIncludedTerm | term of lastIncludedIndex |
offset | byte offset where the chunk is positioned in the snapshot file |
data[] | raw bytes of the snapshot chunk, starting at offset |
done | true if this is the last chunk |

Results | Description |
---|---|
term | currentTerm, for the leader to update itself |
Receiver implementation:
- Reply immediately if term < currentTerm
- Create a new snapshot file if this is the first chunk (offset is 0)
- Write the data into the snapshot file at the given offset
- Reply and wait for more data chunks if done is false
- Save the snapshot file, discarding any existing or partial snapshot with a smaller index
- If an existing log entry has the same index and term as the snapshot's last included entry, retain the log entries following it and reply
- Otherwise, discard the entire log
- Reset the state machine using the snapshot contents (and load the snapshot's cluster configuration)
Figure 13: A summary of the InstallSnapshot RPC. Snapshots are split into chunks for transmission; each chunk gives the follower a sign of life, so it can reset its election timeout.
The leader uses a new RPC called InstallSnapshot to send snapshots to followers that are too far behind; see Figure 13. When a follower receives a snapshot via this RPC, it must decide what to do with its existing log entries. Usually the snapshot contains new information not already in the follower's log. In this case, the follower discards its entire log; it is all superseded by the snapshot and may contain uncommitted entries that conflict with it. If, instead, the follower receives a snapshot that describes a prefix of its log (due to retransmission or by mistake), the log entries covered by the snapshot are deleted, but the entries following the snapshot are still valid and must be retained.
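A rough Go sketch of the receiver side of InstallSnapshot, following the steps listed in Figure 13, is shown below. The types and the applySnapshot helper are assumptions made for illustration; persistence, RPC plumbing, and error handling are omitted:

```go
// Sketch of the InstallSnapshot receiver steps from Figure 13. The types and
// the applySnapshot helper are assumptions; a real implementation must also
// persist the snapshot and handle errors.
package raft

type InstallSnapshotArgs struct {
	Term              uint64 // leader's term
	LeaderID          string
	LastIncludedIndex uint64
	LastIncludedTerm  uint64
	Offset            uint64 // byte offset of this chunk within the snapshot
	Data              []byte
	Done              bool // true if this is the last chunk
}

type InstallSnapshotReply struct {
	Term uint64 // currentTerm, so a stale leader can update itself
}

type Follower struct {
	currentTerm     uint64
	pendingSnapshot []byte // snapshot chunks received so far
}

func (f *Follower) HandleInstallSnapshot(args InstallSnapshotArgs, reply *InstallSnapshotReply) {
	reply.Term = f.currentTerm
	if args.Term < f.currentTerm {
		return // 1. reply immediately if the leader's term is stale
	}
	if args.Offset == 0 {
		f.pendingSnapshot = f.pendingSnapshot[:0] // 2. first chunk: start a new snapshot
	}
	// 3. write data at the given offset; the leader sends chunks in order,
	//    so the offset should match what has been received so far
	if int(args.Offset) == len(f.pendingSnapshot) {
		f.pendingSnapshot = append(f.pendingSnapshot, args.Data...)
	}
	if !args.Done {
		return // 4. reply and wait for more chunks
	}
	// Steps 5-8: save the snapshot (discarding older or partial ones), keep
	// any log suffix after lastIncludedIndex/Term or discard the whole log,
	// and reset the state machine (loading the snapshot's configuration).
	f.applySnapshot(args.LastIncludedIndex, args.LastIncludedTerm, f.pendingSnapshot)
}

// applySnapshot stands in for steps 5-8 of the receiver rules above.
func (f *Follower) applySnapshot(lastIndex, lastTerm uint64, data []byte) {}
```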
This snapshotting approach departs from Raft's strong leader principle, since followers can take snapshots without the leader's knowledge. However, we think this departure is justified. A leader exists to avoid conflicts while reaching consensus, but by the time a snapshot is taken, consensus has already been reached for the entries it covers, so no decisions conflict. Data still flows only from leaders to followers; followers merely reorganize data they already have.
We considered an alternative, leader-based approach in which only the leader would create a snapshot and then send it to each of its followers. However, this has two drawbacks. First, sending the snapshot to each follower would waste network bandwidth and slow the snapshotting process. Each follower already has the information needed to produce its own snapshot, and it is typically much cheaper for a server to produce a snapshot from its local state than to receive one over the network. Second, the leader's implementation would be more complex; for example, the leader would need to send snapshots to followers in parallel with replicating new log entries to them, so as not to block new client requests.
There are two more issues that affect snapshotting performance. First, servers must decide when to snapshot. If a server snapshots too often, it wastes disk bandwidth and other resources; if it snapshots too infrequently, it risks exhausting its storage capacity and increases the time required to replay the log during restarts. One simple strategy is to take a snapshot when the log reaches a fixed size in bytes. If this size is set significantly larger than the expected size of a snapshot, the disk bandwidth overhead of snapshotting will be small.
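A sketch of that simple policy, with an illustrative threshold value:

```go
// Illustrative snapshotting policy: snapshot once the log exceeds a fixed
// size that is set well above the expected snapshot size.
package raft

const snapshotThresholdBytes = 64 << 20 // 64 MiB; the value is an assumption

func shouldSnapshot(logSizeBytes int) bool {
	return logSizeBytes >= snapshotThresholdBytes
}
```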
The second performance issue is that writing a snapshot can take a significant amount of time, and we do not want this to delay normal operations. The solution is to use copy-on-write techniques so that new updates can be accepted without affecting the snapshot being written. For example, state machines built with functional data structures support this naturally. Alternatively, the operating system's copy-on-write support (such as fork on Linux) can be used to create an in-memory snapshot of the entire state machine (our implementation uses this approach).
8 Client Interaction
This section describes how clients interact with Raft, including how clients find the cluster leader and how Raft supports linearizable semantics. These issues apply to all consensus-based systems, and Raft's solutions are similar to those of other systems.
Clients in Raft send all of their requests to the leader. When a client first starts up, it connects to a randomly chosen server. If the client's first choice is not the leader, that server rejects the client's request and supplies information about the most recent leader it has heard from (AppendEntries requests include the leader's network address). If the leader crashes, client requests time out; clients then retry the process with randomly chosen servers.
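A client-side sketch of this discovery loop might look as follows; the reply shape and the sendToServer helper are assumptions for illustration only:

```go
// Client-side leader discovery: try a random server, follow a leader hint
// if rejected, and retry another random server on timeout.
package raft

import (
	"errors"
	"math/rand"
	"time"
)

type clientReply struct {
	OK         bool   // true if this server was the leader and applied the command
	LeaderHint string // address of the most recent leader known to this server
	Result     []byte
}

// sendToServer stands in for an RPC call with a timeout.
func sendToServer(addr string, cmd []byte, timeout time.Duration) (clientReply, error) {
	return clientReply{}, errors.New("not implemented")
}

// Submit retries until some leader accepts and answers the command.
func Submit(servers []string, cmd []byte) []byte {
	target := servers[rand.Intn(len(servers))] // start with a random server
	for {
		reply, err := sendToServer(target, cmd, time.Second)
		switch {
		case err == nil && reply.OK:
			return reply.Result
		case err == nil && reply.LeaderHint != "":
			target = reply.LeaderHint // rejected: redirect to the known leader
		default:
			target = servers[rand.Intn(len(servers))] // timeout or no hint: pick again at random
		}
	}
}
```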
Our goal for Raft is to implement linearizable semantics (each operation appears to execute instantaneously, exactly once, at some point between its invocation and its response). However, as described so far, Raft can execute a command multiple times: for example, if the leader crashes after committing a log entry but before responding to the client, the client will retry the command with a new leader, causing it to be executed a second time. The solution is for clients to assign a unique serial number to every command. The state machine then tracks the latest serial number processed for each client, along with the associated response. If it receives a command whose serial number has already been executed, it responds immediately without re-executing the command.
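A sketch of this duplicate-detection scheme in the state machine, with illustrative type names:

```go
// Duplicate detection in the state machine: remember, per client, the
// highest serial number applied and the response it produced.
package raft

type Command struct {
	ClientID string
	Seq      uint64 // client-assigned serial number, increasing per client
	Op       []byte
}

type session struct {
	lastSeq  uint64
	lastResp []byte
}

type StateMachine struct {
	sessions map[string]*session
}

func NewStateMachine() *StateMachine {
	return &StateMachine{sessions: make(map[string]*session)}
}

// Apply executes cmd unless its serial number was already applied, in which
// case the cached response is returned without re-executing the command.
func (sm *StateMachine) Apply(cmd Command) []byte {
	s, ok := sm.sessions[cmd.ClientID]
	if ok && cmd.Seq <= s.lastSeq {
		return s.lastResp // duplicate: reply immediately
	}
	resp := sm.execute(cmd.Op)
	if !ok {
		s = &session{}
		sm.sessions[cmd.ClientID] = s
	}
	s.lastSeq, s.lastResp = cmd.Seq, resp
	return resp
}

// execute stands in for actually applying the operation to the state.
func (sm *StateMachine) execute(op []byte) []byte { return nil }
```

A real implementation would also need a policy for expiring old client sessions, which this sketch omits.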
Read-only operations can be handled without writing anything to the log. However, with no additional measures, this would risk returning stale data, since the leader responding to the request might have been superseded by a newer leader without knowing it. Linearizable reads must not return stale data, so Raft needs two extra precautions to guarantee this without using the log. First, a leader must have the latest information on which entries are committed. The Leader Completeness Property guarantees that a leader has all committed entries, but at the start of its term it may not know which those are. To find out, it needs to commit an entry from its own term; Raft handles this by having each leader commit a blank no-op entry into the log at the start of its term. Second, a leader must check whether it has been deposed before processing a read-only request (its information may be stale if a more recent leader has been elected). Raft handles this by having the leader exchange heartbeat messages with a majority of the cluster before responding to read-only requests. Alternatively, the leader could rely on the heartbeat mechanism to provide a form of lease, but this would rely on timing for safety (it assumes bounded clock skew).
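The two safeguards could be combined in a read path roughly like the following sketch; the structure and names are assumptions, not a particular implementation:

```go
// The two read-only safeguards described above: (1) an entry from the
// current term must be committed, and (2) the leader must confirm it has
// not been deposed before answering.
package raft

import "errors"

type Leader struct {
	currentTerm      uint64
	termOfLastCommit uint64 // term of the most recently committed entry
}

// confirmLeadership would exchange a round of heartbeats with a majority of
// the cluster and report whether this node is still leader (not shown).
func (l *Leader) confirmLeadership() bool { return true }

// HandleReadOnly serves a linearizable read without appending to the log.
func (l *Leader) HandleReadOnly(read func() []byte) ([]byte, error) {
	if l.termOfLastCommit != l.currentTerm {
		// the start-of-term no-op has not committed yet
		return nil, errors.New("no entry committed in current term")
	}
	if !l.confirmLeadership() {
		return nil, errors.New("leadership lost")
	}
	return read(), nil
}
```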
9 Implementation and Evaluation
We have implemented Raft as part of a replicated state machine that stores configuration information for RAMCloud and assists in RAMCloud's failover. The Raft implementation contains roughly 2000 lines of C++ code, not including tests, comments, or blank lines. The source code is freely available. There are also about 25 independent third-party open-source implementations of Raft, in various stages of development, based on drafts of this paper, and several companies are already deploying Raft-based systems.
This section evaluates the Raft algorithm in three areas: understandability, correctness, and performance.
9.1 Comprehensibility
To compare the comprehensibility of Raft and Paxos, we conducted an experimental study using upper-level undergraduate and graduate students in an Advanced Operating Systems course at Stanford University and a Distributed Computing course at U.C. Berkeley. We recorded a video lecture of Raft and another of Paxos, and created corresponding quizzes. The Raft lecture covered the content of this paper except for log compaction; the Paxos lecture covered enough material to create an equivalent replicated state machine, including single-decree Paxos, multi-decree Paxos, reconfiguration, and a few optimizations needed in practice (such as leader election). The quizzes tested basic understanding of the algorithms and also required students to reason about corner cases. Each student watched one video, took the corresponding quiz, then watched the second video and took the second quiz. About half of the participants did the Paxos portion first and the other half did the Raft portion first, in order to account for both individual differences in performance and experience gained from the first part of the study. We compared participants' scores on each quiz to determine whether participants showed a better understanding of Raft.
We tried to make the comparison between Paxos and Raft as fair as possible. The experiment favored Paxos in two ways: 15 of the 43 participants reported having some prior experience with Paxos, and the Paxos video was 14% longer than the Raft video. As summarized in Table 1, we took steps to mitigate potential sources of bias. All of our materials are available for review.
Concern | Steps taken to mitigate bias | Materials for review |
---|---|---|
Equal lecture quality | Same lecturer for both. The Paxos lecture was based on materials already used in several universities. The Paxos lecture is 14% longer. | videos |
Equal quiz difficulty | Questions were grouped by difficulty and paired across the two quizzes. | quizzes |
Fair grading | A rubric was used. Quizzes were graded in random order, alternating between the two quizzes. | rubric |
Table 1: Concerns about possible bias in the study, the steps taken to address each, and the corresponding materials available for review.
On average, participants scored 4.9 points higher on the Raft quiz than on the Paxos quiz (out of a possible 60 points, the mean Raft score was 25.7 and the mean Paxos score was 20.8); Figure 14 shows their individual scores. A paired t-test (Student's t-test) shows that, with 95% confidence, the true distribution of Raft scores has a mean at least 2.5 points higher than the true distribution of Paxos scores.
Figure 14: A scatter plot comparing the scores of 43 participants on the Raft and Paxos quizzes. Points above the diagonal represent participants who scored higher on Raft.
We also created a linear regression model that predicts a new student's quiz scores based on three factors: which quiz they took, their degree of prior Paxos experience, and the order in which they learned the algorithms. The model predicts that the choice of quiz produces a 12.5-point difference in favor of Raft. This is significantly higher than the observed difference of 4.9 points, because many of the actual students had prior Paxos experience, which helped Paxos considerably but had little effect on Raft. Curiously, the model also predicts scores 6.3 points lower on Raft for people who had already taken the Paxos quiz; although we do not know why, this does appear to be statistically significant.
We also surveyed participants after their quizzes to see which algorithm they felt would be easier to implement or explain; these results are shown in Figure 15. An overwhelming majority of participants reported that Raft would be easier to implement and explain (33 of 41). However, such self-reported feelings are less reliable than participants' quiz scores, and participants may have been biased by knowledge of our hypothesis that Raft is easier to understand.
Figure 15: Using a 5-point scale, participants were asked (left) which algorithm they felt would be easier to implement in a functioning, correct, and efficient system, and (right) which would be easier to explain to students.
A more detailed discussion of the Raft user study is available.
9.2 Correctness
In Section 5, we developed a formal specification and a proof of safety for the consensus mechanism. The formal specification makes the information summarized in Figure 2 completely precise using the TLA+ specification language. It is about 400 lines long and serves as the subject of the proof. It is also useful on its own for anyone implementing Raft. We have mechanically proven the Log Completeness Property using the TLA proof system. However, this proof relies on invariants that have not been mechanically checked (for example, we have not proven the type safety of the specification). Furthermore, we have written an informal proof of the State Machine Safety property that is complete and relatively precise (about 3500 words).
9.3 Performance
Raft's performance is similar to that of other consensus algorithms such as Paxos. The most important case for performance is when an established leader replicates new log entries. Raft achieves this using the minimal number of messages (a single round-trip from the leader to half the cluster). It is also possible to further improve Raft's performance. For example, it easily supports batching and pipelining requests for higher throughput and lower latency. Various performance optimizations have been proposed for other consensus algorithms; many of these could be applied to Raft as well, but we leave this to future work.
We used our own Raft implementation to measure the performance of Raft's leader election algorithm and answer two questions. First, does the election process converge quickly? Second, what is the minimum downtime that can be achieved after a leader crash?
Figure 16: The time to detect and replace a crashed leader. The top graph varies the amount of randomness in election timeouts, and the bottom graph scales the minimum election timeout. Each line represents 1000 trials (except for 100 trials for "150-150 ms") for a particular choice of election timeouts; for example, "150-155 ms" means that election timeouts were chosen randomly and uniformly between 150 ms and 155 ms. The measurements were taken on a cluster of five servers with a broadcast time of roughly 15 ms. Results for a cluster of nine servers are similar.
To measure leader election, we repeatedly crashed the leader of a five-server cluster and timed how long it took to detect the crash and elect a new leader (see Figure 16). To generate a worst-case scenario, the servers had different log lengths in each trial, so some candidates were not eligible to become leader. Furthermore, to encourage split votes, our test script triggered a synchronized broadcast of heartbeat RPCs from the leader before terminating it (this approximates the leader replicating a new log entry before crashing). The leader was crashed uniformly at random within its heartbeat interval, which was half the minimum election timeout in all tests. Thus, the smallest possible downtime was about half the minimum election timeout.
The top graph in Figure 16 shows that a small amount of randomization in the election timeout is enough to avoid split votes. Without randomization, leader election consistently took longer than 10 seconds in our tests because of many split votes. Adding just 5 ms of randomness helped significantly, bringing the median downtime to 287 ms. Using more randomness improves the worst case: with 50 ms of randomness, the worst-case completion time (over 1000 trials) was 513 ms.
The bottom graph in Figure 16 shows that downtime can be reduced by lowering the election timeout. With an election timeout of 12-24 ms, it takes only 35 ms on average to elect a new leader (the longest trial took 152 ms). However, lowering the timeouts beyond this point would violate Raft's timing requirement: leaders would have difficulty broadcasting heartbeats before other servers start new elections. This causes unnecessary leader changes and lowers overall system availability. We recommend using a conservative election timeout such as 150-300 ms; such timeouts are unlikely to cause unnecessary leader changes and still provide good availability.
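A minimal sketch of picking a randomized election timeout from this recommended conservative range (the exact range is a deployment choice):

```go
// Randomized election timeouts drawn from the conservative range recommended
// above; the exact range is a deployment choice.
package raft

import (
	"math/rand"
	"time"
)

const (
	minElectionTimeout = 150 * time.Millisecond
	maxElectionTimeout = 300 * time.Millisecond
)

// nextElectionTimeout picks a fresh timeout uniformly at random, which
// desynchronizes candidates and keeps split votes rare.
func nextElectionTimeout() time.Duration {
	return minElectionTimeout + time.Duration(rand.Int63n(int64(maxElectionTimeout-minElectionTimeout)))
}
```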
10 Related Work
There has been a substantial amount of published work on consensus algorithms, much of which falls into one of the following categories:
- Lamport’s original description of Paxos, and attempts to describe it more clearly.
- Elaborations of Paxos, which fill in missing details and modify the algorithm to provide a better foundation for implementation.
- Systems that implement consensus algorithms, such as Chubby, ZooKeeper, and Spanner. The algorithms for Chubby and Spanner have not been published in detail, though both claim to be based on Paxos. ZooKeeper's algorithm has been published in detail, but it is quite different from Paxos.
- Performance optimizations that can be applied to Paxos.
- Oki and Liskov's Viewstamped Replication (VR), an alternative approach to consensus similar to Paxos. The original description was intertwined with a protocol for distributed transactions, but the core consensus protocol has been separated in a recent update. VR uses a leader-based approach with many similarities to Raft.
The greatest difference between Raft and Paxos is Raft's strong leadership: Raft uses leader election as an essential part of the consensus protocol, and it concentrates as much functionality as possible in the leader. This approach makes the algorithm easier to understand. For example, in Paxos, leader election is orthogonal to the basic consensus protocol: it serves only as a performance optimization and is not required for achieving consensus. However, this results in additional mechanism: Paxos includes both a two-phase protocol for basic consensus and a separate mechanism for leader election. In contrast, Raft incorporates leader election directly into the consensus algorithm and uses it as the first of the two phases of consensus. This eliminates a considerable amount of mechanism.
Like Raft, VR and ZooKeeper are leader-based and therefore share many of Raft's advantages. However, Raft has less mechanism than VR or ZooKeeper because it minimizes the functionality in non-leaders. For example, log entries in Raft flow in only one direction: outward from the leader in AppendEntries RPCs. In VR, log entries flow in both directions (leaders can receive log entries during the election process); this results in additional mechanism and complexity. ZooKeeper's log entries also flow both to and from the leader, but its implementation is apparently more like Raft's.
Raft has fewer message types than the other consensus-based log replication algorithms mentioned above. For example, we counted the message types VR and ZooKeeper use for basic consensus and membership changes (excluding log compaction and client interaction, as these are nearly independent of the algorithms). VR and ZooKeeper each define 10 different message types, while Raft has only 4 (two RPC requests and their responses). Raft's messages are a bit more densely packed than those of the other algorithms, but they are simpler collectively. In addition, VR and ZooKeeper are described as transmitting entire logs during leader changes, so additional message types would be needed to make those mechanisms practical.
Raft's strong leadership approach simplifies the algorithm, but it precludes some performance optimizations. For example, Egalitarian Paxos (EPaxos) can achieve higher performance in some situations with a leaderless approach. EPaxos exploits commutativity in state machine commands: any server can commit a command in just one round of communication as long as any commands proposed concurrently commute with it. However, if commands are proposed concurrently and do not commute, EPaxos requires an additional round of communication. Because any server may commit commands, EPaxos balances load well between servers and can achieve lower latency in WAN settings. However, it adds significant complexity to Paxos.
Several approaches to cluster membership changes have been proposed or implemented in other work, including Lamport's original proposal, VR, and SMART. We chose the joint consensus approach for Raft because it leverages the rest of the consensus protocol, so very little additional mechanism is needed for membership changes. Lamport's alpha-based approach was not an option for Raft because it assumes consensus can be reached without a leader. In comparison to VR and SMART, Raft's reconfiguration algorithm has the advantage that membership changes can occur without limiting the processing of normal requests; in contrast, VR stops all normal processing during configuration changes, and SMART imposes an alpha-like limit on the number of outstanding requests. Raft's approach also adds less mechanism than either VR or SMART.
11 Conclusion
Algorithms are often designed with correctness, efficiency, or conciseness as the primary goal. Although these are all worthy goals, we believe that comprehensibility is just as important. None of the other goals can be achieved until developers render the algorithm into a practical implementation, which will inevitably deviate from and expand upon its published form. Unless developers have a deep understanding of the algorithm and can build intuitions about it, it will be difficult for them to retain its desirable properties in their implementation.
In this paper, we addressed the problem of distributed consensus, where a widely accepted but impenetrable algorithm, Paxos, has challenged students and developers for many years. We developed a new algorithm, Raft, which is demonstrably easier to understand than Paxos. We also believe that Raft provides a solid foundation for practical implementations. Treating comprehensibility as the primary design goal changed the way we approached the design of Raft; as the design progressed, we found ourselves reusing a few techniques repeatedly, such as decomposing the problem and simplifying the state space. These techniques not only improved Raft's comprehensibility, but also made it easier to convince ourselves of its correctness.
12 Acknowledgments
The user study would not have been possible without the support of Ali Ghodsi, David Mazières, and the students of CS 294-91 at Berkeley and CS 240 at Stanford. Scott Klemmer helped us design the user study, and Nelson Ray advised us on statistical analysis. The Paxos slides used in the user study borrowed heavily from slides originally created by Lorenzo Alvisi. Special thanks go to David Mazières and Ezra Hoch for finding hard-to-find bugs in Raft. Many people provided helpful feedback on the paper and user study materials, including Ed Bugnion, Michael Chan, Hugues Evrard, Daniel Giffin, Arjun Gopalan, Jon Howell, Vimalkumar Jeyakumar, Ankita Kejriwal, Aleksandar Kracun, Amit Levy, Joel Martin, Satoshi Matsushita, Oleg Pesok, David Ramos, Robbert van Renesse, Mendel Rosenblum, Nicolas Schiper, Deian Stefan, Andrew Stone, Ryan Stutsman, David Terei, Stephen Yang, Matei Zaharia, and 24 anonymous conference reviewers (with duplicates), with special thanks to our shepherd Eddie Kohler. Werner Vogels tweeted a link to an earlier draft, which brought Raft significant attention. This work was supported by the Gigascale Systems Research Center and the Multiscale Systems Center, funded under the Focus Center Research Program, a Semiconductor Research Corporation program, by STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, by the National Science Foundation under Grant No. 0963859, and by grants from Facebook, Google, Mellanox, NEC, NetApp, SAP, and Samsung. Diego Ongaro is supported by the Junglee Corporation Stanford Graduate Fellowship.
This post was reposted from TopJohn’s Blog with permission from TopJohn