Preface
“
Lonely people, lonely songs
Don’t sing love songs when you are alone
The songwriter writes about his own loneliness
Not necessarily about you
Sad people, sad songs
Sad stories of different characters
Scriptwriters decide who gets to be happy
Why act so profound
Just an ordinary person
Neither immortal nor god
Neither sad nor glad
Should never have had
The pain of loving, of coming together and parting well
A wry smile that can’t be found in fantasy
Neither sad nor glad, should never have gotten
Having watched gains and losses pass by
Going mad, turning demon, in the unreal
“
Writing code with a lonely song on infinite loop adds a little something different.
Earlier, we talked about distributed system design practices; for details, please check out the previous posts. The consistent hash algorithm, which is commonly used in distributed system design, was mentioned there but not explained in detail. Today a friend happened to ask about it, so let’s talk about it.
A brief overview
The consistent hashing algorithm is a distributed hash table (DHT) technique, designed mainly to address the monotonicity and dispersion problems of distributed hashing.
Monotonicity means that existing mappings stay valid, so keys do not suddenly miss when nodes are added or removed. With a hash-modulo allocation like the one mentioned earlier, adding a few nodes throws the whole mapping off balance. Dispersion refers to avoiding an uneven distribution of hashed data across nodes.
The consistent hashing paper was published in 1997; readers who want a deeper understanding can read the original paper directly. Download link: citeseerx.ist.psu.edu/viewdoc/sum…
Hashing
Since we’re going to talk about consistent Hash, it’s important to understand the basic concept of Hash.
Hashing is the most common way to distribute data.
Common hash algorithms include MD5, CRC, and MurmurHash.
The MD5 algorithm
MD5 (Message-Digest Algorithm 5) is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value, converting a piece of data, such as text, into a fixed-length value. It is a foundation of many hash-based schemes. Designed by American cryptographer Ronald Linn Rivest, it was published in 1992 and specified in RFC 1321.
CRC algorithm
Published by W. Wesley Peterson in 1961, the Cyclic Redundancy Check (CRC) is a hash-like function that generates a short, fixed-length check code from data such as network packets or computer files. The check value is computed and appended to the data before transmission or storage, and the receiver recomputes it to verify that the data has not changed. CRC is widely used because it is simple to implement in binary hardware, easy to analyze mathematically, and especially good at detecting errors caused by transmission-channel interference.
MurmurHash
MurmurHash is a non-cryptographic hash function suitable for general hash-based lookups. Invented by Austin Appleby in 2008, with several variants, MurmurHash performs better than other popular hash functions for regular keys with a random distribution.
This algorithm has been adopted by many open source projects, such as libstdc++ (version 4.6), Perl, nginx (1.0.1 and later), Rubinius, libmemcached, maatkit, and Hadoop.
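To make the three algorithms above concrete, here is a small Python sketch (not from the original post) that computes an MD5 digest and a CRC32 checksum using only the standard library; MurmurHash has no standard-library implementation, so it is only noted in a comment.

```python
import hashlib
import zlib

data = b"consistent-hashing"

# MD5: 128-bit digest, shown here as a hex string (hashlib is in the standard library).
md5_digest = hashlib.md5(data).hexdigest()

# CRC32: a 32-bit check value, commonly used for error detection (zlib is in the standard library).
crc = zlib.crc32(data)

print("MD5  :", md5_digest)
print("CRC32:", crc)

# MurmurHash is not in the standard library; a third-party package such as mmh3
# is typically used instead, e.g. mmh3.hash(data) for a 32-bit hash.
```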
Common hash methods
Hashing is implemented through a hash method that computes a hash for the data and maps it onto the nodes.
Common hash methods are as follows:
- Direct addressing method: take the keyword itself, or a linear function of the keyword, as the hash address; the definition of this linear function varies and there is no standard.
- Numerical analysis method: assuming the keyword uses radix R and the possible keywords in the hash table are known in advance, several digits of the keyword can be taken to form the hash address.
- Square-the-middle method: the middle bits of the squared keyword form the hash address. When choosing a hash function, the full distribution of keywords is often unknown, so individual digits may not be suitable on their own; but because the middle digits of a square depend on every digit of the original number, the resulting hash addresses are randomly distributed whenever the keywords are. The number of bits taken is determined by the table length.
- Folding method: divide the keyword into parts with the same number of digits (the last part may be shorter), then take the sum of these parts (discarding any carry) as the hash address.
- Modulus method: take the remainder of the keyword divided by a number p no larger than the hash table length m as the hash address: hash(key) = key % p (p ≤ m). The modulus can be taken on the key directly, or after applying the folding or squaring method, etc. The choice of p is critical: a prime close to m is usually chosen, and a poor choice of p easily causes collisions. A short sketch of the folding and modulus methods follows this list.
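As referenced above, here is a minimal Python sketch of the folding and modulus methods; the function names and sample numbers are illustrative assumptions, not part of the original text.

```python
def modulus_hash(key: int, p: int) -> int:
    """Modulus method: hash(key) = key % p, with p usually a prime no larger than the table length."""
    return key % p

def folding_hash(key: int, digits: int, table_len: int) -> int:
    """Folding method: split the key into groups of `digits` decimal digits,
    sum the groups, then reduce the sum into the table range."""
    total, base = 0, 10 ** digits
    while key > 0:
        total += key % base   # take the lowest `digits` digits as one part
        key //= base
    return total % table_len

print(modulus_hash(123456789, 97))      # remainder of 123456789 divided by the prime 97
print(folding_hash(123456789, 3, 100))  # parts 789 + 456 + 123 = 1368 -> 68
```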
Common hash algorithm load balancing
Scenario:
In a distributed system, the database is sharded horizontally, with the same logical database deployed on different machines. How, then, do we choose which machine node handles a given insert, delete, update, or query?
Choosing a node is really a matter of picking an appropriate load-balancing strategy. Let’s say we use an ordinary hash algorithm for load balancing and choose the simple “modulo” approach to illustrate the process.
Suppose there are three server nodes, numbered [0-2], and six actions, numbered [1-6]. After the hash mapping is complete, the data maps onto the three nodes as follows:
Hash: action % total number of nodes = node index
1 % 3 = 1
2 % 3 = 2
3 % 3 = 0
4 % 3 = 1
5 % 3 = 2
6 % 3 = 0
The hash spreads the actions evenly across the three server nodes. It looks perfect!
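A minimal Python sketch of the modulo routing described above, assuming the same three nodes and six actions; the variable names are illustrative.

```python
# Modulo routing: 3 server nodes and 6 actions, each action routed by action % node_count.
NODE_COUNT = 3
actions = [1, 2, 3, 4, 5, 6]

mapping = {action: action % NODE_COUNT for action in actions}
print(mapping)  # {1: 1, 2: 2, 3: 0, 4: 1, 5: 2, 6: 0}
```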
However, this model has two problems in load balancing of distributed cluster system:
1. Poor scalability
To meet the performance requirements of elastic scaling, service nodes often need to be expanded or reduced.
When the node count changes, the previously computed hash mappings become invalid. To restore load balance, the hashes must be recomputed, and any action whose mapping changes after the update must have its data migrated to the new node.
Suppose one server node is added, so the original three nodes become four, numbered [0-3]. The hash mapping is now:
Hash: action % total number of nodes = node index
1 % 4 = 1
2 % 4 = 2
3 % 4 = 3
4 % 4 = 0
5 % 4 = 1
6 % 4 = 2
Comparing the two mappings, actions 3 through 6 now hash to different nodes than before; for example, actions 4, 5, and 6 move from nodes [1, 2, 0] to [0, 1, 2]. The cached data of these actions must be migrated to the new nodes, which is time-consuming and laborious, as the storage diagram after migration shows.
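To make the migration concrete, here is a small sketch (illustrative, not from the original post) that counts which of the six actions change nodes when the cluster grows from three to four nodes.

```python
# Count which of the six actions land on a different node after scaling from 3 to 4 nodes.
actions = [1, 2, 3, 4, 5, 6]

before = {a: a % 3 for a in actions}
after = {a: a % 4 for a in actions}

moved = [a for a in actions if before[a] != after[a]]
print(moved)  # [3, 4, 5, 6] -- most of the cached data has to be migrated
```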
2. Poor fault tolerance
Although online service nodes have various high-availability guarantees, they can still go down, and even when they do not, capacity sometimes has to be reduced. Both downtime and scaling in amount to deleting a service node. The following analyzes the impact of deleting a service node on the load-balancing hash.
Similarly, when server nodes are removed, we still face the same problems of hash recalculation and data migration.
Consistent hashing algorithm
Ordinary hash algorithms run into a variety of practical problems when used for load balancing, which is why we introduce the consistent hash algorithm.
With consistent hashing, the hash function itself is unchanged; instead, a circular hash space (a hash ring) replaces the original linear hash space.
A hash ring is a sufficiently large hash space (usually 0 to 2^32 - 1).
Once the hash ring is built, we can apply consistent hashing to optimize the scenario above.
Each server’s hash value can be computed from its IP address or host name, and that hash value determines where the service node sits on the hash ring.
Finally, each action’s hash is computed in the same way, and the action is mapped to the first node found by walking clockwise around the ring from that hash value. The following figure illustrates the node selection.
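The following is a minimal Python sketch of such a hash ring without virtual nodes, assuming MD5 is used to place both nodes and keys on a 0 to 2^32 - 1 ring; the class name ConsistentHashRing and the sample IP addresses are illustrative assumptions, not from the original post.

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32  # the ring spans 0 .. 2^32 - 1

def ring_hash(key: str) -> int:
    """Map an arbitrary string (node IP / host name or action key) onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self, nodes=()):
        self._positions = []   # sorted hash positions of the nodes
        self._owner = {}       # hash position -> node name
        for node in nodes:
            self.add_node(node)

    def add_node(self, node: str) -> None:
        pos = ring_hash(node)
        bisect.insort(self._positions, pos)
        self._owner[pos] = node

    def remove_node(self, node: str) -> None:
        pos = ring_hash(node)
        self._positions.remove(pos)
        del self._owner[pos]

    def get_node(self, key: str) -> str:
        """Walk clockwise from the key's position and return the first node found."""
        pos = ring_hash(key)
        idx = bisect.bisect_right(self._positions, pos) % len(self._positions)
        return self._owner[self._positions[idx]]

ring = ConsistentHashRing(["10.0.0.0", "10.0.0.1", "10.0.0.2"])
print({key: ring.get_node(key) for key in ["action-1", "action-2", "action-3"]})
```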
Expansion capability enhancement
Compared with the ordinary hash method, the ability to scale out is improved. So how does consistent hashing solve this problem?
Let’s look at what happens when a new node is added to the server cluster, as shown in the figure below. Only action 3 is affected: it was originally stored on node IP: 0 and now moves to the new node IP: 3. Data stored on the other nodes remains unchanged.
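Continuing the hypothetical ConsistentHashRing sketch above, adding a node should only re-route the keys that fall between the new node and its counter-clockwise neighbour; the IP address and key names are again illustrative.

```python
keys = [f"action-{i}" for i in range(1, 7)]
before = {k: ring.get_node(k) for k in keys}

ring.add_node("10.0.0.3")            # bring a new server node online
after = {k: ring.get_node(k) for k in keys}

# Only the keys that now fall just before the new node's position move; the rest stay put.
print([k for k in keys if before[k] != after[k]])
```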
Improved fault tolerance
With an ordinary hash algorithm, when a service node goes offline a large part of the original hash mapping becomes invalid, the invalid mappings trigger data migration, and service performance suffers, so fault tolerance is poor. Let’s see how consistent hashing improves fault tolerance.
As shown in the following figure, if a node goes offline, only its data needs to move clockwise to the next node; data on the other nodes is unaffected. Consistent hashing confines the impact of a node failure to the adjacent node in the clockwise direction, avoiding impact on the whole cluster.
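Similarly, continuing the same hypothetical sketch, removing a node only re-routes the keys that were stored on it.

```python
keys = [f"action-{i}" for i in range(1, 7)]
before = {k: ring.get_node(k) for k in keys}

ring.remove_node("10.0.0.1")         # simulate one node going offline
after = {k: ring.get_node(k) for k in keys}

# Only the keys that were stored on the failed node are re-routed, clockwise, to its neighbour.
print([k for k in keys if before[k] != after[k]])
```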
Consistent hash optimization
Existing problems
Above, we introduced how consistent hashing solves the scalability and fault-tolerance problems of ordinary hashing. The principle is fairly simple and works well in ideal conditions, but real-world use raises some practical problems, analyzed below.
Data skew
The hash ring space is huge, so what happens if there are only a few nodes?
One possible situation is that the hashes of the few service nodes cluster close together, as in the figure below, so the clockwise search lands almost everything on a single node, putting heavy pressure on it. This condition is called data skew.
Consistent hashing – Data skew
Node avalanche
Both data skew and node downtime can lead to cache avalanches.
Data skew piles most of the data onto a single node, which can be overwhelmed and go down; its data then shifts to the next node, which is overwhelmed in turn. The failures snowball like an avalanche.
There is also the case where a node goes offline for other reasons; with extremely large data volumes, a node avalanche can occur in the same way.
In short, this cascading effect that leaves the entire cache cluster unusable is called a node avalanche.
Virtual node
Virtual nodes are introduced to optimize consistent hashing.
Earlier, we said that consistent hashing provides monotonicity and dispersion. One of the key techniques behind this is the virtual node.
A virtual node is a clone of a single physical node placed at additional positions on the hash ring; these clones are called “virtual nodes”. Data that lands on a virtual node is actually mapped to the physical node it corresponds to. In this way, one physical node can be spread evenly around the hash ring through its virtual nodes, which solves the data skew problem.
Because the virtual nodes are scattered around the hash ring, when a physical node goes down its data is spread evenly over the remaining nodes, avoiding the sudden pressure on a single node that leads to a node avalanche.
The following figure shows the hash ring distribution with virtual nodes: the distribution without virtual nodes is on the left, and the distribution with virtual nodes configured is on the right.
Consistent hash – Virtual node
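Below is a minimal Python sketch of a ring with virtual nodes, extending the earlier hypothetical example by hashing several replicas of each physical node (e.g. "10.0.0.0#0", "10.0.0.0#1", ...) onto the ring; the replica count and naming convention are assumptions for illustration.

```python
import bisect
import hashlib
from collections import Counter

RING_SIZE = 2 ** 32

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

class VirtualNodeRing:
    def __init__(self, nodes=(), replicas: int = 100):
        self.replicas = replicas   # number of virtual nodes per physical node
        self._positions = []       # sorted positions of all virtual nodes
        self._owner = {}           # virtual-node position -> physical node
        for node in nodes:
            self.add_node(node)

    def add_node(self, node: str) -> None:
        for i in range(self.replicas):
            pos = ring_hash(f"{node}#{i}")   # each replica gets its own spot on the ring
            bisect.insort(self._positions, pos)
            self._owner[pos] = node

    def remove_node(self, node: str) -> None:
        for i in range(self.replicas):
            pos = ring_hash(f"{node}#{i}")
            self._positions.remove(pos)
            del self._owner[pos]

    def get_node(self, key: str) -> str:
        idx = bisect.bisect_right(self._positions, ring_hash(key)) % len(self._positions)
        return self._owner[self._positions[idx]]

# With enough virtual nodes, keys spread far more evenly across the physical nodes.
ring = VirtualNodeRing(["10.0.0.0", "10.0.0.1", "10.0.0.2"], replicas=100)
print(Counter(ring.get_node(f"action-{i}") for i in range(10000)))
```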
Conclusion
And that’s it: a lofty-sounding algorithm whose implementation really comes down to that one idea. Very often, especially in interviews, the interviewer opens with consistent hashing. So after today’s chat, I believe you have a solid understanding of consistent hashing. Go on, impress the interviewer!!
Stay tuned!