In distributed systems, distributed locks solve the problem of synchronizing multiple instances. For example, during master election, the instance that acquires the distributed lock becomes the master while the others remain slaves; more generally, only the instance holding the lock may perform a particular operation.
There are currently two commonly used distributed lock implementations, one based on ZooKeeper and one based on Redis. Both ZooKeeper and Redis are widely used third-party components in production environments. Let's look at how each implementation works.
Distributed lock implementation requirements
To implement a distributed lock, the following three requirements must be met:
- Mutual exclusion: a lock can be held by only one client at any time.
- No deadlock: even if the holding client crashes, subsequent clients can still acquire the lock.
- Whoever locks must unlock: only the client that acquired the lock may release it.
ZooKeeper distributed lock
Two concepts need to be clarified before explaining ZooKeeper's distributed locks:
- Ephemeral (temporary) node: its life cycle is tied to the client session. For example, if client session A creates an ephemeral node NodeA, then NodeA disappears as soon as session A is closed or the network connection is lost.
- Sequential node: ZooKeeper appends a monotonically increasing counter to the node name. For example, the first node created gets the suffix 000000001 and the next gets 000000002.
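As a quick illustration, here is how the two flags combine in the kazoo Python client (the connection address, path, and printed suffix are illustrative assumptions):

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed local ZooKeeper
zk.start()
zk.ensure_path("/lock")

# ephemeral=True ties the node to this session; sequence=True appends the counter.
path = zk.create("/lock/node-", ephemeral=True, sequence=True)
print(path)  # e.g. /lock/node-0000000001
```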
ZooKeeper's distributed lock is implemented with ephemeral sequential nodes, and the general process is as follows:
- To acquire the lock, each client creates a unique ephemeral sequential node under a directory designated by ZooKeeper.
- The node with the smallest sequence number among all the ephemeral nodes is the current holder of the lock.
- To release the lock, a client deletes the ephemeral node it holds.
For example, during locking all clients create ephemeral nodes under the /lock directory. A client that finds its own node is the smallest in /lock has acquired the lock; otherwise it watches the largest node among the nodes smaller than its own, i.e. its immediate predecessor.
Watching only the immediate predecessor avoids the "herd" (stampede) effect, where releasing a lock would wake up all waiting clients even though only one of them can acquire it.
To release the lock, a client simply deletes the ephemeral sequential node it created.
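Here is a minimal sketch of this recipe in Python with the kazoo client (the /lock path and connection details are assumptions; kazoo also ships a ready-made Lock recipe that implements the same pattern for you):

```python
import threading
from kazoo.client import KazooClient

LOCK_DIR = "/lock"

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed ZooKeeper address
zk.start()
zk.ensure_path(LOCK_DIR)

def acquire():
    # Step 1: create a unique ephemeral sequential node, e.g. /lock/node-0000000007.
    path = zk.create(LOCK_DIR + "/node-", ephemeral=True, sequence=True)
    name = path.rsplit("/", 1)[1]
    while True:
        children = sorted(zk.get_children(LOCK_DIR))
        if name == children[0]:
            return path  # step 2: smallest node -> we hold the lock
        # Watch only the immediate predecessor to avoid the herd effect.
        predecessor = children[children.index(name) - 1]
        gone = threading.Event()
        if zk.exists(LOCK_DIR + "/" + predecessor, watch=lambda e: gone.set()):
            gone.wait()  # block until the predecessor node is deleted

def release(path):
    zk.delete(path)  # step 3: releasing the lock = deleting our own node

lock_path = acquire()
try:
    pass  # critical section goes here
finally:
    release(lock_path)
```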
Advantages: High lock safety, because ZooKeeper's replication prevents lock data from being lost; the recipe is also easy for clients to use.
Disadvantages: Relatively high performance overhead. Because ephemeral nodes must be created and deleted dynamically, there can be a noticeable lag (reportedly up to around a minute under heavy cluster load) before an ephemeral node actually disappears.
Redis distributed lock
Redis's distributed lock implementation is more complex than ZooKeeper's, and it comes in two variants: a single-instance implementation and a multi-instance (multi-master) implementation.
It should be pointed out that in a master-slave Redis deployment, only the master can acquire and release locks, which makes it essentially the same as the single-instance scheme. Otherwise, multiple clients could end up holding the same lock during a master-slave failover.
For example:
- Client A acquires the lock on the master.
- The master crashes before the key created by A is replicated to the slave (master/slave replication is asynchronous).
- The slave is promoted to master.
- Client B acquires the same lock that A still holds, because the new master has no record of A's lock.
Redis single instance implementation scheme
Obtain the lock by using the following command:
SET resource_name my_random_value NX PX 30000
This command sets the key only if it does not already exist (NX, "not exists"), sets its expiry to 30000 milliseconds (PX), and sets its value to my_random_value. This value must be unique across all clients and all lock requests.
The expiry of the key is also called the lock validity period: it is the time after which the lock is released automatically.
This scheme is only safe in a non-distributed, single-point environment where the Redis instance is assumed never to go down.
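A minimal sketch of the single-instance scheme with the redis-py client; the release uses the check-and-delete Lua script recommended in the Redis documentation, so a client can only ever delete its own lock (connection details are assumptions):

```python
import uuid
import redis

r = redis.Redis(host="127.0.0.1", port=6379)  # assumed single Redis instance

# Check-and-delete script from the Redis docs: delete the key only
# if it still holds the random value we set when locking.
RELEASE_SCRIPT = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""

def acquire(resource, ttl_ms=30000):
    token = str(uuid.uuid4())  # my_random_value, unique per lock request
    # Equivalent to: SET resource token NX PX ttl_ms
    if r.set(resource, token, nx=True, px=ttl_ms):
        return token
    return None

def release(resource, token):
    # Atomic compare-and-delete: we never remove another client's lock.
    return r.eval(RELEASE_SCRIPT, 1, resource, token)
```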
Redis multi-instance implementation scheme (Redlock algorithm)
In the distributed version of the algorithm, we assume there are N Redis master nodes that are completely independent and do not use replication or any other coordination mechanism to synchronize data.
Assuming N=5, a client acquires the lock as follows:
- Gets the current time in milliseconds.
- Sequentially requests the lock on all N nodes with the same key and random value, using a short timeout for each request so that if one master is down the client quickly moves on to the next.
- If the lock was acquired on a majority of the master nodes (three out of five in this case), the client computes the time spent in step two; the lock is considered acquired only if this elapsed time is less than the lock validity period.
- If the lock was acquired successfully, its effective validity period is now the initial validity period minus the time it took to request the lock.
- If the acquisition fails (the lock was acquired on fewer than a majority of the masters, or the elapsed time exceeds the lock validity period), the client releases the lock on every master node, including the ones it failed to lock.
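A rough sketch of these steps in Python (the five node addresses and timeout values are illustrative assumptions; production code should use a vetted Redlock client library and the check-and-delete release script shown earlier):

```python
import time
import uuid
import redis

# Assumed addresses of N=5 completely independent Redis masters.
MASTERS = [redis.Redis(host=h, socket_timeout=0.2)  # short per-request timeout
           for h in ("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5")]

def redlock_acquire(resource, ttl_ms=30000):
    token = str(uuid.uuid4())          # unique random value for this request
    start = time.monotonic()           # step 1: current time
    acquired = 0
    for node in MASTERS:               # step 2: try each master in turn
        try:
            if node.set(resource, token, nx=True, px=ttl_ms):
                acquired += 1
        except redis.RedisError:
            pass                       # dead node: move on quickly
    elapsed_ms = (time.monotonic() - start) * 1000
    validity_ms = ttl_ms - elapsed_ms  # step 4: remaining validity period
    if acquired >= len(MASTERS) // 2 + 1 and validity_ms > 0:  # step 3: majority
        return token, validity_ms
    for node in MASTERS:               # step 5: failure -> release everywhere
        try:
            node.delete(resource)      # simplified; should verify token first
        except redis.RedisError:
            pass
    return None
```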
The idea of requiring a majority of the master nodes to grant the lock borrows from quorum-based consensus algorithms such as Paxos (ZooKeeper itself runs the Paxos-like ZAB protocol).
It is also important that when a client fails to acquire the lock, it retries after a random delay, so that multiple clients do not keep retrying in lockstep and repeatedly splitting the vote.
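For example, a retry loop with random jitter (building on the hypothetical redlock_acquire sketch above; the delay bounds are arbitrary):

```python
import random
import time

def acquire_with_retry(resource, attempts=5):
    for _ in range(attempts):
        result = redlock_acquire(resource)
        if result:
            return result
        # Sleep a random amount so competing clients do not retry in lockstep.
        time.sleep(random.uniform(0.05, 0.2))
    return None
```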
Advantages: High performance.
Disadvantages: A single instance is a single point of failure; a master-slave deployment can lose locks during failover; and the multi-master (Redlock) scheme is complex to implement.