Preface
With the epidemic putting extra pressure on the job market, interviews have become a hot topic, so today let's go through distributed locks and how to answer the interview questions that come up around them.
To that end, the distributed-lock interview knowledge points are summarized and explained below.
Now, let's get down to business.
What is a distributed lock
As you all know, when multiple threads on one machine contend for the same resource and repeated execution can leave it in an inconsistent state, we call the code non-thread-safe. In general we solve this with a lock; in Java, synchronized will do. If the contenders are different Java processes on the same machine, we can fall back on the operating system's file locks. And when the contenders sit on different machines? That is where distributed locks come in.
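As an aside on that middle step, coordinating two JVMs on the same machine with an OS file lock can look like this minimal sketch (the lock-file path is arbitrary, chosen only for illustration):

import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class FileLockDemo {
    public static void main(String[] args) throws Exception {
        // Two JVMs on the same machine can coordinate through an OS-level file lock.
        try (FileChannel channel = FileChannel.open(Paths.get("/tmp/app.lock"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            try (FileLock lock = channel.lock()) {  // blocks until this JVM gets the lock
                System.out.println("lock held, doing exclusive work...");
            }  // the lock is released when the try-with-resources block exits
        }
    }
}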
Features:
- Mutual exclusion: as with local locks, mutual exclusion is the baseline, but a distributed lock must be mutually exclusive across threads on different nodes.
- Reentrancy: the same thread on the same node can acquire the lock again once it already holds it.
- Lock timeout: the lock supports expiration, just like a local lock, to prevent deadlocks.
- High efficiency and high availability: locking and unlocking must be fast, and the lock service must stay highly available so the distributed lock itself does not become a point of failure; a degradation path can be added as a fallback.
- Blocking and non-blocking: like ReentrantLock, it should support lock(), tryLock(), and tryLock(long timeout) (see the interface sketch after this list).
- Fair and unfair locks (optional): a fair lock grants the lock in the order requests arrive, while an unfair lock makes no ordering guarantee; this is rarely implemented for distributed locks.

I'm sure you've all encountered a business scenario like this: a scheduled task must run at a fixed time, but the task is not idempotent, so only one thread on one machine may execute it at any given moment.
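Whatever the backing store, the features above boil down to an API much like java.util.concurrent.locks.Lock. A hypothetical interface, purely for illustration (the names here are ours, not from any particular library):

import java.util.concurrent.TimeUnit;

// A hypothetical distributed-lock API mirroring the feature list above.
public interface DistributedLock {
    void lock() throws Exception;                                   // blocking acquire
    boolean tryLock() throws Exception;                             // non-blocking attempt
    boolean tryLock(long timeout, TimeUnit unit) throws Exception;  // bounded wait
    void unlock() throws Exception;
    // Implementations should be reentrant per (node, thread) and should expire
    // a held lock after a TTL so that a crashed holder cannot cause deadlock.
}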
Distributed locks can be implemented in many ways, including Redis, ZooKeeper, and Google's Chubby.
Redis implements distributed locking
Just a quick introduction. I believe you have already thought of a solution: every time the task runs, first query Redis for the lock key; if it is absent, write it, and then start the task.
But there is a race. For example, when process A and process B query Redis, both find no corresponding value, and both proceed to write. Since the check and the write are not one atomic step, both writes succeed and both processes believe they hold the lock. Fortunately, Redis provides an atomic check-and-write operation: SETNX (SET if Not eXists).
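Here is a minimal sketch of that check-and-write using the Jedis client (the Redis address, key name, and token are all assumptions made for the example):

import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String token = UUID.randomUUID().toString();  // identifies this holder
            // SET key value NX is equivalent to SETNX: only one caller can
            // create the key, so only one caller "wins" the lock.
            String reply = jedis.set("lock:task", token, SetParams.setParams().nx());
            if ("OK".equals(reply)) {
                System.out.println("got the lock, running the task");
            } else {
                System.out.println("someone else holds the lock");
            }
        }
    }
}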
It would be naive to think that is all a distributed lock needs. Consider the extreme case where a thread gets the lock but the machine then freezes: the lock is never released and the task can never run again. A better solution is to estimate how long the task should take and set a timeout on the lock, after which someone else can take it. But this leads to another problem: sometimes the load is so high that the task runs slowly, and another instance starts executing after the timeout expires while the first is still going.
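Modern Redis lets SETNX and the expiry travel in one atomic SET command; and the flip side of a timeout is that you must not delete a lock that has already passed to someone else, so the release should check the token first. A sketch under those assumptions (the 30-second TTL is just an estimate for the example):

import java.util.Collections;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockWithTtl {
    // Try to acquire; true only if we created the key, which then expires on its own.
    static boolean acquire(Jedis jedis, String key, String token, long ttlMillis) {
        // SET key token NX PX ttl -- SETNX plus expiry in a single atomic command
        return "OK".equals(jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis)));
    }

    // Release only if we still own the lock; the check and the delete must be
    // one atomic step, hence the Lua script.
    static void release(Jedis jedis, String key, String token) {
        String script = "if redis.call('get', KEYS[1]) == ARGV[1] then "
                      + "return redis.call('del', KEYS[1]) else return 0 end";
        jedis.eval(script, Collections.singletonList(key), Collections.singletonList(token));
    }
}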
The charm of architectural design is that solving one problem always surfaces new problems to be solved step by step. Here, the usual remedy is to start a daemon thread after grabbing the lock that periodically asks Redis "do I still hold this lock, and how long until it expires?"; if the lock is about to expire while the task is still running, renew it in time.
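A minimal sketch of such a daemon, again assuming Jedis and the key/token from the examples above (give the watchdog its own Jedis connection, since Jedis instances are not thread-safe); a production watchdog would do the ownership check and the renewal atomically in a Lua script rather than as two separate calls:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class LockWatchdog {
    // Renew the lock's TTL every 10 seconds while we still own it
    // (assumes the 30-second TTL from the example above).
    static ScheduledExecutorService start(Jedis jedis, String key, String token) {
        ScheduledExecutorService daemon = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "lock-watchdog");
            t.setDaemon(true);  // dies with the worker, so a crashed holder stops renewing
            return t;
        });
        daemon.scheduleAtFixedRate(() -> {
            // Note: get-then-pexpire is not atomic; a real watchdog would
            // wrap this pair in a Lua script.
            if (token.equals(jedis.get(key))) {
                jedis.pexpire(key, 30_000);
            }
        }, 10, 10, TimeUnit.SECONDS);
        return daemon;
    }
}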
OK, with that you now know how to implement a distributed lock service on top of Redis.
Zookeeper implements distributed locks
Zookeeper implements distributed locks with the structure shown in the figure below:
The figure shows the Zookeeper cluster on the left, where lock is the parent data node and node_1 through node_n are a series of sequential ephemeral nodes; on the right, client_1 through client_n are the clients trying to acquire the lock, and Service is the service that must be accessed under mutual exclusion. Each client creates a sequential ephemeral node under lock; the client whose node has the smallest sequence number holds the lock, and every other client watches the node immediately before its own, so it is woken exactly when its turn comes.
Code implementation
The source code below implements a distributed lock with Curator, Zookeeper's open-source client. Implementing this against ZK's native API is more involved, so here we reuse the Curator wheel directly and build the lock around Curator's acquire and release methods.
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorDistributeLock {
    public static void main(String[] args) {
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework client = CuratorFrameworkFactory.newClient("111.231.83.101:2181", retryPolicy);
        client.start();
        CuratorFramework client2 = CuratorFrameworkFactory.newClient("111.231.83.101:2181", retryPolicy);
        client2.start();
        final InterProcessMutex mutex = new InterProcessMutex(client, "/curator/lock");
        final InterProcessMutex mutex2 = new InterProcessMutex(client2, "/curator/lock");
        try {
            mutex.acquire();
        } catch (Exception e) { e.printStackTrace(); }
        // Lock acquired; proceed with the business logic
        System.out.println("client enter mutex");
        Thread client2Th = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    mutex2.acquire();
                    System.out.println("client2 enter mutex");
                    mutex2.release();
                    System.out.println("client2 release lock");
                } catch (Exception e) { e.printStackTrace(); }
            }
        });
        client2Th.start();
        try {
            Thread.sleep(5000);
            mutex.release();
            System.out.println("client release lock");
            client2Th.join();
        } catch (Exception e) { e.printStackTrace(); }
        // Close the clients
        client.close();
        client2.close();
    }
}
Running the above code gives the following result.
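Reconstructed from the code above, the console output should read roughly as follows (the middle two lines may interleave either way depending on thread scheduling):

client enter mutex
client release lock
client2 enter mutex
client2 release lock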
As you can see, client acquires the lock first and executes its business logic; only then does client2 get its turn to acquire the lock and run.
Source code analysis
Trace the acquire() method all the way down and you reach the core function, attemptLock.
String attemptLock(long time, TimeUnit unit, byte[] lockNodeBytes) throws Exception
{
    .....
    while ( !isDone )
    {
        isDone = true;
        // create our sequential ephemeral node under the lock path
        ourPath = driver.createsTheLock(client, path, localLockNodeBytes);
        // loop (and wait) until our node actually holds the lock
        hasTheLock = internalLockLoop(startMillis, millisToWait, ourPath);
    }
    // return the node path if the lock was acquired
    if ( hasTheLock )
    {
        return ourPath;
    }
    .....
}
The internalLockLoop function
private boolean internalLockLoop(long startMillis, Long millisToWait, String ourPath) throws Exception
{
    .......
    while ( (client.getState() == CuratorFrameworkState.STARTED) && !haveTheLock )
    {
        // get the sorted children of the lock node, and our node's name within them
        List<String> children = getSortedChildren();
        String sequenceNodeName = ourPath.substring(basePath.length() + 1);
        PredicateResults predicateResults = driver.getsTheLock(client, children, sequenceNodeName, maxLeases);
        if ( predicateResults.getsTheLock() )
        {
            // our node is first in line: we have the lock
            haveTheLock = true;
        }
        else
        {
            // otherwise, watch the node immediately before ours
            String previousSequencePath = basePath + "/" + predicateResults.getPathToWatch();
            synchronized(this)
            {
                try
                {
                    // set the watcher; getData() throws if the previous node no
                    // longer exists, in which case no watcher is set
                    client.getData().usingWatcher(watcher).forPath(previousSequencePath);
                    if ( millisToWait != null )
                    {
                        millisToWait -= (System.currentTimeMillis() - startMillis);
                        startMillis = System.currentTimeMillis();
                        if ( millisToWait <= 0 )
                        {
                            doDelete = true;    // timed out - delete our node
                            break;
                        }
                        wait(millisToWait);     // wait at most the remaining time
                    }
                    else
                    {
                        wait();                 // wait until the watcher notifies us
                    }
                }
                catch ( KeeperException.NoNodeException e )
                {
                    // the previous node was deleted before getData() ran; loop and retry
                }
            }
        }
    }
    .......
}
Conclusion
Using ZK to implement distributed locks is not very common in practice: it requires a full ZK cluster, and the frequent watching puts extra pressure on that cluster, so it is not something I recommend for everyone. In an interview, though, being able to talk through how to implement a distributed lock with ZK should be a plus.
Well, that covers implementing distributed locks from both the Redis and the ZooKeeper angles. If you feel you've gained something, welcome to follow the public account Java Technology Alliance; more material is waiting for you there.