Preface
Locks are an unavoidable topic in everyday programming. So what is a lock, and how are locks classified? Let's lift the veil bit by bit. This article takes an in-depth look at the main points involved in locking — the Java keyword synchronized, the lock wrapper class ReentrantLock, AQS, and distributed locks — and explains the principles behind each.
Classification of lock
Let's start by classifying locks and listing the characteristics of each category, so that we have a general picture before diving in.
- Optimistic lock. Characteristics: low contention, short critical sections.
- Pessimistic lock. Characteristics: heavy contention, long critical sections.
- Fair lock. Characteristics: requests are queued and granted in order of arrival.
- Unfair lock. Characteristics: acquisition order is not guaranteed; an arriving thread may grab the lock directly.
- Exclusive lock. Characteristics: held by a single thread.
- Shared lock. Characteristics: may be held by multiple threads at once.
- Reentrant lock. Characteristics: the holding thread can reacquire a lock it already owns.
- Spin lock. Characteristics: on a failed acquisition, retry a bounded number of times without giving up the CPU, trading CPU time for the cost of blocking and rescheduling; suited to low-contention scenarios.
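The spin-lock idea above can be sketched in a few lines with a CAS loop. This is a minimal illustration only (the class and method names are made up for this example), not how the JVM implements spinning:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal CAS-based spin lock: a failed acquire busy-waits instead of parking the thread.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Keep retrying the CAS without giving up the CPU.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint (JDK 9+); a plain busy-wait otherwise
        }
    }

    public void unlock() {
        held.set(false);
    }

    public boolean isLocked() {
        return held.get();
    }
}
```

As the list says, this only pays off when critical sections are short; a long hold turns the spin into pure CPU waste.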
These categories describe locks from different perspectives, so they are not mutually exclusive: a given lock may be, say, both exclusive and reentrant. Let me start with the simplest lock, and the first one most developers meet: the Java keyword synchronized.
Synchronized
synchronized is the JVM's built-in thread synchronization tool. It was inefficient in early JDK versions, but later versions added substantial performance optimizations.
What object is locked
- Static synchronized methods (and synchronized(SomeClass.class) blocks) lock the Class object.
- Non-static synchronized methods (and synchronized(this) blocks) lock the instance object.
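As a quick illustration of which monitor each form takes (the class and field names here are ours), the explicit block forms are equivalent to the method-level forms:

```java
// Static methods lock the Class object; instance methods lock `this`.
public class SyncTargets {
    static int staticCount = 0;
    int count = 0;

    public static synchronized void staticIncrement() { // locks SyncTargets.class
        staticCount++;
    }

    public static void staticIncrementExplicit() {
        synchronized (SyncTargets.class) {              // same monitor as above
            staticCount++;
        }
    }

    public synchronized void increment() {              // locks this instance
        count++;
    }

    public void incrementExplicit() {
        synchronized (this) {                           // same monitor as above
            count++;
        }
    }
}
```

Note that the static and instance forms use different monitors, so they do not exclude each other.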
Implementation principle
Implemented with the monitorenter and monitorexit bytecode instructions.
monitorenter:
- If the monitor counter is 0, the entering thread acquires the lock immediately and the counter becomes 1; other threads must wait.
- If the current thread already owns the monitor, it re-enters the lock and the counter is incremented.
- If another thread holds the lock, the entering thread waits for the lock to be released.
monitorexit:
- The counter is decremented. If it is still not 0, the current thread continues to hold the lock; if it reaches 0, the current thread no longer owns the monitor and the lock is released.
synchronized guarantees visibility, atomicity, and reentrancy, but it does not guarantee the ordering of instructions inside the critical section.
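Reentrancy is easy to see: a synchronized method can call another synchronized method on the same object without deadlocking, because the second monitorenter just bumps the monitor counter (a small sketch with made-up names):

```java
public class ReentrantSync {
    // outer() already holds the monitor when it calls inner();
    // the monitor counter goes 1 -> 2 instead of the thread blocking on itself.
    public synchronized int outer() {
        return inner() + 1;
    }

    private synchronized int inner() {
        return 1;
    }
}
```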
Lock optimizations
- The monitorenter and monitorexit instructions in the JVM rely on the operating system's underlying Mutex Lock. Using a Mutex Lock, however, requires suspending the current thread and switching from user mode to kernel mode, which is very expensive.
- In most cases, synchronized methods actually run in a single-threaded (contention-free) environment. If a Mutex Lock were acquired every time, performance would suffer badly. The lock implementation in JDK 1.6 therefore introduced a number of optimizations to reduce the overhead of lock operations:
- Lock Coarsening: merge multiple consecutive lock/unlock pairs into a single lock with a larger scope.
- Lock Elimination: via escape analysis, the runtime JIT compiler removes lock protection from data that is not shared outside the current synchronized block. Escape analysis can also allocate objects on the thread-local stack, reducing garbage-collection pressure on the heap.
- Biased Locking: a further optimization for scenarios with no actual contention. Maintaining a lightweight lock is wasteful when not only is there no contention, but the same single thread uses the lock all along. The goal of biased locking is to remove even the lightweight lock's cost in that case: a lightweight lock needs at least one CAS on every acquire or release, while a biased lock needs only one CAS at initialization. "Biased" means the lock assumes that only the first thread to request it will ever use it (no other thread will request it again). The JVM therefore simply records that thread as the owner in the Mark Word via a CAS (starting from a null value); if the record succeeds, the biased lock is acquired and the lock state is marked biased. From then on, as long as the current thread equals the owner, it reacquires the lock at essentially zero cost. Otherwise another thread is competing, and the lock inflates to a lightweight lock. Biased locks cannot be combined with spinning, because the bias assumption is broken as soon as any other thread requests the lock: once that happens, the biased lock quickly inflates to a lightweight lock. Biased locking is enabled by default and can be disabled with -XX:-UseBiasedLocking.
- Lightweight Locking: if locks are hotly contested we have to rely on heavyweight locks to block the losing threads, but if there is no actual contention, requesting a heavyweight lock is a waste. The goal of lightweight locks is to avoid the cost of heavyweight locks when there is no actual contention, including the kernel/user switches caused by system calls and the thread switches caused by blocking. With a lightweight lock there is no need to request a mutex: the JVM simply uses a CAS to point part of the Mark Word at a Lock Record in the thread's stack. If the CAS succeeds, the lightweight lock is acquired and the lock state is marked lightweight. If it fails, another thread already holds the lightweight lock, there is contention (so the lightweight lock is no longer suitable), and the lock inflates to a heavyweight lock. Because lightweight locks target contention-free scenarios, mild contention can still be absorbed with spinning; only when the spin fails does the lock inflate. Drawback: under fierce contention, lightweight locks inflate quickly and the effort of maintaining them becomes wasted.
- Spinning: spin locks reduce the thread switching (suspending and resuming threads) caused by blocking. They help when the lock is held briefly and contention is mild. Drawbacks: 1. On a single-core processor there is no real parallelism: if the current thread does not block, the old owner cannot run and the lock is never released, so any amount of spinning is wasted; likewise, with many threads and few processors, spinning causes a great deal of unnecessary waste. 2. Spinning is CPU-intensive, so for compute-heavy tasks this optimization is often not worth the cost; reducing lock usage is a better option. 3. If the lock is contended for a long time, the spin usually fails to acquire it, wasting the CPU time spent spinning. When locks are held for a long time and contention is high, spinning should be actively disabled.
- Adaptive Spinning: "adaptive" means the spin time is no longer fixed but determined by the previous spin on the same lock and the state of the lock owner. If a spin recently succeeded on the same lock object and the holding thread is running, the VM assumes the next spin is also likely to succeed and allows it to last relatively long, e.g. 100 loop iterations. Conversely, if spinning rarely succeeds on a lock, future acquisitions may spin less or skip spinning entirely to avoid wasting processor resources. Adaptive spinning addresses the problem that lock contention time is uncertain: the JVM cannot sense the exact contention time, and pushing the analysis onto the user would go against the JVM's design. It assumes that different threads hold the same lock object for roughly similar durations and that the degree of contention tends to be stable, so the next spin time can be adjusted based on the previous spin's duration and outcome. Drawback: adaptive spinning does not solve the problem completely either; if the default spin count is set badly (too high or too low), the adaptation has a hard time converging to an appropriate value.
- Heavyweight locks: Java's built-in locks are abstracted as monitor locks. The cost of this synchronization is very high, including kernel/user switches caused by system calls and thread switches caused by blocking; hence the name "heavyweight lock".
Object structure behind synchronized
The object header has two parts: 1. the Mark Word; 2. a pointer to the class (plus the array length, for array objects only).
Note: each part is 32 bits on a 32-bit JVM and 64 bits on a 64-bit JVM.
The first part stores the object's own runtime data: hash code, GC age, lock flag bits, whether the lock is biased, and so on. Officially called the Mark Word, it is the key to lightweight and biased locking. The other part stores a pointer to the object's type data in the method area (Klass Pointer); if the object is an array, an additional field stores the array length.
Lock upgrade process
1. When the object is not used as a lock, it is an ordinary object: the Mark Word records the object's hash code, the lock flag bits are 01, and the biased bit is 0.
2. When the object is used as a synchronization lock and a thread A grabs it, the lock flag bits stay 01 but the biased bit is set to 1, and the first 23 bits record the ID of the thread that grabbed the lock: the object enters the biased state.
3. When thread A tries to acquire the lock again, the JVM sees that the lock flag bits are 01 and the biased bit is 1, i.e. the biased state, and that the thread ID recorded in the Mark Word is thread A's own. Thread A therefore holds the biased lock and can execute the synchronized code directly.
4. When thread B tries to acquire the lock, the JVM sees that the lock is in the biased state but the Mark Word does not record thread B's ID. Thread B first attempts a CAS to grab the lock. If it succeeds, the thread ID in the Mark Word is changed to thread B's, meaning thread B holds the biased lock and can execute the synchronized code. If it fails, go to step 5.
5. A failed CAS on a biased lock means there is some contention, so the biased lock is upgraded to a lightweight lock. The JVM creates a space in the current thread's stack, stores a pointer to the object's Mark Word there, and stores a pointer to this space in the Mark Word. Both stores are CAS operations. If they succeed, the thread has grabbed the lock: the lock flag bits in the Mark Word change to 00 and the synchronized code can run. If they fail, the lock grab failed; go to step 6.
6. When a lightweight lock grab fails, the JVM falls back to spinning. Spinning is not a lock state; it simply means retrying the grab repeatedly. Since JDK 1.7 spinning is enabled by default, with the number of spins decided by the JVM. If a spin succeeds, the synchronized code runs; if it fails, go to step 7.
7. If the spin retries also fail, the lock is upgraded to a heavyweight lock and the flag bits change to 10. In this state, every thread that fails to grab the lock is blocked.
Pointer to class: the class data of a Java object is stored in the method area. The JVM requires each object's size in memory to be a multiple of 8 bytes, so a few padding bytes are used to round objects up to that multiple.
Array length: Only the array object holds this data.
Applicable concurrency scenarios:
- Biased lock: no actual contention, and only the first thread that requested the lock will ever use it.
- Lightweight lock: no actual contention, with multiple threads using the lock alternately; brief contention is tolerated.
- Heavyweight lock: actual contention exists and lasts a long time.
In addition, when lock contention is short-lived, spinning can further optimize both lightweight and heavyweight locks by reducing thread switching.
If lock contention grows gradually (slowly), the gradual inflation from biased locking to heavyweight locking improves the overall performance of the system.
That covers the synchronized keyword; next let's look at the locks Java itself ships in the java.util.concurrent package.
java.util.concurrent.locks core classes
(Figure: inheritance diagram of the core Lock classes)
LockSupport provides the basic lock operation primitives; they are implemented by the Unsafe class, which calls JNI native methods:
Unsafe:
public native void unpark(Object var1);
public native void park(boolean var1, long var2);
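A small sketch of how these primitives behave through the public LockSupport wrapper (the helper class is ours). unpark grants a permit, so it is safe even if it runs before the corresponding park:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Park a worker thread, then wake exactly that thread with unpark.
    public static String parkThenUnpark() {
        Thread worker = new Thread(() -> LockSupport.park());
        worker.start();
        try {
            Thread.sleep(100);            // give the worker time to park
            LockSupport.unpark(worker);   // hand the worker its permit
            worker.join(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return worker.isAlive() ? "stuck" : "done";
    }
}
```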
The ReentrantLock logic is implemented by the internal class Sync.
public void lock() {
    sync.lock();
}

// Fair lock
final void lock() {
    acquire(1);
}

// Unfair lock
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
Fair vs. unfair locks: a fair lock queues at the tail and waits its turn; an unfair lock competes for the lock directly. The reason for the unfair default: improved efficiency and fewer thread switches.
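In ReentrantLock the choice is made in the constructor, and the default is unfair:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // ReentrantLock() uses NonfairSync; ReentrantLock(true) uses FairSync (FIFO order).
    public static final ReentrantLock UNFAIR = new ReentrantLock();
    public static final ReentrantLock FAIR = new ReentrantLock(true);
}
```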
ReadWriteLock solves the problem of letting multiple threads read simultaneously while only one thread writes. A closer look reveals a potential issue: if a thread is reading, a writer must wait for the readers to release the lock before it can acquire the write lock. In other words, no writing is allowed while reading is in progress; this is a pessimistic read lock. To improve concurrency further, Java 8 introduced a new read/write lock: StampedLock.
StampedLock improves on ReadWriteLock by also allowing a write lock to be acquired during a read. The data we read may then be inconsistent, so a little extra code is needed to detect whether a write happened while we were reading; this read lock is an optimistic lock. There is a small probability that a write makes the read inconsistent, so we must be able to detect that and read again.
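That detect-and-retry pattern looks like this in practice (adapted from the shape of the StampedLock Javadoc example):

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();       // optimistic: no blocking at all
        double cx = x, cy = y;
        if (!sl.validate(stamp)) {                 // a write slipped in: fall back
            stamp = sl.readLock();                 // pessimistic re-read
            try {
                cx = x;
                cy = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```

The optimistic read costs only a volatile-style stamp check when no write intervenes, which is why it scales better than a pessimistic read lock.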
The core elements of implementing a lock:
1. A state variable marks the lock state: 0 and 1 mean unlocked and locked.
2. Record the thread that currently holds the lock.
3. Support blocking and waking a thread.
4. A queue maintains all blocked threads: a thread-safe lock-free queue, built with CAS.
For 1:
java.util.concurrent.locks.AbstractQueuedSynchronizer#state
/** * The synchronization state. */
private volatile int state;
For 2:
java.util.concurrent.locks.AbstractOwnableSynchronizer#exclusiveOwnerThread
/** * The current owner of exclusive mode synchronization. */
private transient Thread exclusiveOwnerThread;
state = 0: no thread holds the lock, exclusiveOwnerThread = null
state = 1: a thread holds the lock, exclusiveOwnerThread = the holding thread
state > 1: the holding thread has re-entered the lock, exclusiveOwnerThread = the holding thread
For 3: use the Unsafe-backed blocking and waking primitives via LockSupport.
public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    setBlocker(t, blocker);
    UNSAFE.park(false, 0L);
    setBlocker(t, null);
}
public static void unpark(Thread thread) {
    if (thread != null)
        UNSAFE.unpark(thread);
}
The difference between unpark and notify: unpark() can precisely wake a specific thread, while notify() wakes an arbitrary one of the threads waiting on the monitor.
For 4: AQS uses a CLH queue (a virtual doubly linked queue) and adds threads that cannot immediately acquire the lock to this wait queue.
AQS core
If the requested shared resource is free, the requesting thread becomes the active worker thread and the shared resource is set to the locked state. If the resource is occupied, a mechanism is needed for blocking, waiting, and lock assignment on wake-up. AQS implements this mechanism with a CLH-queue (virtual doubly linked queue) lock: threads that temporarily cannot acquire the lock are added to the queue. AQS uses an int member variable to represent synchronization state, queues contending threads in a built-in FIFO queue, and modifies the synchronization state atomically with CAS.
Advantages:
- AQS handles many details of implementing a synchronizer, such as maintaining the standard synchronization state and the FIFO synchronization queue.
- Building synchronizers on AQS greatly reduces implementation work and avoids having to handle contention in multiple places.
private volatile int state; // shared variable; volatile guarantees visibility across threads

// The CAS operation
protected final boolean compareAndSetState(int expect, int update) {
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}
AQS defines two resource-sharing modes:
- Exclusive: only one thread can run at a time, e.g. ReentrantLock. This mode can further be fair or unfair:
  - Fair lock: threads acquire the lock in the order they queued, first come first served.
  - Unfair lock: a thread wanting the lock grabs it directly, regardless of queue order.
- Shared: multiple threads can run simultaneously, e.g. Semaphore and CountDownLatch. Semaphore, CountDownLatch, CyclicBarrier, and ReadWriteLock are all covered later.
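Shared mode can be seen at a glance with CountDownLatch: each countDown() is a shared release, and await() parks until the state reaches zero (the helper names here are ours):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // Start n workers and wait until they have all finished.
    public static int runWorkers(int n) {
        CountDownLatch done = new CountDownLatch(n);
        AtomicInteger finished = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                finished.incrementAndGet();
                done.countDown();      // shared release: decrements the latch count
            }).start();
        }
        try {
            done.await();              // blocks until the count reaches 0
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finished.get();
    }
}
```

The countDown/await pair also establishes a happens-before edge, so the waiter reliably sees the workers' writes.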
Different custom synchronizers compete for shared resources in different ways. A custom synchronizer only needs to implement acquiring and releasing the shared resource state; maintenance of the thread wait queue (enqueueing on failed acquisition, waking on release, and so on) is already implemented by AQS at the top level.
// Whether the current thread holds the resource exclusively. Only needed if you use Condition.
protected boolean isHeldExclusively() {
    throw new UnsupportedOperationException();
}

// Exclusive mode. Try to acquire the resource: true on success, false on failure.
protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}

// Exclusive mode. Try to release the resource: true on success, false on failure.
protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}

// Shared mode. Try to acquire the resource. Negative: failure; 0: success but no resources left; positive: success with resources remaining.
protected int tryAcquireShared(int arg) {
    throw new UnsupportedOperationException();
}

// Shared mode. Try to release the resource: true on success, false on failure.
protected boolean tryReleaseShared(int arg) {
    throw new UnsupportedOperationException();
}
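To make the template methods concrete, here is the classic exercise from the AQS Javadoc: a minimal non-reentrant mutex that implements just the exclusive hooks. This is a simplified sketch (e.g. tryRelease skips the owner check that real code should do):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal non-reentrant mutex on top of AQS: state 0 = unlocked, 1 = locked.
public class Mutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        // Widened to public, as in the AQS Javadoc example, so the outer class can call it.
        public boolean tryAcquire(int ignored) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;   // already locked (even by the current thread: non-reentrant)
        }

        public boolean tryRelease(int ignored) {
            setExclusiveOwnerThread(null);
            setState(0);    // simplified: a real release should verify the owner
            return true;
        }

        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }   // AQS queues and parks on failure
    public void unlock()     { sync.release(1); }   // AQS wakes the first queued thread
    public boolean tryLock() { return sync.tryAcquire(1); }
}
```

Everything else — queueing, parking, waking — comes for free from acquire(1) and release(1).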
Take ReentrantLock as an example: state is initialized to 0, the unlocked state. When thread A calls lock(), tryAcquire() is invoked to take the lock exclusively and state becomes 1. After that, other threads fail tryAcquire() until thread A calls unlock() and state returns to 0. Of course, thread A may reacquire the lock repeatedly before releasing it (state accumulates); that is the concept of reentrancy. But it must release as many times as it acquired, so that state can return to zero.
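This acquire/release bookkeeping is observable directly through ReentrantLock's own API:

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    // Acquire twice, release twice: the hold count (state) must return to zero.
    public static int[] holdCounts() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                      // reentrant: hold count goes 1 -> 2
        int during = lock.getHoldCount();
        lock.unlock();
        lock.unlock();                    // every acquire needs a matching release
        int after = lock.getHoldCount();
        return new int[]{during, after};
    }
}
```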
AbstractQueuedSynchronizer data structure
The CLH (Craig, Landin, and Hagersten) queue is a virtual doubly linked queue: there is no queue instance, only links between nodes. AQS wraps each thread requesting the shared resource into a node of this CLH lock queue to implement lock assignment. The sync queue is a doubly linked list with head and tail nodes; the head node is used mainly for subsequent scheduling. The Condition queue is optional: it is a singly linked list that exists only when Condition is used, and there may be multiple Condition queues.
AQS is the core of JUC, the CLH lock is the basis of AQS, and AQS uses a variant of the CLH lock.
The CLH lock is a spin lock. A spin lock is still a kind of mutual exclusion, but a thread that fails to grab the lock spins, busy-waiting for the release: instead of going to sleep, it keeps burning CPU cycles while waiting. Spin locks are therefore suited to short lock hold times.
We mentioned spin locks in contrast to mutexes. The mutex here is the traditional one: when multiple threads contend for the lock, those that fail go to sleep (sleep-waiting), and when the lock is released a sleeping thread is woken to acquire it. The downside is that these steps require thread switching, which costs many CPU instructions and time. If the CPU spends longer on thread switches than the lock is actually held, a spin lock may do better. Mutexes, conversely, suit situations where the lock is held for a long time.
The CLH lock is in fact a logical-queue-based, starvation-free, fair spin lock, named CLH after its inventors Craig, Landin, and Hagersten.
How the CLH lock works:
- There is a tail pointer through which the logical queue of waiting threads is built, guaranteeing first-come-first-served fairness; the tail pointer is the bridge that builds the logical queue. It is an atomic reference, which avoids thread-safety problems under concurrent operations.
- Each thread waiting for the lock spins on its own local variable, which will be written by the previous thread. Because a thread always obtains its predecessor's variable through the atomic tail pointer, this hand-off is always thread-safe.
Synchronizers built on AQS:
- ReentrantLock
- Semaphore
- CountDownLatch
- ReentrantReadWriteLock
- SynchronousQueue
- FutureTask
Reference:
zhuanlan.zhihu.com/p/197840259, juejin.cn/post/684490…
Concurrent programming involves manipulating thread state, so let's look at the difference between two ways a thread can wait.
Difference between Object.wait() and Condition.await()
Object.wait() and Condition.await() work in much the same way, except that Condition.await() calls LockSupport.park() to block the current thread. It actually does two extra things before blocking: it adds the current thread to the condition queue, and it releases the lock "completely" by setting the state variable to 0. Only then does it block the current thread via LockSupport.park().
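A minimal sketch of the await/signal pairing (the single-slot mailbox and its names are ours). Note that the waiter must re-check its predicate in a loop after waking:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class Mailbox {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition nonEmpty = lock.newCondition();
    private String message;

    public void put(String m) {
        lock.lock();
        try {
            message = m;
            nonEmpty.signal();            // wakes one waiter on this condition
        } finally {
            lock.unlock();
        }
    }

    public String take() {
        lock.lock();
        try {
            while (message == null) {
                nonEmpty.await();         // releases the lock "completely", then parks
            }
            String m = message;
            message = null;
            return m;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        } finally {
            lock.unlock();
        }
    }
}
```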
Having covered the basic theory and the ideas behind implementing a lock, let's look at how an actual lock in the concurrent package is implemented.
ReentrantLock
- 1. Class internal structure: ReentrantLock delegates to an internal Sync, which has FairSync and NonfairSync subclasses.
- 2. AbstractQueuedSynchronizer, the abstract queued synchronizer, is the basic building block of locks.
- 3. Is ReentrantLock fair or unfair by default? Unfair.
- 4. How is locking implemented? High-performance locking is built on the lock-free CAS mechanism.
The first step executes if (compareAndSetState(0, 1)): AQS has a core variable, state, representing the lock state. Is state 0? If so, nobody holds the lock, so we can take it and set state to 1. The underlying implementation is based on Unsafe, a JDK-internal API doing pointer manipulation and atomic CAS via CPU instructions. On success, the thread records itself as the current lock owner: setExclusiveOwnerThread(Thread.currentThread()).
In the case of lock re-entry, acquire(1) is executed.
The tryAcquire(arg) method checks whether the lock is free or already held by the current thread; in the reentrant case it sets state + 1 and returns true, otherwise it returns false.
If it returns false, the first condition is true, and we enter the addWaiter method. In short, the current thread is wrapped in a Node object and appended to the linked list via CAS; if there is no list yet, an empty node is created as head and the current node is attached after it. The rest is pointer manipulation I won't detail here.
The acquireQueued method tries the lock once more, and if that fails, suspends the current thread with park.
- 5. Lock release
If the lock is not being released by the holding thread, an exception is thrown.
If the lock was re-entered, state is decremented by 1.
If state reaches 0, the first thread in the queue is woken and goes on to acquire the lock.
- 6. Unfair locking
When state = 0, a newly arriving thread may grab the lock directly and succeed, regardless of the threads already in the queue.
- 7. Fair locking
With a fair lock, a thread first checks whether anyone is queued and whether it is at the front. If others are ahead of it, it does not even try to lock; it goes straight into the queue and blocks.
- 8. Acquiring a lock with a timeout
tryAcquireNanos(int arg, long nanosTimeout): the timeout is applied when the thread is suspended.
- 9. Advantages of LockSupport over wait
No synchronized block is needed, and ordering does not matter: unpark can even be called before the thread parks, because the permit is retained rather than lost.
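The bounded acquisition from point 8 can be sketched like this (the helper is ours; internally ReentrantLock routes the timed path through tryAcquireNanos):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    // Wait at most 100 ms for the lock instead of blocking indefinitely.
    public static boolean withTimeout(ReentrantLock lock, Runnable critical) {
        boolean acquired = false;
        try {
            acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!acquired) {
            return false;                 // gave up after the timeout
        }
        try {
            critical.run();
        } finally {
            lock.unlock();
        }
        return true;
    }
}
```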
The concurrent package also encapsulates a read/write lock with finer-grained control.
ReentrantReadWriteLock
All combinations are mutually exclusive except read with read.
1. How the write lock is acquired
The write hold count lives in the low 16 bits of state: exclusiveCount(c) = c & ((1 << 16) - 1).
protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();            // overall lock state
    int w = exclusiveCount(c);     // write (exclusive) hold count: low 16 bits
    if (c != 0) {
        // (Note: if c != 0 and w == 0, the shared count != 0, i.e. a read lock is held)
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquire
        setState(c + acquires);
        return true;
    }
    // For a fair lock, writerShouldBlock() consults the wait queue; unfair returns false
    if (writerShouldBlock() || !compareAndSetState(c, c + acquires))
        return false;
    // Write lock acquired: record the owning thread
    setExclusiveOwnerThread(current);
    return true;
}
2. How the read lock is acquired
The read count lives in the high 16 bits: state += 1 << 16.
3. Write lock reentrancy
If c != 0 but w == 0, the low 16 bits are zero, meaning someone holds a read lock: return false. If the write lock is held but not by the current thread, also return false.
Only one thread can hold the write lock at a time: if thread 1 holds it, thread 2 cannot take it as well.
If the current thread is the one holding the write lock, this is a reentrant acquisition: state += 1.
4. Queueing when the write lock fails
If thread 1 holds the write lock and thread 2 requests it, the two are mutually exclusive, so thread 2 blocks and waits in the queue to be woken. The enqueue logic is the same as ReentrantLock's.
Is acquiring the read lock necessary in lock downgrading? Yes. It guarantees data visibility: if the current thread released the write lock without first taking the read lock, another thread (call it T) could acquire the write lock and modify the data, and the current thread would be unaware of T's update. If the current thread takes the read lock first, following the downgrade steps, thread T is blocked until the current thread finishes using the data and releases the read lock; only then can T take the write lock and update the data.
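The downgrade sequence described above looks like this (a sketch following the pattern in the ReentrantReadWriteLock Javadoc; the cache class is ours):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int updateAndRead(int v) {
        rw.writeLock().lock();
        try {
            value = v;
            rw.readLock().lock();     // downgrade: take the read lock before...
        } finally {
            rw.writeLock().unlock();  // ...releasing the write lock
        }
        int result;
        try {
            result = value;           // no writer can sneak in while we read
        } finally {
            rw.readLock().unlock();
        }
        return result;
    }
}
```

Note the reverse (upgrading a read lock to a write lock) is not supported and would deadlock.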
Distributed locks
Distributed locks differ from the locks described above, which are single-machine locks that only control code execution within one instance. A distributed lock controls contention for resources across an entire distributed system. There are two common implementations: one based on Redis, the other on ZooKeeper (ZK).
Which to choose depends on the actual situation of our system. If Redis is currently the system's only external dependency, a Redis-based implementation is fine; introducing ZK just for locking would add another external dependency, increasing the risk of dependency problems and the cost of development and operations.
The drawback of Redis is that if Redis crashes, the lock may be lost, because Redis may not have flushed to disk in time. ZK does not have this problem: it creates an ephemeral node, and as long as that node is not deleted, the lock survives a ZK restart.
Let's take a brief look at the Jedis API calls.
jedisClientUtil.set(lockKey.toString(),
    CLIENT_ID + Thread.currentThread().getId(), JedisClientUtil.SetPremise.NX,
    JedisClientUtil.ExpireType.Milliseconds, unit.toMillis(expiration));
jedisClientUtil.eval(DELETE_SCRIPT, 1, lockKey.toString(),
    CLIENT_ID + Thread.currentThread().getId());

private static final String DELETE_SCRIPT =
    "if redis.call('get',KEYS[1]) == ARGV[1] then\n" +
    "    return redis.call('del',KEYS[1])\n" +
    "else\n" +
    "    return 0\n" +
    "end";
When using distributed locks, several issues deserve attention: the lock must have a timeout; the lock must be released after use, including when the system fails; and can the lock be released by a thread other than its owner? These questions matter, and ignoring them can have serious consequences.
There is also an open-source, packaged distributed lock built on Redis: Redisson. A Redisson lock renews itself automatically: when a third of the timeout has elapsed, a lock check fires, and if the owning thread has not unlocked, the lease is renewed. Redisson creates a TimerTask and schedules it on a time wheel with a 1500 ms trigger delay. The time wheel is like a countdown stopwatch: the number of slots and the interval each slot represents (how long the pointer takes to advance one slot) are configurable, and each slot holds a task queue with an initial capacity of 1024. When the pointer reaches a slot, tasks whose round count is 0 fire; tasks with round count > 0 have it decremented. The pointer then advances; after the last slot it wraps back to the first (the slots form a list, but the behavior looks like a ring). Thanks to the time wheel, only one countdown thread is needed, so the overhead is small.
That wraps up our topic of locks. Starting from the synchronized keyword, moving to the read/write locks encapsulated in the JDK's concurrent package, and on to distributed locks in distributed systems, we have covered the types, principles, and usage of the locks involved in everyday development. The author's level is limited and technical articles are not shared often, so mistakes are inevitable; corrections are welcome. Thank you.
This article is published synchronously on the Zhengcaiyun technology team's WeChat official account.