In concurrent programming, locks of all kinds play a crucial role, but choosing the right lock for the situation takes thought: a poorly chosen lock means low efficiency at best and unexpected disaster at worst. Below, we analyze the various kinds of locks in Java.
Biased/lightweight/heavyweight locks
Java SE 1.6 introduced "biased locks" and "lightweight locks" to reduce the performance cost of acquiring and releasing locks. As a result, Java SE 1.6 has four lock states: unlocked, biased, lightweight, and heavyweight, which are gradually upgraded as contention increases.
Locks can be upgraded but not downgraded: a biased lock that has been upgraded to a lightweight lock cannot go back. The point of this policy is to make acquiring and releasing locks more efficient.
When only one thread competes for the lock, there is no blocking and no spinning: the JVM simply checks whether the ThreadID recorded in the biased lock is the current thread. If it is, the synchronized code executes directly. If not, it tries to modify the ThreadID using CAS; if the CAS fails, the bias is revoked and the lock is upgraded to a lightweight lock.
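As a rough illustration of the "check the owner, otherwise try one CAS" idea, here is a toy model in plain Java. The real logic lives inside the JVM and operates on the object header's Mark Word; the class and field names below are invented, with an AtomicReference standing in for the ThreadID field.

```java
import java.util.concurrent.atomic.AtomicReference;

// A toy model of the biased-lock check described above -- not the JVM's
// actual implementation, which works on the object header's Mark Word.
public class BiasedLockSketch {
    // Stand-in for the ThreadID in the Mark Word; null means unbiased.
    private final AtomicReference<Thread> biasedOwner = new AtomicReference<>();

    public boolean tryEnter() {
        Thread current = Thread.currentThread();
        if (biasedOwner.get() == current) {
            return true;                          // already biased toward us: no CAS needed
        }
        // Unbiased: try to bias the "lock" toward the current thread with one CAS.
        return biasedOwner.compareAndSet(null, current);
        // A false return models the contended case, where the JVM would
        // revoke the bias and inflate to a lightweight lock.
    }

    public static void main(String[] args) {
        BiasedLockSketch lock = new BiasedLockSketch();
        System.out.println(lock.tryEnter()); // first entry biases the lock: true
        System.out.println(lock.tryEnter()); // re-entry by the same thread: true
    }
}
```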
Acquiring a lightweight lock works differently. The competing thread first copies the Mark Word from the object header into a lock record in its own stack frame. After a successful copy, it uses CAS to try to update the object's Mark Word to a pointer to that lock record. If the update succeeds, the thread owns the lock on the object. If it fails, multiple threads are competing. After a competing thread fails several spin attempts to acquire the lightweight lock, the lightweight lock inflates into a heavyweight lock: the Mark Word now points to the heavyweight lock, and the competing thread blocks, waiting for the holder to release the lock and wake it up.
Locking and unlocking a heavyweight lock resembles a lightweight lock, except that a thread blocks after losing the competition and is woken up only after the lock is released. Heavyweight locks are therefore suited to synchronized blocks that take a long time to execute.
Fair/unfair lock
A lock is fair if every thread in a group of competing threads is eventually guaranteed to acquire it. Conversely, if some thread may never acquire the lock, that is, it can starve, the lock is unfair.
Fair lock
When a thread requests a fair lock, it first checks whether the queue maintained by the lock contains waiting threads. If not, and the lock is free at that moment, the thread takes the lock. Otherwise it joins the tail of the queue and waits, following the FIFO principle.
Unfair lock
When a thread requests an unfair lock, it first tries to grab the lock directly. If that fails, it enters the waiting queue; from then on it follows the same FIFO principle as a fair lock.
When the thread holding the lock releases it and no new thread is requesting it, there is no difference between a fair and an unfair lock. But if a new thread requests the lock just as it is being released, while the thread at the head of the queue has not yet been woken up (thread context switches cost time), the newcomer can jump the queue and take the lock first.
Why is it designed this way?
Because of the overhead of context switching, an unfair lock is more efficient than a fair one: it reduces the probability that threads are suspended, makes better use of CPU time slices, and minimizes CPU idle time.
Thread context switch
The CPU uses the time slice allocation algorithm to execute tasks in a cycle. After the current task executes a time slice, it switches to the next task. However, the state of the previous task will be saved before switching, so that the state of the task can be loaded again when switching back to the task next time. The process from saving the task to reloading is a context switch.
Since unfair locks are more efficient than fair locks, the locks we usually use are unfair. Does that make fair locks useless? Clearly not. As noted above, thread starvation is possible with unfair locks. When the thread holding the lock holds it for a relatively long time, or the average interval between lock requests is long, a fair lock should be used: in these cases, the throughput gain from queue-jumping (the window in which the lock is available but the queued thread is still being woken up) is unlikely to appear, and a fair lock gives the business much more control.
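In practice the choice is just a constructor flag: ReentrantLock is unfair by default and takes a boolean for fairness. A minimal demonstration:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // ReentrantLock is unfair by default; pass true for a fair lock.
        ReentrantLock unfair = new ReentrantLock();   // same as new ReentrantLock(false)
        ReentrantLock fair = new ReentrantLock(true);

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```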
ReentrantLock Fair lock source code
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    // State 0 means no thread currently holds the lock
    if (c == 0) {
        // Acquire only if the current thread is first in the queue
        // and the CAS sets the state to 1; the queue check is the
        // main difference from the unfair version
        if (isFirst(current) &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // If the current thread already holds the lock, add to the state
    // to acquire it again (reentrancy)
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    // Otherwise the acquisition fails and the thread enqueues.
    return false;
}
ReentrantLock unfair lock source code
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // If the lock is free, the current thread grabs it directly via CAS;
        // the fair version first checks that it is the first waiting thread
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
Reentrant/non-reentrant lock
A program or subroutine is reentrant if it can “be interrupted at any time and then the operating system schedules the execution of another piece of code that calls the subroutine without error.” That is, while the subroutine is running, the thread of execution can come in and execute it again, still getting the results expected at design time. Unlike thread-safety, where multiple threads execute concurrently, reentrant emphasizes that it is still safe to re-enter the same subroutine while executing on a single thread.
— Wikipedia
In layman's terms: when a thread requests an object lock held by another thread, it blocks; when a thread requests an object lock it already holds itself, the request succeeds if the lock is reentrant, and blocks (a non-reentrant lock) otherwise.
Non-reentrant lock
public class Lock {
    private boolean isLocked = false;

    public synchronized void lock() throws InterruptedException {
        // If the thread that already holds the lock calls lock() again,
        // isLocked is still true: the lock is occupied and will never
        // be released, so the thread waits forever -- a deadlock
        while (isLocked) {
            wait();
        }
        isLocked = true;
    }

    public synchronized void unlock() {
        isLocked = false;
        notify();
    }
}
Using the lock
public class Count {
    Lock lock = new Lock();

    public void methodA() {
        lock.lock();
        methodB();
        lock.unlock();
    }

    public void methodB() {
        lock.lock();
        // ... irrelevant code omitted
        lock.unlock();
    }
}
When methodA executes, it first acquires the lock and then calls methodB, which also needs to acquire the lock. Since the lock is already held by methodA and will not be released, a deadlock results.
Reentrant lock
public class Lock {
    boolean isLocked = false;
    // The thread currently holding the lock
    Thread lockedBy = null;
    // Reentrancy counter
    int lockedCount = 0;

    public synchronized void lock()
            throws InterruptedException {
        Thread thread = Thread.currentThread();
        // Wait while the lock is held by some other thread
        while (isLocked && lockedBy != thread) {
            wait();
        }
        isLocked = true;
        lockedCount++;
        lockedBy = thread;
    }

    public synchronized void unlock() {
        if (Thread.currentThread() == this.lockedBy) {
            lockedCount--;
            // When the counter reaches 0, the current thread no longer holds the lock
            if (lockedCount == 0) {
                isLocked = false;
                notify();
            }
        }
    }
}
With this reentrant lock, when methodA calls methodB, the current thread already holds the lock, so lockedCount simply increments and methodB proceeds without blocking. The locks we usually use, such as synchronized and ReentrantLock, are all reentrant; the fair/unfair lock source code above also embodies reentrancy (the branch where the current thread is already the exclusive owner accumulates the state).
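A quick sketch of reentrancy with the real ReentrantLock (the class and method names below are mine; getHoldCount reports how many times the current thread has re-acquired the lock):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();                  // hold count becomes 1
        try {
            inner();                  // same thread re-acquires without blocking
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock();                  // hold count becomes 2
        try {
            System.out.println(lock.getHoldCount()); // 2
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
        System.out.println(lock.getHoldCount()); // 0: fully released
    }
}
```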
Exclusive/shared lock
Exclusive lock
An exclusive lock can be held by only one thread at a time. If thread T holds an exclusive lock on data A, no other thread can hold any type of lock on A. The thread that holds an exclusive lock can both read and modify the data.
Shared lock
A shared lock can be held by multiple threads at once. If thread T holds a shared lock on data A, other threads can also acquire a shared lock on A, but not an exclusive lock. A thread holding a shared lock may only read the data, not modify it.
Both exclusive and shared locks are implemented on top of AQS; overriding different methods yields exclusive or shared behavior.
AQS (AbstractQueuedSynchronizer), the abstract queued synchronizer, is Java's underlying synchronization tool. It represents the synchronization state with an int variable and provides a series of CAS operations to manage that state. The main purpose of AQS is to provide unified low-level support for Java's concurrent synchronization components.
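To make the "override a few methods, AQS handles the queueing" pattern concrete, here is a minimal exclusive (non-reentrant) mutex built on AQS. The class names are mine and the release path skips the ownership checks a production lock would make; it is a sketch, not a replacement for ReentrantLock.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal exclusive mutex: state 0 = free, 1 = held.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // One CAS from 0 to 1 wins exclusive ownership.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            // Simplified: a real lock would verify the caller owns the lock.
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }   // AQS enqueues and blocks on failure
    public void unlock()     { sync.release(1); }   // AQS wakes a queued successor
    public boolean tryLock() { return sync.tryAcquire(1); }
}
```

A shared synchronizer would instead override tryAcquireShared/tryReleaseShared; this is exactly how ReentrantLock, Semaphore, and ReentrantReadWriteLock are structured internally.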
Optimistic/pessimistic locks
Optimistic locking
Optimistic locking always assumes the best case: every time it reads data, it assumes no one else will modify it, so it takes no lock. Only when updating does it check whether anyone else has changed the data in the meantime, typically using a version-number mechanism or the CAS algorithm.
CAS (Compare-And-Swap) is a lock-free algorithm that synchronizes a variable between threads without locks, also called non-blocking synchronization. It involves three values: the memory value V to be read and written, the expected value A, and the new value B. CAS atomically updates V to B if and only if V equals A; otherwise it does nothing (the compare-and-replace is a single atomic operation). It is typically used as a spin operation, i.e. retried repeatedly.
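The V/A/B semantics and the spin-retry pattern map directly onto java.util.concurrent.atomic; a small demonstration:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // Direct CAS: succeeds only when the current value equals the expected one.
        System.out.println(value.compareAndSet(5, 6)); // true, value is now 6
        System.out.println(value.compareAndSet(5, 7)); // false, value unchanged

        // The typical spin pattern: re-read and retry until the CAS succeeds.
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        System.out.println(value.get()); // 7
    }
}
```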
Pessimistic locking
Pessimistic locking always assumes the worst: every time it accesses data, it assumes someone else will modify it, so it locks on every access, and any other thread trying to access the data blocks until it obtains the lock. Both fair and unfair locks are pessimistic locks; synchronized and ReentrantLock are pessimistic locks.
Optimistic locking suits read-heavy scenarios with few writes, where conflicts are genuinely rare; it saves the overhead of locking and raises overall system throughput. In write-heavy scenarios, however, conflicts occur frequently, the upper-level application keeps retrying, and performance degrades, so pessimistic locking is the better fit there.
Read-write lock
ReadWriteLock manages a pair of locks: one read lock and one write lock. The read lock can be held by multiple threads as long as no write lock is held; the write lock is exclusive.
Read-write locks allow a greater degree of concurrency on shared data than a mutex. Only one writer thread may exist at a time, but multiple threads can read concurrently, so ReadWriteLock suits read-heavy, write-light concurrency.
The read-write lock is implemented by splitting the synchronization variable state into its high 16 bits and low 16 bits: the high 16 bits represent the read lock and the low 16 bits represent the write lock.
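The bit split can be shown with two one-line helpers mirroring the shift-and-mask constants that ReentrantReadWriteLock uses internally (the demo class and its packed example state are mine):

```java
public class ReadWriteStateDemo {
    static final int SHARED_SHIFT = 16;
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // 0xFFFF

    // Read-lock holds live in the high 16 bits of state
    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
    // Write-lock holds live in the low 16 bits of state
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        // Example: 3 read holds and 1 write hold packed into a single int.
        int state = (3 << SHARED_SHIFT) | 1;
        System.out.println(sharedCount(state));    // 3
        System.out.println(exclusiveCount(state)); // 1
    }
}
```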
Write lock acquisition:
1. Obtain the synchronization state and isolate the write-lock count from the low 16 bits.
2. If the synchronization state is not 0, a read lock or write lock exists.
3. If a read lock exists (c != 0 && w == 0), the write lock cannot be acquired (this guarantees the visibility of writes to reads).
4. If the current thread is not the thread that last acquired the write lock, it cannot acquire the write lock (the write lock is exclusive).
5. If all the above checks pass, use CAS to modify the low 16-bit write-lock state.
6. Set the current thread as the write-lock owner.
Releasing the write lock is similar to releasing an exclusive lock: the counter is decremented until it reaches 0.
Read lock acquisition:
1. Obtain the current synchronization state.
2. Compute the value of the high 16-bit read-lock state after adding 1.
3. If that value exceeds the maximum number of read locks that can be held, throw an exception.
4. If a write lock exists and the current thread is not its owner, the read-lock acquisition fails.
5. If all the preceding checks pass, use CAS to set the new read-lock state.
Releasing a read lock is similar to releasing the write lock: the read-lock state is decremented until it reaches 0.
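In application code, all of this is hidden behind the two lock views that ReentrantReadWriteLock exposes; a typical read-mostly cache looks like this (the Cache class is my example, not library code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Cache {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rw.readLock().lock();        // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(int v) {
        rw.writeLock().lock();       // exclusive: blocks all readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```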
Spin locks
The principle of a spin lock is simple: if the thread holding the lock will release it within a very short time, a competing thread does not need to switch between kernel mode and user mode (a context switch) to enter a blocked state. It only needs to wait a moment (spin); as soon as the holder releases the lock, the spinning thread acquires it immediately, avoiding the cost of switching between user threads and the kernel.
However, spinning consumes CPU: while it cannot acquire the lock, the thread occupies the CPU doing idle work, so it cannot spin forever. A maximum spin wait time is set; if the holding thread still has not released the lock when that time is exceeded, the competing thread stops spinning and enters the blocked state.
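The busy-wait loop described above can be sketched in a few lines with a CAS on an owner field. This is a deliberately minimal, unbounded spin lock (no maximum spin time, no fallback to blocking); the class name is mine.

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal CAS-based spin lock: a failed acquirer busy-waits instead of blocking.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin (burning CPU) until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        // Only the owner can release; a non-owner's CAS simply fails.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```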
The advantages and disadvantages
A spin lock reduces thread blocking as much as possible. For code blocks with light lock contention and very short lock-hold times, this is a significant performance win, because the cost of spinning is less than the cost of blocking, suspending, and then waking a thread (those actions force the thread through two context switches).
But if lock contention is intense, or the holding thread occupies the lock for a long time while executing the synchronized block, a spin lock is not appropriate: the spinning threads burn CPU the whole time before acquiring the lock, and with many threads competing for one lock, acquisition takes so long that spinning costs more than blocking and suspending would. Other threads that need the CPU cannot get it, wasting CPU. In this case the spin lock should be turned off.
The purpose of a spin lock is to keep holding CPU resources until the lock is acquired. But how long should the spin last? If the spin time is too long, large numbers of spinning threads will consume CPU resources and hurt overall system performance. If the spin time is too short, the advantages of the spin lock are never realized. Choosing the spin period is therefore particularly important.
1. If the average load is less than the number of CPUs, keep spinning.
2. If more than (CPUs / 2) threads are already spinning, later threads block directly.
3. If a spinning thread finds that the owner has changed, delay its spin (spin count) or block.
4. If the CPU is in power-saving mode, stop spinning.
5. The worst case for the spin time is the CPU's memory latency (the gap between CPU A storing a value and CPU B directly learning of it).