Content of this section:
- Introduction to Java locks
- Biased lock, lightweight lock, heavyweight lock
- Reentrant lock, non-reentrant lock
- Shared lock, exclusive lock
- Fair lock, unfair lock
- Pessimistic lock, optimistic lock
- Spin lock, non-spin lock
- Interruptible lock, non-interruptible lock
- Introduction to the synchronized lock
- What is a synchronized lock
- The application of the synchronized keyword in synchronized methods
- Comparison between the synchronized keyword and Lock interface
- Fair locks and unfair locks
- Fair locks and unfair locks
- Fair lock and unfair lock effect display
- Read-write lock
- Why set read/write locks
- Read/write lock rules
- Read and write lock usage demonstration
- Read lock queue-jumping strategy
- Upgrade or downgrade of read/write locks
- Spin locks
- Spin lock introduction
- How to customize a reentrant spin lock
- JVM optimizations for locking
Introduction to Java locks
Biased lock, lightweight lock, heavyweight lock
These three types of locks refer specifically to the states of the synchronized lock, which are recorded in the Mark Word field of the object header.
Biased locking:
If, from beginning to end, there is no contention for the lock, acquiring it only requires setting a mark; this is biased locking. Every object has a built-in lock (the built-in lock is reentrant). After an object is initialized and before any thread has acquired its lock, the object is biasable. When a thread first accesses the object and tries to acquire its lock, the object records that thread; from then on the lock is biased toward that owner, which can reacquire it directly. This gives the best performance.
Lightweight lock:
A lightweight lock arises when a biased lock is accessed by a second thread, creating lock contention. The biased lock is then upgraded to a lightweight lock, and threads try to acquire the lock by spinning instead of blocking.
Heavyweight locks:
A heavyweight lock is a mutex that relies mainly on the operating system's synchronization mechanism. When many threads contend for the lock concurrently and the contention lasts too long for a lightweight lock to cope, the lock is upgraded to a heavyweight lock. A heavyweight lock puts the threads that fail to acquire it into a blocked state, so its overhead is comparatively large.
Reentrant lock, non-reentrant lock
Reentrant lock:
A reentrant lock means that if the current thread already holds the lock, it can acquire it again without releasing it first; a thread's request for a lock it already holds always succeeds. Each lock is associated with an owner thread and a hold count. When the count is 0, no thread holds the lock. When a thread requests an unheld lock, the JVM records the owner and sets the count to 1; if the same thread acquires the lock again, the count is incremented.
Non-reentrant lock:
Conversely, with a non-reentrant lock, even a thread that already holds the lock must release it before it can acquire it again.
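To make reentrancy concrete, here is a small sketch (class and method names are ours, not from the original text) that acquires a ReentrantLock twice on the same thread and reads the hold count; a non-reentrant lock would deadlock on the second lock() call:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Acquire the lock twice on the same thread; a non-reentrant lock
    // would block forever on the second lock() call.
    static int nestedHoldCount() {
        lock.lock();                     // count 0 -> 1
        try {
            lock.lock();                 // reacquire: count 1 -> 2
            try {
                return lock.getHoldCount();
            } finally {
                lock.unlock();           // count 2 -> 1
            }
        } finally {
            lock.unlock();               // count 1 -> 0, lock released
        }
    }

    public static void main(String[] args) {
        System.out.println(nestedHoldCount()); // prints 2
    }
}
```

Note that each lock() must be paired with an unlock(); the lock is only truly released when the count drops back to 0.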
Shared lock, exclusive lock
Shared lock:
A shared lock is a lock that can be held by multiple threads at the same time. The most typical example is the read lock of a read/write lock.
Exclusive lock:
Correspondingly, an exclusive lock is a lock that can be held by only one thread at a time.
Fair lock, unfair lock
Fair lock:
A fair lock means that if a thread cannot get the lock at the moment, it waits in a queue, and the thread that has waited longest in the queue gets the lock first: first come, first served.
Unfair lock:
An unfair lock means that, under certain conditions, a thread may ignore the threads already waiting in the queue and jump ahead of them to grab the lock.
Pessimistic lock, optimistic lock
Pessimistic locks:
Pessimistic locking, as its name suggests, assumes the worst: if a shared resource is not locked, other threads will compete for it and data errors will occur. So before accessing a shared resource, it always acquires a lock first to gain exclusive access, so that no other thread can touch the data and corrupt it. Common pessimistic locks include the synchronized keyword and implementations of the Lock interface.
Optimistic locking:
Optimistic locking, by contrast, is more optimistic: it assumes there will be no conflict over the data. Usually only at update time does it check whether the data has been modified by another thread since it was read. If not, the update proceeds normally; if the data has changed and no longer matches expectations, the update fails, and the caller can give up, report an error, retry, and so on. Common optimistic locks: the various atomic classes.
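As a sketch of that optimistic pattern (class and method names are illustrative, not from the original text), here is the classic read-compare-swap retry loop using AtomicInteger.compareAndSet; the update succeeds only if no other thread changed the value in between, otherwise the loop retries:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Optimistic update: read the current value, then try to swap in the
    // new one with compareAndSet; retry if another thread got there first.
    static int increment(AtomicInteger counter) {
        while (true) {
            int expected = counter.get();
            int updated = expected + 1;
            if (counter.compareAndSet(expected, updated)) {
                return updated;   // CAS succeeded: no lock was ever taken
            }
            // CAS failed: the value changed under us, loop and retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(increment(counter)); // prints 1
        System.out.println(increment(counter)); // prints 2
    }
}
```

This is essentially what AtomicInteger.incrementAndGet does internally: no thread ever blocks, failed attempts simply retry.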
Spin lock, non-spin lock
Spin lock:
A spin lock means that when a thread fails to get the lock, instead of blocking or yielding CPU resources, it keeps trying to acquire the lock in a loop; this loop is called spinning.
Non-spinlock:
A non-spin lock has no spinning step. If the lock is unavailable, the thread gives up or performs other logic, such as queuing and blocking.
Interruptible lock, non-interruptible lock
Interruptible lock:
An interruptible lock means that while waiting to acquire the lock, the thread can be interrupted to go do other things; it does not have to wait until the lock is acquired.
Non-interruptible lock:
Synchronized is a non-interruptible lock: once a thread starts waiting for the lock, it cannot perform any other logic until it gets it.
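A minimal sketch of the difference (helper names are ours, not from the original text): a thread blocked in ReentrantLock.lockInterruptibly() can be broken out of its wait with interrupt(), which is impossible for a thread blocked on a synchronized monitor:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Returns true if the waiting thread was interrupted while blocked
    // in lockInterruptibly(), instead of waiting forever for the lock.
    static boolean waiterWasInterrupted() {
        lock.lock();                       // main thread holds the lock
        CountDownLatch done = new CountDownLatch(1);
        boolean[] interrupted = {false};
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();  // blocks: main holds the lock
                lock.unlock();
            } catch (InterruptedException e) {
                interrupted[0] = true;     // woken by interrupt, not by the lock
            } finally {
                done.countDown();
            }
        });
        waiter.start();
        try {
            Thread.sleep(100);             // let the waiter block
            waiter.interrupt();            // break it out of the lock wait
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
        return interrupted[0];
    }

    public static void main(String[] args) {
        System.out.println(waiterWasInterrupted()); // prints true
    }
}
```

Had the waiter used `synchronized` instead, interrupt() would only set the interrupt flag and the thread would keep waiting for the monitor.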
Introduction to the synchronized lock
What is a synchronized lock
In Java, every object holds a lock associated with it that controls that object's synchronized code. To execute an object's synchronized code, a thread must first acquire the object's lock, which is its monitor lock; synchronized uses the monitor lock to implement locking and unlocking. The only way to acquire a monitor lock is to enter a synchronized code block or synchronized method protected by that lock. The lock is acquired before the thread enters the synchronized-protected code, and released automatically when the code completes normally or exits with an exception.
The use of the synchronized keyword in a synchronized code block can be understood by analyzing the disassembly of the following code:
public class TestSync {
    public void sync1() {
        synchronized (this) {
            int ss = 10;
            System.out.println(ss);
        }
    }
}
Disassemble the compiled class with javap -verbose TestSync.class; the relevant output is:
public void sync1();
  descriptor: ()V
  flags: ACC_PUBLIC
  Code:
    stack=2, locals=4, args_size=1
       0: aload_0
       1: dup
       2: astore_1
       3: monitorenter          // lock
       4: bipush        10
       6: istore_2
       7: getstatic     #2      // Field java/lang/System.out:Ljava/io/PrintStream;
      10: iload_2
      11: invokevirtual #3      // Method java/io/PrintStream.println:(I)V
      14: aload_1
      15: monitorexit           // unlock (normal path)
      16: goto          24
      19: astore_3
      20: aload_1
      21: monitorexit           // unlock (exception path)
      22: aload_3
      23: athrow
      24: return
monitorenter and monitorexit instructions are added around the synchronized block; we can read them as the corresponding lock and unlock operations. The reason there is one monitorenter but two monitorexit instructions is that the JVM guarantees every monitorenter is paired with a monitorexit: monitorexit must be executed separately on the normal exit path and on the exception exit path, ensuring the lock is released in both cases.
monitorenter meaning: each object maintains a counter; an unlocked object's counter is 0. The thread executing monitorenter tries to acquire ownership of the monitor:
- If the count of the monitor is 0, the thread acquires the monitor and sets its count to 1, and the thread becomes the owner of the monitor.
- If the thread has already acquired the monitor, the count for the monitor accumulates.
- If another thread already owns the monitor, the thread currently trying to acquire it blocks until the monitor's count drops to 0, indicating the monitor has been released; then the current thread can try to acquire it.
monitorexit meaning: monitorexit decrements the monitor's count by one. When the count reaches 0, the monitor is released, and other threads can try to acquire ownership of it.
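The counting behavior above can be observed directly with nested synchronized blocks on the same monitor (a small sketch; the class and field names are ours): the same thread re-enters freely, and the monitor is only released once the outermost block exits:

```java
public class MonitorCountDemo {
    static final Object mon = new Object();

    // Nested synchronized on the same monitor: monitorenter bumps the
    // count 0 -> 1 -> 2; each monitorexit decrements it; the monitor is
    // released only when the count returns to 0.
    static boolean nestedEntry() {
        synchronized (mon) {
            synchronized (mon) {            // same thread re-enters freely
                return Thread.holdsLock(mon);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(nestedEntry());          // prints true
        System.out.println(Thread.holdsLock(mon));  // prints false: released
    }
}
```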
Application of synchronized keyword in synchronization method:
Let's take a look at what a synchronized method disassembles to. We disassemble the following:
public class TestSync {
    public synchronized void sync2() {
        int aa = 10;
        System.out.println(aa);
    }
}
The disassembly code is as follows:
public synchronized void sync2();
descriptor: ()V
flags: ACC_PUBLIC, ACC_SYNCHRONIZED
Code:
stack=2, locals=2, args_size=1
0: bipush 10
2: istore_1
3: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
6: iload_1
7: invokevirtual #3 // Method java/io/PrintStream.println:(I)V
10: return
The difference between the synchronized method and the synchronized block is that the synchronized block relies on monitorenter and monitorexit to lock and unlock, while the synchronized method is marked with the ACC_SYNCHRONIZED access flag to indicate that it is synchronized. When a thread invokes a method, it checks whether the method carries the ACC_SYNCHRONIZED flag; if so, it first acquires the monitor lock, executes the method body only after obtaining the lock, and releases the monitor lock when the method finishes executing.
Comparison between the synchronized keyword and Lock interface
Similarities:
- Both synchronized and Lock are used to protect shared resources for thread safety.
- Both guarantee visibility. For synchronized, operations performed by thread A inside a synchronized block or method are visible to a subsequent thread B that acquires the same monitor lock. Likewise, Lock guarantees visibility in the same way as synchronized.
- Synchronized and ReentrantLock both have reentrant features
Difference:
- Usage differs: the synchronized keyword can be added to a method without specifying a lock object, or used with a synchronized block whose monitor lock object you choose yourself. The Lock interface requires explicit lock() and unlock() calls, and it is conventional to call unlock() in a finally block to make sure the lock is always released.
- Unlock ordering differs: with Lock, locks need not be released in reverse acquisition order; for example, we can acquire lock A, then acquire lock B, and when unlocking, release A first and then B. With synchronized, unlocking must follow nesting order: after acquiring lock A and then lock B, we must release B first and then A.
- synchronized is less flexible than Lock. Once a synchronized lock is taken by one thread, other threads can only block and wait for it to be released; if the holder runs for a long time, the whole program's efficiency suffers. By contrast, with Lock's lockInterruptibly method, a waiting thread can be interrupted and exit if the holder runs too long, or it can use tryLock() to attempt the lock and perform other logic if the attempt fails.
- Some implementation classes of the Lock interface, such as read locks, can be held by multiple threads, whereas synchronized can only be held by one thread
- synchronized is a built-in monitor lock, with locking and unlocking handled by the JVM, and it can take biased, lightweight, and heavyweight states. Implementations of the Lock interface have different underlying mechanisms depending on the implementation.
- Lock can be set to fair Lock, but synchronized cannot
- synchronized has been optimized in the JVM since Java 6, so its performance is no longer clearly worse than Lock's.
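To illustrate the flexibility point above, here is a small sketch (class and method names are ours, not from the original text) in which a second thread uses tryLock() to fail fast instead of blocking the way synchronized would:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // With synchronized there is no way to give up: the thread just
    // blocks. tryLock() returns false immediately when the lock is busy.
    static boolean tryFromOtherThread() {
        lock.lock();                       // current thread holds the lock
        final boolean[] acquired = {true};
        Thread other = new Thread(() ->
                acquired[0] = lock.tryLock()); // fails fast, no blocking
        other.start();
        try {
            other.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
        return acquired[0];                // false: the lock was busy
    }

    public static void main(String[] args) {
        System.out.println(tryFromOtherThread()); // prints false
    }
}
```

On a false return, the caller can run alternative logic instead of waiting, which has no equivalent with the synchronized keyword.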
Fair locks and unfair locks
Fair locks and unfair locks
A fair lock grants the lock in the order in which threads requested it. An unfair lock does not grant the lock strictly in request order; however, it is not allocated randomly either, but only allows barging "at an opportune moment".
What is the opportune moment? It is when a new thread arrives to acquire the lock at exactly the instant the previous holder releases it. At that point the new thread can skip the threads already waiting in the queue and grab the lock immediately; but if the holder has not yet released the lock, the new thread still has to join the queue.

Why have unfair locks at all? Under a fair lock, the queue executes in order: after a holder releases the lock, the next queued thread must first be woken up before it can acquire the lock and run, and waking a thread has a cost. Under an unfair lock, a newly arrived thread can take the lock directly the moment it is released and run its code, with no extra wake-up overhead; in fact, during the time it takes the queued thread to wake up, the barging thread may already have acquired the lock, finished its task, and released it again. Unfair locks are therefore designed to improve the overall throughput of the system, and ReentrantLock defaults to an unfair lock.
Fair lock and unfair lock effect display
ReentrantLock takes a boolean to choose between a fair and an unfair lock, as shown in the following code:
Lock lock = new ReentrantLock(false); // false: unfair (the default); true: fair
Fair lock code display:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Description: demonstrates fair and unfair locks, showing each case in turn.
 * An unfair lock lets the thread that currently holds the lock reacquire it first.
 */
public class FairAndNoFair {
    public static void main(String[] args) {
        PrintQueue printQueue = new PrintQueue();
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(new Job(printQueue), "Thread " + i);
        }
        for (int i = 0; i < 10; i++) {
            threads[i].start();
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class Job implements Runnable {
    private PrintQueue printQueue;

    public Job(PrintQueue printQueue) {
        this.printQueue = printQueue;
    }

    @Override
    public void run() {
        System.out.printf("%s: Going to print a job\n", Thread.currentThread().getName());
        printQueue.printJob();
        System.out.printf("%s: The document has been printed\n", Thread.currentThread().getName());
    }
}

public class PrintQueue {
    // toggle this flag between the two runs below: true = fair, false = unfair
    private final Lock lock = new ReentrantLock(false);

    public void printJob() {
        lock.lock();
        try {
            long duration = (long) (Math.random() * 10000);
            System.out.printf("%s: First PrintQueue: Printing a Job during %d seconds\n",
                    Thread.currentThread().getName(), duration / 1000);
            Thread.sleep(duration);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
        lock.lock();
        try {
            long duration = (long) (Math.random() * 10000);
            System.out.printf("%s: Second PrintQueue: Printing a Job during %d seconds\n",
                    Thread.currentThread().getName(), duration / 1000);
            Thread.sleep(duration);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }
}
Let’s run the fair lock first and print the following:
Thread 0: Going to print a job
Thread 0: First PrintQueue: Printing a Job during 9 seconds
Thread 1: Going to print a job
Thread 2: Going to print a job
Thread 3: Going to print a job
Thread 4: Going to print a job
Thread 5: Going to print a job
Thread 6: Going to print a job
Thread 7: Going to print a job
Thread 8: Going to print a job
Thread 9: Going to print a job
Thread 1: First PrintQueue: Printing a Job during 5 seconds
Thread 2: First PrintQueue: Printing a Job during 1 seconds
Thread 3: First PrintQueue: Printing a Job during 9 seconds
Thread 4: First PrintQueue: Printing a Job during 7 seconds
Thread 5: First PrintQueue: Printing a Job during 8 seconds
Thread 6: First PrintQueue: Printing a Job during 5 seconds
Thread 7: First PrintQueue: Printing a Job during 2 seconds
Thread 8: First PrintQueue: Printing a Job during 9 seconds
Thread 9: First PrintQueue: Printing a Job during 7 seconds
Thread 0: Second PrintQueue: Printing a Job during 0 seconds
Thread 1: Second PrintQueue: Printing a Job during 6 seconds
Thread 0: The document has been printed
Thread 1: The document has been printed
Thread 2: Second PrintQueue: Printing a Job during 4 seconds
Thread 2: The document has been printed
Thread 3: Second PrintQueue: Printing a Job during 4 seconds
Thread 3: The document has been printed
Thread 4: Second PrintQueue: Printing a Job during 1 seconds
Thread 4: The document has been printed
Thread 5: Second PrintQueue: Printing a Job during 3 seconds
Thread 5: The document has been printed
Thread 6: Second PrintQueue: Printing a Job during 0 seconds
Thread 6: The document has been printed
Thread 7: Second PrintQueue: Printing a Job during 1 seconds
Thread 7: The document has been printed
Thread 8: Second PrintQueue: Printing a Job during 5 seconds
Thread 8: The document has been printed
Thread 9: Second PrintQueue: Printing a Job during 5 seconds
Thread 9: The document has been printed

Process finished with exit code 0
As the output shows, the threads acquire the lock strictly in fair order: first come, first served.
We run an unfair lock and print the following:
Thread 0: Going to print a job
Thread 0: First PrintQueue: Printing a Job during 5 seconds
Thread 1: Going to print a job
Thread 2: Going to print a job
Thread 3: Going to print a job
Thread 4: Going to print a job
Thread 5: Going to print a job
Thread 6: Going to print a job
Thread 7: Going to print a job
Thread 8: Going to print a job
Thread 9: Going to print a job
Thread 0: Second PrintQueue: Printing a Job during 2 seconds
Thread 0: The document has been printed
Thread 1: First PrintQueue: Printing a Job during 9 seconds
Thread 1: Second PrintQueue: Printing a Job during 3 seconds
Thread 1: The document has been printed
Thread 2: First PrintQueue: Printing a Job during 0 seconds
Thread 3: First PrintQueue: Printing a Job during 0 seconds
Thread 3: Second PrintQueue: Printing a Job during 7 seconds
Thread 3: The document has been printed
Thread 4: First PrintQueue: Printing a Job during 3 seconds
Thread 4: Second PrintQueue: Printing a Job during 8 seconds
Thread 4: The document has been printed
Thread 5: First PrintQueue: Printing a Job during 6 seconds
Thread 5: Second PrintQueue: Printing a Job during 1 seconds
Thread 5: The document has been printed
Thread 6: First PrintQueue: Printing a Job during 0 seconds
Thread 6: Second PrintQueue: Printing a Job during 7 seconds
Thread 6: The document has been printed
Thread 7: First PrintQueue: Printing a Job during 8 seconds
Thread 7: Second PrintQueue: Printing a Job during 1 seconds
Thread 7: The document has been printed
Thread 8: First PrintQueue: Printing a Job during 9 seconds
Thread 8: Second PrintQueue: Printing a Job during 8 seconds
Thread 8: The document has been printed
Thread 9: First PrintQueue: Printing a Job during 5 seconds
Thread 9: Second PrintQueue: Printing a Job during 5 seconds
Thread 9: The document has been printed
Thread 2: Second PrintQueue: Printing a Job during 3 seconds
Thread 2: The document has been printed

Process finished with exit code 0
As the output shows, after thread 0 released the lock it immediately reacquired it and continued executing, barging ahead of threads 1–9, which were already waiting in the queue.
Advantages and disadvantages of fair and unfair locks
- Fair lock advantage: a fair lock treats all threads equally; each thread gets to run after waiting its turn, so no thread starves.
- Fair lock disadvantage: compared with an unfair lock, execution is slower and throughput is lower.
- Unfair lock advantage: faster and higher throughput than a fair lock.
- Unfair lock disadvantage: threads can starve; some threads may wait a very long time and never get to run.
Fair and unfair lock source code analysis: both fair and unfair locks are implemented by subclasses of ReentrantLock's internal Sync class, and Sync extends AQS (AbstractQueuedSynchronizer). The Sync class looks like this:
// as the source shows, Sync extends AbstractQueuedSynchronizer
abstract static class Sync extends AbstractQueuedSynchronizer {
    ...
}
// the two subclasses of Sync
static final class NonfairSync extends Sync {
    ...
}
static final class FairSync extends Sync {
    ...
}
Fair lock acquisition source code (FairSync.tryAcquire):
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // unlike the unfair version, first check hasQueuedPredecessors()
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
Unfair lock acquisition source code (Sync.nonfairTryAcquire):
final boolean nonfairTryAcquire(int acquires) {
final Thread current = Thread.currentThread();
int c = getState();
if (c == 0) {
if (compareAndSetState(0, acquires)) {
setExclusiveOwnerThread(current);
return true;
}
}
else if (current == getExclusiveOwnerThread()) {
int nextc = c + acquires;
if (nextc < 0) // overflow
throw new Error("Maximum lock count exceeded");
setState(nextc);
return true;
}
return false;
}
Comparing the two, the main difference is that the fair lock adds one extra condition when acquiring: hasQueuedPredecessors() must return false. The hasQueuedPredecessors() method checks whether there are already threads waiting in the queue; if there are, the current thread may not try to grab the lock. The unfair lock, regardless of whether threads are waiting, lets the current thread try to take the lock first, and only queues it if that fails. Note that tryLock() calls sync.nonfairTryAcquire(1), the unfair acquisition path, so even on a lock created in fair mode, tryLock() can be used to barge.
Read-write lock
Why set read/write lock
First of all, read/write locks exist to improve system efficiency. An ordinary ReentrantLock guarantees thread safety, but using it to serialize many read operations greatly wastes system resources, since concurrent reads are harmless; writes, on the other hand, must be exclusive, or thread-safety problems occur. This is where the read/write lock comes in: it supports concurrent reads, raising read efficiency, while still keeping write operations safe.
Read/write lock rules
- If one thread has occupied the read lock, the other thread can apply for the read lock successfully.
- If one thread holds the read lock and another thread applies for the write lock, the writer must wait for the read lock to be released, because reads and writes cannot proceed at the same time.
- If one thread holds the write lock, any other thread must wait for the write lock to be released, because writes cannot proceed at the same time as either reads or writes.
Read and write lock usage demonstration
ReentrantReadWriteLock is an implementation class of ReadWriteLock. Its two main methods, readLock() and writeLock(), obtain the read lock and the write lock respectively. The following code uses the two locks to perform concurrent reads and exclusive writes:
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLock {
    // define the read/write lock
    private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
    // obtain the read lock
    private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
    // obtain the write lock
    private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

    public static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock, reading");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the read lock");
            readLock.unlock();
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock, writing");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the write lock");
            writeLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read()).start();
        new Thread(() -> read()).start();
        new Thread(() -> write()).start();
        new Thread(() -> write()).start();
    }
}
The running results are as follows:
Thread-0 got the read lock, reading
Thread-1 got the read lock, reading
Thread-0 released the read lock
Thread-1 released the read lock
Thread-2 got the write lock, writing
Thread-2 released the write lock
Thread-3 got the write lock, writing
Thread-3 released the write lock
As can be seen from the results, the read/write lock supports concurrent reads, while the write operation is performed separately.
Read lock queue-jumping strategy
ReentrantReadWriteLock Supports fair and unfair locks. You can set the following parameters:
// the boolean sets fair or unfair mode: false is unfair, true is fair
ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
If true is passed (a fair lock), the corresponding read/write lock implementation is:
static final class FairSync extends Sync {
    private static final long serialVersionUID = -2274990926593161451L;
    final boolean writerShouldBlock() {
        return hasQueuedPredecessors();
    }
    final boolean readerShouldBlock() {
        return hasQueuedPredecessors();
    }
}
The hasQueuedPredecessors() method checks whether threads are already waiting in the queue. If so, any thread currently trying to acquire the lock must queue behind them, which satisfies the definition of a fair lock.
If false is set to unfair lock, the corresponding implementation is as follows:
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = -8159625535654395037L;
    final boolean writerShouldBlock() {
        return false; // writers can always barge
    }
    final boolean readerShouldBlock() {
        /* As a heuristic to avoid indefinite writer starvation,
         * block if the thread that momentarily appears to be head
         * of queue, if one exists, is a waiting writer.  This is
         * only a probabilistic effect since a new reader will not
         * block if there is a waiting writer behind other enabled
         * readers that have not yet drained from the queue. */
        return apparentlyFirstQueuedIsExclusive();
    }
}
The writerShouldBlock() method simply returns false, so a thread that wants the write lock may barge at any time. Under the unfair lock, acquiring the read lock depends on the return value of apparentlyFirstQueuedIsExclusive(). As the comment on that method explains, it mostly serves to keep a writer thread waiting at the head of the queue from starving. Here's an example:
Scenario: threads 1 and 2 are reading at the same time and both hold the read lock. Thread 3 wants to write, so it enters the wait queue. Then thread 4 suddenly arrives, wanting to barge for the read lock. There are two possible strategies:
- Allow barging: thread 4 takes the read lock and reads alongside threads 1 and 2. This seems to improve read efficiency, but it has a serious problem: if threads wanting the read lock keep arriving behind it, thread 3 may never get to run, stuck in a "starved" state for a long time.
- Disallow barging: the newly arrived thread 4 must wait in the queue. Under this strategy, neither thread 3 nor thread 4 starves: once thread 3 finishes running, thread 4 gets its chance to run.
In ReentrantReadWriteLock, queue jumping is not allowed to prevent thread hunger.
Let's demonstrate the no-barging policy above in code:
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLock {
    // define the read/write lock
    private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
    // obtain the read lock
    private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
    // obtain the write lock
    private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

    public static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock, reading");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the read lock");
            readLock.unlock();
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock, writing");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the write lock");
            writeLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read()).start();
        new Thread(() -> read()).start();
        new Thread(() -> write()).start();
        // this last reader arrives while a writer is waiting
        new Thread(() -> read()).start();
    }
}
The running results are as follows:
Thread-0 got the read lock, reading
Thread-1 got the read lock, reading
Thread-0 released the read lock
Thread-1 released the read lock
Thread-2 got the write lock, writing
Thread-2 released the write lock
Thread-3 got the read lock, reading
Thread-3 released the read lock
The results show that ReentrantReadWriteLock adopts the strategy of not letting a read lock barge past a waiting writer.
Upgrade or downgrade of read/write locks
Write lock downgrade: let's first look at downgrading a write lock, shown in code:
private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock();
// obtain the read lock
private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
// obtain the write lock
private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

public static void downgrade() {
    System.out.println(Thread.currentThread().getName() + " tries to get the write lock");
    writeLock.lock();
    try {
        System.out.println(Thread.currentThread().getName() + " got the write lock");
        readLock.lock(); // acquire the read lock without releasing the write lock
        System.out.println(Thread.currentThread().getName() + " got the read lock");
    } finally {
        System.out.println(Thread.currentThread().getName() + " releases the write lock");
        writeLock.unlock(); // downgrade: release the write lock, keep the read lock
        // note: readLock.unlock() is deliberately not called here
    }
}

public static void main(String[] args) {
    new Thread(() -> downgrade()).start();
    new Thread(() -> downgrade()).start();
}
Running this, we can see that thread 0, while still holding the write lock, successfully acquires the read lock: that is write lock downgrade. However, because thread 0 releases only the write lock and never releases the read lock, the subsequent thread 1 can never acquire the write lock, so the program blocks indefinitely.
Read lock upgrade: next let's look at upgrading a read lock, shown in code:
private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
// obtain the read lock
private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
// obtain the write lock
private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

public static void upgrade() {
    System.out.println(Thread.currentThread().getName() + " tries to get the read lock");
    readLock.lock();
    try {
        System.out.println(Thread.currentThread().getName() + " got the read lock");
        System.out.println(Thread.currentThread().getName() + " is about to block on the write lock");
        writeLock.lock(); // blocks here: the read lock cannot be upgraded to a write lock
        System.out.println(Thread.currentThread().getName() + " got the write lock");
    } finally {
        readLock.unlock();
    }
}

public static void main(String[] args) {
    new Thread(() -> upgrade()).start();
    new Thread(() -> upgrade()).start();
}
The running results are as follows:
In the preceding figure, we can see that thread 0 and thread 1 both successfully obtain the read lock, but the attempt to obtain the write lock during the upgrade blocks, because ReentrantReadWriteLock does not support upgrading a read lock to a write lock. A read lock can be held by multiple threads at once, while a write lock is exclusive, so a write lock cannot be granted while any read lock is still held. Therefore, an upgrade could only complete after every read lock has been released.
For example, three threads ABC all hold read locks. If thread A wants to upgrade the lock, it must wait until B and C release the read lock, and then thread A can successfully upgrade and acquire the write lock.
However, there is a problem here: if both A and B want to upgrade, thread A must wait for all other readers, including thread B, to release the read lock, while thread B likewise waits for all other readers, including thread A, to release theirs; the result is a deadlock. Upgrading can only be made thread-safe, and therefore implementable, if we guarantee that at most one thread attempts to upgrade at a time.
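For comparison, `java.util.concurrent.locks.StampedLock` offers exactly this kind of conditional, at-most-one-winner upgrade via `tryConvertToWriteLock`: the conversion succeeds atomically only if the caller is the sole reader and the write lock is available, and otherwise returns a zero stamp instead of blocking. A minimal sketch (class and variable names are illustrative):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedUpgradeDemo {
    public static void main(String[] args) {
        StampedLock lock = new StampedLock();

        long readStamp = lock.readLock();                 // we are the only reader
        long writeStamp = lock.tryConvertToWriteLock(readStamp);
        if (writeStamp != 0L) {                            // succeeds: no competing readers
            System.out.println("upgraded to write lock");
            lock.unlockWrite(writeStamp);
        } else {                                           // would happen if other readers exist
            lock.unlockRead(readStamp);
            System.out.println("upgrade failed, read lock released");
        }
    }
}
```

Because a failed conversion simply returns 0 rather than waiting, the two-upgraders deadlock described above cannot occur; the losing thread must release its read lock and retry.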
Spin locks
Spin lock introduction
A spin lock is a loop, such as a while or for loop, that keeps trying to get the lock, unlike a normal lock that gets blocked if it doesn’t get it.
Spin-lock and non-spin-lock flow diagrams are compared as follows:
In the figure above, we can see that when a spin lock fails to acquire the lock, the thread does not give up the CPU; instead it keeps spinning and retrying until the lock is successfully acquired. A non-spin lock, by contrast, blocks the thread when acquisition fails: the thread goes to sleep and releases its CPU time slice, and only after the lock holder releases the lock does the CPU schedule the thread again so it can retry; if it fails again it sleeps again, and if it succeeds it enters the synchronized resource.
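The spinning acquire described above can be sketched with the simplest possible (non-reentrant) spin lock: a CAS loop on an `AtomicBoolean`. This is an illustrative sketch, not a production lock.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal non-reentrant spin lock: a failed acquire keeps looping
// on compareAndSet instead of parking the thread.
public class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleSpinLock lock = new SimpleSpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]); // 20000: the lock protects the increment
    }
}
```

Note there is no queue and no blocking anywhere: a losing thread simply burns CPU cycles retrying the CAS, which is exactly the trade-off discussed next.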
Benefits of spin locks: a spin lock avoids time-consuming block and wake-up operations and the overhead of thread state switching, which improves efficiency.
Disadvantages of spin locks: although spinning avoids the overhead of thread switching, it introduces new overhead of its own, because the thread must keep retrying the lock. If the lock holder does not release the lock for a long time, the spinning thread keeps retrying the whole time, which wastes CPU resources.
Therefore, spin locks are suitable for scenarios where concurrency is not especially high and threads hold the lock for only a short time. For example, most of the atomic classes in the java.util.concurrent package are implemented on top of spinning CAS, such as AtomicInteger, whose getAndIncrement() method ultimately calls Unsafe.getAndAddInt(), which looks like this:
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}
The do...while() loop here is clearly a spin operation: if the CAS fails because another thread modified the value in the meantime, the loop retries until the update succeeds.
How to customize a reentrant spin lock
The implementation code is as follows:
import java.util.concurrent.atomic.AtomicReference;

public class CustomReentrantSpinLock {
    // the thread that currently owns the lock (null if unowned)
    private AtomicReference<Thread> owner = new AtomicReference<>();
    // reentrancy count for the owning thread
    private int count = 0;

    public void lock() {
        Thread t = Thread.currentThread();
        System.out.println(Thread.currentThread().getName() + " lock");
        if (t == owner.get()) { // reentrant acquisition
            ++count;
            return;
        }
        // spin until the CAS succeeds
        while (!owner.compareAndSet(null, t)) {
            System.out.println(Thread.currentThread().getName() + " is spinning");
        }
    }

    public void unLock() {
        Thread t = Thread.currentThread();
        if (t == owner.get()) {
            if (count > 0) {
                --count;
            } else {
                owner.set(null);
            }
        }
    }

    public static void main(String[] args) {
        CustomReentrantSpinLock spinLock = new CustomReentrantSpinLock();
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println(Thread.currentThread().getName() + " starts trying to acquire the spin lock");
                spinLock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired the spin lock");
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    spinLock.unLock();
                    System.out.println(Thread.currentThread().getName() + " released the spin lock");
                }
            }
        };
        Thread thread1 = new Thread(runnable);
        Thread thread2 = new Thread(runnable);
        thread1.start();
        thread2.start();
    }
}
The running results are as follows:
Thread-0 starts trying to acquire the spin lock
Thread-1 starts trying to acquire the spin lock
Thread-0 acquired the spin lock
Thread-1 is spinning
Thread-1 is spinning
...
Thread-0 released the spin lock
Thread-1 acquired the spin lock
Thread-1 released the spin lock
As can be seen from the running result above, many "Thread-1 is spinning" lines were printed, showing that the CPU kept running during the spin: Thread-1 never gave up its CPU time slice.
JVM optimizations for locking
Since JDK 1.6, the HotSpot virtual machine has applied many optimizations to synchronized, including adaptive spin, lock elimination, lock coarsening, biased locks, and lightweight locks, which greatly improve the performance of synchronized locks.
Adaptive spin lock
Adaptive spin means the spin time is not fixed but is determined by factors such as the recent spin success and failure rates and the state of the current lock holder. Because the spin time varies with these conditions, useless spinning is reduced and efficiency improves.
Lock elimination
Lock elimination is an optimization that happens at the compiler (JIT) level. Sometimes code is locked unnecessarily: for example, the locked code is in fact only ever executed by a single thread and no concurrent access can occur, yet a synchronized lock was added anyway. In such cases the compiler may eliminate the lock. Take, for example, StringBuffer's append operation:
@Override
public synchronized StringBuffer append(Object obj) {
    toStringCache = null;
    super.append(String.valueOf(obj));
    return this;
}
As the code shows, this method is marked synchronized because a StringBuffer may be used by multiple threads at the same time; in practice, however, it is often used by only one thread. If the compiler can prove that the object is reachable from a single thread only, the method is necessarily thread-safe, so the compiler optimizes it by eliminating the synchronized lock, skipping the lock and unlock operations and improving efficiency.
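A typical case where this proof succeeds is a purely method-local StringBuffer that never escapes the method, as in the sketch below. Whether the JIT actually removes the locks depends on escape analysis (enabled by default in HotSpot; the relevant flags are -XX:+DoEscapeAnalysis and -XX:+EliminateLocks), so this shows the pattern, not a guaranteed optimization.

```java
public class LockEliminationDemo {
    // The StringBuffer is thread-confined: it is created here, used here,
    // and only its String result escapes. Escape analysis can therefore
    // prove the synchronization inside append() is unnecessary and the
    // JIT may eliminate it entirely.
    static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a); // synchronized, but provably uncontended
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("hello, ", "world")); // prints "hello, world"
    }
}
```

The semantics are unchanged either way; lock elimination only removes the cost of the uncontended lock and unlock operations.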
Lock coarsening
Lock coarsening targets code that releases a lock and then immediately re-acquires the same lock, such as the following:
public void lockCoarsening() {
synchronized (this) {
//do something
}
synchronized (this) {
//do something
}
synchronized (this) {
//do something
}
}
In the above code, the thread must acquire the synchronized lock before executing the first synchronized block and releases it after the block finishes; it then immediately has to acquire and release the very same lock for the second block. This repeated releasing and re-acquiring is completely unnecessary. If the synchronization region is expanded, that is, the lock is acquired once at the beginning and released once at the end, the meaningless unlock/lock pairs in between are eliminated. This is equivalent to merging several synchronized blocks into one larger synchronized block, which avoids frequent lock release and acquisition and reduces system overhead.
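Conceptually, the JIT transformation looks like the sketch below. Note that this rewriting happens in compiled code, not in your source; the two methods are shown side by side purely to illustrate the before and after shapes.

```java
public class LockCoarseningDemo {
    int x;

    // Before coarsening: three back-to-back critical sections on the same monitor.
    void fineGrained() {
        synchronized (this) { x++; }
        synchronized (this) { x++; }
        synchronized (this) { x++; }
    }

    // What the JIT effectively produces: one lock/unlock pair around all three.
    void coarsened() {
        synchronized (this) {
            x++;
            x++;
            x++;
        }
    }

    public static void main(String[] args) {
        LockCoarseningDemo d = new LockCoarseningDemo();
        d.fineGrained();
        d.coarsened();
        System.out.println(d.x); // 6
    }
}
```

Both versions produce the same result; coarsening only trades two redundant unlock/lock pairs for a slightly longer critical section.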
However, lock coarsening applies only in non-loop scenarios, not across loop iterations. In code like the following, if the lock were acquired in the first iteration and released only in the last, other threads would be unable to acquire the lock for a long time:
for (int i = 0; i < 1000; i++) {
synchronized (this) {
//do something
}
}
Lock coarsening is enabled by default; it can be disabled with -XX:-EliminateLocks.
Bias lock, lightweight lock, heavyweight lock
These three types of locks were introduced at the beginning. They refer to synchronized lock states, which are recorded in the Mark Word field of the object header.
Lock the upgrade path
The upgrade path of the lock is shown in the following figure. Biased lock has good performance and avoids CAS operation; lightweight lock uses spin and CAS operation to avoid thread blocking and wake up caused by heavyweight lock, with medium performance; heavyweight lock will block threads that cannot obtain the lock and has the worst performance.
-END
If you like, please scan the qr code below or search for “programmer’s inner mind method” on wechat and follow me