Multithreaded concurrency is one of the most important topics in the Java language, and also one of the hardest parts of the Java fundamentals. It is important because multithreading is used constantly in daily development, and it is hard because concurrency touches so many knowledge points that they are not easy to master completely. For these reasons, Java concurrency is one of the most frequently covered topics in Java interviews. This series of articles takes a systematic look at Java concurrency: the Java memory model, the volatile keyword, the synchronized keyword, ReentrantLock, the Atomic concurrency classes, and thread pools. Along the way you will learn the use of volatile, the implementation of synchronized, AQS and CLH queue locks, and come to clearly understand spin locks, biased locks, optimistic locks, pessimistic locks, and the rest of a dizzying array of concurrency concepts.
Multi-threaded concurrency series
This time, understand the Java memory model and the volatile keyword once and for all
This time, thoroughly understand the Synchronized keyword in Java
This time, thoroughly understand the Java ReentrantLock implementation principle
This time, thoroughly understand the Atomic classes in the Java concurrency package
Understanding the wait and wake up mechanism of Java threads
Understanding the wait and wake up mechanism of Java threads (Part 2)
Java Concurrency series finale: Get to the bottom of how Java thread pools work
The principle of ThreadLocal is simple
This article, the third in the Java concurrency series, covers the underlying implementation principles of ReentrantLock and AQS in detail.
Before I start, I would like to recommend the GitHub repository AndroidNote, which holds my study notes and the first drafts of my articles. It contains a lot of advanced Java and Android material and is a systematic, comprehensive Android knowledge base, as well as a valuable guide for students preparing for interviews. You are welcome to visit the repository's homepage on GitHub.
Introduction to ReentrantLock
In the last article, we analyzed the implementation principle of the synchronized keyword in depth. In this article we'll take a look at another synchronization mechanism in Java: ReentrantLock. ReentrantLock was introduced in the java.util.concurrent package in JDK 1.5 and has more powerful concurrency capabilities than synchronized. Before diving into its internals, let's take a look at how it is used.
1. ReentrantLock usage
The synchronized keyword introduced in the previous article is an implicit lock: locking and unlocking happen automatically and require no attention from us. ReentrantLock, by contrast, is an explicit lock, which must be acquired and released manually in code. Let's look at how ReentrantLock is used.
public class ReentrantLockDemo {
    // Instantiate an unfair lock. The constructor argument is true for a fair lock, false for an unfair lock.
    private final ReentrantLock lock = new ReentrantLock(false);
    private int i;

    public void testLock() {
        // Acquire the lock, waiting if it is not available
        lock.lock();
        try {
            // Try to acquire the lock again (reentrant), waiting at most 100 milliseconds
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    i++;
                } finally {
                    // Release the reentrant acquisition
                    lock.unlock();
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // Release the lock
            lock.unlock();
        }
    }
}
lock.lock() in the code above acquires the lock, waiting if it is not available. Once the lock is acquired, the code in the try block executes. The method then tries the lock again with tryLock(100, TimeUnit.MILLISECONDS): if the lock is acquired within 100 milliseconds, tryLock returns true and i++ executes; if it returns false, the acquisition failed and i++ is skipped. Since the thread has already acquired the lock, and ReentrantLock is reentrant, this second acquisition is guaranteed to succeed; it is here only to demonstrate how ReentrantLock is used.
Also note that the region locked by ReentrantLock must be wrapped in a try block, and the lock must be released in finally to avoid deadlocks. However many times you acquire the lock, you must release it the same number of times.
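As a concrete illustration of that rule (the class and method names here are invented for the demo), ReentrantLock's getHoldCount reports how many times the owning thread has acquired the lock, and the lock is only fully released after a matching number of unlock calls:

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    static int holdCountAfterNestedLock() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                      // reentrant acquisition by the same thread
        int count = lock.getHoldCount();  // 2: lock() was called twice
        lock.unlock();
        lock.unlock();                    // one unlock per lock, or the lock stays held
        return count;
    }

    public static void main(String[] args) {
        System.out.println(holdCountAfterNestedLock()); // prints 2
    }
}
```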
2. Fair and unfair locks
In the previous section, we instantiated an unfair ReentrantLock in our code. So what are fair and unfair locks?
A fair lock means that threads obtain the lock in the order in which they requested it: threads enter the synchronization queue directly, and the one at the front of the queue gets the lock first. With an unfair lock, each thread first tries to grab the lock directly; if it happens to succeed, it proceeds without waiting, and only if it fails does it join the tail of the synchronization queue.
Of course, fair and unfair locks each have advantages and suit different scenarios. The advantage of a fair lock is that all threads are treated equally: every thread gets a chance to run after waiting its turn. The downside, compared with an unfair lock, is slower overall execution and lower throughput: every thread in the synchronization queue except the first is blocked, and waking a blocked thread costs the CPU more.
The advantage of an unfair lock is that it reduces the overhead of waking threads and has higher overall throughput, because a thread may acquire the lock without ever blocking and the CPU does not have to wake every waiting thread. The disadvantage is equally obvious: a thread waiting in the queue may not acquire the lock for a long time.
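As a quick sketch (the class name is illustrative), ReentrantLock's isFair method reports which mode a lock was created in, and the no-argument constructor defaults to unfair:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    static boolean[] fairnessFlags() {
        ReentrantLock fair = new ReentrantLock(true);    // threads queue in FIFO order
        ReentrantLock unfair = new ReentrantLock(false); // barging is allowed
        ReentrantLock byDefault = new ReentrantLock();   // no-arg constructor is also unfair
        return new boolean[] { fair.isFair(), unfair.isFair(), byDefault.isFair() };
    }

    public static void main(String[] args) {
        for (boolean f : fairnessFlags())
            System.out.println(f); // true, false, false
    }
}
```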
3. Reentrant and non-reentrant locks
The concept of reentrant locks is also mentioned in section 1 of this chapter:
A reentrant lock, also known as a recursive lock, means that when a thread holds the lock of an outer synchronized method, it automatically acquires the lock when entering an inner synchronized method (provided both lock on the same object or class); it is not blocked by the lock it already holds but has not yet released. A non-reentrant lock is the opposite: a thread that has acquired the lock of an outer synchronized method cannot acquire the lock of an inner synchronized method, even though it is the same lock.
Both synchronized and ReentrantLock are reentrant locks. Reentrancy effectively avoids this kind of self-deadlock.
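A minimal sketch of reentrancy in action (class and method names are invented for the demo): a method that holds the lock can call another method that acquires the same lock without deadlocking itself:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int value;

    public int increment() {
        lock.lock();
        try {
            return readValueAndBump(); // calls another method that takes the same lock
        } finally {
            lock.unlock();
        }
    }

    private int readValueAndBump() {
        lock.lock(); // reentrant: the owning thread does not block here
        try {
            return ++value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(new ReentrancyDemo().increment()); // prints 1
    }
}
```

With a non-reentrant lock, the inner lock() call would block forever waiting for a lock its own thread holds.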
4. Exclusive and shared locks
Since the concepts of exclusive and shared locks also come up later in this article, they are explained here.
An exclusive lock means that the lock can be held by only one thread at a time. If thread T holds an exclusive lock on data A, no other thread may hold any kind of lock on A. A thread holding an exclusive lock may both read and modify the data. A shared lock can be held by multiple threads at once. If thread T holds a shared lock on data A, other threads may also take shared locks on A, but not an exclusive lock. A thread holding a shared lock may only read the data, not modify it.
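ReentrantLock itself is purely exclusive, but ReentrantReadWriteLock from the same package illustrates both kinds at once; a small sketch (class and method names are my own) where another thread can share the read lock but not take the exclusive write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedLockDemo {
    // While this thread holds the read lock, probe from another thread:
    // result[0] = could it take the shared read lock, result[1] = the exclusive write lock
    static boolean[] probeWhileReading() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        boolean[] result = new boolean[2];
        rw.readLock().lock(); // shared lock held by the current thread
        Thread probe = new Thread(() -> {
            result[0] = rw.readLock().tryLock();   // shared: succeeds
            if (result[0]) rw.readLock().unlock();
            result[1] = rw.writeLock().tryLock();  // exclusive: fails while a reader holds the lock
            if (result[1]) rw.writeLock().unlock();
        });
        probe.start();
        probe.join();
        rw.readLock().unlock();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = probeWhileReading();
        System.out.println("shared read lock acquired: " + r[0]);     // true
        System.out.println("exclusive write lock acquired: " + r[1]); // false
    }
}
```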
ReentrantLock source code analysis
Let’s look at the code structure of the ReentrantLock class:
public class ReentrantLock implements Lock, java.io.Serializable {
    private final Sync sync;

    public ReentrantLock() {
        sync = new NonfairSync();
    }

    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }

    // ... other code omitted
}
As you can see, the structure of ReentrantLock is very simple. It implements the Lock and Serializable interfaces and has two constructors: the no-argument constructor initializes an unfair lock, while the parameterized one initializes a fair or unfair lock according to its argument.
Next, let’s look at the Lock interface code:
public interface Lock {
    // Acquire the lock
    void lock();
    // Acquire the lock unless the current thread is interrupted
    // (synchronized, by contrast, is uninterruptible)
    void lockInterruptibly() throws InterruptedException;
    // Try to acquire the lock; returns true on success, false on failure
    boolean tryLock();
    // Try to acquire the lock within the given time; returns true on success, false on failure
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    // Release the lock
    void unlock();
    // Wait/notify support
    Condition newCondition();
}
You can see that Lock defines several methods for acquiring the lock as well as a method for releasing it. There is also a newCondition method related to the wait/notify mechanism; Condition is out of scope for this article.
Since ReentrantLock implements the Lock interface, it must also implement these methods. Take a look at the implementation in ReentrantLock:
public class ReentrantLock implements Lock, java.io.Serializable {
    private final Sync sync;

    public void lock() {
        sync.acquire(1);
    }

    public void lockInterruptibly() throws InterruptedException {
        sync.acquireInterruptibly(1);
    }

    public boolean tryLock() {
        return sync.nonfairTryAcquire(1);
    }

    public boolean tryLock(long timeout, TimeUnit unit)
            throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(timeout));
    }

    public void unlock() {
        sync.release(1);
    }

    // ... other code omitted
}
As you can see, the implementations of these methods in ReentrantLock are very simple: each delegates to the corresponding method of Sync. All of ReentrantLock's lock acquisition and release operations go through the Sync member variable. Sync is an abstract inner class of ReentrantLock; its source is as follows:
abstract static class Sync extends AbstractQueuedSynchronizer {
    // Try to acquire the lock unfairly
    final boolean nonfairTryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        // Not locked
        if (c == 0) {
            // Try to acquire the lock via CAS
            if (compareAndSetState(0, acquires)) {
                // Record the thread holding the exclusive lock
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        // If the thread holding the lock is the current thread, reenter the lock
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0) // overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

    // Release the lock
    protected final boolean tryRelease(int releases) {
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        boolean free = false;
        if (c == 0) {
            free = true;
            setExclusiveOwnerThread(null);
        }
        setState(c);
        return free;
    }

    protected final boolean isHeldExclusively() {
        return getExclusiveOwnerThread() == Thread.currentThread();
    }

    // Get the thread holding the lock
    final Thread getOwner() {
        return getState() == 0 ? null : getExclusiveOwnerThread();
    }

    // Get the reentrancy count of the thread holding the lock
    final int getHoldCount() {
        return isHeldExclusively() ? getState() : 0;
    }

    // Whether the lock is held
    final boolean isLocked() {
        return getState() != 0;
    }

    // ... other code omitted
}
As you can see, the logic in Sync isn't complicated either. The nonfairTryAcquire method acquires the lock unfairly, while tryRelease releases it; several other methods report the lock's status. A careful reader will notice that Sync tracks the lock through state: a state of 0 means unlocked, and a value greater than 0 means locked. We'll come back to state later.
Also, since Sync is an abstract class, there must be classes that inherit from it. ReentrantLock contains two implementations of Sync: NonfairSync and FairSync. As the names suggest, one is an unfair lock and the other a fair lock.
Let’s take a look at the implementation of NonfairSync’s unfair lock:
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}
As mentioned above, Sync already implements unfair locking logic, so the code for NonfairSync is very simple, just calling nonfairTryAcquire directly from tryAcquire.
The code for FairSync fair lock is as follows:
static final class FairSync extends Sync {
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        // Not locked
        if (c == 0) {
            // CAS only when no node is waiting ahead of us
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                // Record the thread holding the exclusive lock
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        // As with the unfair lock, the owning thread may reenter
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}
As you can see, tryAcquire implements the fair acquisition, which differs from the unfair version by only one line: the fair lock first checks whether any thread is already waiting in the synchronization queue and attempts the acquisition only if there is none. The unfair lock does not care about the queue at all; it simply tries to grab the lock first.
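ReentrantLock exposes some of this queue state publicly, which lets us observe a waiting thread from outside; a small sketch (the class name and the polling loop are illustrative) showing a fair lock with one thread parked in the synchronization queue:

```java
import java.util.concurrent.locks.ReentrantLock;

public class QueueProbeDemo {
    static int queueLengthWhileHeld() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair lock
        lock.lock(); // held by the current thread, so the waiter must queue
        Thread waiter = new Thread(() -> { lock.lock(); lock.unlock(); });
        waiter.start();
        // Wait until the waiter has actually been enqueued
        while (!lock.hasQueuedThreads()) Thread.sleep(1);
        int length = lock.getQueueLength(); // 1: one thread parked in the sync queue
        lock.unlock();   // hands the lock to the queued waiter
        waiter.join();
        return length;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(queueLengthWhileHeld()); // prints 1
    }
}
```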
At this point we have covered most of the ReentrantLock source, but we still haven't seen the underlying logic for acquiring and releasing the lock. That logic lives in Sync's superclass, AbstractQueuedSynchronizer.
AbstractQueuedSynchronizer
AbstractQueuedSynchronizer translates to "queue synchronizer" and is usually abbreviated AQS. As usual, let's start with the internal structure of the AbstractQueuedSynchronizer class:
public abstract class AbstractQueuedSynchronizer
    extends AbstractOwnableSynchronizer
    implements java.io.Serializable {

    protected AbstractQueuedSynchronizer() { }

    // Head of the synchronization queue
    private transient volatile Node head;
    // Tail of the synchronization queue
    private transient volatile Node tail;
    // Synchronization state
    private volatile int state;
}
The AbstractQueuedSynchronizer class extends AbstractOwnableSynchronizer, whose code is as follows:
public abstract class AbstractOwnableSynchronizer
    implements java.io.Serializable {

    private transient Thread exclusiveOwnerThread;

    protected final void setExclusiveOwnerThread(Thread thread) {
        exclusiveOwnerThread = thread;
    }

    protected final Thread getExclusiveOwnerThread() {
        return exclusiveOwnerThread;
    }
}
You can see that, between them, AbstractOwnableSynchronizer and AbstractQueuedSynchronizer maintain four members: exclusiveOwnerThread of type Thread, head and tail of type Node, and state of type int.
- exclusiveOwnerThread: the thread currently holding the exclusive lock.
- head and tail: the head and tail of the queue of waiting threads.
- state: the synchronization state; 0 means unlocked, and a value greater than 0 means locked.
This is reminiscent of the Monitor object behind a synchronized lock, described in the previous article. A monitor likewise maintains an _owner field for the thread holding the lock, the _EntryList and _WaitSet sets for blocked and waiting threads, and a counter, count, for the lock state: 0 means unlocked and greater than 0 means locked.
Isn't it a little surprising how similar AQS and synchronized's monitor are? But AQS can do far more than that.
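To see how general AQS is, here is a minimal non-reentrant mutex built directly on it, along the lines of the usage example in the AQS javadoc (this class is a sketch, not production code): subclass AQS, override tryAcquire/tryRelease in terms of state, and delegate to acquire/release for the queuing:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant mutex built on AQS: state 0 = unlocked, 1 = locked.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // One CAS attempt: 0 -> 1 means we got the lock
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int unused) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // volatile write releases the lock
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }   // AQS handles queuing and parking
    public void unlock()     { sync.release(1); }   // AQS wakes the successor
    public boolean tryLock() { return sync.tryAcquire(1); }

    public static void main(String[] args) {
        SimpleMutex m = new SimpleMutex();
        m.lock();
        System.out.println(m.tryLock()); // false: not reentrant, already held
        m.unlock();
        System.out.println(m.tryLock()); // true
        m.unlock();
    }
}
```

All the queue management, parking, and waking comes from AQS for free; the subclass only defines what "acquired" means in terms of state.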
Understanding ReentrantLock through AQS
With AQS in mind, let’s return to ReentrantLock.
1. ReentrantLock's lock method
Let's continue the analysis using ReentrantLock's lock method as an example. We know that lock calls Sync's acquire:
// ReentrantLock
public void lock() {
    sync.acquire(1);
}
However, there is no acquire method in Sync; it lives in Sync's superclass, AbstractQueuedSynchronizer. So let's look at AQS's acquire method:
// AbstractQueuedSynchronizer
public final void acquire(int arg) {
    if (!tryAcquire(arg) && acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
AQS's acquire method first calls tryAcquire, which AQS itself does not implement; it just throws an exception:
protected boolean tryAcquire(int arg) {
throw new UnsupportedOperationException();
}
The tryAcquire implementations were analyzed in the NonfairSync and FairSync classes in the previous chapter: they attempt to acquire the lock via CAS and return whether the acquisition succeeded. Ideally the lock is obtained directly through tryAcquire. But what if it isn't? As you can see, when tryAcquire returns false, addWaiter is called to join the synchronization queue. That means the thread is blocked, queued, and waiting on the park/wake-up machinery, which is built mainly on a variant of the CLH queue; the thread is later woken inside the acquireQueued method, discussed below.
2.AQS and bidirectional CLH queues
CLH is short for Craig, Landin and Hagersten, the names of its three inventors; a classic CLH queue is a singly linked list. The queue in AQS is a virtual doubly linked variant of CLH. AQS wraps every thread requesting the lock in a Node and uses the queue to hand out the lock. The code for Node is as follows:
static final class Node {
// Share mode
static final Node SHARED = new Node();
// Exclusive mode
static final Node EXCLUSIVE = null;
// The node's thread was cancelled
static final int CANCELLED = 1;
// The successor's thread needs to be woken up
static final int SIGNAL = -1;
// Condition state
static final int CONDITION = -2;
// State in shared mode
static final int PROPAGATE = -3;
// Indicates the wait state
volatile int waitStatus;
// Synchronize the precursor node of the queue
volatile Node prev;
// Synchronize the successor node of the queue
volatile Node next;
// Waiting thread
volatile Thread thread;
Node nextWaiter;
Node() {}
Node(Node nextWaiter) {
this.nextWaiter = nextWaiter;
THREAD.set(this, Thread.currentThread());
}
Node(int waitStatus) {
WAITSTATUS.set(this, waitStatus);
THREAD.set(this, Thread.currentThread());
}
/ /... Omit other code
}
As you can see, Node wraps the waiting thread together with its current state. A node can be in one of four states, CANCELLED, SIGNAL, CONDITION, and PROPAGATE, with the following meanings:

- CANCELLED: the node has been cancelled. A thread in the synchronization queue has its waitStatus changed to CANCELLED when its wait times out or it is interrupted.
- SIGNAL: the node's successor needs to be woken up. When a node in SIGNAL state releases the synchronization lock or is cancelled, it wakes the thread of its successor node.
- CONDITION: nodes in a condition wait queue carry this state. When Condition's signal method is called, nodes in CONDITION state are transferred from the wait queue to the synchronization queue to wait for the lock.
- PROPAGATE: used in shared mode, indicating that a release should be propagated to subsequent nodes.
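For contrast with AQS's variant, here is a textbook CLH spinlock sketch (illustrative code, not from the JDK): each thread appends a node to the tail with an atomic getAndSet and then spins only on its predecessor's flag; AQS keeps the same queue idea but links nodes both ways and parks threads instead of spinning:

```java
import java.util.concurrent.atomic.AtomicReference;

// A classic CLH spinlock: FIFO, with each thread spinning on its predecessor.
public class ClhSpinLock {
    static final class Node { volatile boolean locked; }

    private final AtomicReference<Node> tail = new AtomicReference<>(new Node());
    private final ThreadLocal<Node> myNode = ThreadLocal.withInitial(Node::new);
    private final ThreadLocal<Node> myPred = new ThreadLocal<>();

    public void lock() {
        Node node = myNode.get();
        node.locked = true;
        Node pred = tail.getAndSet(node); // atomically append to the queue tail
        myPred.set(pred);
        while (pred.locked) { Thread.onSpinWait(); } // spin on the predecessor only
    }

    public void unlock() {
        Node node = myNode.get();
        node.locked = false;       // releases the successor's spin
        myNode.set(myPred.get());  // recycle the predecessor's node for next time
    }
}
```

Because each thread spins on a different node, CLH avoids all threads hammering one shared flag, which is the property that made it attractive as the basis for AQS.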
In addition, each Node keeps references to its predecessor and successor nodes. Next, let's see how the addWaiter method wraps a thread in a Node and inserts it at the tail of the synchronization queue.
// Wrap the current thread into a Node and insert it at the tail of the queue
private Node addWaiter(Node mode) {
    // Instantiate a Node containing the current thread
    Node node = new Node(mode);
    // Spin in an infinite loop
    for (;;) {
        Node oldTail = tail;
        if (oldTail != null) {
            // Set the predecessor to the old tail node
            node.setPrevRelaxed(oldTail);
            // Replace the tail via CAS
            if (compareAndSetTail(oldTail, node)) {
                // Point oldTail's next at the new node
                oldTail.next = node;
                return node;
            }
        } else {
            // The queue is empty: initialize head and tail. After initialization
            // the loop runs again, and oldTail is no longer null
            initializeSyncQueue();
        }
    }
}

// Initialize the head and tail nodes
private final void initializeSyncQueue() {
    Node h;
    // Note that an empty Node is instantiated as the head
    if (HEAD.compareAndSet(this, null, (h = new Node())))
        // Point the tail at the head node
        tail = h;
}
Notice that addWaiter is called in the acquire method with Node.EXCLUSIVE, which stands for exclusive mode. The current thread is captured when the Node is instantiated with new Node(mode).
An infinite loop then inserts the node at the tail of the queue. If the queue is not empty, the node is appended to the tail via CAS. If the queue is empty, it is first initialized: an empty node is instantiated as the head, and tail is pointed at that head node. This also shows that the head of the synchronization queue is a node that stores no data.
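The same retry-CAS-on-tail pattern with a lazily installed dummy head can be sketched in isolation (the names and structure here are illustrative, not the JDK's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Retry a CAS on the tail until it succeeds, lazily installing a dummy head.
public class TailInsertSketch<T> {
    static final class Node<T> {
        final T value;               // null in the dummy head node
        volatile Node<T> prev, next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();
    private final AtomicReference<Node<T>> tail = new AtomicReference<>();

    public void enqueue(T value) {
        Node<T> node = new Node<>(value);
        for (;;) { // spin until the CAS on tail succeeds
            Node<T> oldTail = tail.get();
            if (oldTail != null) {
                node.prev = oldTail;            // link backwards first
                if (tail.compareAndSet(oldTail, node)) {
                    oldTail.next = node;        // then complete the forward link
                    return;
                }
            } else {
                // Empty queue: the winner of the head CAS installs a dummy node,
                // then the outer loop retries the tail insertion
                Node<T> h = new Node<>(null);
                if (head.compareAndSet(null, h))
                    tail.set(h);
            }
        }
    }

    public List<T> toList() {
        List<T> out = new ArrayList<>();
        Node<T> h = head.get();
        if (h != null)
            for (Node<T> n = h.next; n != null; n = n.next)
                out.add(n.value);
        return out;
    }
}
```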
Once the node has joined the synchronization queue, it begins to spin, watching the status of its predecessor and waiting for its conditions to be met. This happens in the acquireQueued() method called from acquire.
final boolean acquireQueued(final Node node, int arg) {
    boolean interrupted = false;
    try {
        // Start spinning
        for (;;) {
            // Get the predecessor node
            final Node p = node.predecessor();
            // If p is the head, call tryAcquire to try to acquire the lock
            if (p == head && tryAcquire(arg)) {
                // After acquiring the synchronization state, make the current node the head
                setHead(node);
                // Detach the old head
                p.next = null; // help GC
                return interrupted;
            }
            // If p is not the head, decide whether to park the thread
            if (shouldParkAfterFailedAcquire(p, node))
                interrupted |= parkAndCheckInterrupt();
        }
    } catch (Throwable t) {
        cancelAcquire(node);
        if (interrupted)
            selfInterrupt();
        throw t;
    }
}
The acquireQueued method starts a spin loop and checks whether the node's predecessor is the head node.
If the predecessor is the head, the node attempts to acquire the synchronization state. On success, the synchronized code can run, the node is installed as the new head, and its contents are no longer needed:
private void setHead(Node node) {
    head = node;
    // Clear the thread reference
    node.thread = null;
    // Clear the predecessor reference
    node.prev = null;
}
If the node's predecessor is not the head, shouldParkAfterFailedAcquire is called to decide whether the thread should be suspended. If so, parkAndCheckInterrupt is called to park the thread.
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    // Get the predecessor's wait status
    int ws = pred.waitStatus;
    // If the status is SIGNAL, the thread can safely be parked
    if (ws == Node.SIGNAL)
        return true;
    // A status greater than 0 means the predecessor was cancelled
    if (ws > 0) {
        // Walk backwards until a non-cancelled node is found
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        // The wait status is negative but not SIGNAL; set it to SIGNAL via CAS
        pred.compareAndSetWaitStatus(ws, Node.SIGNAL);
    }
    return false;
}

private final boolean parkAndCheckInterrupt() {
    // Park (suspend) the thread
    LockSupport.park(this);
    // Return and clear the thread's interrupt status
    return Thread.interrupted();
}
The process of getting the lock through lock ends here.
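The parking and waking underneath all of this is just LockSupport; a small sketch (the class name and timing value are invented) of the park/unpark handshake that parkAndCheckInterrupt and unparkSuccessor rely on:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    static boolean parkThenUnpark() throws InterruptedException {
        AtomicBoolean resumed = new AtomicBoolean(false);
        Thread worker = new Thread(() -> {
            LockSupport.park();  // blocks, like a queued AQS node in parkAndCheckInterrupt
            resumed.set(true);   // runs once park returns
        });
        worker.start();
        Thread.sleep(100);          // give the worker time to reach park()
        LockSupport.unpark(worker); // wake it, as unparkSuccessor does on release
        worker.join(1000);
        return resumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parkThenUnpark()); // true
    }
}
```

Note that unpark grants a permit even if it arrives before park, so the handshake works regardless of the exact interleaving.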
3. LockInterruptibly
ReentrantLock also supports interruptible lock acquisition, via the lockInterruptibly() and timed tryLock() methods. Let's take lockInterruptibly as an example and see how it differs from lock.
// ReentrantLock
public void lockInterruptibly() throws InterruptedException {
    sync.acquireInterruptibly(1);
}
This in turn calls AQS's acquireInterruptibly method:
// AbstractQueuedSynchronizer
public final void acquireInterruptibly(int arg)
        throws InterruptedException {
    // If the thread is already interrupted, throw immediately
    if (Thread.interrupted())
        throw new InterruptedException();
    // Try to acquire the lock
    if (!tryAcquire(arg))
        // Acquisition failed; enter the queue
        doAcquireInterruptibly(arg);
}
If the attempt to acquire the lock fails, doAcquireInterruptibly is called (the timed tryLock eventually goes through a very similar method):
private void doAcquireInterruptibly(int arg)
        throws InterruptedException {
    final Node node = addWaiter(Node.EXCLUSIVE);
    try {
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                return;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                // The thread was interrupted: throw
                throw new InterruptedException();
        }
    } catch (Throwable t) {
        cancelAcquire(node);
        throw t;
    }
}
As you can see, this method's logic is almost identical to acquireQueued, except that it throws an exception as soon as a thread interrupt is detected.
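A small sketch (names and timing are invented for the demo) showing the observable difference: a thread blocked in lockInterruptibly can be aborted by an interrupt, whereas a plain lock() would keep waiting:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    static boolean interruptWaiter() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        AtomicBoolean interrupted = new AtomicBoolean(false);
        lock.lock(); // main holds the lock, so the waiter must queue
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // waits, but can be interrupted
                lock.unlock();
            } catch (InterruptedException e) {
                interrupted.set(true);    // woke up via interrupt, not via the lock
            }
        });
        waiter.start();
        Thread.sleep(100);  // let the waiter block on the lock
        waiter.interrupt(); // aborts the wait; plain lock() would ignore this
        waiter.join(1000);
        lock.unlock();
        return interrupted.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(interruptWaiter()); // true
    }
}
```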
4. Lock release
ReentrantLock releases the lock through its unlock method, which in turn calls AQS's release:
public void unlock() {
    sync.release(1);
}
The code for the release method in AQS is as follows:
// AbstractQueuedSynchronizer
public final boolean release(int arg) {
    // Try to release the lock
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            // Wake up the successor node
            unparkSuccessor(h);
        return true;
    }
    return false;
}

protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}
tryRelease is implemented in AQS's subclass Sync, and we covered it earlier in this article: it manipulates state, subtracting releases from it. If state reaches 0, the lock is fully released, the exclusive owner thread is set to null, and the state is updated. The code is as follows:
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}
After the lock is released, unparkSuccessor is called to wake the successor node. The code is as follows:
// AbstractQueuedSynchronizer
private void unparkSuccessor(Node node) {
    int ws = node.waitStatus;
    // Reset the node's status to 0
    if (ws < 0)
        node.compareAndSetWaitStatus(ws, 0);
    // Get the next node
    Node s = node.next;
    if (s == null || s.waitStatus > 0) { // The successor is null or cancelled
        s = null;
        // Traverse from the tail looking for a valid node
        for (Node p = tail; p != node && p != null; p = p.prev)
            if (p.waitStatus <= 0) // The node is in a valid state
                s = p;
    }
    // Wake up the thread
    if (s != null)
        LockSupport.unpark(s.thread);
}
Summary
The implementations of ReentrantLock and AQS are relatively difficult to understand. Long as this article is, it still does not cover every detail of ReentrantLock and AQS: the analysis covers only exclusive locks, not ReentrantLock's shared-lock machinery, and Condition is not discussed either. If there is time later, a separate article could analyze Condition.
Finally, a summary of how ReentrantLock's exclusive lock is acquired and queued. ReentrantLock implements fair and unfair locking internally through FairSync and NonfairSync. Inside AQS, the synchronization state is tracked by state: when state is 0, a thread can acquire the lock directly; when it is greater than 0, the requesting thread is wrapped in a Node and enters the CLH queue to wait. The AQS CLH queue is a doubly linked structure whose head is an empty node. A new node is inserted at the tail and spins, checking whether its predecessor is the head node: if it is, the node tries to acquire the lock; if not, the thread is parked as conditions dictate.
Here is a flow chart for reference:
Reference & Recommended reading
[Basics] The unspeakable Java “lock” thing
The reentrant lock (exclusive lock) based on concurrent AQS and its Condition implementation principle are deeply analyzed
See AQS principle and application from the realization of ReentrantLock