J.U.C overview
java.util.concurrent is a package of utilities commonly used in concurrent programming. It contains many components built for concurrent scenarios, such as thread pools, blocking queues, timers, synchronizers, and concurrent collections. The package was written by the famous Doug Lea. In the following sections we will walk through the design ideas behind some of its classic and most commonly used components.
Lock
Lock is a core component of J.U.C, and the most important property a lock provides is concurrency safety. Why start with Lock? If you look through the components in the J.U.C package, you will find that most of them are built on top of locks, so starting with Lock makes the rest of the material easier to follow.
Why Lock was introduced
Before the Lock interface existed, Java programs had to rely on the synchronized keyword to make multi-threaded access safe. synchronized, however, is not a good fit for every concurrent scenario. Since Java 5, the Lock interface has addressed the shortcomings of synchronized in those scenarios, and it is more flexible than synchronized.
Implementations of Lock
Lock is essentially an interface that defines abstract methods for acquiring and releasing a lock. Defining an interface means defining a standard specification for locks, which allows for different lock implementations. Many classes implement the Lock interface; here are a few common ones:
- ReentrantLock: a reentrant mutual-exclusion lock and the most direct implementation of the Lock interface. Reentrant means that a thread that already holds the lock does not block when it acquires it again; instead, a counter tracking the reentry count is simply incremented.
- ReentrantReadWriteLock: a reentrant read/write lock that implements the ReadWriteLock interface. This class maintains two locks, ReadLock and WriteLock, both of which implement the Lock interface. A read/write lock is a tool for solving thread-safety problems in read-heavy, write-light scenarios. The basic rules are: read and read are not mutually exclusive, read and write are mutually exclusive, and write and write are mutually exclusive; in other words, any operation that modifies the data is exclusive.
- StampedLock: a new locking mechanism introduced in JDK 8, which can be regarded as an improved read/write lock. Although a read/write lock allows full concurrency among readers by separating the read and write paths, reads and writes still conflict, so a large number of reader threads can cause starvation of the writer thread. StampedLock adds an optimistic read mode, in which optimistic reads do not block the writer thread at all.
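The optimistic read idea can be sketched as follows; this is a minimal example in the style of the JDK documentation (the Point class and its fields are illustrative, not from this article). A reader first calls tryOptimisticRead() and only falls back to a real read lock if the stamp is no longer valid:

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = sl.writeLock(); // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // optimistic read: nothing is locked, writers are never blocked
        double curX = x, curY = y;
        if (!sl.validate(stamp)) { // a write happened in between; fall back to a real read lock
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```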
Class diagram for Lock
There are many implementations of Lock, but the most intuitive one is ReentrantLock. The Lock interface itself defines the following methods:
```java
void lock(); // Acquire the lock if it is available; otherwise block until the lock is released
void lockInterruptibly() throws InterruptedException; // Like lock(), but a blocked thread can be interrupted and throws java.lang.InterruptedException
boolean tryLock(); // Try to acquire the lock without blocking; returns true on success
boolean tryLock(long timeout, TimeUnit timeUnit) throws InterruptedException; // Try to acquire the lock, giving up after the timeout
void unlock(); // Release the lock
```
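As a minimal usage sketch (not from the original article) of the timed tryLock variant, a thread waits a bounded amount of time and then gives up instead of blocking forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final Lock lock = new ReentrantLock();

    public static void doWork() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println(Thread.currentThread().getName() + " got the lock");
            } finally {
                lock.unlock(); // always release in finally
            }
        } else {
            System.out.println(Thread.currentThread().getName() + " gave up");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        doWork();
    }
}
```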
Reentrant locks
A reentrant lock is a lock that supports re-entry. In other words, if thread T1 has obtained the lock by calling the lock method, calling it again will not block; the lock simply increments its reentry count. Both synchronized and ReentrantLock are reentrant locks. Many readers do not understand why locks need to be reentrant, usually because their mental model of synchronized locking is incomplete: when several locked methods call each other, that is already a reentrant scenario.
What reentrant locks are designed for
For example, suppose calling the demo method acquires the current object's lock, and demo then calls demo2, which synchronizes on the same instance lock. If the lock were not reentrant, the current thread would block because demo2 could not acquire the object lock it already holds, producing a deadlock. Reentrant locks are designed to avoid this kind of self-deadlock.
```java
public class ReentrantDemo {
    public synchronized void demo() {
        System.out.println("begin:demo");
        demo2(); // re-enters the lock this thread already holds
    }

    public void demo2() {
        System.out.println("begin:demo2");
        synchronized (this) {
            // same instance lock, acquired again by the same thread
        }
    }

    public static void main(String[] args) {
        ReentrantDemo rd = new ReentrantDemo();
        new Thread(rd::demo).start();
    }
}
```
The next example uses ReentrantLock to make increments of a shared counter thread-safe:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class AtomicDemo {
    private static int count = 0;
    static Lock lock = new ReentrantLock();

    public static void inc() {
        lock.lock();
        try {
            Thread.sleep(1);
            count++;
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock(); // release in finally so the lock is freed even if an exception is thrown
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> AtomicDemo.inc()).start();
        }
        Thread.sleep(3000);
        System.out.println("result:" + count);
    }
}
```
ReentrantReadWriteLock
The locks discussed so far are exclusive locks: only one thread may hold them at a time. A read/write lock, by contrast, allows multiple reader threads to access the resource at the same time, but while a writer thread holds the lock, all reader threads and other writer threads are blocked. A read/write lock maintains a pair of locks, a read lock and a write lock. Because most scenarios read far more often than they write, read/write locks usually provide better concurrency and throughput than exclusive locks.
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockDemo {
    static Map<String, Object> cacheMap = new HashMap<>();
    static ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    static Lock read = rwl.readLock();
    static Lock write = rwl.writeLock();

    public static final Object get(String key) {
        System.out.println("Start reading data");
        read.lock(); // acquire the read lock
        try {
            return cacheMap.get(key);
        } finally {
            read.unlock();
        }
    }

    public static final Object put(String key, Object value) {
        write.lock(); // acquire the write lock
        System.out.println("Start writing data");
        try {
            return cacheMap.put(key, value);
        } finally {
            write.unlock();
        }
    }
}
```
In this example, a HashMap simulates an in-memory cache, and a read/write lock keeps it thread-safe. A read operation must acquire the read lock; concurrent readers do not block each other because reading does not affect the result. A write operation must acquire the write lock; if the write lock is held by another thread, the current thread blocks until it is released, after which other read and write operations can proceed. Using a read/write lock improves the concurrency of reads while ensuring that every write is visible to all subsequent read and write operations. The rules are:
- Read locks and read locks can be shared
- Read and write locks cannot be shared (exclusive)
- Write locks and write locks cannot be shared (exclusive)
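As a minimal, hypothetical driver (the LockDemoMain class is illustrative, not part of the original example) exercising LockDemo with one writer and several concurrent readers:

```java
public class LockDemoMain {
    public static void main(String[] args) {
        LockDemo.put("key", "value"); // a single writer populates the cache under the write lock

        // Several readers can hold the read lock at the same time
        for (int i = 0; i < 3; i++) {
            new Thread(() -> System.out.println(LockDemo.get("key"))).start();
        }
    }
}
```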
Implementation principle of ReentrantLock
We know that the basic principle of a lock is to force parallel tasks from multiple threads to execute serially through some mechanism, thereby achieving thread safety. When analyzing synchronized we looked at biased locks, lightweight locks and spin-based optimistic locking, which reduce the cost of acquiring a synchronized lock, and at the heavyweight stage, where blocking and waking threads is used to arbitrate lock contention.
ReentrantLock must solve the same problems: when multiple threads compete for a reentrant lock, how are the losing threads blocked, and how are they woken up later?
What is AQS
Lock is built on a synchronization queue called AQS (short for AbstractQueuedSynchronizer), the core component used inside Lock to implement thread synchronization. Once you understand AQS, most of the tools in J.U.C are easy to master.
Two functions of AQS
From a usage point of view, AQS supports two modes: exclusive and shared.
- Exclusive lock: only one thread can hold the lock at a time, as with the ReentrantLock shown above.
- Shared lock: multiple threads can acquire the lock at the same time and access the shared resource concurrently, as with ReentrantReadWriteLock.
Internal implementation of AQS
AQS maintains a single volatile int variable called state, which represents the synchronization state; each implementation class defines its own meaning for it.
- ReentrantLock: implemented via Sync, a subclass of AQS. state records how many times the lock has been acquired (0 means the lock is free; a value greater than 0 means the same thread holds the lock that many times).
- ReentrantReadWriteLock: the high 16 bits of state count read-lock holds and the low 16 bits count write-lock holds; reentrancy is also supported.
- Semaphore: state is the number of permits still available; 0 means no more permits can be acquired.
- CountDownLatch: state is the number of release (countDown) calls still required; once enough threads have performed the release action and state reaches 0, the latch opens.
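To make the role of state concrete, here is a minimal sketch (an illustrative toy, not JDK code) of a non-reentrant mutex built on AQS, where state 0 means unlocked and 1 means locked:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state from 0 to 1: only one thread can succeed
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // back to unlocked
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // on failure the thread is queued and parked by AQS
    public void unlock() { sync.release(1); }  // wakes up the next queued thread
}
```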
AQS data structure
Explain the meaning of several methods and attribute values:
| Method / field | Meaning |
| --- | --- |
| waitStatus | Status of the current node in the queue |
| thread | The thread wrapped by this node |
| prev | Pointer to the predecessor node |
| predecessor() | Returns the predecessor node, or throws an NPE if there is none |
| nextWaiter | Points to the next node in CONDITION state (the condition queue is not covered in this article) |
| next | Pointer to the successor node |
There are two modes in which a thread can wait for the lock:

| Mode | Meaning |
| --- | --- |
| SHARED | The thread is waiting for the lock in shared mode |
| EXCLUSIVE | The thread is waiting for the lock in exclusive mode |
waitStatus can take the following values:

| Value | Meaning |
| --- | --- |
| 0 | Default value when a Node is initialized |
| CANCELLED (1) | The thread's lock request has been cancelled |
| CONDITION (-2) | The node is in the condition queue and its thread is waiting to be signalled |
| PROPAGATE (-3) | Used only in SHARED mode |
| SIGNAL (-1) | The successor node's thread needs to be woken up when this node releases the lock or is cancelled |
How the queue changes when threads are added and the lock is released
When lock contention occurs and when the lock is released, the nodes in the AQS synchronization queue change. Look first at the node-addition scenario, which involves two changes:
- A new thread is wrapped in a Node and appended to the synchronization queue: its prev pointer is set to the current tail, and the next pointer of that previous tail is updated to point at the new node.
- When the head node releases the synchronization state, it wakes up its successor; if the successor acquires the lock successfully, it sets itself as the new head node. This process also involves two changes:
  - head is updated to point at the node that just acquired the lock.
  - The node that acquired the lock sets its prev pointer to null.
- Setting the head node does not require CAS, because only the thread that acquired the lock sets it, and the lock can be held by only one thread at a time; it is enough to set head to the successor of the original head node and disconnect the original head node's next reference.
ReentrantLock source code analysis
Starting with ReentrantLock, let's see how AQS is used to synchronize threads in this scenario.
A sequence diagram for ReentrantLock
Calling the lock() method of ReentrantLock follows the call chain shown in this sequence diagram of the source code.
ReentrantLock.lock()
This is the entry point for reentrantLock to obtain the lock
```java
public void lock() {
    sync.lock();
}
```
Sync is an abstract static inner class that extends AQS to implement the reentrant-lock logic. As mentioned earlier, AQS is a synchronization queue that can block and wake up threads, but it carries no business logic of its own, so for each synchronization scenario a Sync subclass extends AQS to provide the scenario-specific behaviour. Sync has two concrete subclasses:
- NonfairSync: lock barging is allowed; a newly arriving thread may grab the lock regardless of whether other threads are already waiting in the queue.
- FairSync: all threads acquire the lock strictly in FIFO order.
NonfairSync.lock
Taking the unfair lock as an example, look at the lock implementation:
- The biggest difference between an unfair lock and a fair lock is that the unfair lock tries to grab the lock with CAS first, regardless of whether threads are already queued.
- If the CAS succeeds, the lock has been acquired.
- If the CAS fails, acquire(1) is invoked to run the lock-contention logic.
```java
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}
```
CAS implementation principle
```java
protected final boolean compareAndSetState(int expect, int update) {
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}
```
This code performs a compare-and-swap in the style of CAS optimistic locking: if the current value of state in memory equals the expected value, it is replaced with update. The whole operation returns true on success and false otherwise. It is atomic, so there are no thread-safety issues, because it is delegated to the Unsafe class. As for the state property involved here: state is a field in AQS whose meaning differs between implementations; for the reentrant lock implementation it represents the synchronization state and has two meanings:
- When state is 0, no thread holds the lock.
- When state > 0, some thread has acquired the lock (state = 1 after the first acquisition). Because ReentrantLock allows reentrancy, if the same thread acquires the lock again the state keeps increasing; for example, after acquiring the lock five times, state is 5.

The Unsafe class

The Unsafe class lives in the sun.misc package and is not part of the Java standard. It is nevertheless the foundation of many Java libraries, including widely used high-performance frameworks such as Netty, Hadoop and Kafka. Unsafe is a backdoor that provides low-level operations such as direct memory access, thread suspension and resumption, CAS, thread synchronization, and memory barriers. compareAndSwapInt is one of the atomic operations Unsafe provides: the first argument is the object to modify, the second is the field offset (the stateOffset computed earlier), the third is the expected value, and the fourth is the new value.
The whole method replaces the field with the new value if the current value equals the expected one, returning true if the update succeeds and false otherwise.
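What the atomic operation does can be sketched in plain Java; this is only an illustration of the semantics (the real Unsafe.compareAndSwapInt is a single uninterruptible CPU instruction, not a synchronized block):

```java
// Illustrative only: the compare-and-swap semantics, expressed sequentially.
final class CasIllustration {
    private int state;

    synchronized boolean compareAndSwap(int expect, int update) {
        if (state == expect) {  // the value still matches what the caller last saw
            state = update;     // swap in the new value
            return true;
        }
        return false;           // another thread changed it first; the caller retries
    }
}
```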
stateOffset
A Java object can be thought of as a block of memory in which every field is laid out at a certain offset. This method reports the byte offset of a given field relative to the object's starting memory address; compareAndSwapInt later uses this offset to locate the field in memory.
So stateOffset is the memory offset of the state field relative to the start of the AQS object.
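For reference, AQS computes this offset in a static initializer roughly as follows (a simplified excerpt-style sketch of the code inside the AbstractQueuedSynchronizer class):

```java
// Inside AbstractQueuedSynchronizer (simplified sketch)
private static final Unsafe unsafe = Unsafe.getUnsafe();
private static final long stateOffset;

static {
    try {
        stateOffset = unsafe.objectFieldOffset(
                AbstractQueuedSynchronizer.class.getDeclaredField("state"));
    } catch (Exception ex) {
        throw new Error(ex);
    }
}
```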
compareAndSwapInt
In the unsafe.cpp file you can find the implementation of compareAndSwapInt:
```cpp
UNSAFE_ENTRY(jboolean, Unsafe_CompareAndSwapInt(JNIEnv *env, jobject unsafe, jobject obj, jlong offset, jint e, jint x))
  UnsafeWrapper("Unsafe_CompareAndSwapInt");
  oop p = JNIHandles::resolve(obj); // Resolve the Java object into an oop (ordinary object pointer) for the JVM
  jint* addr = (jint *) index_oop_from_field_offset_long(p, offset); // Locate the field address from object p and the offset
  return (jint)(Atomic::cmpxchg(x, addr, e)) == e; // Compare and swap: x is the new value, addr points to state in memory, e is the expected value
UNSAFE_END
```
AQS.acquire
acquire is a method in AQS. When the CAS operation fails (state is no longer 0), execution continues with acquire(1).
Question: What is the argument of 1 in acquire method used for?
The main logic of this method is:
- tryAcquire attempts to acquire the exclusive lock, returning true on success and false on failure.
- If tryAcquire fails, the current thread is wrapped in a Node by the addWaiter method and appended to the tail of the AQS queue.
- acquireQueued, taking that Node as an argument, tries to acquire the lock by spinning.
```java
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```
NonfairSync.tryAcquire
This method tries to acquire the lock, returning true on success and false on failure. It overrides tryAcquire in AQS, whose default implementation simply throws an exception. By conventional template-method thinking, a method the base class does not implement should be declared abstract so that subclasses are forced to implement it; think about why AQS does not do that.
```java
protected final boolean tryAcquire(int acquires) {
    return nonfairTryAcquire(acquires);
}
```
ReentrantLock.nonfairTryAcquire
- Get the current thread and check the current lock state.
- If state is 0, the lock is free; use CAS to update the state value.
- If the current thread already owns the lock (a reentrant acquisition), just increase the reentry count.
```java
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread(); // the currently executing thread
    int c = getState();                            // current value of state
    if (c == 0) {                                  // no thread holds the lock
        if (compareAndSetState(0, acquires)) {     // CAS the state; success means the lock was acquired
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) { // same thread re-entering: just bump the count
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
AQS.addWaiter
When tryAcquire fails to acquire the lock, addWaiter is called first to wrap the current thread in a Node. The mode parameter indicates the node's mode; Node.EXCLUSIVE is passed here, meaning the reentrant lock uses the exclusive-lock function of AQS.
- Wrap the current thread in a Node.
- Check whether the tail node of the queue is non-null; if so, try to append the new node to the AQS queue with a CAS operation.
- If the tail is null or the CAS fails, call enq to add the node to the AQS queue.
```java
private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode); // wrap the current thread in a Node
    Node pred = tail; // tail is the AQS field pointing at the end of the queue; null by default
    if (pred != null) { // the queue already contains nodes
        node.prev = pred; // point the new node's prev at the current tail
        if (compareAndSetTail(pred, node)) { // CAS the new node in as the tail
            pred.next = node; // once set, point the old tail's next at the new node
            return node;
        }
    }
    enq(node); // tail == null, or the CAS failed: enqueue via enq
    return node;
}

// enq adds the node to the queue by spinning
private Node enq(final Node node) {
    for (;;) {
        Node t = tail;
        if (t == null) { // must initialize the queue with an empty head node
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}
```
Graphic analysis
Assuming three threads are competing for the lock, the linked list inside AQS has the following structure once the addWaiter or enq method has finished.
AQS.acquireQueued
After addWaiter has added the thread to the linked list, the Node is passed to the acquireQueued method to compete for the lock:
- Get the prev node of the current node.
- If the prev node is head, the current node is eligible to contend for the lock, so call tryAcquire to try to grab it.
- If the lock is acquired, set the current node as the new head and remove the original (initialized) head node.
- If the lock is not acquired, decide based on waitStatus whether the thread needs to be suspended.
- Finally, cancelAcquire cancels the acquisition if an exception occurred.
NonfairSync.tryAcquire
This method was analyzed earlier: it checks state to determine whether the lock is held and then competes for it via CAS; success means the lock was acquired, failure means it was not.
shouldParkAfterFailedAcquire
If ThreadA has not released the lock, ThreadB and ThreadC are bound to fail when they contend for it, and on failure they call the shouldParkAfterFailedAcquire method. A Node has five states: CANCELLED (1), SIGNAL (-1), CONDITION (-2), PROPAGATE (-3) and the default state (0). CANCELLED means the thread waiting in the synchronization queue timed out or was interrupted; once a node enters this state it does not change again. SIGNAL means that as soon as the predecessor node releases the lock, the thread of the node marked SIGNAL is notified. CONDITION means the node is in a condition queue, waiting to be signalled. The main purpose of this method is to decide, based on the predecessor node's state, whether the thread that failed to acquire the lock (ThreadB, say) should be suspended.
- If the predecessor of ThreadB's node is in the SIGNAL state, the current thread can safely be suspended.
- Nodes in the CANCELLED state are removed by scanning backwards through the linked list.
- Otherwise, the predecessor node's status is changed to SIGNAL and false is returned.
Returning false means the thread does not need to be suspended yet; once true is returned, parkAndCheckInterrupt is called to suspend the current thread.
```java
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus; // waitStatus of the predecessor node
    if (ws == Node.SIGNAL) // the predecessor is already SIGNAL, so it will wake us up when it releases the lock
        return true;       // safe to park the current thread
    if (ws > 0) { // ws > 0 means the predecessor node has been cancelled
        do {
            node.prev = pred = pred.prev; // equivalent to: pred = pred.prev; node.prev = pred;
        } while (pred.waitStatus > 0);    // loop to skip over CANCELLED nodes in the doubly linked list
        pred.next = node;
    } else { // otherwise CAS the predecessor's status to SIGNAL (-1)
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
```
parkAndCheckInterrupt
LockSupport.park suspends the current thread into the WAITING state. Thread.interrupted() then reports whether the thread was interrupted by another thread while parked: if an interrupt was requested, it returns true and resets the interrupt flag, which counts as responding to the interrupt. A return value of true means selfInterrupt() will eventually be executed in the acquire method.
```java
private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);
    return Thread.interrupted();
}
```
selfInterrupt: if the thread was interrupted while inside acquireQueued, the interrupt needs to be re-issued here, because the thread does not respond to interrupt requests while it is competing for the lock in the acquireQueued method.
```java
static void selfInterrupt() {
    Thread.currentThread().interrupt();
}
```
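For reference, a minimal sketch (not from the original article) of the park/unpark primitives AQS relies on to suspend and wake threads:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker: parking");
            LockSupport.park();     // suspend until another thread unparks us (or we are interrupted)
            System.out.println("worker: resumed");
        });
        worker.start();

        Thread.sleep(1000);
        LockSupport.unpark(worker); // wake the parked thread
    }
}
```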
Lock release process
Suppose ThreadA now releases the lock. What happens during the release?
ReentrantLock.unlock
unlock calls the release method to release the lock:
```java
public final boolean release(int arg) {
    if (tryRelease(arg)) { // the lock was released successfully
        Node h = head;     // get the AQS head node
        if (h != null && h.waitStatus != 0) // if the head node is not null and its status is not 0, wake up the successor
            unparkSuccessor(h);
        return true;
    }
    return false;
}
```
ReentrantLock.tryRelease
This method can be thought of as updating the lock state: the passed-in value (1 here) is subtracted from state. If the resulting state is 0, the owner of the exclusive lock is set to null so that other threads get a chance to run.
With an exclusive lock, the owner thread is cleared only once the number of unlock() calls matches the number of lock() calls, and only in that case does the method return true.
```java
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

// unparkSuccessor
private void unparkSuccessor(Node node) {
    int ws = node.waitStatus; // status of the head node
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0); // reset the head node status to 0
    Node s = node.next; // the head node's successor
    if (s == null || s.waitStatus > 0) {
        // the successor is null or has been cancelled (status > 0),
        // so scan backwards from tail for the node closest to head with waitStatus <= 0
        s = null;
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null) // a successor exists, so wake up its thread
        LockSupport.unpark(s.thread);
}
```
Why scan from tail when releasing locks
Let's go back to the enq method. Adding a node happens in three steps:
- Point the new node's prev at tail.
- Set tail to the new node through CAS; this is thread-safe because CAS is an atomic operation.
- t.next = node: point the previous tail's next at the new node.
```java
private Node enq(final Node node) {
    for (;;) {
        Node t = tail;
        if (t == null) { // must initialize
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            node.prev = t;                       // step 1: prev link is set first
            if (compareAndSetTail(t, node)) {    // step 2: CAS the node in as the new tail
                t.next = node;                   // step 3: only now is the forward (next) link completed
                return t;
            }
        }
    }
}
```
Between the CAS on tail and the subsequent t.next = node assignment there is a window in which another thread calling unlock could traverse the queue from head using the next pointers; because t.next = node has not executed yet, the forward links are incomplete and the traversal would stop at t, missing the new node. The prev link, however, is set before the CAS, so traversing from the tail backwards never has this problem.
Graphic analysis
With the release of the lock, the original structure has undergone some changes. The head node’s waitStatus becomes 0, and ThreadB is woken up
The thread that was suspended continues execution
Through ReentrantLock.unlock, a suspended thread is woken up and resumes execution. The thread was suspended inside the acquireQueued method, so that is where it continues after being woken up.
AQS.acquireQueued
This method was examined in detail earlier; here we focus on the flow of execution once ThreadB is woken up. Since the prev node of ThreadB's node is the head node and ThreadA has already released the lock, the call to tryAcquire now succeeds.
- The ThreadB node is set as the new head.
- The next pointer of the original head node is set to null.
```java
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
```
Graphic analysis
- The new head node's prev pointer is set to null.
- The original head node's next pointer is set to null.
The difference between fair and unfair locks
The fairness of a lock refers to the order in which threads acquire it. With a fair lock, the order of acquisition matches the absolute chronological order in which the lock was requested, i.e. FIFO. In the unfair example analyzed above, any thread that managed to CAS the synchronization state acquired the lock, regardless of queue order. A fair lock behaves differently, and there are two differences.
FairSync.lock
```java
final void lock() {
    acquire(1);
}
```
An unfair lock barges in with a CAS before going through acquire, whereas a fair lock does not.
FairSync.tryAcquire
```java
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
```
This method differs from nonfairTryAcquire(int acquires) only in the extra hasQueuedPredecessors() check, which determines whether any node is queued ahead of the current thread. If it returns true, some thread requested the lock earlier than the current thread, so the current thread must wait until those predecessor threads have acquired and released the lock before it can acquire it.
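Whether FairSync or NonfairSync is used is decided by the ReentrantLock constructor; a small usage sketch (illustrative, not from the original article):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Default constructor: unfair lock (NonfairSync) - higher throughput, arrival order not guaranteed
    static ReentrantLock unfair = new ReentrantLock();

    // Passing true selects the fair lock (FairSync) - strict FIFO, usually lower throughput
    static ReentrantLock fair = new ReentrantLock(true);

    public static void main(String[] args) {
        System.out.println("fair? " + fair.isFair());   // true
        System.out.println("fair? " + unfair.isFair()); // false
    }
}
```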