Like it before you read on, and make it a habit

  • You have one idea and I have one idea; when we exchange them, each of us has two ideas

  • If you can NOT explain it simply, you do NOT understand it well enough

The demo code and technical articles are now organized together in a GitHub repository for easy reading and browsing; this article is included there as well. If you find it useful, please give it a Star



Before we begin

We have now entered the source-code stage, and the dozen or so concurrency-theory articles written earlier finally come in handy. Many of you have probably forgotten some of that theory; don't worry, I will link the relevant theory at the corresponding points of the source analysis to help you recall it and combine theory with practice. This is a long, illustrated article, so I suggest bookmarking it; if it helps you, please also give it a like so more people can see it


Why did the Java SDK design Lock

Once upon a time I imagined how nice it would be if Java concurrency control had only synchronized: just the following three ways to use it, simple and convenient

public class ThreeSync {

    private static final Object object = new Object();

    public synchronized void normalSyncMethod() {
        // critical section
    }

    public static synchronized void staticSyncMethod() {
        // critical section
    }

    public void syncBlockMethod() {
        synchronized (object) {
            // critical section
        }
    }
}

That was indeed the case before Java 1.5, but since version 1.5 Doug Lea has reinvented a wheel: Lock


We often say, "avoid reinventing the wheel." If a wheel already exists and someone still insists on building a new one, then surely the old wheel fails to solve the problem well in some scenarios


If you remember Coffman's four conditions under which deadlock can occur, the no-preemption condition is:

A resource already acquired by a thread cannot be forcibly taken away from it; the thread can only release it voluntarily once it is done with it

Breaking this condition requires that a thread which cannot acquire the further resources it needs be able to actively release the resources it already holds

Obviously, synchronized does not have this ability: with synchronized, a thread that fails to acquire the resource enters the blocked state, and there is nothing we can do to change that state. This is the fatal weakness of the synchronized wheel, and it is a strong case for reinventing the wheel as Lock

The explicit lock: Lock

The old wheel has weaknesses, so the new wheel needs to solve them: it must be able to avoid blocking forever. The following three features solve the problem nicely (see the list below for what each one means).

  • Responds to interruption: if a thread cannot release the lock it holds, being able to respond to an interrupt is the next best thing. As described in the article on the Java multithreaded interrupt mechanism, the whole point of an interrupt signal is to break out of a state such as blocking. API: lockInterruptibly()
  • Non-blocking acquisition: try to acquire the lock; if it is not available, return immediately instead of blocking. API: tryLock()
  • Timeout support: given a time limit, if the lock is not acquired within that time, return instead of entering the blocked state. API: tryLock(long time, TimeUnit unit)
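To make the three features concrete, here is a minimal usage sketch; the method names and the one-second timeout are purely illustrative:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeaturesDemo {
    private final Lock lock = new ReentrantLock();

    void interruptiblyStyle() throws InterruptedException {
        lock.lockInterruptibly();   // blocks, but can be woken up by an interrupt
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }

    void nonBlockingStyle() {
        if (lock.tryLock()) {       // returns immediately, true only if the lock was acquired
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        }
    }

    void timedStyle() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {  // wait at most 1 second
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        }
    }
}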

These are good solutions, but you cannot have your cake and eat it too: Lock does not have the built-in acquire/release syntax of synchronized, so it cannot be used everywhere with a single keyword the way synchronized can, and it is comparatively more complex to use


The Lock usage paradigm

synchronized has a standard way of being used, and in keeping with that fine tradition our Lock must have one too. I believe many of you already know the standard Lock paradigm:

Lock lock = new ReentrantLock();
lock.lock();
try {
    // ...
} finally {
    lock.unlock();
}

Since it is a paradigm, there must be a reason for it. Let’s take a look

Standard 1: release the lock in finally

Releasing a lock in finally ensures that the lock will eventually be released after it is acquired

Standard 2: acquire the lock outside the try{}

Have you ever wondered why standard 2 exists? We usually like to wrap everything in a try, for fear that an exception might escape uncaught

Acquiring a lock outside of a try{} has two main considerations:

  1. If acquiring the lock throws an exception before the lock is obtained, there is nothing to release; releasing it in finally would itself be a problem, because the current thread does not own the lock yet
  2. If the lock were acquired inside the try and the acquisition threw an exception (i.e. the current thread never actually got the lock), then when the finally block runs it could release a lock that another thread happens to hold at that moment, releasing it for no reason

Different lock implementations differ slightly, and the paradigm exists to avoid all of these problems, so try to follow it
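For contrast, here is a hypothetical sketch of the anti-pattern that standard 2 guards against:

Lock lock = new ReentrantLock();
try {
    lock.lock();          // if this line throws, the lock was never acquired...
    // critical section
} finally {
    lock.unlock();        // ...yet unlock() still runs here; depending on the Lock
                          // implementation this may throw IllegalMonitorStateException,
                          // or in the worst case release a lock held by another thread
}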


How does a Lock work?

If you are familiar with synchronized, you know that when a program is compiled into bytecode, the monitorenter and monitorexit instructions appear around the critical section; they can be understood as the signals for entering and leaving the critical section

From the perspective of paradigm:

  • lock.lock() acquires the lock, "equivalent" to synchronized's monitorenter instruction

  • lock.unlock() releases the lock, "equivalent" to synchronized's monitorexit instruction

So how does Lock do that?

Here is a quick note first, so that when we get to the source-code analysis later you can see the outline of the design from a distance and the implementation details up close, which makes everything easier


For example, ReentrantLock maintains a volatile variable named state internally, which is read and written via CAS. If the CAS update succeeds, the lock is acquired: the thread enters the try block and continues executing. If the update fails, the thread is suspended and does not proceed

But Lock is an interface, and there is no state variable in it:


So where does this state come from? Clearly a little design is needed here: the interface defines the behavior, and the implementation class provides the state

The implementation class of the Lock interface basically implements thread access control by aggregating a subclass of queue synchronizer

So what is a queue synchronizer? (This may be the best clickbait you have ever seen; it took half the article to get to the point.)

Queue synchronizer AQS

The queue synchronizer AbstractQueuedSynchronizer, hereinafter referred to as the synchronizer or AQS, is the protagonist of this article

Q: Why start analyzing JUC source code from AQS?

A: Look at the picture below


I’m sure you’ll get the idea if you look at this screenshot

  • ReentrantLock
  • ReentrantReadWriteLock
  • Semaphore
  • CountDownLatch
  • Fair locks
  • Non-fair locks
  • ThreadPoolExecutor (for an understanding of thread pools, see "Why use thread pools?")

All of these are directly related to AQS, so once you understand the abstract implementation of AQS and then glance at the implementation details of each of the classes above, everything falls into place quickly, and you will not get confused or lose the main thread when reading the source code

As mentioned above, synchronizers are aggregated in the lock implementation class, and then synchronizers implement the semantics of the lock.

Why use aggregate mode, and how to further understand the relationship between locks and synchronizers?


Most of us are lock users; for whoever implements a lock, the core goal is to make it easy to use


From its class name and modifiers, AQS is an abstract class, so from a design-pattern perspective the synchronizer must be designed around the template method pattern. Users of AQS inherit the synchronizer and implement a custom synchronizer by overriding the specified methods, aggregate that synchronizer inside their custom synchronization component, and call the synchronizer's template methods, which in turn call the methods the user has overridden

I do not want to leave the explanation that abstract. To understand the sentence above, we only need to answer the following two questions:

  1. What are the methods that a custom synchronizer can override?
  2. What are the template methods provided by the abstract synchronizer?

Methods the synchronizer allows you to override

The synchronizer provides only five overridable methods, which greatly simplifies things for lock implementers:


In theory, methods that must be overridden should also carry the abstract modifier; why not here? The reason is simple: I have color-coded the methods above into two categories:

  • exclusive
  • Shared

A custom synchronization component or lock cannot be both exclusive and shared, so to avoid forcing users to override irrelevant methods, the abstract modifier is left off; instead, an exception is thrown to tell us the method cannot be used directly:

    protected boolean tryAcquire(int arg) {
        throw new UnsupportedOperationException();
    }

A warm and considerate design (if you have a similar need, feel free to copy it)

The "state" in the method descriptions refers to the volatile state mentioned earlier, so when overriding the methods above we also need to read or modify the synchronization state through the following three methods provided by AQS:
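For reference, here is a tiny throwaway sketch that exercises those three accessors; the class name and the printed values are purely illustrative:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class StateDemo extends AbstractQueuedSynchronizer {

    public static void main(String[] args) {
        StateDemo sync = new StateDemo();

        System.out.println(sync.getState());                // 0: initial state
        System.out.println(sync.compareAndSetState(0, 1));  // true: CAS from 0 to 1 succeeds
        System.out.println(sync.compareAndSetState(0, 1));  // false: state is already 1
        sync.setState(0);                                    // plain volatile write back to 0
        System.out.println(sync.getState());                // 0 again
    }
}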


The difference between an exclusive and a shared operation on the state variable is simple


So ReentrantLock, ReentrantReadWriteLock, Semaphore and CountDownLatch differ from one another only slightly in how they rewrite these few methods; the rest is implemented with the synchronizer's template methods. Feeling relieved? Let's take a look at the template methods:

Template methods provided by the synchronizer

Besides the two categories of template methods above (exclusive and shared), there are only a few more that add interrupt response and timeout limits for Lock to use. Let's take a look:
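For reference, the main exclusive and shared template methods look like this (signatures as in the JDK; the comments are mine):

// Exclusive mode
public final void    acquire(int arg)                                                          // blocking
public final void    acquireInterruptibly(int arg) throws InterruptedException                 // blocking, responds to interrupts
public final boolean tryAcquireNanos(int arg, long nanosTimeout) throws InterruptedException   // interrupt response + timeout
public final boolean release(int arg)

// Shared mode
public final void    acquireShared(int arg)
public final void    acquireSharedInterruptibly(int arg) throws InterruptedException
public final boolean tryAcquireSharedNanos(int arg, long nanosTimeout) throws InterruptedException
public final boolean releaseShared(int arg)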


Don't worry about what each of these methods does yet; for now a general idea is enough. In addition, I'm sure you have noticed:

All of the above methods have the final keyword modifier, which means that subclasses cannot override the method

You might be a little confused by this, but let’s generalize a little bit:


Let's illustrate the relationship above with code (note the comments in the code; the code below is not rigorous, it only serves to illustrate the relationship described above):

package top.dayarch.myjuc;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

/**
 * Custom mutex
 *
 * @author tanrgyb
 * @date 2020/5/23 9:33 PM
 */
public class MyMutex implements Lock {

    // Static inner class: the custom synchronizer
    private static class MySync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // Call the method provided by AQS, which guarantees atomicity through CAS
            if (compareAndSetState(0, arg)) {
                // We are implementing a mutex, so record the thread that successfully acquired
                // the synchronization state; it is used to decide whether re-entry is allowed
                // (more on that later)
                setExclusiveOwnerThread(Thread.currentThread());
                // Succeeded in acquiring the synchronization state, return true
                return true;
            }
            // Failed to acquire the synchronization state, return false
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            // Releasing a lock that is not held throws an IllegalMonitorStateException
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            // The lock can be released: clear the exclusive-owner thread flag
            setExclusiveOwnerThread(null);
            // Set the synchronization state to 0 to release the lock
            setState(0);
            return true;
        }

        // Whether the lock is held exclusively
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }

        // Each condition has a corresponding condition wait queue, as described in the lock model
        Condition newCondition() {
            return new ConditionObject();
        }
    }

    // Aggregate the custom synchronizer
    private final MySync sync = new MySync();

    @Override
    public void lock() {
        // Blocking lock: call the synchronizer's exclusive template method to get the synchronization state
        sync.acquire(1);
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        // Call the synchronizer template method that responds to interrupts while acquiring the synchronization state
        sync.acquireInterruptibly(1);
    }

    @Override
    public boolean tryLock() {
        // Call the overridden method directly: a non-blocking attempt to get the synchronization state
        return sync.tryAcquire(1);
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        // Call the synchronizer template method that responds to interrupts and applies the timeout limit
        return sync.tryAcquireNanos(1, unit.toNanos(time));
    }

    @Override
    public void unlock() {
        // Release the lock
        sync.release(1);
    }

    @Override
    public Condition newCondition() {
        // Use the custom synchronizer's condition
        return sync.newCondition();
    }
}
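Before moving on, here is a minimal usage sketch showing that MyMutex drops straight into the standard Lock paradigm; the counter and thread count are purely illustrative:

package top.dayarch.myjuc;

public class MyMutexDemo {

    private static final MyMutex mutex = new MyMutex();
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                mutex.lock();
                try {
                    count++;              // critical section
                } finally {
                    mutex.unlock();       // always release in finally
                }
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(count);        // expected: 20000
    }
}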

If you open the IDE now, you will find that ReentrantLock, ReentrantReadWriteLock, Semaphore and CountDownLatch are all implemented with this structure. So let's look at how the AQS template methods actually implement a lock

AQS implementation analysis

From the code above you should understand that lock.tryLock() is non-blocking: it calls the tryAcquire() method overridden in the custom synchronizer, tries to set the state via CAS, and returns immediately whether it succeeds or not. So how does lock.lock() implement a blocking lock?

Blocking implies that threads have to queue up, and queuing naturally requires a queue

CLH: a Craig, Landin and Hagersten queue is a singly linked list; the queue used in AQS is a virtual doubly linked FIFO queue, a variant of CLH

Each queued individual in a queue is a Node, so let’s look at the structure of a Node

The Node Node

AQS maintains a synchronization queue internally to manage synchronization status.

  • When a thread fails to acquire the synchronization state, the current thread and its wait status are wrapped into a Node, the Node is added to the tail of the synchronization queue, and the thread is blocked
  • When the synchronization state is released, the thread in the "first node" of the synchronization queue is woken up to try to acquire the synchronization state again

To make sense of the above steps, we need to take a look at the Node structure (which is great if you can open the IDE)
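For reference, the core of Node looks roughly like this (a simplified excerpt of the JDK source; the comments are mine):

static final class Node {
    // Markers for whether the node is waiting in shared or exclusive mode
    static final Node SHARED = new Node();
    static final Node EXCLUSIVE = null;

    // waitStatus values
    static final int CANCELLED =  1;  // the waiting thread timed out or was interrupted
    static final int SIGNAL    = -1;  // the successor's thread needs to be unparked
    static final int CONDITION = -2;  // the node is waiting in a condition queue
    static final int PROPAGATE = -3;  // the next acquireShared should propagate

    volatile int waitStatus;   // one of the values above, or 0 (initial state)
    volatile Node prev;        // predecessor in the synchronization queue
    volatile Node next;        // successor in the synchronization queue
    volatile Thread thread;    // the thread that enqueued this node
    Node nextWaiter;           // next node in the condition wait queue (or the SHARED marker)
}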


At first glance it looks a bit messy, but let’s categorize it:


With the Node structure, you can imagine the connection structure of the synchronous queue:


With the basics of pre-knowledge behind us, let’s take a look at the entire process of obtaining synchronous state exclusively

Exclusively acquiring the synchronization state

The story begins with lock.lock() in the paradigm

public void lock() {
    // Blocking lock: call the synchronizer template method to get the synchronization state
    sync.acquire(1);
}

AQS Template method acquire()

public final void acquire(int arg) {
    // Call the tryAcquire method overridden by the custom synchronizer
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

First, a non-blocking attempt is made to acquire the synchronization state. If tryAcquire fails (returns false), the addWaiter method is called to construct an exclusive node (Node.EXCLUSIVE) and safely add it to the tail of the queue.

    private Node addWaiter(Node mode) {
        // Construct a Node containing the current thread information and the node mode
        Node node = new Node(Thread.currentThread(), mode);
        // Create a local variable pred pointing to the node that tail points to
        Node pred = tail;
        // If the tail node is not empty
        if (pred != null) {
            // The new node's prev points to the current tail node
            node.prev = pred;
            // This code may run concurrently in several threads that all failed to acquire the
            // synchronization state, so CAS is used to safely set the current node as the new tail
            if (compareAndSetTail(pred, node)) {
                // The former tail node's successor points to the current node
                pred.next = node;
                // Return the newly constructed node
                return node;
            }
        }
        // If the tail node is empty, the current node is the first node to be added to the
        // synchronization queue, and a full enqueue operation is required
        enq(node);
        return node;
    }

    private Node enq(final Node node) {
        // Make sure the node is enqueued correctly through an "infinite loop"; the method only
        // returns after the node has finally been set as the tail, for the same reason as above
        for (;;) {
            Node t = tail;
            // On the first iteration, if the tail node is null
            if (t == null) { // Must initialize
                // Build a sentinel node and point the head pointer at it
                if (compareAndSetHead(new Node()))
                    // The tail pointer also points to the sentinel node
                    tail = head;
            } else {
                // On the next iteration, point the new node's prev at t
                node.prev = t;
                // Add the new node to the tail of the queue via CAS
                if (compareAndSetTail(t, node)) {
                    // The former tail node's successor points to the new node, completing the doubly linked queue
                    t.next = node;
                    return t;
                }
            }
        }
    }

You may be confused by the way enq() works: entering the method means dropping into an "infinite loop". Let's use a picture to describe how it breaks out of the loop


Some of you might wonder, why is there a sentinel node?

A sentinel, as the name suggests, is posted to guard a border and does not take part in production directly. Likewise, the sentinel (dummy) node in computer science exists to handle boundary problems; without it the same algorithm might fail at the boundary, as you will see in the acquireQueued() method we parse next

    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            // "Infinite loop": try to get the lock, or park the thread
            for (;;) {
                // Get the predecessor of the current node
                final Node p = node.predecessor();
                // Only when the current node's predecessor is the head node does it try to acquire the lock
                // (now you should see the point of the sentinel node)
                if (p == head && tryAcquire(arg)) {
                    // Succeeded in acquiring the synchronization state: set this node as the head
                    setHead(node);
                    // Clear the old sentinel node's successor pointer to help GC
                    p.next = null; // help GC
                    failed = false;
                    // Return the interrupt flag
                    return interrupted;
                }
                // The current node's predecessor is not the head node,
                // or the predecessor is the head node but acquiring the synchronization state failed
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

Returning after successfully acquiring the synchronization state is understandable, but if it keeps failing, won't this "infinite loop" waste resources? Apparently not: shouldParkAfterFailedAcquire(p, node) and parkAndCheckInterrupt() suspend the thread that failed to acquire the synchronization state. Let's keep reading

    private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
        // Get the status of the predecessor node
        int ws = pred.waitStatus;
        // If the status is SIGNAL, i.e. it is waiting for the occupied resource to be released,
        // return true, ready to go on and call the parkAndCheckInterrupt method
        if (ws == Node.SIGNAL)
            return true;
        // ws > 0 means CANCELLED
        if (ws > 0) {
            // Skip predecessors that are also in the CANCELLED state and relink the queue
            do {
                node.prev = pred = pred.prev;
            } while (pred.waitStatus > 0);
            pred.next = node;
        } else {
            // Set the predecessor node's status to SIGNAL for the subsequent wake-up.
            // The first time through, this branch returns false; the outer loop then runs again
            // and, the second time, the method returns true from the SIGNAL branch above
            compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
        }
        return false;
    }

At this point you might have a question:

What is the purpose of setting the precursor node to SIGNAL state?

Save the question and we’ll find out

If the predecessor node's waitStatus is SIGNAL, shouldParkAfterFailedAcquire returns true, and the program goes on to execute the parkAndCheckInterrupt method, which suspends the current thread

    private final boolean parkAndCheckInterrupt() {
        // The thread is parked here and the program does not proceed
        LockSupport.park(this);
        // According to the park API, execution continues in one of three cases:
        // 1. The thread is unparked
        // 2. The thread is interrupted
        // 3. A spurious (otherwise illogical) return

        // Because execution has reached this point in one of the three cases above,
        // return the current thread's interrupt status and clear it;
        // this method returns true if the thread was interrupted
        return Thread.interrupted();
    }

The awakened thread continues the loop in the acquireQueued method and, once it succeeds in acquiring the synchronization state, returns with interrupted = true

The program continues up the call stack, eventually returning to the AQS template method Acquire

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

You may wonder:

The program has successfully obtained the synchronization state and returned, how can there be a self-interrupt?

static void selfInterrupt() {
    Thread.currentThread().interrupt();
}

If you don’t understand interrupts, it is highly recommended that you look back at the Java multithreaded interrupt mechanism

If you look closely at the finally block of acquireQueued, you may immediately be confused:

When will the code in if(failed) be executed?

if (failed)
    cancelAcquire(node);

This code only runs when failed is true. Normally, if the loop is exited, failed is false; and if the loop is never exited, it seems this point can never be reached at all

Looking at the try block, there are only two methods that throw exceptions:

  • The node.predecessor() method

  • The tryAcquire() method you override yourself

First look at the former:


Obviously the exception thrown there is not the point, so take the tryAcquire() overridden by ReentrantLock as an example


In addition, the shouldParkAfterFailedAcquire method analyzed above also checks for the CANCELLED status, so:

When will a node in the cancelled state be generated?

How does cancelAcquire set and handle the CANCELLED state?

    private void cancelAcquire(Node node) {
        // Ignore invalid nodes
        if (node == null)
            return;

        // Clear the associated thread information
        node.thread = null;

        // Skip predecessors that are also in the CANCELLED state
        Node pred = node.prev;
        while (pred.waitStatus > 0)
            node.prev = pred = pred.prev;

        // After breaking out of the loop above we have found a valid predecessor;
        // also grab that valid predecessor's successor
        Node predNext = pred.next;

        // Set the status of the current node to CANCELLED
        node.waitStatus = Node.CANCELLED;

        // If the current node is the tail, just remove itself from the queue
        if (node == tail && compareAndSetTail(node, pred)) {
            compareAndSetNext(pred, predNext, null);
        } else {
            int ws;
            // 1. The valid predecessor of the current node is not the head node,
            //    i.e. the current node is not the head node's successor
            if (pred != head &&
                // 2. Check whether the valid predecessor's status is SIGNAL
                ((ws = pred.waitStatus) == Node.SIGNAL ||
                 // 3. If not, try to set the predecessor's status to SIGNAL
                 (ws <= 0 && compareAndSetWaitStatus(pred, ws, Node.SIGNAL))) &&
                // 4. Check that the predecessor's thread information is not null
                pred.thread != null) {
                // All of the above conditions are met:
                // point the valid predecessor's next pointer to the current node's successor
                Node next = node.next;
                if (next != null && next.waitStatus <= 0)
                    compareAndSetNext(pred, predNext, next);
            } else {
                // The current node's valid predecessor is the head node, or the conditions above
                // are not met: wake up the current node's successor
                unparkSuccessor(node);
            }

            node.next = node; // help GC
        }
    }

The core purpose of this method is to remove CANCELLED nodes from the waiting queue and relink the whole queue. In summary, there are only three cases to handle when a node is set to CANCELLED, which can be analyzed by drawing diagrams:






At this point the process of acquiring the synchronization state is over. Let's use a simple flow chart to illustrate the whole process


That is the end of the lock-acquisition process; pause for a few minutes to collect your thoughts. Remember the question left above: what is the point of setting the predecessor node to the SIGNAL state? That involves releasing the lock, so let's continue. The overall idea is the same as acquiring the lock, but the release process is much simpler

Exclusively releasing the synchronization state

The story begins with the unlock() method

public void unlock() {
    // Release the lock
    sync.release(1);
}

unlock() calls the AQS template method release; let's step into it:

    public final boolean release(int arg) {
        // Call the tryRelease method overridden by the custom synchronizer to try to release the synchronization state
        if (tryRelease(arg)) {
            // Get the head node
            Node h = head;
            // The head node exists and its waitStatus is not the initial value
            // (as seen in the acquisition process, waitStatus has been updated from the initial
            //  value to the SIGNAL state)
            if (h != null && h.waitStatus != 0)
                // Wake up (unpark) the successor's thread
                unparkSuccessor(h);
            return true;
        }
        return false;
    }

Looking at unparkSuccessor, it really just wakes up the successor of the head node:

    private void unparkSuccessor(Node node) {
        // Get the head node's waitStatus
        int ws = node.waitStatus;
        if (ws < 0)
            // Clear the head node's waitStatus back to 0
            compareAndSetWaitStatus(node, ws, 0);

        // Get the successor of the head node
        Node s = node.next;
        // If the successor is null or in the CANCELLED state, skip it:
        // search forward from the tail for the first node whose waitStatus is <= 0
        if (s == null || s.waitStatus > 0) {
            s = null;
            for (Node t = tail; t != null && t != node; t = t.prev) // SIGNAL < 0
                if (t.waitStatus <= 0)
                    s = t;
        }
        if (s != null)
            // Unpark that node's thread
            LockSupport.unpark(s.thread);
    }

Some students may have questions:

Why does this place search from the tail of the queue towards the head for a non-cancelled node?

There are two reasons:

First, look back at the scenario where the node is enqueued:

    private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }

Enqueuing a node is not an atomic operation. The two statements below:

node.prev = pred;
compareAndSetTail(pred, node)

together can be regarded as the atomic part of enqueuing the new tail node. If pred.next = node; has not yet executed when unparkSuccessor happens to run, the new node cannot be found by traversing from head to tail, because the next pointers have not been linked yet; that is why the traversal has to go from tail to head.

The second reason: when a CANCELLED node is produced, as in the diagrams above, the next pointer is unlinked first while the prev pointer is not, so traversing backwards from the tail is also the only way to reach every node

The synchronization state has now been successfully released, and the parked thread that was waiting for it wakes up and resumes execution right after LockSupport.park(this) below:

    private final boolean parkAndCheckInterrupt() {
        LockSupport.park(this);
        return Thread.interrupted();
    }

Execution then returns up the call stack into acquireQueued, where the for loop runs again and the thread tries once more to acquire the synchronization state:

    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

At this point the process of exclusively acquiring and releasing the lock forms a closed loop, but two other AQS template methods have not been introduced yet:

  • In response to interrupt
  • Timeout limit

Exclusively acquiring the synchronization state with interrupt response

The story begins with the lock.lockInterruptibly() method

public void lockInterruptibly() throws InterruptedException {
    // Call the synchronizer template method that responds to interrupts while acquiring the synchronization state
    sync.acquireInterruptibly(1);
}

With the previous understanding, the interrupt-responsive exclusive acquisition is easy to follow:

    public final void acquireInterruptibly(int arg)
            throws InterruptedException {
        if (Thread.interrupted())
            throw new InterruptedException();
        // Make a non-blocking attempt to acquire the synchronization state;
        // if it fails, fall through to doAcquireInterruptibly
        if (!tryAcquire(arg))
            doAcquireInterruptibly(arg);
    }

Continue to look at the doAcquireInterruptibly method:

    private void doAcquireInterruptibly(int arg)
        throws InterruptedException {
        final Node node = addWaiter(Node.EXCLUSIVE);
        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    // After receiving the interrupt signal, throw InterruptedException
                    // instead of setting interrupted = true
                    throw new InterruptedException();
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

The InterruptedException is thrown from inside the loop and propagates back up the call stack for the caller to catch

To strike while the iron is hot, let’s look at another template method:

Exclusively acquiring the synchronization state with a timeout

Given a time limit, return true if the synchronization state is acquired within that time, otherwise return false. It is as if the thread sets itself an alarm clock: when the alarm goes off, the thread returns by itself, so it never blocks forever

Since a timeout is involved, the core logic is calculating the remaining time: within the timeout window there may be several attempts to acquire the lock, and each attempt costs some time, so the remaining-time calculation is as simple as the elapsed-time logging we write every day

nanosTimeout = deadline - System.nanoTime()

The story begins with the lock.tryLock(time, unit) method

public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
    // Call the synchronizer template method that responds to interrupts and applies the timeout limit
    return sync.tryAcquireNanos(1, unit.toNanos(time));
}

Take a look at the tryAcquireNanos method

    public final boolean tryAcquireNanos(int arg, long nanosTimeout)
            throws InterruptedException {
        if (Thread.interrupted())
            throw new InterruptedException();
        return tryAcquire(arg) ||
            doAcquireNanos(arg, nanosTimeout);
    }

The doAcquireNanos method is also declared with throws InterruptedException, just like the acquireInterruptibly path above. A method marked with throws InterruptedException can respond to interrupts, so you can think of the timeout-limited acquisition as an enhanced version of acquireInterruptibly, with both timeout control and a non-blocking guarantee

    private boolean doAcquireNanos(int arg, long nanosTimeout)
            throws InterruptedException {
        // A non-positive timeout fails immediately
        if (nanosTimeout <= 0L)
            return false;
        // Calculate the deadline
        final long deadline = System.nanoTime() + nanosTimeout;
        // Join the queue in exclusive mode
        final Node node = addWaiter(Node.EXCLUSIVE);
        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return true;
                }
                // Recalculate the remaining time
                nanosTimeout = deadline - System.nanoTime();
                // If the time is up, return false
                if (nanosTimeout <= 0L)
                    return false;
                if (shouldParkAfterFailedAcquire(p, node) &&
                    // Only park if the remaining time is greater than the spin threshold (1000 ns)
                    nanosTimeout > spinForTimeoutThreshold)
                    // Park the thread for at most nanosTimeout; it returns automatically when time is up
                    LockSupport.parkNanos(this, nanosTimeout);
                if (Thread.interrupted())
                    throw new InterruptedException();
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

The method above should not be too hard to follow, but the comparison against spinForTimeoutThreshold may puzzle you:

Why nanosTimeout is compared to the spin timeout threshold 1000?

    /**
     * The number of nanoseconds for which it is faster to spin
     * rather than to use timed park. A rough estimate suffices
     * to improve responsiveness with very short timeouts.
     */
    static final long spinForTimeoutThreshold = 1000L;

The doc comment makes it very clear: 1000 nanoseconds is extremely short, so there is no point in parking and waking the thread for it; it is better to let the current thread spin straight into the next loop iteration

Up to now, only the Condition part of MyMutex remains to be defined. Are you tired? I'm still holding on


Condition

If you have read the earlier article on the wait/notify mechanism in concurrent programming, you should remember the following diagram:


If you understand that model, the implementation of Condition will be no problem at all. Condition is again an interface, so it also needs an implementation class
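Before digging into the implementation, here is a minimal sketch of the standard await/signal usage pattern; the condition name and the ready flag are purely illustrative:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notReady = lock.newCondition();
    private boolean ready = false;

    void waitUntilReady() throws InterruptedException {
        lock.lock();
        try {
            // Always wait in a loop: this guards against spurious wakeups
            // and re-checks the predicate after the lock is reacquired
            while (!ready) {
                notReady.await();
            }
            // ... use the ready state ...
        } finally {
            lock.unlock();
        }
    }

    void markReady() {
        lock.lock();
        try {
            ready = true;
            notReady.signalAll();   // wake up all waiters on this condition
        } finally {
            lock.unlock();
        }
    }
}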


Let's start with lock.newCondition()

public Condition newCondition() {
    // Use the custom synchronizer's condition
    return sync.newCondition();
}

Custom synchronizer rewraps this method:

Condition newCondition() {
    return new ConditionObject();
}

ConditionObject is the implementation of Condition defined inside AQS, and it has only two member variables:

/** First node of condition queue. */
private transient Node firstWaiter;
/** Last node of condition queue. */
private transient Node lastWaiter;

So we only need to look at how the await/signal methods implemented in ConditionObject use these two member variables

        public final void await() throws InterruptedException {
            if (Thread.interrupted())
                throw new InterruptedException();
            // Build a Node and add it to the condition wait queue
            Node node = addConditionWaiter();
            // Fully release the synchronization state
            int savedState = fullyRelease(node);
            int interruptMode = 0;
            while (!isOnSyncQueue(node)) {
                // Park the current thread
                LockSupport.park(this);
                if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
                    break;
            }
            if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
                interruptMode = REINTERRUPT;
            if (node.nextWaiter != null) // clean up if cancelled
                unlinkCancelledWaiters();
            if (interruptMode != 0)
                reportInterruptAfterWait(interruptMode);
        }

addConditionWaiter here adds the node to the condition's own wait queue, not to the synchronization queue:

        private Node addConditionWaiter() {
            Node t = lastWaiter;

            // If the last waiter has been cancelled, clean it out first
            if (t != null && t.waitStatus != Node.CONDITION) {
                unlinkCancelledWaiters();
                t = lastWaiter;
            }
            // The newly built node's waitStatus is CONDITION, not 0 or SIGNAL
            Node node = new Node(Thread.currentThread(), Node.CONDITION);
            // Build a singly linked condition wait queue
            if (t == null)
                firstWaiter = node;
            else
                t.nextWaiter = node;
            lastWaiter = node;
            return node;
        }

Here are some friends who might be wondering:

Why is this a singly linked queue, and why is no CAS needed to make enqueuing safe?

Because await is called inside the try of the Lock paradigm, the calling thread must already hold the lock, so CAS is unnecessary; and the queue is singly linked because no lock contention is involved here, it is just a condition wait queue

Multiple conditions can be defined in Lock, and each condition corresponds to a conditional wait queue, so the illustration above will look something like this:


The thread has now been added to the condition wait queue; how does it get to try to acquire the lock again? This is where signal/signalAll come in

        public final void signal() {
            if (!isHeldExclusively())
                throw new IllegalMonitorStateException();
            Node first = firstWaiter;
            if (first != null)
                doSignal(first);
        }

The Signal method wakes up only the first node in the conditional wait queue by calling the doSignal method

        private void doSignal(Node first) {
            do {
                if ((firstWaiter = first.nextWaiter) == null)
                    lastWaiter = null;
                first.nextWaiter = null;
                // transferForSignal moves the thread's node from the condition wait queue
                // to the synchronization queue
            } while (!transferForSignal(first) &&
                     (first = firstWaiter) != null);
        }

Continue with the transferForSignal method

    final boolean transferForSignal(Node node) {
        if (!compareAndSetWaitStatus(node, Node.CONDITION, 0))
            return false;

        // Enqueue the node into the synchronization queue again
        Node p = enq(node);
        int ws = p.waitStatus;
        if (ws > 0 || !compareAndSetWaitStatus(p, ws, Node.SIGNAL))
            // Wake up the thread so it can retry in the synchronization queue
            LockSupport.unpark(node.thread);
        return true;
    }

Let's again use a diagram for the whole wake-up process


At this point, understanding signalAll is very simple: it just loops over the nextWaiter chain and, for each node, moves it from the condition wait queue to the synchronization queue exactly as signal does

        private void doSignalAll(Node first) {
            lastWaiter = firstWaiter = null;
            do {
                Node next = first.nextWaiter;
                first.nextWaiter = null;
                transferForSignal(first);
                first = next;
            } while (first != null);
        }

I don't know whether you remember another line I wrote in the article on the wait/notify mechanism in concurrent programming:

Unless you have a particular reason not to, use signalAll

When to use the signal method is also explained in the section, please check for yourself


One more detail worth mentioning: there is a time gap between being moved from the condition wait queue to the synchronization queue and actually reacquiring the lock, which is why await() should be used in the standard (loop) form, as also explained in that article


The above is the exclusive-lock implementation on top of AQS. Next, let's verify it in reverse with ReentrantLock, a concrete application of AQS, and look at its fair and non-fair modes

How does ReentrantLock apply AQS

A typical exclusive lock is ReentrantLock. How does it override the tryAcquire method?


At first glance it may seem strange that there are three custom synchronizers inside: Sync, NonfairSync and FairSync. In fact NonfairSync and FairSync are just further specializations of Sync:


As you probably know from the name, this is the fair/unfair lock you’ve heard of

What is a fair/unfair lock?

In real life, first come, first served is fair. Fairness in the program likewise means granting the lock in the absolute order in which it was requested, i.e. FIFO; anything else is considered unfair

How does ReentrantLock implement fair and unfair locking


It is actually not a big deal: the fair lock checks whether there is a predecessor node in the synchronization queue, and only acquires the lock if there is none; the non-fair lock does not care about this at all and simply grabs the synchronization state if it can. It is that simple. Then the question arises:
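To make the difference concrete, here are the two acquisition methods, slightly trimmed from the JDK 8 source; the only real difference is the hasQueuedPredecessors() check:

// FairSync: only try the CAS when no thread is queued in front of us
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&          // the fairness check
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // Check whether the current thread is the one already holding the lock (reentrancy)
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

// NonfairSync ends up in Sync.nonfairTryAcquire: the same logic,
// but without the hasQueuedPredecessors() check, so it barges straight in
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}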

Why is there a fair/unfair lock design?

To answer this we need to go back to the lock-acquisition flow chart above; I have actually already hinted at the answer there


There are two main reasons:

Reason 1:

There is a time gap between waking up a parked thread and that thread actually acquiring the lock. The gap looks tiny from a human perspective but is significant from the CPU's perspective, so a non-fair lock makes fuller use of the CPU time slices and minimizes CPU idle time

Reason 2:

Do you remember the interview question about how many threads to create, where thread overhead was repeatedly named as a key consideration in using multithreading? Think about it: with a non-fair lock, a thread requesting the lock does not need to check whether there are predecessor nodes in the queue, so the thread that has just released the lock has a very good chance of immediately reacquiring the synchronization state, which avoids suspending and waking other threads and therefore reduces thread overhead


Why does the default ReentrantLock constructor use an unfair lock synchronizer

public ReentrantLock() {
    sync = new NonfairSync();
}

Seeing this, you might feel that the non-fair lock is perfect. Not quite; you gain some, you lose some.

So what problems can a non-fair lock cause?

A fair lock guarantees queuing fairness, while a non-fair lock openly ignores that rule, so a thread may wait in the queue for a very long time without ever getting a chance to acquire the lock; this is the legendary "starvation".

How do you choose between a fair and a non-fair lock?

If you want higher throughput, the non-fair lock is clearly the better choice, because it saves a lot of thread-switching time and throughput goes up; otherwise, choose the fair lock
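In code, the choice is just a constructor flag:

Lock unfair = new ReentrantLock();       // default: non-fair, higher throughput
Lock fair   = new ReentrantLock(true);   // fair: FIFO ordering, avoids starvation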

We’re down to the last leg. We really have to hang in there

Reentrant lock

ReentrantLock, as its name says, is a re-entrant lock, which we have already mentioned several times above

Why support lock reentrant?

Think of a recursive call wrapped in synchronized: it would be a joke if the program blocked on itself the second time it entered the method, so synchronized supports lock reentrancy

Lock is the new wheel, so naturally it also supports this feature. Its implementation is very simple; look back at the fair/non-fair comparison code above, where there is this line:

// Check whether the current thread is the same as the thread that has occupied the lock
else if (current == getExclusiveOwnerThread())

If you look at that code carefully, you may notice that one of my earlier statements was not precise, so let me restate it:


Each reentrant acquisition increments state by 1, and each release decrements it by 1; the lock is fully released only when state drops back to 0
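The release side mirrors this. Here is ReentrantLock's tryRelease, slightly trimmed from the JDK 8 source:

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;          // decrement the hold count
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {                           // only fully released when the count hits 0
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}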

conclusion

This is a long article. It explained why the new wheel Lock had to be built, how to use Lock according to the standard paradigm, what AQS is and how it implements a lock, and, through ReentrantLock, how AQS is applied in practice along with some of ReentrantLock's own features

Semaphore, ReentrantReadWriteLock, CountDownLatch and the rest will be covered in later articles

Finally, everyone's comments are welcome; if there is any mistake, please point it out. My hands are sore and my eyes are dry, so I am off to prepare the next one.....


Soul-searching questions

  1. What is the difference between setState() and compareAndSetState(), and when is each of them safe to use?

  2. The following code is a transfer program. Does it have deadlock or any other problems when using the lock this way?

    
class Account {
    private int balance;
    private final Lock lock
            = new ReentrantLock();

    // transfer
    void transfer(Account tar, int amt) {
        while (true) {
            if (this.lock.tryLock()) {
                try {
                    if (tar.lock.tryLock()) {
                        try {
                            this.balance -= amt;
                            tar.balance += amt;
                        } finally {
                            tar.lock.unlock();
                        }
                    } // if
                } finally {
                    this.lock.unlock();
                }
            } // if
        } // while
    } // transfer
}

reference

  1. Java Concurrency in Practice
  2. The Art of Java Concurrency Programming
  3. https://tech.meituan.com/2019/12/05/aqs-theory-and-apply.html

Personal blog: https://dayarch.top

Add me on WeChat to join the group for learning and discussion; please note "join group"

Welcome to continue to pay attention to the public account: “One soldier of The Japanese Arch”

  • Practical, cutting-edge Java technology sharing
  • Reply "tool" for a summary of efficiency tools
  • Interview question analysis and solutions
  • Reply "data" to receive technical materials

Learn the Java technology stack with the easy, fun mindset of reading a detective novel, following the principles of simplifying complex problems, making abstract problems concrete, and illustrating principles with diagrams while breaking technical problems down step by step. The content is continuously updated, so please stay tuned......