preface

AQS (AbstractQueuedSynchronizer) is the basic framework for building the concurrency utility classes in the J.U.C (java.util.concurrent) package. It provides interruptible lock acquisition (waits that can be interrupted), timed lock acquisition, exclusive locks, shared locks and so on, and these basic capabilities have been extended into a family of other utility classes that can cover almost every locking need we meet in practice.

Before I read the AQS source code, I assumed it worked the same way as synchronized: both seem to control concurrent access through an object lock. In fact AQS is just an ordinary utility class, much like the utils classes we write in everyday development. Its implementation does not rely on special machine instructions or JVM memory-model rules the way the synchronized keyword does, nor on special handling at compile time; it is a plain class that achieves concurrency control. That is exactly what made me want to dig into its design ideas and implementation principles.

Why study the implementation of AQS? Because synchronized and the utility classes under the J.U.C package are what we reach for most often in concurrent programming, and most of the J.U.C utility classes are extended implementations built on AQS. To truly master and understand how those classes work, understanding the implementation of AQS is essential.

Why design AQS when we already have synchronized?

Why create an AQS framework at all, and what is the point of reinventing the wheel, when the JVM already provides keywords like synchronized and volatile to solve the classic concurrency problems and control thread execution order?

A framework or technology must exist to solve some problem; are functionality and performance reason enough to reinvent the wheel here? The AQS synchronization framework was created to cover usage scenarios that synchronized cannot satisfy. Let's look at the features AQS provides (a short usage sketch follows the list).

  1. Interruptible waiting. A synchronized wait cannot be interrupted: once a thread blocks on a monitor it stays blocked, because methods that do not throw InterruptedException are not interruptible. AQS-based locks can give up the wait when interrupted.
  2. Lock timeout. AQS supports timed acquisition: you specify a time limit, and if the lock is not obtained within it, the attempt simply returns instead of waiting forever.
  3. Non-blocking acquisition. A thread that fails to acquire the lock can return immediately instead of blocking, which gives it the chance to release any lock it already holds.
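
To make these three features concrete, here is a minimal sketch of my own (the class name AqsFeaturesDemo is only for illustration) using ReentrantLock, which is built on AQS:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AqsFeaturesDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // 1. Interruptible wait: a thread blocked here can be woken by interrupt().
        try {
            lock.lockInterruptibly();
            try { /* critical section */ } finally { lock.unlock(); }
        } catch (InterruptedException e) {
            // the wait itself was interrupted
        }

        // 2. Timed acquisition: give up if the lock is not obtained within 1 second.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try { /* critical section */ } finally { lock.unlock(); }
        }

        // 3. Non-blocking attempt: returns immediately instead of parking the thread.
        if (lock.tryLock()) {
            try { /* critical section */ } finally { lock.unlock(); }
        }
    }
}

A thread blocked on a synchronized monitor can do none of these: it can neither be interrupted out of the wait, nor time out, nor probe the lock without blocking.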

Besides covering everything synchronized can do, AQS adds the capabilities above, and it also serves as the base for extended lock implementations such as read-write locks (ReadWriteLock), Semaphore, CyclicBarrier and others, which greatly widens its usage scenarios and flexibility. Now let's look at the detailed implementation of AQS.

AQS design idea

Opening the AQS source we find that AQS is an abstract class, yet it contains no abstract methods. This is deliberate: AQS is designed to serve many different purposes and to be the foundation of many tool frameworks, and if it declared abstract methods every subclass would be forced to override all of them, which clearly runs against that intent. Instead, the AQS framework uses the template method design pattern: the methods it expects subclasses to override are declared as protected methods whose default implementation throws UnsupportedOperationException, and a subclass overrides only the ones it actually needs.

The underlying design of AQS is not particularly complex. It uses a state flag (the state variable) plus a FIFO queue to record lock acquisition, lock release, lock contention and the other lock operations. For AQS, the state variable can be regarded as the lock itself, and the queue is a first-in, first-out doubly linked list. The shared state variable represents the lock state, and CAS is used internally to modify state atomically, completing the lock state changes (acquiring and releasing the lock).

When a thread requests the lock, AQS checks the current value of state to decide whether the lock is held by another thread. If it is free, the requesting thread takes it. If it is occupied, the thread is blocked: it is wrapped in a Node, and the nodes are linked to each other to form a doubly linked list. When the thread holding the lock finishes, it releases the lock and wakes up a node in the queue (this is the fair-lock behaviour; more on that below), so blocking and waking of threads is driven through the queue. Now let's look at the implementation of AQS in concrete code.
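
As a rough sketch of this division of labor (my own simplified example, not JDK code: a non-reentrant mutex where state 0 means free and 1 means held), the subclass only overrides the try* hooks and manipulates state, while queuing, parking and waking are inherited from AQS:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant exclusive lock built on AQS.
public class SimpleMutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // One CAS on state decides who gets the lock; losers are queued by AQS.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0); // volatile write; AQS then unparks the queued successor
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

Real implementations such as ReentrantLock add reentrancy and fairness on top of exactly this skeleton.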

AQS core implementation

AQS Core elements

State

    private volatile int state;

    protected final int getState() {
        return state;
    }

    protected final void setState(int newState) {
        state = newState;
    }

    protected final boolean compareAndSetState(int expect, int update) {
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
    }


The setState(int newState) method is a plain write, used to initialize state or to update it when the current thread already holds the lock exclusively; compareAndSetState(int expect, int update) is used to modify state atomically at run time when other threads may be competing.

Why provide a separate compareAndSetState method to modify the state variable? Because assignment to a shared variable is not an atomic read-modify-write; it would otherwise need extra lock-based synchronization. We could use synchronized to guarantee atomicity, but synchronized blocks threads and causes thread context switches, which hurts performance. So a lock-free CAS operation is used here. CAS has its own drawback, though: on failure it spins and retries, which also burns CPU.
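
The spin cost is easiest to see in a plain CAS retry loop; here is a small sketch of mine using AtomicInteger, which follows the same pattern AQS applies to state:

import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinDemo {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: read, compute, then CAS; if another thread changed
    // the value in between, the CAS fails and we spin and try again.
    public int incrementAndGetManually() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next; // success: exactly one thread wins each round
            }
            // failure: loop again (this retry is the CPU cost mentioned above)
        }
    }
}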


Synchronous queue (FIFO)

AQS wraps the threads that failed to grab the lock into Node objects and queues them. Let's look at the structure of Node.

static final class Node {
    // Marker indicating a node waiting in shared mode
    static final Node SHARED = new Node();
    // Marker indicating a node waiting in exclusive mode
    static final Node EXCLUSIVE = null;

    // The thread represented by this node has cancelled its attempt to acquire
    static final int CANCELLED =  1;
    // The thread of this node's successor needs to be woken up
    static final int SIGNAL    = -1;
    // These two states are related to Condition
    static final int CONDITION = -2;
    static final int PROPAGATE = -3;

    // Takes one of the values above: 1, -1, -2, -3, or 0
    volatile int waitStatus;

    volatile Node prev;

    volatile Node next;

    // The waiting thread
    volatile Thread thread;

    Node nextWaiter;

    final boolean isShared() {
        return nextWaiter == SHARED;
    }

    final Node predecessor() throws NullPointerException {
        Node p = prev;
        if (p == null)
            throw new NullPointerException();
        else
            return p;
    }

    Node() {
    }

    // Used when a thread is queued
    Node(Thread thread, Node mode) {     // Used by addWaiter
        this.nextWaiter = mode;
        this.thread = thread;
    }

    // Used with Condition
    Node(Thread thread, int waitStatus) { // Used by Condition
        this.waitStatus = waitStatus;
        this.thread = thread;
    }
}


As the core of AQS, the synchronization queue implements the blocking and waking of threads. waitStatus represents the status of the thread a Node stands for. In exclusive mode we only need to care about the CANCELLED and SIGNAL states. The nextWaiter field is always null in exclusive mode and serves only as a marker. The following figure is based on exclusive locks.

(Figure: the CLH-style synchronization queue in exclusive mode.)

Exclusive locks & shared locks

AQS defines two resource sharing modes:

  • Exclusive: only one thread at a time can access the shared resource, e.g. ReentrantLock.
  • Share (shared mode): multiple threads can access the shared resource simultaneously, e.g. Semaphore/CountDownLatch.

The AQS design is based on the template method pattern: queue maintenance, handling of the Node objects in the queue, and what to do when a resource cannot be obtained are all implemented by AQS itself, while the actual logic for acquiring the resource is left to subclasses. It offers two modes of resource access, Exclusive and Share. Whichever mode a synchronizer needs, the subclass only has to override the hook methods AQS reserves for it and use the atomic operations AQS provides to modify the state variable, and the corresponding synchronization logic follows. Typically a subclass implements only one of the two modes, although some synchronizers implement both, such as ReentrantReadWriteLock.

A custom synchronizer is implemented mainly by overriding the following methods (a shared-mode sketch follows the list):

  • tryAcquire(int): exclusive mode. Tries to obtain the resource; returns true on success and false on failure.
  • tryRelease(int): exclusive mode. Tries to release the resource; returns true on success and false on failure.
  • tryAcquireShared(int): shared mode. Tries to obtain the resource. A negative value means failure; 0 means success but with no resources left; a positive value means success with resources remaining.
  • tryReleaseShared(int): shared mode. Tries to release the resource; returns true if waiting nodes may be woken after the release, false otherwise.
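
As a hedged illustration of the shared-mode hooks, here is a simplified sketch of my own, loosely modeled on how Semaphore behaves (not the JDK source); state is treated as the number of remaining permits:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A rough counting-permit synchronizer in shared mode.
public class SimplePermits {
    private static final class Sync extends AbstractQueuedSynchronizer {
        Sync(int permits) { setState(permits); }

        @Override
        protected int tryAcquireShared(int acquires) {
            for (;;) {
                int available = getState();
                int remaining = available - acquires;
                // Negative return = failure (the caller may queue); >= 0 = success.
                if (remaining < 0 || compareAndSetState(available, remaining)) {
                    return remaining;
                }
            }
        }

        @Override
        protected boolean tryReleaseShared(int releases) {
            for (;;) {
                int current = getState();
                if (compareAndSetState(current, current + releases)) {
                    return true; // allow AQS to wake queued waiters
                }
            }
        }
    }

    private final Sync sync;

    public SimplePermits(int permits) { this.sync = new Sync(permits); }

    public void acquire() { sync.acquireShared(1); }
    public void release() { sync.releaseShared(1); }
}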

Exclusive mode gives the same effect as synchronized: only one thread accesses the resource at a time. If state is 0, no thread holds the lock; if state is greater than 0, some thread holds it. The value can exceed 1 because the lock is reentrant: each re-entry adds 1, and the lock must be released the same number of times.
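
A small usage sketch of mine illustrating this reentrancy; getHoldCount() exposes the current nesting depth, which mirrors state for the owning thread:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                 // state: 0 -> 1
        try {
            inner();
            System.out.println(lock.getHoldCount()); // prints 1, back from inner
        } finally {
            lock.unlock();           // state: 1 -> 0, the lock is actually released
        }
    }

    private void inner() {
        lock.lock();                 // same thread re-enters, state: 1 -> 2
        try {
            System.out.println(lock.getHoldCount()); // prints 2
        } finally {
            lock.unlock();           // state: 2 -> 1, still held by this thread
        }
    }
}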

In shared mode the value of state represents a number of permits, though exactly what it means varies from one utility class to another. The short example below gives a feel for how a shared lock is used.

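A short CountDownLatch example (my own sketch): here state starts at the constructor's count of 3, countDown() decrements it, and await() returns only once it reaches 0.

import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        // state starts at 3; await() blocks until countDown() drives it to 0.
        CountDownLatch latch = new CountDownLatch(3);

        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " finished its part");
                latch.countDown();  // decrements the shared state
            }).start();
        }

        latch.await();              // released only when state reaches 0
        System.out.println("all workers done");
    }
}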

Exclusive mode: ReentrantLock example

We said above that AQS leaves the lock-acquisition logic to its subclasses, so let's look at concrete code to see how a subclass implements it, taking ReentrantLock as the example.

ReentrantLock comes in a fair and an unfair variant; the default is the unfair lock, as its constructors show. Starting with the exclusive lock, let's look at the structure of ReentrantLock.

public class ReentrantLock implements Lock, java.io.Serializable {

    private final Sync sync;

    // ReentrantLock's inner class
    abstract static class Sync extends AbstractQueuedSynchronizer {
        ...
    }

    // Unfair lock
    static final class NonfairSync extends Sync { ... }

    // Fair lock
    static final class FairSync extends Sync { ... }

    // Constructors
    public ReentrantLock() {
        sync = new NonfairSync();
    }

    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }

    // Acquire the lock
    public void lock() {
        sync.lock();
    }

    // Release the lock
    public void unlock() {
        sync.release(1);
    }
    ...
}


As you can see, both FairSync and NonfairSync inherit from Sync, which in turn inherits from AQS. When ReentrantLock acquires the lock it simply delegates to FairSync or NonfairSync. Let's take FairSync as the example and see how it works.

FairSync (Fair Lock)

static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;

    // Acquire the lock
    final void lock() {
        // Calls AQS's acquire() directly
        acquire(1);
    }

    //==== The following method comes from AQS; it is pasted here for easy reading ====
    /*
     * As the code shows, if tryAcquire(arg) returns true, acquire() exits
     * immediately and nothing else happens. So we can infer that in the
     * contended cases it returns false. Let's look at it one step at a time.
     */
    public final void acquire(int arg) {
        if (!tryAcquire(arg) &&
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
            selfInterrupt();
        }
    }
    //==========================================

    protected final boolean tryAcquire(int acquires) {
        ...
    }
}


tryAcquire

tryAcquire determines whether the lock is currently occupied:

  1. If the lock is not occupied, try to obtain it in a fair way
  2. If the lock is already occupied, check for reentrancy

It returns true on success and false on failure.

   /*
    * Recall the caller:
    * if (!tryAcquire(arg) &&
    *     acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
    *     selfInterrupt();
    * }
    * tryAcquire tries to obtain the lock and returns a boolean telling whether it succeeded.
    * true means either: 1. no thread was holding the lock and we grabbed it, or
    * 2. the lock is already held by the current thread (a reentrant acquisition).
    */
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        // If state == 0, no thread holds the lock.
        if (c == 0) {
            /*
             * No thread holds the lock yet, so check whether any thread is already
             * queued waiting for it. Because this is a fair lock, it is strictly
             * first come, first served. If no thread is queued, try to acquire;
             * on success, record the current thread as the holder of the lock.
             */
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                /*
                 * hasQueuedPredecessors() checks whether the queue is empty, or
                 * whether the node right after the head belongs to the current thread:
                 *
                 * public final boolean hasQueuedPredecessors() {
                 *     Node t = tail;
                 *     Node h = head;
                 *     Node s;
                 *     return h != t &&
                 *         ((s = h.next) == null || s.thread != Thread.currentThread());
                 * }
                 */
                // Record the current thread as the thread holding the lock
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            /*
             * The thread that already holds the lock is the current thread itself:
             * this is what we call the reentrant case.
             */
            int nextc = c + acquires;
            if (nextc < 0) {
                throw new Error("Maximum lock count exceeded");
            }
            setState(nextc);
            return true;
        }
        // If we did not return true above, the lock was not acquired.
        return false;
    }






addWaiter

acquireQueued(addWaiter(Node.EXCLUSIVE), arg) is reached only when tryAcquire() returned false, i.e. the lock grab failed. This step queues the thread that did not get the lock, so let's look at the addWaiter() method first.

  /*
   * Recall the caller:
   * if (!tryAcquire(arg) &&
   *     acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
   *     selfInterrupt();
   * }
   */
    private Node addWaiter(Node mode) {
       /* The Node constructors used here:
        *   Node(Thread thread, Node mode) {      // used by addWaiter
        *       this.nextWaiter = mode;
        *       this.thread = thread;
        *   }
        *   Node(Thread thread, int waitStatus) { // used by Condition
        *       this.waitStatus = waitStatus;
        *       this.thread = thread;
        *   }
        */
        Node node = new Node(Thread.currentThread(), mode);

        Node pred = tail;
        // tail != null means the queue is not empty; when the queue is empty both tail and head are null
        if (pred != null) {
            // Point the new node's predecessor at the old tail
            node.prev = pred;
            // Try to make the new node the tail
            if (compareAndSetTail(pred, node)) {
                // On success, point the old tail's successor at the new node and return it
                pred.next = node;
                return node;
            }
        }
        // We get here when: 1. the queue is empty, or
        // 2. the CAS failed (other threads are racing to join the queue).
        // In either case fall back to enq()
        enq(node);
        return node;
    }

 


This method essentially wraps the waiting thread as a Node; in exclusive mode every node's nextWaiter is null and serves only as a marker. It checks whether the queue is empty and, if not, tries to append the Node to the tail with a single CAS. If that fails, or the queue is empty, it falls back to the enq() method.

There are two possibilities when enq() is executed:

  1. The wait queue is still empty and has not been initialized.
  2. Another thread enqueued first while the current thread was enqueueing, so the tail node changed and the CAS failed.

enq

This method uses an infinite loop: the node spins trying to enqueue, and keeps retrying until it succeeds. It is also responsible for initializing the queue when the queue is empty, which shows that the queue is lazily initialized:

   /*
    * The code of enq(): it enqueues by spinning.
    * If the CAS does not succeed, it loops until it does.
    */
    private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) {
                /*
                 * tail == null means the queue is empty and has not been initialized.
                 * Use CAS to install a dummy head node, then make tail point to it as well.
                 */
                if (compareAndSetHead(new Node())) {
                    /*
                     * The head is set but tail is still null, so set tail too.
                     * Note that we have not returned yet, so we go around the loop again.
                     */
                    tail = head;
                }
            } else {
                // Same idea as in addWaiter(): append the node to the tail.
                // If it fails, keep retrying until it works.
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }


Notice that when the queue is empty, the initialization does not use the Node passed in as the head; it creates a brand-new Node. After initializing, it does not return either: it goes straight into the next loop iteration, where the queue is no longer empty and the passed-in Node is appended to the tail. This is why we said at the start that the FIFO queue has a dummy (empty) head node.

Also note that enq() has a return value: it returns the predecessor of node. The return value is not used here, but it is used elsewhere.

acquireQueued

Reaching this point means addWaiter(Node.EXCLUSIVE) has finished and the Node has been added to the queue.

Note: if acquireQueued(addWaiter(Node.EXCLUSIVE), arg) returns true, the code goes on to call selfInterrupt(), so in the normal (uninterrupted) case it should return false.

 /*
  * Recall the caller:
  * if (!tryAcquire(arg) &&
  *     acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
  *     selfInterrupt();
  * }
  */
    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                // The predecessor of the current node (node is what addWaiter returned).
                final Node p = node.predecessor();
                /*
                 * p == head means the current Node is already in the queue and is its
                 * first real node, because its predecessor is the dummy head. Nodes are
                 * normally appended at the tail; the node right behind the head is the
                 * one entitled to try to acquire the lock next.
                 */
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    /*
                     * private void setHead(Node node) {
                     *     head = node;
                     *     node.thread = null;
                     *     node.prev = null;
                     * }
                     */
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }

                // The current Node is not first in the CLH queue, or it failed to acquire the lock.
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt()) {
                    interrupted = true;
                }
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }


shouldParkAfterFailedAcquire

When we analyzed the structure of the FIFO queue we saw that each node carries a waitStatus, which (besides the default 0) can take four values:

static final int CANCELLED =  1;
static final int SIGNAL    = -1;
static final int CONDITION = -2;
static final int PROPAGATE = -3;


Only CANCELLED and SIGNAL matter for exclusive locks.

CANCELLED is easy to understand: it means the node has given up queueing, i.e. its attempt to grab the lock has been cancelled. SIGNAL does not describe the current node itself; it describes the node's obligation to its successor. When a node's waitStatus is set to SIGNAL, its next node is suspended (or about to be suspended), and when the SIGNAL node releases the lock or cancels, it has one extra job: wake up its successor.

/*
 * Recall the caller:
 * if (shouldParkAfterFailedAcquire(p, node) && parkAndCheckInterrupt()) { ... }
 */
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    // This is essentially a check that the predecessor is in a healthy state. We are
    // about to suspend the current node, and it can only be woken up again if its
    // predecessor is still queued normally; nodes in the wait queue may drop out,
    // so the predecessor's status has to be verified first.
    if (ws == Node.SIGNAL)
        // The predecessor is already SIGNAL, so return true; the caller then runs
        // parkAndCheckInterrupt() and suspends the thread. It is safe to park now
        // because the healthy predecessor has promised to wake us.
        return true;

    /*
     * A status greater than 0 means the predecessor has cancelled its attempt and
     * dropped out of the queue. Walk backwards to find a node that is still in a
     * normal state, skip the cancelled nodes, and hook this node directly behind
     * the first normally waiting node.
     */
    if (ws > 0) {
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        /*
         * The predecessor's status is neither SIGNAL nor CANCELLED.
         * Use CAS to set the predecessor's waitStatus to Node.SIGNAL.
         */
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}


Note that true is returned only when the state of the current node's predecessor is already SIGNAL; in every other case false is returned.

When false is returned, acquireQueued loops again, because the current node's predecessor has changed (it may even have become the head), and keeps looping until shouldParkAfterFailedAcquire returns true (the predecessor is SIGNAL); only then is parkAndCheckInterrupt called to suspend the current thread.

parkAndCheckInterrupt

This method is very simple: since the previous step returned true, the thread needs to be suspended, and this method does the suspending. At this point the lock-acquisition path has been fully analyzed.

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this); // The thread parks here and stops executing
    return Thread.interrupted();
}


NonfairSync (Unfair lock)

The unfair lock implementation is actually not very different from the fair one; let's look at the code.

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    // The difference from the fair lock: an unfair lock first tries to grab the lock
    // directly, regardless of whether threads are already waiting in the queue.
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    /*
     * Recall the caller:
     * if (!tryAcquire(arg) &&
     *     acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) { ... }
     */
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// This method comes from Sync
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Unlike the fair lock, there is no check here for threads already waiting in the queue.
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}


Fair and unfair locks differ only slightly. The first difference is that the unfair lock adds a bit of code to lock():

// Unfair lock
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}

// Fair lock
final void lock() {
    acquire(1);
}


When the unfair lock's lock() is called, it immediately tries to grab the lock; if the attempt succeeds it takes the lock on the spot. That is the first difference.

The second is inside tryAcquire(): the fair lock has one extra check, the hasQueuedPredecessors() call, which determines whether threads are already waiting in the blocking queue.

// Fair lock
if (c == 0) {
    if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(current);
        return true;
    }
}
/*
 * public final boolean hasQueuedPredecessors() {
 *     Node t = tail; // Read fields in reverse initialization order
 *     Node h = head;
 *     Node s;
 *     return h != t &&
 *         ((s = h.next) == null || s.thread != Thread.currentThread());
 * }
 */

// Unfair lock
if (c == 0) {
    if (compareAndSetState(0, acquires)) {
        setExclusiveOwnerThread(current);
        return true;
    }
}


A fair lock will determine if there are threads in the queue waiting to acquire the lock, and will only attempt to acquire the lock if the blocking queue is empty.

However, an unfair lock does not check to see if there is already a thread waiting in the blocking queue.

The implementation differences between fair and unfair locks are these, and the rest of the implementation logic is similar.

Releasing the lock (release)

We saw above that a thread that fails to grab the lock is suspended by LockSupport.park(this); releasing the lock is what wakes it up again.

public void unlock() {
    sync.release(1);
}

//==== The following method comes from AQS ====
public final boolean release(int arg) {
    // Try to release the lock
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}


tryRelease() is called to try to release the lock; if that succeeds, the head node's state decides whether a thread needs to be woken up. One point worth making: the h != null check is easy to understand, but why h.waitStatus != 0? Looking back at the lock-acquisition path, there is only one place where waitStatus is assigned for a waiting node's predecessor: shouldParkAfterFailedAcquire, which sets the predecessor's waitStatus to Node.SIGNAL. You can scroll back to confirm.

In addition, enq() initializes nodes with the default waitStatus of 0, and nothing else assigns to waitStatus on this path. So if the head's waitStatus is still 0, no thread behind the head is waiting to be woken, and there is nothing to wake up.

tryRelease

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // Check that the thread releasing the lock is the thread that holds it
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    // Check whether the lock is fully released; it may have been re-entered
    if (c == 0) {
        free = true;
        // Clear the owner-thread marker
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}


It’s too simple. There’s nothing to say.

unparkSuccessor



private void unparkSuccessor(Node node) {
    // The blocking queue is first-in, first-out, so threads are woken in order.
    // If the head's waitStatus < 0, reset it to 0 before waking the successor.
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);

    Node s = node.next;
    // Normally the direct successor is woken, but it may have cancelled (waitStatus == 1).
    // In that case walk backwards from the tail and take the first node with waitStatus <= 0.
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    // Wake up the thread
    if (s != null)
        LockSupport.unpark(s.thread);
}

// After being woken, the released thread resumes from the line below where it parked:
private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this); // The thread was suspended here
    return Thread.interrupted();
}

// It then returns into the for (;;) loop of acquireQueued(final Node node, int arg)
// and tries to acquire the lock again.

That basically completes the analysis of how ReentrantLock acquires and releases the lock. Some finer details were skipped; following along with the code should fill them in.

conclusion

Using ReentrantLock's exclusive lock as the example, this article analyzed AQS and covered the following points.

  1. AQS uses the state field to represent the lock. In ReentrantLock, state = 1 means the lock is held, state = 0 means it has been released, and state > 1 means reentrant acquisition; the exclusiveOwnerThread field records the thread that owns the lock.
  2. addWaiter wraps the current thread, which is waiting for the lock, into a Node and guarantees it is appended to the tail of the queue; that guarantee comes from the enq method it calls, which is also responsible for initializing the queue when it is empty.
  3. Once the Node is enqueued, acquireQueued keeps trying to acquire the lock (when the Node's predecessor is the head) or suspends the thread.
  4. shouldParkAfterFailedAcquire makes sure the waitStatus of the current node's predecessor is SIGNAL before the thread parks, so that the predecessor will wake it up at the right time.
  5. parkAndCheckInterrupt suspends the current thread and, on waking, reports its interrupt status.
  6. If the lock is finally acquired, the thread returns from lock() and continues execution; otherwise it stays blocked, waiting.

Afterword.

The plan was to publish this within a week, but an urgent requirement landed in the middle of it: overtime on Saturday, and on Sunday a long meal and chat with Chessy. Last week meant more overtime catching up on that requirement, plus testing and the release at the end of the week, so the writing happened in fits and starts and only got finished yesterday. The past two weeks have left me drained and not in great shape, so I will spend the next few days recovering. The conversations with Chessy gave me a lot to think about, and I plan to write a separate article on continuous learning and personal growth so we can exchange ideas then.

It’s not easy to code so many words, so just give it a thumbs-up.

reference

https://javadoop.com/2017/06/16/AbstractQueuedSynchronizer/