In the last section, while discussing ReentrantLock, we came across several methods implemented in AQS. For example, the core logic of AQS lives in acquire; the hasQueuedPredecessors method judges whether there is a predecessor in the queue; the compareAndSetState method tries to take the lock; and the setExclusiveOwnerThread method records the owner once the lock is acquired. This time we will walk through the implementation details of these methods to understand the internal structure of AQS.

Lock state storage

In the introduction to the synchronized keyword, we saw that the synchronized lock state ultimately lives in the object header: the current lock state, the thread that acquired the lock, and so on. Since AQS implements the logic of a lock on its own, this basic lock information cannot be missing either. In AQS, the information is stored directly in private internal fields, and only two methods are exposed for setting it: the compareAndSetState and setExclusiveOwnerThread we mentioned earlier.

State

    /** The synchronization state. */
    private volatile int state;

First of all, this is the most important field in AQS. It is an int that indicates the current synchronization state of the lock, and its value falls into three cases (a quick demonstration follows the list):

  • 0: idle state, not held by any thread
  • 1: a thread has acquired the lock and is holding it
  • Greater than 1: the lock has been re-entered; the value is the number of reentrant acquisitions
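
To see these state values in action, here is a small demonstration using ReentrantLock, whose synchronizer state is used exactly this way; the class name StateDemo and the comments are my own illustration, not part of the source. getHoldCount() reports how many times the owning thread has (re)acquired the lock:

import java.util.concurrent.locks.ReentrantLock;

public class StateDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(lock.getHoldCount()); // 0: idle, nobody holds the lock

        lock.lock();                             // first acquire: state goes to 1
        lock.lock();                             // reentrant acquire: state goes to 2
        System.out.println(lock.getHoldCount()); // 2: the number of reentrant holds

        lock.unlock();
        lock.unlock();                           // back to 0, the lock is free again
    }
}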

compareAndSetState(), as its name suggests, is an atomic operation, and it is how AQS takes the lock. The CAS operation guarantees atomicity and the volatile keyword guarantees visibility, so the lock state is promptly visible to every competing thread. CAS returns true when the lock is taken successfully and false on failure. The actual CAS is delegated to a native method and is invisible at the Java level.

protected final boolean compareAndSetState(int expect, int update) {
        // See below for intrinsics setup to support this
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}

exclusiveOwnerThread

public abstract class AbstractOwnableSynchronizer
    implements java.io.Serializable {

    protected AbstractOwnableSynchronizer() { }

    private transient Thread exclusiveOwnerThread;

    protected final void setExclusiveOwnerThread(Thread thread) {
        exclusiveOwnerThread = thread;
    }

    protected final Thread getExclusiveOwnerThread() {
        return exclusiveOwnerThread;
    }
}

compareAndSetState lets us take the lock, but how do we know who took it? That is the job of exclusiveOwnerThread. The field lives in AbstractOwnableSynchronizer, the superclass of AQS, and it is the only field of that class: its sole purpose is to record the current thread once the lock is acquired. When a thread takes the lock through the CAS operation, it calls setExclusiveOwnerThread() to record itself as the owner.
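
To see how these two pieces work together, here is a minimal sketch of a non-reentrant exclusive lock built on AQS, in the spirit of the example in the AQS Javadoc; the class name SimpleMutex and its structure are my own illustration, not JDK code:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant exclusive lock: state 0 = free, 1 = held.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            if (compareAndSetState(0, 1)) {                      // CAS the state from free to held
                setExclusiveOwnerThread(Thread.currentThread()); // record the owner
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            if (getExclusiveOwnerThread() != Thread.currentThread())
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);                       // clear the owner first
            setState(0);                                         // then publish the free state
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

The pattern is exactly the one described above: CAS the state to claim the lock, then record the owner; on release, clear the owner first and only then publish the free state.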

The thread queue

Getting the lock is the easy part. The genuinely complex part of AQS is managing the threads that fail to get it: keeping them calm and stopping them from occupying resources pointlessly. We know that taking the lock is a CAS, but if dozens of threads try to take it at the same time, only one of them can succeed. What happens to the others? We do not want them to fail outright, and we do not want them to waste resources on meaningless spins and CAS attempts, so the thread queue is where AQS lets them take a break. Let's look at the basic queue information in AQS.

  private transient volatile Node head;

  private transient volatile Node tail;

Queue node

We keep talking about a queue, yet AQS has no queue field at all, just a head and a tail. That is because the thread storage structure of AQS is a doubly linked queue built from the internal Node class, and AQS itself only records the head and tail of that queue. How the head and tail are used will come later; for now, let's take a brief look at the Node class.

static final class Node {
        static final Node SHARED = new Node();
        static final Node EXCLUSIVE = null;
        static final int CANCELLED =  1;
        static final int SIGNAL    = -1;
        static final int CONDITION = -2;
        static final int PROPAGATE = -3;
        volatile int waitStatus;
        volatile Node prev;
        volatile Node next;
        volatile Thread thread;
        Node nextWaiter;

        final boolean isShared() {
            return nextWaiter == SHARED;
        }

        final Node predecessor() throws NullPointerException {
            Node p = prev;
            if (p == null)
                throw new NullPointerException();
            else
                return p;
        }

        Node() {    // Used to establish initial head or SHARED marker
        }

        Node(Thread thread, Node mode) {     // Used by addWaiter
            this.nextWaiter = mode;
            this.thread = thread;
        }

        Node(Thread thread, int waitStatus) { // Used by Condition
            this.waitStatus = waitStatus;
            this.thread = thread;
        }
}

The Node design does more than build the doubly linked queue for one kind of lock; it also covers the other lock extensions AQS provides, including exclusive locks, shared locks, and Condition. The ReentrantLock introduced in the last section is an exclusive lock, and that is the core of these two articles. The shared lock is what the ReentrantReadWriteLock read-write lock builds on AQS: it splits the lock into two states, and in the read state threads can read concurrently. Condition is a feature of ReentrantLock that, like wait and notify, is used to coordinate threads, and it is likewise implemented on top of AQS. We will set shared locks and Condition aside and focus on the exclusive lock; a quick Condition usage sketch follows purely for reference.
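
Since Condition was just compared to wait and notify, here is a tiny usage sketch; it only uses the public ReentrantLock and Condition APIs, and the class and field names (ConditionDemo, notEmpty, ready) are my own illustration:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private boolean ready = false;

    public void awaitReady() throws InterruptedException {
        lock.lock();
        try {
            while (!ready)         // same check-in-a-loop pattern as with wait()
                notEmpty.await();  // releases the lock and parks, like wait()
        } finally {
            lock.unlock();
        }
    }

    public void markReady() {
        lock.lock();
        try {
            ready = true;
            notEmpty.signal();     // wakes one waiter, like notify()
        } finally {
            lock.unlock();
        }
    }
}

Back to the exclusive lock: the part of Node we actually need is quite small.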

 volatile Node prev;
 volatile Node next;
 volatile Thread thread;
 volatile int waitStatus;

 Node(Thread thread, Node mode) {     // Used by addWaiter
     this.nextWaiter = mode;
     this.thread = thread;
 }

Yes, that's it. A doubly linked queue of threads needs, first of all, the thread in each node, and being doubly linked means each Node has to know its prev and next. There is also the wait status of each node, which controls whether the thread should sleep or be woken up; that is all we need to focus on for now.

Queue operation methods

For this doubly linked queue, AQS provides quite a few methods, but in the end there are only two real queue operations: a CAS insert at the head and a CAS insert at the tail. Why does inserting into the queue need CAS? Because if a race occurs and multiple threads insert at the head or tail at the same time, the update to the queue must still be atomic.

    /** CAS head field. Used only by enq. */
    private final boolean compareAndSetHead(Node update) {
        return unsafe.compareAndSwapObject(this, headOffset, null, update);
    }

    /** CAS tail field. Used only by enq. */
    private final boolean compareAndSetTail(Node expect, Node update) {
        return unsafe.compareAndSwapObject(this, tailOffset, expect, update);
    }

hasQueuedPredecessors()

On top of the two enqueue methods above, one more method has to be mentioned: the hasQueuedPredecessors() method from the beginning of this article. Yes, we finally meet it here; the fair lock used it earlier to find out whether it could try for the lock right away. Let's take a look. The method is short, but its internal logic is rather convoluted.

  public final boolean hasQueuedPredecessors() {
        Node t = tail; // Read fields in reverse initialization order
        Node h = head;
        Node s;
        return h != t &&
            ((s = h.next) == null || s.thread != Thread.currentThread());
  }

Before analyzing it, keep the conclusion in mind: the method returns false when there is no predecessor and true when there is one. The analysis goes as follows:

  1. Only when the head node is not equal to the tail node do we continue; otherwise the queue is either empty or has only one node (and a single-node queue can only hold the empty head node, as explained below), so return false. That is why the fair lock call puts a ! in front to negate the result: false means nobody is ahead and you may try for the lock (see the sketch after this list).
  2. If the head node is not equal to the tail node but the head's next node is null, return true. (As noted above, a single-node queue means head equals tail; since head does not equal tail here, there must be more than one node. That null next cannot be me, because I have not enqueued yet and am only checking for predecessors, so some other thread must be in the middle of enqueuing and I will have to line up behind it.) If next is not null, move on to the last condition.
  3. If the head node is not equal to the tail node and the head's next node is myself, return false (I am already the node right behind the head, for example a queued thread that woke up and is retrying the acquire, so nobody is ahead of me); if the next node is not me, return true.
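
To connect this back to the fair lock, here is a trimmed-down sketch in the spirit of ReentrantLock.FairSync (overflow checks and the release side are left out, so treat it as an illustration rather than the real source): when the state is 0, the thread only attempts the CAS if hasQueuedPredecessors() reports that nobody is queued ahead of it.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A trimmed-down fair sync, modeled on ReentrantLock.FairSync.
class FairSyncSketch extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // Fairness: only CAS the state if nobody is queued ahead of us.
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            setState(c + acquires); // reentrant acquire: just bump the count
            return true;
        }
        return false;
    }
}

The unfair version simply drops the hasQueuedPredecessors() check and goes straight for the CAS.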

Acquire (core)

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

This is the core of the AQS lock logic, and it can be summarized in two parts. The first part calls tryAcquire() to take the lock; if that succeeds, the leading ! makes the whole condition false and the method returns right away. Otherwise we move on to the second part: joining the queue.

Trying for the lock

There is not much more to say about tryAcquire, the lock-taking method of the first part. The fair and unfair implementations from the last section are both overrides of this same method; if it is not overridden, AQS by default simply throws an exception. The concrete implementations were explained in the last section.

   protected boolean tryAcquire(int arg) {
        throw new UnsupportedOperationException();
   }

Queueing up to sleep

The second part is the core enqueue operation, and a thread that enqueues does more than just append a node. I split the process into three steps: joining the queue, one more try at the lock, and going to sleep. Let's look at them one by one.

Joining the queue

First of all, now that we have reached the enqueue step, it no longer matters whether there was still some chance of grabbing the lock before sleeping; let's just get in line. An important part of the enqueue logic is the design of the AQS doubly linked queue. If every lock attempt before enqueuing went smoothly, the threads were effectively executing one after another, and in that case the queue inside AQS stays empty and never needs to be initialized. Once a thread does enqueue, however, it means at least two threads are competing for the lock at the same time. Under that competition, AQS uses an empty head node to stand in for the thread that currently holds the lock, and the waiting threads start from the second node onward. This is why I said earlier that if there is only one node in the queue, it must be the empty node. Let's look at how a thread joins the queue.

private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }

The logic for joining the wait queue is fairly clear. In short, if the queue already has nodes, the new node is appended directly at the tail. The remaining case is an empty queue, where head and tail are both null; that is handled by enq.

 private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) { // Must initialize
                if (compareAndSetHead(new Node()))
                    tail = head;
            } else {
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }

This enqueue code confirms what I said above: the queue is only initialized once multiple threads are actually competing. new Node() adds an empty node as the head, and the thread that failed to get the lock is then appended at the tail.

So what is the point of always keeping the head node empty? Let's move on.

One more check before sleeping

Once you are in the queue, there are two things to do. First, if there is no one in front of you after enqueuing, it should be your turn, so just try to take the lock again. Otherwise, the second thing is to go to sleep. But is it really that simple?

final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

Getting the lock after all

As I have mentioned several times, once a thread acquires the lock it becomes the head node, and the defining feature of the head node is that its thread is null. So what keeps the head node empty, and how do nodes actually leave the queue? So far we have only seen operations on threads, with no code showing how a thread's node is finally dequeued. These two lines from the code above are what do it.

 setHead(node);
 p.next = null; // help GC
private void setHead(Node node) {
        head = node;
        node.thread = null;
        node.prev = null;
}

The first step calls setHead to make the current node the new head and sets its thread and prev to null, and then the next line sets next to null on the old head. That's it? Emmm, yes, that's it.

When a thread's node becomes the head node, its thread and prev are already null; when the next node in turn becomes the new head, the old head's next is set to null as well. Nothing references the old head any more, so it drops out of the queue without any explicit dequeue operation and can simply be garbage collected. The design is very clever.

No hope after all

If you find there really is no hope of getting the lock, go to sleep. But what has to happen before sleeping? This is another clever aspect of the AQS design. Before parking, the node goes through two steps in the spin: first shouldParkAfterFailedAcquire makes sure the node in front has the proper wait status, and only then does parkAndCheckInterrupt actually put the thread to sleep. Why do it this way?

Make sure the predecessor is asleep

  private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
        int ws = pred.waitStatus;
        if (ws == Node.SIGNAL)
            // The predecessor's status is already SIGNAL, so it is safe to park.
            return true;
        if (ws > 0) {
            // The predecessor was cancelled: skip over cancelled nodes and retry.
            do {
                node.prev = pred = pred.prev;
            } while (pred.waitStatus > 0);
            pred.next = node;
        } else {
            // Set the predecessor's status to SIGNAL; the caller loops and parks on a later pass.
            compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
        }
        return false;
    }

Why make sure the predecessor is properly asleep first? My understanding is this: if a thread set its own waitStatus to the sleeping state before actually parking, and then something happened and it never parked, that status would be false, and a false state is completely beyond our control. So AQS is designed the other way around: a thread simply goes to sleep, and it is the successor that confirms, on the predecessor's behalf, that it is asleep.

Now I can rest too

Finally, it’s time for me to rest

  private final boolean parkAndCheckInterrupt() {
        LockSupport.park(this);
        return Thread.interrupted();
    }

AQS ultimately blocks the thread with LockSupport.park. Why not wait and notify? Because wait and notify must be used together with synchronized, which is exactly what AQS is built to avoid. There are other ways to block a thread, but LockSupport is clearly the most suitable one here.
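
As a quick illustration of the primitive AQS parks on, here is a small standalone demo (my own example, not AQS code): park blocks the current thread, and an unpark from another thread releases it, with no monitor required.

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            System.out.println("parking...");
            LockSupport.park();                 // block here until unparked
            System.out.println("unparked, done");
        });
        waiter.start();

        Thread.sleep(500);                      // give the waiter time to park
        LockSupport.unpark(waiter);             // wake it up from another thread
        waiter.join();
    }
}

Note that park may also return spuriously or because of an interrupt, which is why AQS always calls it inside a loop and rechecks the queue state afterwards.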

Conclusion

This article is split into two parts, and by tracing the lock implementation of ReentrantLock through both of them we finally have a fairly complete picture of AQS. Both parts only follow the main line of taking and waiting for the lock; the real implementation of AQS is far more complicated than what I have described, and topics such as releasing the lock and waking up sleeping threads are not covered here. If you are interested in the ideas behind AQS, think about what has to happen when the lock is released and then read the source for yourself. Understanding the AQS implementation raises our fundamentals and makes us more comfortable when using locks.

This article is purely my own understanding. If there are mistakes, please point them out so we can improve together. Thanks!