This article is a reading note for chapter 3 of the Art of Concurrent Programming in Java

The Lock interface

Lock vs synchronized

Since Java SE 5, the Lock interface (and its implementation classes) in the java.util.concurrent.locks package provides locking with synchronization functionality similar to that of the synchronized keyword. However, they differ in the following ways:

Synchronized: the synchronized keyword acquires and releases locks implicitly. The kinds of locks used (biased, lightweight, heavyweight) and their mechanics are all implemented by the underlying JVM.

Lock interface: implementations of the Lock interface require lock acquisition and release to be explicit. It is up to the developer (including the JDK source code itself) to decide how the lock is acquired and released. In exchange, it offers control that the synchronized keyword does not: flexible acquisition and release, non-blocking acquisition, interruptible acquisition, and timed acquisition.

The Lock interface API

Lock is an interface that defines the basic operations of Lock acquisition and release. The Lock API is as follows:

  1. void lock(): blocking lock acquisition. If the lock cannot be acquired, the current thread is descheduled and must wait until the lock is acquired.
  2. void lockInterruptibly() throws InterruptedException: interruptible blocking lock acquisition. The difference from lock() is that the current thread can be interrupted while waiting to acquire the lock.
  3. boolean tryLock(): tries to acquire the lock without blocking. The method returns immediately: true if the lock could be acquired, false otherwise.
  4. boolean tryLock(long time, TimeUnit unit) throws InterruptedException: timed blocking lock acquisition. The current thread returns in three cases:
    • the current thread acquired the lock within the timeout, returning true
    • the current thread was interrupted within the timeout, throwing an exception
    • the timeout expired, returning false
  5. void unlock(): releases the lock. In implementations of the Lock interface, usually only the thread that owns the lock may release it.
  6. Condition newCondition(): returns a Condition instance bound to the current lock.
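A typical usage pattern for any Lock implementation (a minimal sketch; the class and field names here are illustrative) is to acquire the lock before the try block and always release it in finally:

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockUsage {
        private final Lock lock = new ReentrantLock();
        private int counter;

        public void increment() {
            lock.lock();       // acquire outside the try block so that a failed
            try {              // acquisition does not trigger an unmatched unlock()
                counter++;     // critical section
            } finally {
                lock.unlock(); // always release, even if the critical section throws
            }
        }
    }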

Implementations of the Lock interface such as ReentrantLock basically implement thread access control by aggregating a subclass of queue synchronizer (AQS).

AbstractQueuedSynchronizer (AQS)

The abstract class AbstractQueuedSynchronizer provides a basic framework for implementing blocking locks and related synchronization components (such as Semaphore) that rely on a FIFO wait queue. Its implementation relies heavily on a single atomic int variable, state, to represent the synchronization state. Subclasses must override several protected methods of AQS that change the synchronization state and that acquire and release locks based on that state. The other methods of AQS mainly implement the queuing and blocking mechanism. Subclasses of AQS may maintain other state values, but the state that manages synchronization can only be updated atomically through getState(), setState(), and compareAndSetState().

Subclasses that inherit from AQS are recommended to be defined as static inner classes of the blocking lock or synchronization component they implement. AQS itself does not implement any synchronization interface. Instead, it simply defines methods for acquiring and releasing the synchronization state, which blocking locks and synchronization components use to implement their own public methods.

AQS supports exclusive acquisition of the synchronization state, shared acquisition, or both. By default, the synchronization state is acquired exclusively.

  1. Exclusive mode means that once one thread has possession, no other thread can have possession.
  2. Shared mode supports multi-threaded ownership.

AQS itself does not care about the difference between these modes; waiting threads of both modes share the same FIFO queue. In general, a subclass of AQS supports only exclusive mode or only shared mode, but there are exceptions, such as ReentrantReadWriteLock, which supports both. A subclass that supports only one mode does not need to define the methods of the other mode.

AQS is the key to implementing the Lock interface and other synchronization components. AQS is aggregated inside a Lock implementation and used to realize the lock's semantics. The relationship between the two can be understood as follows:

  1. The Lock interface is consumer-oriented. It defines the interface for the user to interact with the Lock (for example, allowing two threads to access it in parallel), hiding implementation details.
  2. AQS is for the implementor of the lock. It simplifies lock implementation by shielding low-level operations such as synchronization state management, thread queuing, waiting, and wakeup. Lock and AQS nicely separate the concerns of the lock's users from those of its implementors.

AQS interface and instance

AQS interface

The synchronizer's design is based on the template method pattern: the user inherits AQS and overrides the specified methods, aggregates the AQS subclass inside the custom synchronization component, and calls the template methods provided by AQS, which in turn call the user-overridden methods.

When overriding a method specified by AQS, you need to use the following three atomic operations provided by AQS to access or modify the synchronization state:

  1. getState(): gets the current synchronization state.
  2. setState(int newState): sets the current synchronization state.
  3. compareAndSetState(int expect, int update): sets the current state using CAS, which guarantees that the update is atomic.
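For example, a hypothetical helper in an AQS subclass (not part of the AQS API; shown only to illustrate the getState()/compareAndSetState() retry idiom) could atomically add a delta to the state like this:

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    // Illustrative subclass; not a usable synchronizer on its own.
    class StateHolder extends AbstractQueuedSynchronizer {
        // Atomically add delta to the synchronization state.
        void addToState(int delta) {
            for (;;) {
                int current = getState();
                int next = current + delta;
                if (compareAndSetState(current, next)) {
                    return; // CAS succeeded: the state was updated atomically
                }
                // CAS failed: another thread changed the state concurrently; retry
            }
        }
    }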

The methods that must be overridden when inheriting AQS are as follows:

  1. protected boolean tryAcquire(int arg): exclusively acquires the synchronization state. The implementation needs to query the current state, determine whether it matches expectations, and then set the synchronization state via CAS.
  2. protected boolean tryRelease(int arg): exclusively releases the synchronization state. Threads waiting for the synchronization state then get a chance to acquire it.
  3. protected int tryAcquireShared(int arg): acquires the synchronization state in shared mode. A return value greater than or equal to 0 indicates success; otherwise the acquisition failed.
  4. protected boolean tryReleaseShared(int arg): releases the synchronization state in shared mode.
  5. protected boolean isHeldExclusively(): whether the synchronizer is held exclusively by the current thread.

When inheriting AQS to implement a custom synchronization component, the template methods provided by AQS are called. These template methods are shown below. Note that these methods are final, which means they cannot be overridden.

  1. public final void acquire(int arg): exclusively acquires the synchronization state. If the current thread acquires the synchronization state successfully, the method returns; otherwise the thread enters the synchronization queue and waits. The method calls the overridden tryAcquire(int arg).
  2. public final void acquireInterruptibly(int arg): same as acquire(int arg), but responds to interruption: if the current thread is interrupted while waiting in the synchronization queue without having acquired the synchronization state, the method throws InterruptedException and returns.
  3. public final boolean tryAcquireNanos(int arg, long nanosTimeout) throws InterruptedException: adds a timeout limit on top of acquireInterruptibly(int arg); returns false if the current thread does not acquire the synchronization state within the timeout, and true if it does.
  4. public final boolean release(int arg): exclusively releases the synchronization state. After releasing it, this method wakes up the thread contained in the first node of the synchronization queue.
  5. public final void acquireShared(int arg): acquires the synchronization state in shared mode. If the current thread does not acquire the synchronization state, it enters the synchronization queue and waits. The main difference from exclusive acquisition is that more than one thread can acquire the synchronization state at the same time.
  6. public final void acquireSharedInterruptibly(int arg): same as acquireShared(int arg), but responds to interruption.
  7. public final boolean tryAcquireSharedNanos(int arg, long nanosTimeout) throws InterruptedException: adds a timeout limit on top of acquireSharedInterruptibly(int arg).
  8. public final boolean releaseShared(int arg): releases the synchronization state in shared mode.
  9. public final Collection<Thread> getQueuedThreads(): gets the collection of threads waiting in the synchronization queue.

The template methods provided by synchronizers are basically divided into three categories: exclusive access and release synchronization state, shared access and release synchronization state, and query the status of waiting threads in the synchronization queue. Custom synchronization components implement their synchronization semantics using the template methods provided by the synchronizer.

A simple inherited AQS exclusive lock Mutex

An exclusive lock means that only one thread can hold the lock at a time; other threads trying to acquire it can only wait in the synchronization queue, and they can acquire the lock only after the holding thread releases it, as shown in the following code:

class Mutex implements Lock {
    private final Sync sync = new Sync();

    public void lock() {
        sync.acquire(1);
    }

    public boolean tryLock() {
        return sync.tryAcquire(1);
    }

    public void unlock() {
        sync.release(1);
    }

    public Condition newCondition() {
        return sync.newCondition();
    }

    public boolean isLocked() {
        return sync.isHeldExclusively();
    }

    public boolean hasQueuedThreads() {
        return sync.hasQueuedThreads();
    }

    public void lockInterruptibly() throws InterruptedException {
        sync.acquireInterruptibly(1);
    }

    public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(timeout));
    }

    // Static inner class: the custom synchronizer
    private static class Sync extends AbstractQueuedSynchronizer {
        // Whether the lock is currently held
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }

        // Acquire the lock when the state is 0
        public boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        // Release the lock and set the state back to 0
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        // Return a Condition; each Condition owns a condition (wait) queue
        Condition newCondition() {
            return new ConditionObject();
        }
    }
}

It can be found:

  1. Exclusive lock Mutex’s internal class Sync only implements methods related to AQS exclusive mode, because it only uses exclusive mode.
  2. Exclusive Lock Mutex implements all methods of the Lock interface.

The following demo shows the exclusive lock in use. Its behaviour is as expected: it provides the same mutual exclusion as the synchronized keyword.

    /**
     * Expected output:
     * Running Thread-1
     * Exit Thread-1
     * Running Thread-2
     * Exit Thread-2
     */
    @Test
    public void testCase01() throws InterruptedException {
        Mutex mutex = new Mutex();

        Runnable r = () -> {
            mutex.lock();
            System.out.println("Running " + Thread.currentThread().getName());

            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (Exception e) {
                e.printStackTrace();
            }
            mutex.unlock();
            System.out.println("Exit " + Thread.currentThread().getName());
        };

        Thread thread1 = new Thread(r, "Thread-1");
        Thread thread2 = new Thread(r, "Thread-2");
        thread1.start();
        thread2.start();

        thread1.join();
        thread2.join();
    }

I had a question while writing this example: what is the thread state of Thread-2 while it is blocked and cannot acquire the lock? The main thread is WAITING (inside join()), and Thread-1 is TIMED_WAITING (sleeping). But what is Thread-2's state? For a reference on thread states, see the article on an in-depth understanding of threads in Java concurrent programming.

If synchronized were used instead of the self-implemented Mutex, Thread-2 would clearly be in the BLOCKED state.

To answer this question, I set the sleep time to 666666 and dumped the thread states with jstack. It is clear that Thread-2 is in the WAITING state.

"Thread-2" #15 prio=5 os_prio=0 tid=0x00000000299cd000 nid=0x9498 waiting on condition [0x000000002b81e000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00000007177a70f0> (a concurrent.aqs.Mutex$Sync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
	at concurrent.aqs.Mutex.lock(Mutex.java:17)
	at concurrent.aqs.MutexTest.lambda$testCase01$0(MutexTest.java:17)
	at concurrent.aqs.MutexTest$$Lambda$1/385337537.run(Unknown Source)
	at java.lang.Thread.run(Thread.java:748)

AQS implementation analysis

Next, we analyze how AQS implements thread synchronization from the implementation perspective, covering its core data structure and template methods: the synchronization queue, exclusive acquisition and release of the synchronization state, shared acquisition and release, and timed acquisition.

Synchronous queue

AQS relies on an internal synchronization queue (a FIFO bidirectional queue) to manage the synchronization state:

  1. When the current thread fails to obtain the synchronization state, the synchronizer constructs the current thread and the wait state into a Node and adds it to the synchronization queue, while blocking the current thread.
  2. When the synchronization state of the current node is released, threads on subsequent nodes wake up and try to get the synchronization state again.

The synchronization queue is implemented as a doubly linked list of Node objects. The fields of the Node data structure are as follows:

volatile int waitStatus: the wait status of the node. Its possible values are:

  1. CANCELLED, value 1: the thread wrapped by the node has been cancelled. Because the thread waiting in the synchronization queue timed out or was interrupted, the node needs to cancel its wait and leave the queue; a node that enters this state never changes state again.
  2. SIGNAL, value -1: the thread of this node's successor is waiting, so when the current node's thread releases the synchronization state or is cancelled, it must notify the successor node so that the successor's thread can run.
  3. CONDITION, value -2: the node is in a condition wait queue, meaning the thread wrapped by the node is waiting on a Condition. When another thread calls signal() on the Condition, the node is transferred from the wait queue to the tail of the synchronization queue, where it can try to acquire the synchronization state.
  4. PROPAGATE, value -3: the next shared acquisition of the synchronization state should be propagated unconditionally to subsequent nodes. This status is only set on the head node of an AQS in shared mode.

These numeric values are arranged to simplify use: a non-negative value means the node does not need to signal. So, most of the time, code does not need to check for a particular value, only for the sign.

volatile Node prev

The predecessor node, set when the node is appended to the tail of the synchronization queue.

volatile Node next

The successor node.

Node nextWaiter

The next node in a Condition wait queue:

  • If the current node is in exclusive mode, this field points to the next node in the wait queue.

  • If the current node is in shared mode, this field is the SHARED constant.

volatile Thread thread

The thread wrapped by this node.
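A simplified sketch of the Node structure (the field and constant names follow the JDK 8 source; methods and most constructors are omitted):

    // Simplified view of AbstractQueuedSynchronizer.Node (JDK 8 layout).
    static final class Node {
        static final Node SHARED = new Node();  // marker: waiting in shared mode
        static final Node EXCLUSIVE = null;     // marker: waiting in exclusive mode

        static final int CANCELLED =  1;        // thread cancelled (timeout/interrupt)
        static final int SIGNAL    = -1;        // successor's thread needs unparking
        static final int CONDITION = -2;        // node is on a condition queue
        static final int PROPAGATE = -3;        // shared release should propagate

        volatile int waitStatus;                // one of the values above, or 0
        volatile Node prev;                     // predecessor in the sync queue
        volatile Node next;                     // successor in the sync queue
        volatile Thread thread;                 // the thread waiting on this node
        Node nextWaiter;                        // next node on a condition queue, or SHARED

        Node() {}
    }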

The Node class is the basis of both the AQS synchronization queue and the condition wait queue. The basic structure of the synchronization queue is as follows:

  1. AQS holds references to the head and tail of the synchronization queue (head and tail are member variables of AQS).

  2. When one thread succeeds in acquiring the synchronization state (the lock), other threads cannot acquire it; they are wrapped into Node objects and appended to the tail of the synchronization queue.

  3. Enqueuing must be thread-safe (multiple threads that failed to acquire the synchronization state may need to join the queue at the same time). AQS therefore provides a CAS-based method for setting the tail node: compareAndSetTail(Node expect, Node update), which takes the tail node the current thread "thinks" it sees and the current thread's own node. Only after the CAS succeeds is the current node formally linked to the previous tail node. This process is illustrated below.

  4. The synchronization queue follows FIFO. The head node is the node that successfully acquired the synchronization state. When the head node's thread releases the synchronization state, it wakes up the successor node, and the successor node sets itself as the new head once it successfully acquires the synchronization state, as shown in the figure below. The head node is set by the thread that has just acquired the synchronization state; since only one thread can do so at a time, setting the head node does not need CAS. It only needs to point head at the successor of the original head node and disconnect the original head's next reference.

Exclusive synchronization state acquisition and release

Exclusive synchronization state acquisition

The exclusive synchronization state can be acquired by calling the synchronizer's acquire(int arg) method. This method is not interrupt-sensitive: after a thread fails to acquire the synchronization state and enters the synchronization queue, a later interrupt will not remove it from the queue. The code of this method is as follows:

    public final void acquire(int arg) {
        if (!tryAcquire(arg) &&
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }

The above code mainly completes the synchronization state acquisition, node construction, joining the synchronization queue and spin waiting in the synchronization queue. The main logic is as follows:

  1. Call the tryAcquire(int arg) method implemented by the custom synchronizer, which acquires the synchronization state in a thread-safe way.

  2. If the acquisition fails, construct an exclusive node (Node.EXCLUSIVE; only one thread at a time can acquire the synchronization state in this mode) and append it to the tail of the synchronization queue with the addWaiter(Node node) method.

  3. Finally, call acquireQueued(final Node node, int arg), which makes the node try to acquire the synchronization state in an "infinite loop". If it cannot, the node's thread is blocked; the blocked thread is woken up mainly when its predecessor node dequeues (releases the synchronization state) or when the blocked thread is interrupted.

Let's analyze the related methods one by one.

    private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }
    
    private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) { // Must initialize
                if (compareAndSetHead(new Node()))
                    tail = head;
            } else {
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }
    
    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

  1. The compareAndSetTail(Node expect, Node update) method ensures that a node is appended to the queue in a thread-safe way. CAS is required because multiple threads may fail in tryAcquire(int arg) at the same time and all of them need to be added to the tail of the queue.
  2. In the enq(final Node node) method, the synchronizer guarantees correct insertion through an "infinite loop": the current thread can only return from the method after its node has been set as the tail by CAS; otherwise it keeps retrying. In effect, enq(final Node node) serializes concurrent enqueue requests through CAS.
  3. Once a node has entered the synchronization queue, it enters a spin process: each node (i.e. each thread) checks its own condition, and only exits the spin when the condition is met and the synchronization state is acquired; otherwise it stays in the spin (with the node's thread blocked). This part corresponds to the acquireQueued() method.

In the acquireQueued(final Node node, int arg) method, the current thread tries to acquire the synchronization state in an "infinite loop", but only a node whose predecessor is the head node may actually attempt the acquisition. Why is this?

  1. The head node is the node that has successfully obtained the synchronization state. After the thread of the head node releases the synchronization state, it will wake up its successor nodes. After waking up, the thread of the successor node needs to check whether its precursor node is the head node.
  2. Maintain the FIFO principle for synchronization queues. In this method, the behavior of node spin obtaining synchronization state is shown in the figure below.

In the figure above, a thread whose node is not the head node returns from the wait state because its predecessor has dequeued or because it was interrupted; it then checks whether its predecessor is now the head node and, if so, tries to acquire the synchronization state. You can see that during this spin, nodes basically do not communicate with each other; each simply checks whether its predecessor is the head node. This keeps the release order in line with FIFO and also makes it easy to handle premature notification (a premature notification means that a thread whose predecessor is not the head node is woken up, for example by an interrupt).

The exclusive acquisition process, i.e. the call flow of the acquire(int arg) method, is shown in the figure below:

In the figure above, the spin process of acquiring the synchronization state consists of checking whether the predecessor is the head node and whether the synchronization state can be acquired, and otherwise entering the wait state. When the synchronization state is acquired successfully, the current thread returns from acquire(int arg); for a concurrent component such as a lock, this means the current thread has acquired the lock.

The exclusive synchronization state is released

After the current thread has acquired the synchronization state and executed the logic, it needs to release the synchronization state so that subsequent nodes can continue to acquire the synchronization state. The synchronization state can be released by calling the release(int arg) method of the synchronizer, which wakes up subsequent nodes after releasing the synchronization state (thus causing them to try again to get the synchronization state). The release code looks like this:

    public final boolean release(int arg) {
        if (tryRelease(arg)) {
            Node h = head;
            if (h != null && h.waitStatus != 0)
                unparkSuccessor(h);
            return true;
        }
        return false;
    }

When executed, this method wakes up the thread of the head node's successor; the unparkSuccessor(Node node) method uses LockSupport to wake up the waiting thread.
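LockSupport is the primitive that AQS uses to block and wake threads. A minimal standalone sketch of park/unpark (not AQS source code; the class name is illustrative):

    import java.util.concurrent.locks.LockSupport;

    public class ParkDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                System.out.println("parking...");
                LockSupport.park();         // blocks until a permit becomes available
                System.out.println("unparked");
            });
            waiter.start();

            Thread.sleep(1000);             // give the waiter time to park
            LockSupport.unpark(waiter);     // grant the permit and wake the thread
            waiter.join();
        }
    }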

A high-level summary of AQS exclusive mode
  1. AQS maintains a synchronization queue when obtaining the synchronization status, and any thread that fails to obtain the status is added to the tail of the queue and spins in the queue.
  2. The condition for moving out of the queue (or stopping the spin) is that the precursor node is the head node and the current synchronization state has been successfully obtained, after which it sets itself as the head node.
  3. When releasing the synchronization state, the synchronizer calls the tryRelease(int arg) method to release the synchronization state and then wakes up the successor nodes of the head node.

Shared synchronization state acquisition and release

Shared synchronization status acquisition

The main difference between shared and exclusive acquisition is whether more than one thread can simultaneously acquire the synchronization state. Take reading and writing files as an example

  1. If a program is reading a file, writes to the file are blocked at that moment, while other reads can proceed at the same time.
  2. Write operations require exclusive access to the resource, while read operations can share access. How the two access modes affect access to the file at the same moment is shown in the following figure (a code sketch follows the figure notes):

Above:

  1. In the left half, shared access is allowed, but exclusive access is blocked.

  2. In the right half, access to the resource is exclusive, and all other access at that moment is blocked.
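As a concrete illustration of the read/write example, here is a minimal sketch using the JDK's ReentrantReadWriteLock, which is itself built on the shared and exclusive modes of AQS (the class name is illustrative):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class CachedValue {
        private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        private int value;

        public int read() {
            rwLock.readLock().lock();       // shared mode: concurrent readers are allowed
            try {
                return value;
            } finally {
                rwLock.readLock().unlock();
            }
        }

        public void write(int newValue) {
            rwLock.writeLock().lock();      // exclusive mode: blocks readers and writers
            try {
                value = newValue;
            } finally {
                rwLock.writeLock().unlock();
            }
        }
    }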

The synchronization state can be shared by calling the synchronizer’s acquireShared(int arg) method as follows:

    public final void acquireShared(int arg) {
        if (tryAcquireShared(arg) < 0)
            doAcquireShared(arg);
    }
    
    private void doAcquireShared(int arg) {
        final Node node = addWaiter(Node.SHARED);
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        if (interrupted)
                            selfInterrupt();
                        failed = false;
                        return;
                    }
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

  1. In the acquireShared(int arg) method, the synchronizer calls tryAcquireShared(int arg) to try to acquire the synchronization state. Its return value is an int: a value greater than or equal to 0 means the synchronization state was acquired.
  2. Therefore, in the shared-mode spin, the synchronization state is acquired and the spin exits only when the current node's predecessor is the head node and tryAcquireShared(int arg) returns a value greater than or equal to 0.

Shared synchronization state release

As with exclusive mode, after shared acquisition the synchronization state must be released, which is done by calling releaseShared(int arg):

    public final boolean releaseShared(int arg) {
        if (tryReleaseShared(arg)) {
            doReleaseShared();
            return true;
        }
        return false;
    }

This method wakes up subsequent waiting nodes after releasing the synchronization state. For concurrent components that allow multiple threads to access a resource simultaneously (such as Semaphore), the main difference from exclusive release is that tryReleaseShared(int arg) must release the synchronization state (or resource count) in a thread-safe way, usually through a loop with CAS, because release operations may come from multiple threads at the same time.
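As a reference for this pattern, here is a sketch paraphrased from the JDK 8 implementation of Semaphore's internal Sync class (the class name used here is illustrative):

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    // The release loops with CAS because several threads may return permits concurrently.
    class PermitSync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryReleaseShared(int releases) {
            for (;;) {
                int current = getState();
                int next = current + releases;
                if (next < current)                     // integer overflow check
                    throw new Error("Maximum permit count exceeded");
                if (compareAndSetState(current, next))
                    return true;                        // CAS succeeded: permits returned
                // CAS failed: another thread changed the state; retry
            }
        }
    }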

Exclusive timed acquisition of the synchronization state

The synchronization state can be acquired with a timeout by calling the synchronizer's doAcquireNanos(int arg, long nanosTimeout) method: if the synchronization state is acquired within the specified time it returns true, otherwise false. This provides a feature that traditional Java synchronization, i.e. the synchronized keyword, does not have.

Acquiring the synchronization state in response to interruption:

Before Java 5, when a thread failed to acquire a lock and was blocked on synchronized, interrupting the thread only set its interrupt flag; the thread remained blocked on synchronized, still waiting for the lock. In Java 5, the synchronizer provides the acquireInterruptibly(int arg) method: if the current thread is interrupted while waiting to acquire the synchronization state, it returns immediately by throwing InterruptedException.
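A minimal usage sketch (class and method names are illustrative) showing how a caller of lockInterruptibly() reacts to interruption:

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class InterruptibleLockDemo {
        private final Lock lock = new ReentrantLock();

        public void doWork() {
            try {
                // Unlike a synchronized block, this call returns immediately by
                // throwing InterruptedException if the waiting thread is interrupted.
                lock.lockInterruptibly();
                try {
                    // critical section
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                // Restore the interrupt flag and give up instead of waiting forever.
                Thread.currentThread().interrupt();
            }
        }
    }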

The doAcquireNanos(int arg, long nanosTimeout) method adds timed acquisition on top of interrupt support. For the timeout, the key is to compute the remaining sleep interval nanosTimeout. To prevent premature notification from ending the wait too early, it is recalculated as nanosTimeout = deadline - System.nanoTime(), where deadline is the absolute wake-up time computed up front. If nanosTimeout is greater than 0, the timeout has not yet been reached and the thread needs to sleep for another nanosTimeout nanoseconds; otherwise the acquisition has timed out. The code of this method is as follows:

    private boolean doAcquireNanos(int arg, long nanosTimeout)
            throws InterruptedException {
        if (nanosTimeout <= 0L)
            return false;
        final long deadline = System.nanoTime() + nanosTimeout;
        final Node node = addWaiter(Node.EXCLUSIVE);
        boolean failed = true;
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return true;
                }
                nanosTimeout = deadline - System.nanoTime();
                if (nanosTimeout <= 0L)
                    return false;
                if (shouldParkAfterFailedAcquire(p, node) &&
                    nanosTimeout > spinForTimeoutThreshold)
                    LockSupport.parkNanos(this, nanosTimeout);
                if (Thread.interrupted())
                    throw new InterruptedException();
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }

  1. In this method, during the spin process, when the precursor node of the node is the head node, it tries to obtain the synchronization state, and if it succeeds in obtaining the synchronization state, it returns from this method. This process is similar to the process of exclusive synchronization acquisition.

  2. If the current thread fails to acquire the synchronization state, it checks whether the timeout has expired (nanosTimeout less than or equal to 0 means timeout). If not, it recalculates the remaining interval nanosTimeout and then makes the current thread wait for nanosTimeout nanoseconds (when the set timeout is reached, the thread returns from LockSupport.parkNanos(Object blocker, long nanos)).

  3. If nanosTimeout is less than or equal to spinForTimeoutThreshold (1000 nanoseconds), the thread does not wait with a timeout but enters a fast spin instead. The reason is that very short timed waits cannot be made precise; waiting with a timeout in this case would make the nanosTimeout deadline as a whole appear imprecise. Therefore, for very short timeouts, the synchronizer enters an unconditional fast spin. (In this case, the time spent blocking and waking the thread could exceed nanosTimeout.)

The process of exclusive timed acquisition of the synchronization state is shown in the figure below:

Custom synchronization component -TwinsLock

Here we design a synchronization tool that allows at most two threads to access a resource at the same time; any further threads are blocked. We call this tool TwinsLock.

Access pattern

TwinsLock can be accessed by multiple threads at the same time, which is clearly shared access, so it should use the acquireShared(int arg) method provided by the synchronizer. This requires TwinsLock to override the tryAcquireShared(int arg) and tryReleaseShared(int arg) methods so that the synchronizer's shared acquisition and release of the synchronization state work correctly.

Number of resources

TwinsLock allows at most two threads to access the resource at the same time, which means the number of synchronization resources is 2. The initial state can therefore be set to 2: when a thread acquires, the state is decreased by 1, and when the thread releases, the state is increased by 1. A state of 0 means two threads have already acquired the resource, so any further thread trying to acquire can only be blocked. When the synchronization state changes, compareAndSetState(int expect, int update) must be used to guarantee atomicity.

TwinsLock

This custom synchronization component is implemented by implementing the Lock interface and combining subclasses derived from AQS. In general, subclasses that inherit from AQS are defined as static inner classes of the synchronous component class.

public class TwinsLock implements Lock {
    private final Sync sync = new Sync(2);

    private static final class Sync extends AbstractQueuedSynchronizer {
        Sync(int count) {
            if (count <= 0) {
                throw new IllegalArgumentException("Count must be larger than zero!");
            }
            setState(count);
        }

        @Override
        protected int tryAcquireShared(int reduceCount) {
            for (;;) {
                int current = getState();
                int newCount = current - reduceCount;
                if (newCount < 0 || compareAndSetState(current, newCount)) {
                    return newCount;
                }
            }
        }

        @Override
        protected boolean tryReleaseShared(int returnCount) {
            for (;;) {
                int current = getState();
                int newCount = current + returnCount;
                if (compareAndSetState(current, newCount)) {
                    return true;
                }
            }
        }

        Condition newCondition() {
            return new ConditionObject();
        }
    }

    @Override
    public void lock() {
        sync.acquireShared(1);
    }

    @Override
    public void unlock() {
        sync.releaseShared(1);
    }

    // Other interface methods omitted
}

In the above example,

  1. TwinsLock implements the Lock interface and provides the user-facing API: the user calls lock() to acquire the lock and unlock() to release it, and only two threads can hold the lock at the same time.
  2. TwinsLock also contains a custom synchronizer, Sync, which controls thread access and the synchronization state. Taking shared acquisition as an example, the synchronizer first computes the state after acquisition and then sets it correctly with CAS. The current thread obtains the synchronization state only when tryAcquireShared(int reduceCount) returns a value greater than or equal to 0; for TwinsLock, this means the current thread has acquired the lock.

AQS acts as a bridge between low-level mechanisms such as thread access and synchronization state control and the interface semantics of different concurrent components such as Lock and CountDownLatch.

TwinsLock test Demo

In the test case, the Worker thread is defined. The thread acquires the lock during execution, and makes the current thread sleep for 1 second after acquiring the lock (without releasing the lock), then prints the name of the current thread, and finally sleeps again for 1 second and releases the lock. The test code is as follows:

public class TwinsLockTest {
    Lock lock = new TwinsLock();

    @Test
    public void testCase01() throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            Worker w = new Worker();
            Thread thread = new Thread(w, "Worker" + i);
            thread.start();
        }

        // Print a blank line every second so the output is grouped
        for (int i = 0; i < 10; i++) {
            Thread.sleep(1000);
            System.out.println();
        }
    }

    class Worker implements Runnable {
        @Override
        public void run() {
            while (true) {
                lock.lock();
                try {
                    Thread.sleep(1000);
                    System.out.println(Thread.currentThread().getName());
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    lock.unlock();
                }
                break;
            }
        }
    }
}

When run, you can see the following output: Only two threads are able to acquire the lock at any one time, indicating that TwinsLock is working correctly as expected.

Worker0
Worker1

Worker3
Worker2

Worker4
Worker5

Worker7
Worker6

Worker9
Worker8

Reentrant lock ReentrantLock

Reentrancy

A ReentrantLock, as its name implies, is a lock that supports reentrancy, i.e. it allows a thread to repeatedly acquire a lock on the same resource. Simply put, reentrancy means that a thread that has already acquired a lock can acquire it again without being blocked by it.

The synchronized keyword implicitly supports reentrancy: for example, in a recursive method marked synchronized, the executing thread, having acquired the lock once, keeps acquiring it again as it recurses. The earlier Mutex example in this article does not support reentrancy: if a thread that already holds the lock calls lock() again, it blocks itself.
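A minimal sketch of such a recursive synchronized method (the class is illustrative, not from the book):

    public class Counter {
        // The executing thread already holds this object's monitor when it recurses,
        // so the nested acquisition succeeds immediately: synchronized is reentrant.
        public synchronized int countDown(int n) {
            if (n <= 0) {
                return 0;
            }
            return countDown(n - 1); // re-enters the same monitor without blocking
        }
    }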

There are two problems with the implementation of reentrancy:

  1. Re-acquisition by the same thread. The lock needs to identify whether the acquiring thread is the thread that currently holds the lock; if so, the acquisition succeeds again.
  2. Final release of the lock. If a thread acquires the lock n times, other threads can acquire it only after the nth release. This requires the lock to keep a count that is incremented on each acquisition and decremented on each release; the count represents how many times the lock is currently held.

ReentrantLock implements lock acquisition and release by implementing the Lock interface and aggregating a custom AQS subclass. Using the non-fair (default) implementation as an example, the code for acquiring the synchronization state is as follows:

        final boolean nonfairTryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            } else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0) // overflow
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }

This method adds logic for a thread re-acquiring the synchronization state (reentrancy): it checks whether the current thread is the thread that already holds the lock, and if so, it increases the synchronization state value and returns true, indicating that the synchronization state was acquired successfully.

Since a thread that already holds the lock only increases the synchronization state value when it acquires the lock again, ReentrantLock must correspondingly decrease the value when releasing the synchronization state. The code for this method is as follows:

        protected final boolean tryRelease(int releases) {
            int c = getState() - releases;
            if (Thread.currentThread() != getExclusiveOwnerThread())
                throw new IllegalMonitorStateException();
            boolean free = false;
            if (c == 0) {
                free = true;
                setExclusiveOwnerThread(null);
            }
            setState(c);
            return free;
        }
  1. If the lock is acquired n times, the first n-1 calls to tryRelease(int releases) must return false; only when the synchronization state is fully released does it return true.
  2. This method uses a synchronization state of 0 as the condition for the final release. When the state reaches 0, the owning thread is set to null and true is returned, indicating that the release succeeded.

Fair lock acquisition vs unfair lock acquisition

Fairness is about acquiring locks. If a lock is fair, the order of acquiring locks should match the absolute chronological order of requests, known as FIFO.

In nonfairTryAcquire(int acquires), as long as the CAS on the synchronization state succeeds, the thread acquires the lock, regardless of how long other threads have been waiting. The fair acquisition, tryAcquire(int acquires) in the fair synchronizer, looks like this:

        protected final boolean tryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (!hasQueuedPredecessors() &&
                    compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            } else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0)
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }

The only difference between this method and nonfairTryAcquire(int acquires) is the extra check hasQueuedPredecessors(), which determines whether the current thread's node has a predecessor in the synchronization queue. If it returns true, some thread requested the lock earlier than the current thread, so the current thread must wait for that predecessor to acquire and release the lock before it can acquire the lock itself.

The difference between fair and unfair lock acquisition:

  1. Condition for acquiring the lock in tryAcquire: for a non-fair lock it is enough that the CAS on the synchronization state succeeds; a fair lock additionally requires that the current thread has no predecessor in the synchronization queue (i.e. its node is at the front of the queue).
  2. Performance: a fair lock guarantees that the lock is acquired in FIFO order at the cost of a large number of thread switches. A non-fair lock may cause thread starvation, but the small number of thread switches gives it higher throughput.
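As a usage note (relying only on the public JDK API), ReentrantLock lets the caller choose between the two policies through its constructor. A minimal sketch:

    import java.util.concurrent.locks.ReentrantLock;

    public class FairnessDemo {
        public static void main(String[] args) {
            ReentrantLock unfair = new ReentrantLock();     // default constructor: non-fair
            ReentrantLock fair = new ReentrantLock(true);   // fair: FIFO order of lock requests

            fair.lock();
            try {
                System.out.println("fair.isFair()   = " + fair.isFair());   // true
                System.out.println("unfair.isFair() = " + unfair.isFair()); // false
            } finally {
                fair.unlock();
            }
        }
    }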