LockSupport

LockSupport is a utility class in java.util.concurrent.locks that provides the basic thread blocking and unblocking primitives (park/unpark) used to build locks and other synchronizers:

public static void park(Object blocker) {
    // Get the current thread
    Thread t = Thread.currentThread();
    // Record the blocker on the thread, for monitoring and diagnostics
    setBlocker(t, blocker);
    // Block until a permit is available (or the thread is unparked or interrupted)
    UNSAFE.park(false, 0L);
    // Clear the blocker once park returns
    setBlocker(t, null);
}

public static void unpark(Thread thread) {
    if (thread != null)        // Thread must not be null
        UNSAFE.unpark(thread); // Make the thread's permit available, waking it if parked
}
  
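A minimal sketch of how these primitives behave (the class name and the sleep-based timing here are purely illustrative): `park` blocks until a permit is available, `unpark` makes the permit available, and a permit granted before `park` lets the subsequent `park` return immediately.

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Returns true once park() has returned; the permit was made available first,
    // so park() consumes it and returns immediately instead of blocking.
    static boolean parkWithPermit() {
        LockSupport.unpark(Thread.currentThread()); // grant the permit up front
        LockSupport.park();                         // consumes the permit, returns at once
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            LockSupport.park();                     // blocks: no permit yet
            System.out.println("worker resumed");
        });
        worker.start();
        Thread.sleep(100);                          // give the worker time to park (timing illustrative)
        LockSupport.unpark(worker);                 // hand the worker its permit
        worker.join();
        System.out.println(parkWithPermit());
    }
}
```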

AQS

AQS (AbstractQueuedSynchronizer) provides a FIFO queue and can be seen as a framework for implementing locks and other synchronizers. Using AQS relies on inheritance (the template method pattern): a subclass manages the synchronization state by extending AQS and implementing the required methods. ReentrantLock, CountDownLatch, and other common synchronizers are built this way. AQS supports two modes: exclusive and shared.

In exclusive mode, only one thread can hold the lock at a time; for example, ReentrantLock is a mutex implemented in exclusive mode. In shared mode, multiple threads can hold the lock simultaneously and access the shared resource concurrently, as with the read lock of ReentrantReadWriteLock.

Clearly, an exclusive lock is a pessimistic, conservative locking strategy: it forbids even read/read sharing, so if one read-only thread holds the lock, all other reader threads must wait. This limits concurrency unnecessarily, since reads do not affect data consistency. A shared lock is the more optimistic strategy: it relaxes the locking policy and allows multiple reader threads to access the shared resource at the same time.
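The difference can be observed directly with ReentrantReadWriteLock (covered at the end of this article). In this small sketch, the helper method names are illustrative: the read lock is shared, so a second reader succeeds while the first still holds it, but the exclusive write lock cannot be taken at the same time.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedVsExclusive {
    // While one thread holds the read lock, can a second thread also acquire it? (shared mode)
    static boolean secondReaderCanAcquire() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        boolean[] acquired = new boolean[1];
        Thread reader = new Thread(() -> {
            acquired[0] = rw.readLock().tryLock(); // succeeds: the read lock is shared
            if (acquired[0]) rw.readLock().unlock();
        });
        reader.start();
        reader.join();
        rw.readLock().unlock();
        return acquired[0];
    }

    // While one thread holds the read lock, can a second thread take the write lock? (exclusive mode)
    static boolean writerCanAcquireDuringRead() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        boolean[] acquired = new boolean[1];
        Thread writer = new Thread(() -> acquired[0] = rw.writeLock().tryLock());
        writer.start();
        writer.join();
        rw.readLock().unlock();
        return acquired[0]; // false: the write lock is exclusive
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(secondReaderCanAcquire());     // true
        System.out.println(writerCanAcquireDuringRead()); // false
    }
}
```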

  • Internal implementation of AQS
  1. state

AQS uses an int member variable to indicate synchronization status, and queues the resource threads through the built-in FIFO queue. AQS uses CAS to perform atomic operations on the synchronization state to modify its value.

/** The synchronization state. */
private volatile int state;
  2. Synchronization queue (CLH lock)

A CLH (Craig, Landin, and Hagersten) queue is a virtual doubly linked queue ("virtual" meaning there is no queue instance, only the links between nodes). AQS wraps each thread requesting the shared resource into a node (Node) of this queue to manage lock allocation. AQS relies on this internal synchronization queue (a FIFO doubly linked queue) to manage the synchronization state: when the current thread fails to acquire the synchronization state, AQS packs the thread and its wait status into a Node, appends it to the synchronization queue, and blocks the thread. When the synchronization state is released, the thread in the first node is woken up to try to acquire it again. The main fields of Node are as follows:

static final class Node {
    volatile int waitStatus; // Node status: CANCELLED(1), SIGNAL(-1), CONDITION(-2, waiting in a condition queue), PROPAGATE(-3), or 0
    volatile Node prev;      // Predecessor node
    volatile Node next;      // Successor node
    Node nextWaiter;         // Next node in the condition queue
    volatile Thread thread;  // The thread enqueued on this node
}

The underlying data structure of AQS is a doubly linked list, one way to implement a queue. It keeps a head reference and a tail reference, pointing to the first and last nodes respectively. The head node does not store a Thread; it only holds the reference to the next node.

When one thread succeeds in acquiring the synchronization state (the lock), other threads cannot acquire it; they are instead wrapped into nodes and appended to the synchronization queue. This enqueueing process must be thread safe.

The head node is set by the thread that successfully acquired the synchronization state. Since only one thread at a time can do so, setting the head node does not require CAS: the thread simply sets the head to the successor of the original head and disconnects the original head's next reference.

The synchronizer therefore provides a CAS-based method for setting the tail node, compareAndSetTail(Node expect, Node update), which takes the node the current thread "believes" to be the tail and the current node. Only after this CAS succeeds is the current node formally linked after the previous tail node.

The synchronization queue follows FIFO. The head node is the one whose thread has successfully acquired the synchronization state. When that thread releases the synchronization state, it wakes its successor node, and the successor sets itself as the head once it successfully acquires the synchronization state.

  3. Lock acquisition/release process

Even though this class is based on an internal FIFO queue, it does not automatically enforce a FIFO acquisition policy. The core of exclusive synchronization takes the form:

Acquire:
    while (!tryAcquire(arg)) {
        enqueue thread if it is not already queued;
        possibly block current thread;
    }

Release:
    if (tryRelease(arg))
        unblock the first queued thread;
  4. Template method design pattern

To use this class as the basis of a synchronizer, redefine the following methods, as applicable, by inspecting and/or modifying the synchronization state using getState, setState and/or compareAndSetState:

  • isHeldExclusively() — whether the current thread holds the resource exclusively; you only need to implement it if you use conditions.
  • tryAcquire(int) — exclusive mode. Attempts to acquire the resource; returns true on success, false on failure.
  • tryRelease(int) — exclusive mode. Attempts to release the resource; returns true on success, false on failure.
  • tryAcquireShared(int) — shared mode. Attempts to acquire the resource; a negative number means failure, zero means success with no resources remaining, and a positive number means success with resources remaining.
  • tryReleaseShared(int) — shared mode. Attempts to release the resource; returns true on success, false on failure.
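Putting these template methods together, a minimal non-reentrant mutex can be sketched on top of AQS. This is modeled on the example in the AbstractQueuedSynchronizer Javadoc; the class name Mutex is illustrative:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    // The synchronizer: state 0 means unlocked, 1 means locked.
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {            // CAS the state from free to held
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);                               // held -> free; only the owner gets here
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }     // AQS handles queuing and blocking
    public void unlock()      { sync.release(1); }     // AQS wakes the first queued thread
    public boolean tryLock()  { return sync.tryAcquire(1); }
}
```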

ReentrantLock

Characteristics

  • Reentrant lock

A reentrant lock means that a thread holding the lock can acquire it again without blocking. Both synchronized and ReentrantLock are reentrant.

This is implemented by associating each lock with an acquisition counter and an owner thread. When the count is 0, the lock is considered unoccupied. When a thread acquires an unheld lock, the JVM records it as the owner and sets the counter to 1. If the same thread acquires the lock again, the counter is incremented; each time the owning thread exits the synchronized block, the counter is decremented. When the counter reaches 0, the lock is released.
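The counter behavior can be observed directly on ReentrantLock, whose getHoldCount() exposes the current thread's acquisition count (the class name here is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(lock.getHoldCount()); // 0: lock is unoccupied

        lock.lock();                             // first acquisition: owner recorded, count = 1
        lock.lock();                             // reentrant acquisition by the same thread: count = 2
        System.out.println(lock.getHoldCount()); // 2

        lock.unlock();                           // count = 1, lock still held
        lock.unlock();                           // count = 0, lock released
        System.out.println(lock.isLocked());     // false
    }
}
```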

Reentrancy between parent and subclass locks: if a subclass overrides a synchronized method of its parent class and then calls the parent's version from it, the code would deadlock without a reentrant lock.
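A small sketch of that parent/subclass case (the class and method names are illustrative): both methods synchronize on the same monitor (this), and reentrancy is what lets super's method reacquire it.

```java
class Parent {
    public synchronized String doWork() {
        return "parent";
    }
}

class Child extends Parent {
    @Override
    public synchronized String doWork() {
        // The current thread already holds the monitor on `this`;
        // reentrancy lets super.doWork() acquire it again instead of deadlocking.
        return super.doWork() + "+child";
    }
}
```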

  • Interruptible lock acquisition:

A thread blocked while acquiring a lock with the synchronized keyword does not respond to interruption. A thread blocked in Lock.lockInterruptibly() throws InterruptedException when interrupted.

  • Non-blocking lock acquisition:

The synchronized keyword blocks when it fails to acquire the lock, whereas Lock.tryLock() does not block: it simply returns false when the lock cannot be acquired.

  • The timeout period for acquiring a lock can be limited:

Use lock.tryLock(long time, TimeUnit unit).
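A sketch combining the last two points (class and helper names, and the sleep-based timing, are illustrative): the plain tryLock() fails immediately while another thread holds the lock, but the timed variant waits up to the given timeout.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Timed acquisition: waits up to `millis`, then gives up and returns false.
    static boolean acquireWithTimeout(ReentrantLock lock, long millis) throws InterruptedException {
        return lock.tryLock(millis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        Thread holder = new Thread(() -> {
            lock.lock();                                      // hold the lock for a while
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            lock.unlock();
        });
        holder.start();
        Thread.sleep(50);                                     // let the holder take the lock first
        System.out.println(lock.tryLock());                   // false: non-blocking attempt fails at once
        System.out.println(acquireWithTimeout(lock, 2000));   // true: holder releases within the timeout
        lock.unlock();
    }
}
```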

  • Multiple wait queues on the same lock: a ReentrantLock can create several Condition objects via newCondition(), whereas an intrinsic lock has only the single wait set used by Object.wait().
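The classic use of multiple wait queues is a bounded buffer, where producers and consumers wait on separate Condition objects of the same lock. A minimal sketch (the class name BoundedBuffer is illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) notFull.await(); // wait on the producers' queue
            items.addLast(item);
            notEmpty.signal();                                // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) notEmpty.await();         // wait on the consumers' queue
            T item = items.removeFirst();
            notFull.signal();                                 // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

Signaling only the relevant queue avoids the "wake everyone and recheck" cost of a single wait set.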

  • Fair locks and unfair locks

ReentrantLock comes in fair and unfair variants. The difference is whether the chance to acquire the lock is tied to queue order. If the lock is held by another thread, other threads applying for it are suspended and added to the wait queue. In theory, the thread that called lock() first and has waited longest should be at the front of the queue, and later callers behind it. When the lock is released, the waiting threads must be notified to try to acquire it again. A fair lock lets the longest-waiting thread acquire the lock first. An unfair lock lets a newly arriving thread race the woken-up waiters, so a later thread may acquire the lock first, which is unfair. The default is the unfair lock:

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

(Figure: sequence diagram of the unfair lock acquisition process)

Compared with synchronized

  • Synchronized vs. ReentrantLock

ReentrantLock is a JDK class-level implementation; Synchronized is a JVM level implementation.

ReentrantLock supports interruptible and timed lock acquisition, non-blocking acquisition attempts, a fair-lock implementation, and multiple Condition bindings (one ReentrantLock object can be bound to several Condition objects at once). Synchronized, for its part, offers the convenience of implicit lock release and benefits from the JVM's lock-upgrade performance optimizations.

The following code shows ReentrantLock's basic usage; we will explain the implementation principles behind these calls in this order.

ReentrantLock lock = new ReentrantLock();
lock.lock();
lock.unlock();
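In real code, unlock() belongs in a finally block so the lock is always released even if the guarded code throws. A minimal sketch of the standard idiom (the Counter class is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value;

    public int increment() {
        lock.lock();        // acquire outside the try, per the standard idiom
        try {
            return ++value; // critical section
        } finally {
            lock.unlock();  // always released, even if the critical section throws
        }
    }
}
```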

ReentrantReadWriteLock