AQS: AbstractQueuedSynchronizer, the abstract queued synchronizer

1. What is AQS?

AQS is a basic framework for building locks and other synchronization components. Internally it maintains an int state variable to represent the synchronization state, and a built-in FIFO queue to manage the queuing of waiting threads. AQS is used mainly through inheritance: a lock or synchronization component typically defines a static inner class that extends AQS and overrides the specific methods it needs, and the component's public interface then calls the AQS methods to implement the lock's functionality. AQS wraps its functionality using the template method design pattern, so we only need to implement the required hook methods in our own class. Components built on AQS include CountDownLatch, ThreadPoolExecutor (whose Worker class extends AQS), ReentrantLock, and ReentrantReadWriteLock.
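To make the inheritance-plus-template-method structure concrete, here is a minimal sketch of a non-reentrant mutex built on AQS. The class names Mutex and Sync are illustrative, not from the JDK; only compareAndSetState, setState, setExclusiveOwnerThread, acquire, and release are real AQS methods:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal non-reentrant mutex: state 0 means unlocked, 1 means locked.
class Mutex {
    // The static inner class extends AQS and overrides only the hooks it needs.
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the state from 0 (unlocked) to 1 (locked).
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0); // only the owner calls this, so a plain write suffices
            return true;
        }
    }

    private final Sync sync = new Sync();

    // The public interface delegates to the AQS template methods, which handle
    // FIFO queuing and blocking and call our tryAcquire/tryRelease internally.
    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```

Calling lock() invokes the template method acquire, which first tries our tryAcquire and, on failure, enqueues the current thread on the built-in FIFO queue and parks it.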

2. The template method design pattern:

The pattern defines the skeleton of an algorithm in a method while deferring the implementation of individual steps to subclasses. Template methods let subclasses redefine specific steps of the algorithm without changing the algorithm's structure.

Here’s an example:

Cake making can be divided into shaping, smearing, and baking. In an abstract class, we define these three steps as the fixed framework of cake making. Because different kinds of cake perform the steps differently, when we make cheesecake we override the three concrete steps in a cheesecake subclass. The framework of cake making is unchanged; the concrete operations inside each step are supplied by the subclass.
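The cake example can be sketched in Java; CakeMaker and CheeseCakeMaker are hypothetical names invented for this illustration:

```java
// Template method pattern: the abstract class fixes the algorithm's skeleton.
abstract class CakeMaker {
    // The template method is final, so subclasses cannot change the order of steps.
    public final String makeCake() {
        return shape() + " -> " + smear() + " -> " + bake();
    }

    // The individual steps are deferred to subclasses.
    protected abstract String shape();
    protected abstract String smear();
    protected abstract String bake();
}

// A subclass redefines the steps without touching the algorithm's structure.
class CheeseCakeMaker extends CakeMaker {
    @Override protected String shape() { return "shape cheesecake base"; }
    @Override protected String smear() { return "smear cream cheese"; }
    @Override protected String bake()  { return "bake at 160C"; }
}
```

This mirrors AQS exactly: acquire/release play the role of makeCake, and tryAcquire/tryRelease play the role of the overridable steps.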

3. Common methods in AQS:

Template method:

acquire: acquires the synchronization state exclusively;

acquireShared: acquires the synchronization state in shared mode;

release: releases the synchronization state exclusively;

releaseShared: releases the synchronization state in shared mode;

getQueuedThreads: gets the collection of threads waiting on the synchronization queue;

General rewriting methods:

tryAcquire: the concrete implementation of exclusive acquisition of the synchronization state; the template method acquire calls tryAcquire internally;

tryRelease: the concrete implementation of exclusive release of the synchronization state;

tryAcquireShared: the concrete implementation of shared acquisition of the synchronization state;

tryReleaseShared: the concrete implementation of shared release of the synchronization state;

Methods for accessing and modifying synchronization status:

The state variable is an int variable representing the synchronization state in AQS.

setState: sets the synchronization state;

getState: gets the synchronization state;

compareAndSetState: sets the synchronization state with CAS, guaranteeing the atomicity of the update.

4. CLH queue lock: the basic idea of AQS

A CLH queue lock is a kind of spin lock: each waiting thread spins, polling the state of its predecessor, and once the predecessor releases the lock the thread ends its spin and acquires the lock;

Execution logic: the waiting threads form a linked list of nodes. A thread that wants to acquire the lock wraps itself as a node and appends itself to the tail of the list. Each node maintains a locked variable indicating whether its thread still holds or is waiting for the lock, and each thread keeps a reference to its predecessor's node.

When thread A wants to acquire the lock, it enqueues itself at the tail of the list and sets its locked flag to true to indicate that it is waiting. When thread B wants the lock, it does the same behind A. Thread A spins, polling the locked flag of its predecessor; when the predecessor releases the lock it sets its own locked flag to false, so thread A stops polling and acquires the lock. When thread A completes, thread B acquires the lock in the same way.
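The execution logic above can be sketched as a classic CLH lock. This is a simplified illustration (the queue inside AQS is a variant of this idea, not this exact code):

```java
import java.util.concurrent.atomic.AtomicReference;

// A sketch of a CLH queue lock; QNode and CLHLock are illustrative names.
class CLHLock {
    static class QNode { volatile boolean locked; }

    // The tail starts as a dummy node whose locked flag is false.
    private final AtomicReference<QNode> tail = new AtomicReference<>(new QNode());
    private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);
    private final ThreadLocal<QNode> myPred = new ThreadLocal<>();

    public void lock() {
        QNode node = myNode.get();
        node.locked = true;                // announce that we are waiting
        QNode pred = tail.getAndSet(node); // atomically enqueue at the tail
        myPred.set(pred);
        while (pred.locked) { }            // spin on the predecessor's flag
    }

    public void unlock() {
        QNode node = myNode.get();
        node.locked = false;               // let our successor stop spinning
        myNode.set(myPred.get());          // recycle the predecessor's node
    }
}
```

Because each thread spins only on its own predecessor's flag, threads acquire the lock in FIFO order and do not all hammer one shared variable.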

5. ReentrantLock implementation:

Basic implementation of reentrant lock:

A reentrant lock allows the thread that already holds the lock to acquire it again without blocking. The prerequisite is determining whether the current thread is the one holding the lock. When the lock is acquired, the state variable becomes 1, and each re-entry adds 1; the lock is not fully released until state drops back to 0;
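The state counting can be observed directly on a ReentrantLock through its real getHoldCount and isLocked methods:

```java
import java.util.concurrent.locks.ReentrantLock;

class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                             // state: 0 -> 1
        lock.lock();                             // re-entry, no blocking: 1 -> 2
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();                           // 2 -> 1: the lock is still held
        System.out.println(lock.isLocked());     // true
        lock.unlock();                           // 1 -> 0: fully released
        System.out.println(lock.isLocked());     // false
    }
}
```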

Fair and unfair locks:

ReentrantLock can be set to a fair lock by passing true when constructing it;

An unfair lock simply tries to grab the lock with a CAS on the synchronization state; if the CAS succeeds, the thread acquires the lock immediately. A fair lock additionally calls hasQueuedPredecessors before acquiring: if this method returns true, an earlier thread is already waiting for the lock, so the current thread must queue and wait for the preceding node to release the lock before it can compete.
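A short sketch of constructing both variants; isFair is a real ReentrantLock method:

```java
import java.util.concurrent.locks.ReentrantLock;

class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // default constructor: unfair
        ReentrantLock fair   = new ReentrantLock(true); // pass true for a fair lock

        // A fair lock consults hasQueuedPredecessors() before acquiring, so a
        // newly arriving thread queues behind earlier waiters instead of barging in.
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```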

Part Two: The implementation principle of synchronized

1. Synchronized code blocks

Decompiling the class file shows that a synchronized block is implemented with a pair of monitorenter and monitorexit instructions. Before entering the synchronized block, the monitorenter instruction is executed to try to acquire the Monitor object's lock. The monitorexit instruction is executed when the synchronized block or the method exits, or when an exception is thrown, and it releases the Monitor object's lock.

If the monitor object's entry count is 0, the requesting thread acquires the lock and sets the entry count to 1. If the thread already holds the lock, acquiring it again increases the entry count by one, and exiting the synchronized block decreases it by one. Only when the entry count drops back to 0 can other threads compete for the lock.
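The entry count is why the same thread can nest synchronized blocks on one monitor without deadlocking itself; a minimal demonstration:

```java
class MonitorReentryDemo {
    static boolean reentered = false;

    public static void main(String[] args) {
        Object monitor = new Object();
        synchronized (monitor) {       // monitorenter: entry count 0 -> 1
            synchronized (monitor) {   // same thread re-enters: count 1 -> 2
                reentered = true;      // reached without blocking
            }                          // monitorexit: count 2 -> 1
        }                              // monitorexit: count 1 -> 0, lock released
    }
}
```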

2. Synchronized methods

When the class file is decompiled, a synchronized method has no monitorenter and monitorexit instructions. Instead, the method carries the ACC_SYNCHRONIZED flag among its access flags. Internally, the lock is still acquired and released through the Monitor object's entry count;
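A synchronized method like the one below is compiled with the ACC_SYNCHRONIZED flag (visible with `javap -v`) rather than explicit monitor instructions, but it behaves just like a block synchronized on this. The SyncCounter class is an illustrative example, not JDK code:

```java
class SyncCounter {
    private int value = 0;

    // Compiled with the ACC_SYNCHRONIZED access flag; the JVM acquires the
    // monitor of `this` on entry and releases it on exit, even on exception.
    public synchronized void increment() { value++; }

    public synchronized int get() { return value; }
}
```

With several threads calling increment concurrently, no updates are lost, exactly as with an equivalent synchronized (this) block.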

3. Where synchronized locks are stored

Synchronized locks are stored in the Java object header, which consists of two parts. One is the Mark Word, which stores the lock state, hash code, GC age, and so on; the other is the Klass Pointer, which points to the class metadata of the object's type. If the object is an array, the header contains an extra word recording the array length;

The lock information lives in the Mark Word. Initially the Mark Word stores the hash code, GC information, and so on, but its contents change as the object runs and are reused to store the various lock states.

Part Three: The Java memory model (JMM) and concurrency issues

1. Working memory and main memory

Each thread has its own working memory, whose corresponding physical device is usually a CPU register or cache, while main memory generally corresponds to physical RAM. A rough analogy can be drawn to the JVM memory model: the thread-exclusive stack memory is like working memory, and the thread-shared heap memory is like main memory.

2. Concurrency security issues caused by JMM

The count++ operation

Although count++ looks simple on the surface, internally it decomposes into several steps: read the value, add one, and write the result back to main memory. When different threads interleave these steps, lost updates and other concurrency safety problems arise;
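Those three internal steps are exactly why a plain count++ races while an atomic increment does not. A sketch using AtomicInteger, whose incrementAndGet performs the read-add-write as one atomic hardware-level operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

class CountDemo {
    static int plain = 0;                               // plain ++ is read/add/write: not atomic
    static AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    plain++;                            // interleaved steps may lose updates
                    atomic.incrementAndGet();           // one atomic operation, never lost
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println("atomic = " + atomic.get()); // always 40000
        System.out.println("plain  = " + plain);        // may be less than 40000
    }
}
```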

3. Volatile: the lightest-weight synchronization operation

Volatile guarantees visibility and ordering, but not atomicity. For the count++ example above, even if count is declared volatile, we still have a concurrency safety problem, because the ++ operation is not atomic;

The underlying principle of volatile: a write to a volatile variable is translated into a CPU instruction with a lock prefix, which writes the data from the current CPU's cache line back to system memory. This write-back invalidates the copies of the variable cached by other CPUs, so other threads must re-read the value from main memory before using it.
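A classic visibility demonstration: a worker thread spins on a flag, and a volatile write from the main thread is guaranteed to become visible and stop it (without volatile the worker might keep reading a stale cached value and never terminate):

```java
class VolatileFlagDemo {
    // volatile guarantees the worker sees the main thread's write.
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) { }   // spins until the write to `running` becomes visible
        });
        worker.start();
        Thread.sleep(50);
        running = false;          // volatile write: other CPUs' cached copies are invalidated
        worker.join();            // returns because the worker observes the change and exits
        System.out.println(worker.isAlive()); // false
    }
}
```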

4. The role of volatile in DCL (double-checked locking):

In double-checked locking for a singleton, volatile modifies the singleton field. Visibility ensures that once one thread assigns the field, the change is immediately visible to other threads. Ordering forbids instruction reordering inside the constructor call, preventing another thread from seeing a non-null reference that points to a partially constructed object;
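The standard double-checked locking singleton, with volatile on the instance field (the class name Singleton is illustrative):

```java
class Singleton {
    // volatile forbids reordering inside `new Singleton()`, so no thread can
    // observe a non-null reference to a partially constructed object.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check: no lock on the fast path
            synchronized (Singleton.class) {
                if (instance == null) {         // second check: only one thread creates it
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

After the first initialization, callers return on the first check without ever taking the lock, which is the point of the double check.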

Part Four: The four states of the lock

A lock has four states: unlocked, biased lock, lightweight lock, and heavyweight lock. The lock state upgrades as contention intensifies, and it can only be upgraded, never downgraded. The transitions between states exist to improve the efficiency of acquiring and releasing the lock.

1. No lock state:

As the name implies, no lock is held, so there is no problem of lock contention and release;

Even without a lock, threads can still coordinate on a shared resource: only one thread succeeds in modifying it at a time, and the other threads retry, similar to the CAS mechanism;
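That succeed-or-retry behavior is the essence of CAS; a sketch using the real AtomicInteger.compareAndSet:

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasRetryDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // Only a thread that saw the expected current value succeeds;
        // a thread holding a stale expectation fails and must retry.
        boolean first  = value.compareAndSet(5, 6); // expected 5: succeeds
        boolean second = value.compareAndSet(5, 7); // value is now 6: fails

        System.out.println(first);       // true
        System.out.println(second);      // false
        System.out.println(value.get()); // 6
    }
}
```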

2. Biased lock state:

In most cases a lock is always held by the same thread, so the program can run in the biased-lock state. The Mark Word in the object header records the biased-lock state together with the owning thread's ID. When a thread executes the synchronized code, it checks whether the recorded thread ID equals its own, and if so it runs directly. If the IDs differ and there is contention between threads, the biased lock is revoked and inflates to a lightweight lock.

3. Lightweight lock state:

When multiple threads compete, a thread that fails to acquire the lock does not block; instead it performs a CAS spin, looping while it waits for the holder to release the lock. If a waiting thread's spin exceeds a set threshold (typically tuned to about the cost of one context switch, since the main cost of a heavyweight lock is context switching; since Java 6 the threshold is adaptive and controlled by the JVM), the lightweight lock inflates to a heavyweight lock.

4. Heavyweight lock status:

When contention is fierce or a thread spends too long executing the synchronized code, the lightweight lock inflates to a heavyweight lock. Threads then compete for the lock directly; threads that fail to acquire it are blocked, and a context switch is performed whenever threads are switched.