Introduction to JUC multithreading
1. Thread lifecycle
- New state: a thread enters the new state once its thread object is created: `Thread t = new MyThread();`
- Ready state: when the thread object's start() method is called (t.start();), the thread enters the ready state. A thread in the ready state is eligible to be scheduled by the CPU; it does not begin executing the moment t.start() is called.
- Running: when the CPU schedules a thread in the ready state, the thread actually executes, i.e. it enters the running state. Note: the ready state is the only entry into the running state; to execute, a thread must first be in the ready state.
- Blocked: a running thread temporarily relinquishes use of the CPU and stops executing for some reason. The thread then enters the blocked state and is not scheduled by the CPU again until it re-enters the ready state. Blocking falls into three types according to its cause:
  1. Waiting blocking: a thread in the running state executes wait(), which puts it into the waiting state.
  2. Synchronized blocking: a thread that fails to acquire a synchronized lock (because the lock is held by another thread) enters the synchronized-blocking state.
  3. Other blocking: a thread blocks because it calls sleep() or join(), or makes an I/O request. When sleep() times out, when the thread joined on terminates or the join() times out, or when I/O processing completes, the thread returns to the ready state.
- Dead: a thread ends its life cycle when its run() method finishes executing or exits because of an exception.
Transitions among the ready, running, and dead states in Java multithreading:
Transition from ready state to running state: when this thread gets a processor resource;
Running state transitions to ready state: When this thread actively invokes the yield() method or loses processor resources while running.
The running state transitions to the dead state: When the execution body of this thread completes or an exception occurs.
It is important to note that when a thread calls yield(), it transitions from the running state to the ready state, but there is some randomness in which ready thread the CPU schedules next. It is therefore possible for the CPU to schedule thread A again even right after thread A called yield().
The difference between wait and sleep is mainly about locks:
- wait: releases the lock.
- sleep: keeps the lock (the thread wakes up still holding it).
wait() is a method of Object; sleep() is a method of Thread.
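A minimal sketch of this difference, assuming a shared monitor object (the class name and one-second timings are illustrative): the waiter releases the monitor while waiting, so the sleeper can enter the same synchronized block; the sleeper holds the monitor for the whole second.

```java
public class WaitVsSleepDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(1000); // releases `lock` while waiting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "waiter").start();

        new Thread(() -> {
            synchronized (lock) { // can enter once the waiter has released the lock in wait()
                try {
                    Thread.sleep(1000); // keeps holding `lock` while asleep
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "sleeper").start();
    }
}
```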
Process and thread relationship:
Process: a process is a running instance of a program performing some independent function on a set of data; it is the basic unit of dynamic execution in the operating system.
**Threads:** a process can contain several threads, and always contains at least one. The process is the basic unit of resource allocation, while the thread is the basic unit of independent execution and scheduling.
Parallelism and concurrency
- Parallelism: multiple tasks execute truly simultaneously (for example, on multiple CPU cores), and the results are then combined.
- Concurrency: multiple threads access the same resource at the same time; multiple threads operate on a single point of contention.
3. Synchronized locks
Synchronized is the basis for implementing synchronization: every object in Java can be used as a lock. This takes three concrete forms:
- For a normal synchronized method, the lock is the current instance object.
- For a static synchronized method, the lock is the Class object of the current class.
- For a synchronized block, the lock is the object specified inside the synchronized parentheses.
When a thread attempts to access a synchronized block of code, it must first acquire the lock and release it when it exits or throws an exception. In other words:
If a non-statically synchronized method of an instance object acquires a lock, other non-statically synchronized methods of the instance object must wait for the method that acquires the lock to release the lock before acquiring the lock. However, non-statically synchronized methods of different instance objects can acquire their own locks without waiting for the non-statically synchronized methods of other instance objects to release the locks because they use the locks of different objects.
All statically synchronized methods use the same lock — the class object itself. Once a static synchronization method acquires a lock, all other static synchronization methods must wait for the method to release the lock.
There is no lock contention between static synchronized methods (which lock the Class object) and non-static synchronized methods (which lock the instance object), because they use different locks.
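A minimal sketch of the three forms (the class and method names are illustrative):

```java
public class SyncForms {
    private static final Object monitor = new Object();

    // Normal synchronized method: the lock is the current instance (this)
    public synchronized void instanceMethod() { /* ... */ }

    // Static synchronized method: the lock is SyncForms.class
    public static synchronized void staticMethod() { /* ... */ }

    // Synchronized block: the lock is the object inside the parentheses
    public void blockMethod() {
        synchronized (monitor) { /* ... */ }
    }
}
```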
4. JUC package locks
4.1 ReentrantLock
lock.lock(): acquires the lock; lock.unlock(): releases the lock.
A reentrant lock, also known as a recursive lock, means that when a thread acquires the lock in an outer method, the inner methods of that thread acquire the lock automatically. In Java, both ReentrantLock and synchronized are reentrant locks. One advantage of reentrancy is that it avoids a class of deadlocks to some extent (a thread cannot deadlock on a lock it already holds).
4.2 Fair lock
The thread that has waited longest for the lock acquires it; intuitively, the thread at the head of the queue gets the lock first.

```java
private ReentrantLock lock = new ReentrantLock(true); // true makes the lock fair
```
4.3 Timed waiting: lock.tryLock()
With no arguments, the result of the lock attempt is returned immediately: true means the lock was acquired successfully, false means acquisition failed. We can use this approach to avoid deadlock.
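A minimal sketch using the timed variant tryLock(time, unit) (the class name and one-second timeout are illustrative); bounding the wait breaks the "hold and wait" condition that deadlock requires:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) { // wait at most 1 second for the lock
            try {
                // critical section
            } finally {
                lock.unlock(); // always release in finally
            }
        } else {
            // could not get the lock: back off, retry, or give up
        }
    }
}
```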
4.4 Differences between ReentrantLock and synchronized:
(1) Synchronized is an exclusive lock whose locking and unlocking are automatic: easy to use, but not very flexible. ReentrantLock is also an exclusive lock, but locking and unlocking are manual: more work to use, but flexible.
(2) synchronized is reentrant, and since locking and unlocking are automatic, there is no need to worry about whether the lock is eventually released. ReentrantLock is also reentrant, but locking and unlocking are manual and must be paired the same number of times; otherwise other threads can never acquire the lock.
(3) synchronized does not respond to interruption: a thread that cannot obtain the lock keeps waiting. ReentrantLock can respond to interrupts.
4.5 ReentrantReadWriteLock: read/write lock
- ReentrantReadWriteLock.readLock(): after the read lock is acquired, other threads can still read but cannot write.
- ReentrantReadWriteLock.writeLock(): after the write lock is acquired, other threads can neither read nor write.
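A minimal sketch of a read/write-locked cache (the class name and map contents are illustrative): many threads may hold the read lock at once, but the write lock is exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock(); // shared: concurrent readers are allowed
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock(); // exclusive: no readers or writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```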
5. Communication between threads
5.1 Spurious wakeups: a thread wakes up although its condition is not met
Interrupts and spurious wakeups are possible, so the wakeup condition must be checked in a loop: an if checks the condition only once, while a while re-checks it every time the thread is woken up. Solution: do not test the wakeup condition with if; test it with while.
5.2 Thread Communication: Condition is used to implement thread communication
await(): wait | signal(): wake up one thread | signalAll(): wake up all threads
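A minimal sketch (the class name and increment/decrement protocol are illustrative); note the while loop guarding await(), which is exactly the defense against spurious wakeups described in 5.1:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class SharedNumber {
    private int number = 0;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();

    public void increment() throws InterruptedException {
        lock.lock();
        try {
            while (number != 0) {  // while, not if: re-check after every wakeup
                condition.await(); // wait until number is 0
            }
            number++;
            condition.signalAll(); // wake up all waiting threads
        } finally {
            lock.unlock();
        }
    }

    public void decrement() throws InterruptedException {
        lock.lock();
        try {
            while (number == 0) {
                condition.await();
            }
            number--;
            condition.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
```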
6. Concurrency container classes
6.1 The CopyOnWrite container
A CopyOnWrite container (COW container for short) is a container that copies on write. Intuitively: when we add an element, we do not add it directly to the current container; instead we first copy the current container to produce a new one, add the element to the new container, and then point the container reference at the new container. The advantage is that we can read the CopyOnWrite container concurrently without locking, because no elements are ever added to the container currently being read. The CopyOnWrite container is thus an instance of the read/write-separation idea: reading and writing happen on different containers. Under the hood it is an array.
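A minimal sketch of the copy-on-write idea, simplified from what CopyOnWriteArrayList does (this is not the JDK source): reads are lock-free on an immutable snapshot, while writes copy the array and then swap the reference.

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

public class SimpleCopyOnWriteList<E> {
    private volatile Object[] array = new Object[0]; // reads never lock
    private final ReentrantLock lock = new ReentrantLock();

    public boolean add(E e) {
        lock.lock(); // writers are serialized
        try {
            Object[] old = array;
            Object[] copy = Arrays.copyOf(old, old.length + 1); // copy first
            copy[old.length] = e;                               // modify the copy
            array = copy;                                       // then publish the new container
            return true;
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) array[index]; // lock-free read of an immutable snapshot
    }
}
```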
6.2 CopyOnWrite role
The CopyOnWrite concurrent container is used in concurrent scenarios with many reads and few writes, for example whitelists and blacklists. Suppose we run a search site where users enter keywords in a search box, but certain keywords are not allowed to be searched. These forbidden keywords are kept in a blacklist that is updated periodically.
6.3 CopyOnWrite shortcomings
- **Memory usage.** While the old container is still in use, a new container is created, so two copies of the objects are in memory at once. The footprint of large objects can be reduced by compressing the elements in the container; for example, if the elements are all decimal numbers, consider compressing them to base 36 or base 64. Alternatively, avoid the CopyOnWrite container and use another concurrent container such as ConcurrentHashMap.
- **Data consistency.** The CopyOnWrite container only guarantees eventual consistency, not real-time consistency. If you need written data to be readable immediately, do not use a CopyOnWrite container.
6.4 Class comparison
- ArrayList =========> CopyOnWriteArrayList
- Set ==============> CopyOnWriteArraySet
- HashMap ==========> ConcurrentHashMap
7. JUC's powerful helper classes
7.1 CountDownLatch
CountDownLatch is a very useful multithreaded control tool class with a wide range of applications.
For example, when an application is installed on a mobile phone, suppose five child threads must check service authorization. The main thread maintains a counter with an initial count of 5; each time the user grants an authorization, the counter is decremented by 1, and only when the count reaches 0 does the main thread proceed; until then it simply blocks and waits.
"Count down" in CountDownLatch means counting downward, and a "latch" is a door bolt. Together the name reads as a countdown bolt, with a bit of a "three, two, one, open sesame" feel. That is exactly how CountDownLatch works.
Commonly used methods (see the sketch after this list):
- new CountDownLatch(int count) // creates a latch with the given count
- countDown() // each call decrements the count by one
- await() // blocks the waiting thread (one or more) until the count reaches 0
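A minimal sketch of the authorization example above (the class name, thread bodies, and printed messages are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class AuthorizationCheck {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(5); // five authorization checks

        for (int i = 1; i <= 5; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("authorization " + id + " granted");
                latch.countDown(); // each grant decrements the count by one
            }).start();
        }

        latch.await(); // main thread blocks here until the count reaches 0
        System.out.println("all checks passed, starting the main process");
    }
}
```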
Interview: What is the difference between CountDownLatch and join?
After join() is called on a child thread, the calling thread blocks until the child finishes running. CountDownLatch instead uses a counter, which a child thread can decrement either when it finishes or at any point while it is running; that is, CountDownLatch lets await() return at any moment during a child thread's execution rather than only at its end. Moreover, when threads are managed by a thread pool, we submit Runnables directly to the pool and have no way to call join() on a thread. CountDownLatch therefore gives us more flexible control over thread synchronization than join().
7.2 CyclicBarrier
As the literal meaning suggests, the class name means "cyclic barrier", i.e. a reusable barrier. An optional barrier action runs exactly once per barrier trip; this barrier action is useful for updating shared state before any of the participating threads continue.
Common methods (a usage sketch follows this list):
- CyclicBarrier(int parties, Runnable barrierAction): creates a CyclicBarrier with the given number of participating threads and a barrier action. The action runs exactly once per barrier trip, before the subsequent work proceeds, and is executed by the last thread to reach the barrier point; shared state can be updated there.
- CyclicBarrier(int parties): creates a CyclicBarrier instance, specifying the number of threads that wait for each other.
- await(): calling this method indicates that the current thread has reached the barrier point; the thread blocks and sleeps, and is not awakened until all threads have reached the barrier point.
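A minimal sketch (the party count and messages are illustrative); the barrier action runs once per trip, executed by the last arriving thread:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        // The barrier action runs once per trip, executed by the last thread to arrive
        CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("all 3 threads arrived, go!"));

        for (int i = 1; i <= 3; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("thread " + id + " reached the barrier");
                    barrier.await(); // block until all parties arrive
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```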
Interview: the difference between CyclicBarrier and CountDownLatch?
CountDownLatch's counter can only be used once, whereas CyclicBarrier's counter can be reset with the reset() method and used multiple times, so CyclicBarrier can handle more complex scenarios. CountDownLatch lets one or more threads wait for a set of events to occur, while CyclicBarrier is used to wait for the other threads to run to the barrier position.
7.3 Semaphore
Semaphore literally means "signal quantity". A Semaphore can control the number of threads that access a resource simultaneously, which makes it ideal for situations where demand is high and resources are scarce. For example, given a resource pool with a finite number of resources, say N, each thread can acquire one resource; once all resources are allocated, subsequent threads must block and wait until a thread holding a resource releases it.
Common methods:
- public void acquire() throws InterruptedException // acquire: when a thread calls acquire, it either successfully obtains a permit (the count is decremented by one), or waits until another thread releases a permit or the wait times out.
- public void release() // release: increments the semaphore count by one and wakes up a waiting thread.
Semaphores are primarily used for two purposes:
- Mutually exclusive use of multiple copies of a shared resource.
- Controlling the number of concurrent threads: protecting a critical section from being entered by more than N threads at a time. A sketch follows.
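A minimal sketch of the resource-pool scenario, here six cars contending for three parking slots (the names, counts, and timings are illustrative):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ParkingLot {
    public static void main(String[] args) {
        Semaphore slots = new Semaphore(3); // three parking slots, six cars

        for (int i = 1; i <= 6; i++) {
            final int car = i;
            new Thread(() -> {
                try {
                    slots.acquire(); // take a permit, or block until one is free
                    try {
                        System.out.println("car " + car + " parked");
                        TimeUnit.SECONDS.sleep(1);
                        System.out.println("car " + car + " left");
                    } finally {
                        slots.release(); // return the permit, waking a waiting thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```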
8. Callable interface
8.1 Comparison between Callable and Runnable
8.2 Use of Callable
- Create an implementation class of Callable and override the call() method; call() is the thread body and has a return value.
- Create an instance of the Callable implementation class.
- Instantiate the FutureTask class with the Callable instance as its argument; the FutureTask wraps the return value of the Callable object's call() method.
- Create a Thread object with the FutureTask object as its argument, and start the thread.
- Obtain the thread's return value through the get() method of the FutureTask object.
These steps are illustrated by the sketch below.
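A minimal sketch walking through the steps above (the task body and values are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // Steps 1-2: a Callable whose call() is the thread body and returns a value
        Callable<Integer> task = () -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        };

        // Step 3: FutureTask wraps the Callable and will hold its return value
        FutureTask<Integer> futureTask = new FutureTask<>(task);

        // Step 4: FutureTask implements Runnable, so it can be passed to a Thread
        new Thread(futureTask).start();

        // Step 5: get() blocks until call() completes, then returns its result
        System.out.println(futureTask.get()); // 5050
    }
}
```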
8.3 Differences between the Callable and Runnable interfaces
Similarities: both are interfaces, both can be used to write multithreaded programs, and both start threads via Thread.start().
Differences:
1. The entry method is different: one is run(), the other is call().
2. Runnable has no return value; Callable can return an execution result, and it is generic.
3. The call() method of the Callable interface can throw exceptions upward; Runnable's run() method can only handle exceptions internally and cannot propagate them.
FutureTask (via the Future interface) provides ways to check whether a computation is complete, to wait for its completion, and to retrieve its result.
8.4 Several ways to obtain multiple threads
- Traditionally: extend the Thread class or implement the Runnable interface, both under the java.lang package.
- Since Java 5: implement the Callable interface, or obtain threads from a Java thread pool; both are JUC approaches.
9. BlockingQueue
9.1 What is BlockingQueue
In the multithreaded world, blocking means that under certain conditions a thread is suspended; once the condition is met, the suspended thread is automatically awakened.
BlockingQueue is an interface under java.util.concurrent, designed to transfer data between threads efficiently and safely. As the word "blocking" suggests, access to a blocking queue may block under certain circumstances. There are two blocking cases:
- an enqueue operation is attempted when the queue is full;
- a dequeue operation is attempted when the queue is empty.
Therefore, when a thread tries to enqueue into an already full queue, it blocks unless another thread dequeues an element; similarly, when a thread tries to dequeue from an empty queue, it blocks unless another thread enqueues an element.
Blocking queues are mainly used in producer/consumer scenarios, typically with one thread producing and another consuming.
Why do we need BlockingQueue?
The nice thing about BlockingQueue is that you do not need to worry about when to block or wake up threads; BlockingQueue does all of that for you. Before the concurrent package was released, programmers had to control these details themselves in multithreaded code, paying particular attention to efficiency and thread safety, which added considerable complexity to programs.
9.2 Getting to know BlockingQueue
The BlockingQueue in the java.util.concurrent package is an interface that inherits from the Queue interface, which inherits from Collection.
The BlockingQueue interface has seven main implementation classes:
- ArrayBlockingQueue: a bounded blocking queue backed by an array.
- LinkedBlockingQueue: a bounded blocking queue backed by a linked list (its size defaults to Integer.MAX_VALUE).
- PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
- DelayQueue: an unbounded blocking delay queue implemented on top of a priority queue.
- SynchronousQueue: a blocking queue that stores no elements; each insert operation must wait for a corresponding remove.
- LinkedTransferQueue: an unbounded blocking queue backed by a linked list.
- LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
The methods of the BlockingQueue interface can be divided into the following four categories:

| | Throws exception | Special value | Blocks | Times out |
| --- | --- | --- | --- | --- |
| Insert | add(e) | offer(e) | put(e) | offer(e, time, unit) |
| Remove | remove() | poll() | take() | poll(time, unit) |
| Check | element() | peek() | not applicable | not applicable |
Throws exception
add() returns true on normal execution; element() (which does not remove) and remove() operate on the head element of the blocking queue. When the blocking queue is full, inserting another element with add() throws IllegalStateException: Queue full. When the blocking queue is empty, removing an element with remove() throws NoSuchElementException, and checking with element() also throws NoSuchElementException.
Special value
The insert method offer() returns true on success and false on failure; the remove method poll() returns the head element on success, or null when the queue is empty; the check method peek() likewise returns the head element, or null when the queue is empty.
Blocks
If the attempted operation cannot be executed immediately, the method call blocks until it can be. When the blocking queue is full, put() blocks the producer thread until the data can be put in or the thread is interrupted and exits; when the blocking queue is empty, take() blocks the consumer thread until an element becomes available.
Times out
If the attempted operation cannot be executed immediately, the method call blocks until it can be, but waits no longer than the given timeout. It returns a specific value to indicate whether the operation succeeded (typically true/false).
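A minimal producer/consumer sketch (the queue capacity and task counts are illustrative): put() blocks the producer when the queue is full, and take() blocks the consumer when it is empty.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(3); // bounded, capacity 3

        new Thread(() -> { // producer
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i); // blocks when the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        new Thread(() -> { // consumer
            try {
                for (int i = 0; i < 10; i++) {
                    Integer value = queue.take(); // blocks when the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}
```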
10. ThreadPool
10.1 Why use thread pools
Thread pool advantages: a thread pool's main job is to control the number of running threads. Tasks are placed in a queue during processing and executed once a thread is available; if the number of tasks exceeds the maximum number of threads, the excess tasks wait in the queue, and when a thread finishes executing, the next task is taken from the queue and executed.
Its main characteristics are: thread reuse; controlling the maximum number of concurrent threads; and thread management.
- Lower resource consumption: reusing already-created threads reduces the cost of thread creation and destruction.
- Faster response: when a task arrives, it can execute immediately without waiting for a thread to be created.
- Better thread manageability: threads are a scarce resource; created without limit, they not only consume system resources but also reduce system stability. A thread pool allows uniform allocation, tuning, and monitoring.
10.2 The 7 important parameters for creating a thread pool
- corePoolSize: the number of resident core threads in the pool.
- maximumPoolSize: the maximum number of threads that can run in the pool concurrently; must be greater than or equal to 1.
- keepAliveTime: when the number of threads in the pool exceeds corePoolSize and a thread has been idle for keepAliveTime, the excess threads are destroyed until only corePoolSize threads remain.
- unit: the time unit of keepAliveTime.
- workQueue: the queue of tasks that have been submitted but not yet executed.
- threadFactory: the thread factory that creates the pool's worker threads; usually the default.
- handler: the rejection policy, which specifies how to reject submitted Runnables when the queue is full and the number of worker threads is greater than or equal to maximumPoolSize.
- Note: in practice we usually configure the thread pool ourselves and never use the Executors utility class to create one. A sketch follows.
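A minimal sketch of a hand-built pool using all seven parameters (the concrete values are illustrative); note that maximumPoolSize (5) plus the queue capacity (3) can absorb exactly the 8 submitted tasks, so the rejection policy never fires here:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize: resident threads
                5,                                   // maximumPoolSize: upper bound
                60L, TimeUnit.SECONDS,               // keepAliveTime + unit for excess idle threads
                new ArrayBlockingQueue<>(3),         // workQueue: bounded task queue
                Executors.defaultThreadFactory(),    // threadFactory: usually the default
                new ThreadPoolExecutor.AbortPolicy() // handler: rejection policy
        );
        try {
            for (int i = 1; i <= 8; i++) {
                final int id = i;
                pool.execute(() -> System.out.println(
                        Thread.currentThread().getName() + " handles task " + id));
            }
        } finally {
            pool.shutdown();
        }
    }
}
```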
10.3 Underlying Working Principles of thread Pools
Specific process (important things are worth saying three times: important, important, important):
- After a thread pool is created, the number of threads in the pool is zero.
- When the execute() method is called to submit a task, the thread pool makes the following judgments:
  - if the number of running threads is less than corePoolSize, create a thread to run this task immediately;
  - if the number of running threads is greater than or equal to corePoolSize, put the task into the queue;
  - if the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread to run this task immediately;
  - if the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the pool invokes its saturation (rejection) policy.
- When a thread completes a task, it takes the next task from the queue and executes it.
- When a thread has been idle for more than a certain time (keepAliveTime), the pool judges: if the number of currently running threads is greater than corePoolSize, this thread is stopped. So after all tasks in the pool are complete, the pool eventually shrinks back to corePoolSize threads.
10.4 Thread Pool Rejection Policy
Normally, when you create a thread pool, you give it a bounded task queue to avoid exhausting resources. In this mode, if the task queue is full and the pool has already created its maximum number of threads, new tasks cannot be handled; you then need to set the RejectedExecutionHandler parameter of ThreadPoolExecutor to a sensible rejection policy to deal with the "overloaded" pool.
ThreadPoolExecutor has the following built-in rejection policies:
- AbortPolicy (default): directly throws RejectedExecutionException, preventing the system from running normally.
- CallerRunsPolicy: "the caller runs", a moderation mechanism that neither discards tasks nor throws exceptions; instead it pushes some tasks back to the caller, reducing the flow of new tasks.
- DiscardOldestPolicy: discards the task that has waited longest in the queue, then adds the current task to the queue and tries to submit it again.
- DiscardPolicy: silently discards the task it cannot handle, with no processing and no exception. If losing tasks is acceptable, this is the best policy.
All the built-in policies above implement the RejectedExecutionHandler interface; you can also implement RejectedExecutionHandler yourself to define your own rejection policy, as sketched below.
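A minimal sketch of a custom policy (the class name and logging behavior are illustrative):

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// A custom policy: log the rejection and drop the task
public class LoggingRejectedHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("task " + r + " rejected, pool state: " + executor);
        // alternatives: persist the task, retry later, or run it in the caller's thread
    }
}
```

An instance of it is passed as the handler argument of the ThreadPoolExecutor constructor.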
11. Underlying principles of multithreading and high concurrency
11.1 Classification of locks

| Lock type (weight, from low to high) | Description |
| --- | --- |
| Optimistic lock | Assumes by default that no other thread has modified the data; when modifying, it compares the version and applies the change only if the version matches. Backed by CAS underneath. |
| Spinlock | Underneath: do { ... } while (!CAS) |
| Pessimistic lock | Assumes by default that the resource can be modified by other threads at any time; e.g. reentrant locks. |
| Exclusive lock (mutex) | Non-reentrant; e.g. the mutex API use case. |
11.2 Java Memory Model (JMM)
The JMM is the Java Memory Model. Because memory-access logic differs across hardware vendors and operating systems, code that runs well and is thread-safe on one system can exhibit all kinds of problems on another. The Java Memory Model exists to mask these system and hardware differences, so that the same code achieves the same memory-access behavior on different platforms. The JMM started with JSR-133, introduced in Java 5, and has since matured and been refined. The JMM divides memory into main memory and working memory. This division is a different dimension from the JVM runtime areas (heap, stack, method area); roughly, main memory corresponds to the object-instance part of the Java heap, and working memory corresponds to parts of the stack. At a lower level, main memory corresponds to the hardware's physical memory, and working memory corresponds to registers and caches.
The memory model has three major features:
• Atomicity: indivisibility. For example, a = 0 (where a is not a long or double) is indivisible, so we say the operation is atomic. By contrast, a++ is really a = a + 1, which is divisible, so it is not an atomic operation. Non-atomic operations are not thread-safe and need synchronization or locking to become atomic. If an operation is atomic, we say it has atomicity. The java.util.concurrent package provides many atomic classes; consult the API for how to use them, for example AtomicInteger, AtomicLong, AtomicReference, and so on.
• **Visibility:** each thread has its own working memory, so when one thread modifies a variable, other threads may not observe that it has been modified. In Java, volatile, synchronized, and final can all provide visibility. Volatile makes what it modifies visible, but it does not guarantee atomicity.
• **Orderliness:** orderliness in Java is thread-relative. All operations within a single thread are ordered; observed from another thread, operations may appear out of order, because there is a delay between the JMM's working memory and main memory and because Java reorders some instructions. Volatile and synchronized both guarantee ordering; many programmers only understand these two keywords as mutual exclusion and do not fully appreciate that they also suppress instruction reordering.
11.3 Volatile Keyword
1. Function: it forces the thread's CPU-cached copy of a variable to be synchronized to main memory, ensuring that other threads can see the data modified under volatile.
2. How it works: the Java language provides a weaker synchronization mechanism, volatile variables, to ensure that updates to a variable are propagated to other threads. When a variable is declared volatile, both the compiler and the runtime notice that the variable is shared, and therefore do not reorder operations on it with other memory operations. Volatile variables are not cached in registers (working memory) where they would be hidden from other processors, so a read of a volatile variable always returns the most recently written value.
Access to volatile variables is not locked and therefore does not block the executing thread, making volatile variables a lighter synchronization mechanism than the synchronized keyword.
When a variable is defined as volatile, it has two properties:
• Make this variable visible to all threads.
• It does not guarantee atomicity of operations on the variable.
Volatile performance: reading a volatile variable costs about the same as a normal variable, but writes are slower because many memory-barrier instructions must be inserted into the native code to keep the processor from executing out of order.
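A minimal visibility sketch (the class name and one-second delay are illustrative): without volatile, the worker's loop may never observe the write to flag.

```java
import java.util.concurrent.TimeUnit;

public class VolatileDemo {
    // Without volatile, the worker thread might never see flag become true
    private static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (!flag) {
                // busy-wait: each iteration reads the latest value of flag
            }
            System.out.println("worker saw the flag and stopped");
        }).start();

        TimeUnit.SECONDS.sleep(1);
        flag = true; // this write becomes visible to the worker thread
    }
}
```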
11.4 CAS: Optimistic spin lock
CAS: Compare And Swap. A CAS operation takes three basic operands: a memory address A, an old value B, and a new value C. It compares the content at memory address A with the given old value B; if they are equal, it replaces the content with the new value C supplied by the instruction; if they are not equal, the update fails. It is similar to changing a login password: the password can be changed to a new one only if the old password you enter matches the one stored in the database; otherwise it cannot be changed.
**CAS is an optimistic-locking algorithm for solving multithreaded concurrency-safety problems.** Before updating a shared variable, it compares whether the current value equals the expected previous value; if so, it performs the update; if not, it loops and retries (called spinning) until the current value matches the expected one.
The Unsafe class is the core CAS class; it provides hardware-level atomic operations (currently virtually all CPUs support hardware-level CAS operations).

```java
// object, field offset within the object, expected value, new value
public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);
```
Disadvantages of CAS:
- High overhead: under high concurrency, repeatedly failing attempts to update a variable put great pressure on the CPU.
- The ABA problem: if a variable is changed from A to B and then back to A, its value equals the expected value A, yet it cannot be determined that it was never modified; the CAS operation succeeds even though an ABA modification happened.
- No atomicity for code blocks: the CAS mechanism guarantees atomic operations on a single variable only, not atomicity for an entire block of code.
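A minimal sketch of the spin-until-CAS-succeeds pattern using AtomicInteger (AtomicInteger.getAndIncrement does essentially this internally; the class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        int prev;
        do {
            prev = count.get();                         // read the current value
        } while (!count.compareAndSet(prev, prev + 1)); // spin until the CAS succeeds
    }

    public int get() {
        return count.get();
    }
}
```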
11.5 AQS: the AbstractQueuedSynchronizer abstract queued synchronizer
11.5.1 Framework structure
AQS framework structure is as follows:
A new thread first tries to acquire the lock; if it fails to get the lock, it prepares to join the queue, but tries to take the lock once more before actually enqueuing.
11.5.2 Reentrancy at the bottom layer
- Lock operation
AQS maintains a shared resource variable state with volatile semantics (providing visibility across threads) and a first-in-first-out thread wait queue; threads enter this queue when they block while contending for the state resource. The reentrancy check in the code is an if that determines whether the current thread is the thread that already holds the lock.
11.5.3 The idea of implementing a lock on top of AQS
Most of the synchronization logic is already implemented in AQS. A custom synchronizer that inherits from it only needs to implement the logic for acquiring and releasing state, mainly the following methods:
• tryAcquire(int): exclusive mode. Attempts to acquire the resource; returns true on success, false on failure.
• tryRelease(int): exclusive mode. Attempts to release the resource; returns true on success, false on failure.
• tryAcquireShared(int): shared mode. Attempts to acquire the resource; a negative number means failure, 0 means success with no resources left, and a positive number means success with resources remaining.
• tryReleaseShared(int): shared mode. Attempts to release the resource; returns true if waiting nodes may be woken up after the release, false otherwise.
• isHeldExclusively(): whether the current thread holds the resource exclusively. Only needs to be implemented if Condition is used.
In other words:
AQS supports exclusive locks (only one thread can acquire the lock, e.g. ReentrantLock) and shared locks (multiple threads can acquire the lock, e.g. Semaphore and CountDownLatch).
11.5.4 Implementing an exclusive lock based on AQS
You can adapt the example in the official JDK documentation for AQS to write your own exclusive-lock code.
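A minimal non-reentrant mutex sketch, adapted from the pattern shown in the AQS Javadoc (state 0 = unlocked, 1 = locked; the class name is illustrative):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A non-reentrant mutual-exclusion lock built on AQS
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {           // CAS 0 -> 1 means we got the lock
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);                              // back to unlocked
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }
    public void unlock()     { sync.release(1); }
    public boolean tryLock() { return sync.tryAcquire(1); }
}
```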
11.6 Underlying principles of ReentrantLock
Now take ReentrantLock as an example to illustrate how AQS is applied at the bottom of a lock. The ReentrantLock class contains three AQS-related implementation classes:
- the abstract class Sync;
- the unfair-lock implementation class NonfairSync;
- the fair-lock implementation class FairSync.
This can be seen in the ReentrantLock source code.
11.6.1 The Sync abstract class
Internal methods mainly include:
```java
/** Custom method: provides a fast path for the unfair-lock implementation */
abstract void lock();

/**
 * Custom generic method: the tryAcquire methods of both subclasses
 * rely on this non-fair tryLock logic
 */
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) { // No thread currently holds the lock
        if (compareAndSetState(0, acquires)) { // Acquire the lock via CAS
            setExclusiveOwnerThread(current);  // Mark the current thread as exclusive owner
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) { // Current thread already holds the lock: reentry
        int nextc = c + acquires; // state accumulates acquires on each reentry
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

/** Implements the AQS lock-release method */
protected final boolean tryRelease(int releases) {
    int c = getState() - releases; // state decreases by releases on each release
    if (Thread.currentThread() != getExclusiveOwnerThread())
        // A thread that does not hold the lock may not release it
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) { // When state drops to 0, the lock is fully released
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

protected final boolean isHeldExclusively() {
    // While we must in general read state before owner,
    // we don't need to do so to check if current thread is owner
    return getExclusiveOwnerThread() == Thread.currentThread();
}
```
11.6.2 NonfairSync
```java
/** Sync object for non-fair locks */
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    /**
     * Performs lock. Try immediate barge, backing up to normal
     * acquire on failure.
     */
    final void lock() {
        if (compareAndSetState(0, 1)) // CAS state from 0 to 1
            setExclusiveOwnerThread(Thread.currentThread()); // Got the lock
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        // Delegates to the Sync abstract class's nonfairTryAcquire method
        return nonfairTryAcquire(acquires);
    }
}
```
The acquire(1) method is AQS's own implementation; its essence is to call the tryAcquire method. If tryAcquire fails to obtain the lock, the thread enters the wait queue and is blocked there.
11.6.3 FairSync
```java
/** Sync object for fair locks */
static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;

    final void lock() {
        acquire(1);
    }

    /**
     * Fair version of tryAcquire. Don't grant access unless
     * recursive call or no waiters or is first.
     */
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            if (!hasQueuedPredecessors() && // Consult the ordered wait queue first
                compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) { // Reentry
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}
```
hasQueuedPredecessors() permits lock acquisition in the following cases:
- when the wait queue contains only the current thread, the lock is acquired directly;
- when there is more than one thread in the queue but the next thread in line is the thread currently applying for the lock, the lock is also acquired.