Hello everyone. This is a full set of Java interview review notes, and this is the fourth installment, covering multi-threading and concurrency.

Beat the interviewer series – Multi-threaded concurrent series


1. Differences between processes and threads; thread-safe vs. non-thread-safe

A process is a running instance of a program and the basic unit of resource allocation. A process can contain multiple threads, and those threads share the process's heap and method area.

A thread is the smallest unit of CPU task scheduling; each thread has its own program counter, virtual machine stack, and native method stack.

**Thread-safe:** multiple threads can operate on the same resource without interfering with one another's results.

**Non-thread-safe:** multiple threads operating on the same resource can interfere with one another (for example, lost updates).
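To make the distinction concrete, here is a minimal sketch (the class and method names are made up for illustration): two threads increment two shared counters, and the unsynchronized counter loses updates while the synchronized one does not.

```java
public class CounterDemo {
	private int unsafeCount = 0;
	private int safeCount = 0;

	public void unsafeIncrement() { unsafeCount++; }          // not thread-safe: ++ is read-modify-write, not atomic
	public synchronized void safeIncrement() { safeCount++; } // thread-safe: guarded by the instance lock

	public static void main(String[] args) throws InterruptedException {
		CounterDemo demo = new CounterDemo();
		Runnable task = () -> {
			for (int i = 0; i < 100_000; i++) {
				demo.unsafeIncrement();
				demo.safeIncrement();
			}
		};
		Thread t1 = new Thread(task);
		Thread t2 = new Thread(task);
		t1.start(); t2.start();
		t1.join(); t2.join();
		System.out.println("unsafe = " + demo.unsafeCount); // usually less than 200000
		System.out.println("safe   = " + demo.safeCount);   // always 200000
	}
}
```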

2. Thread states; the roles of and differences between start, run, wait, notify, yield, sleep, and join

Thread states: new (created), ready, running, blocked, dead.

| Method | Role | Distinction |
| --- | --- | --- |
| start | Starts the thread; the JVM then automatically schedules its run() method | The thread enters the ready state |
| run | Holds the thread's logic; executed when the JVM schedules the thread | The thread is in the running state |
| sleep | Puts the currently executing thread to sleep (suspends execution) | Does not release the lock |
| wait | Makes the current thread wait | Releases the synchronization lock |
| notify | Wakes up a single thread waiting on this object's monitor | Wakes one thread |
| notifyAll | Wakes up all threads waiting on this object's monitor | Wakes multiple threads |
| yield | Pauses the current thread so that threads of equal priority may run | Called on the Thread class (static method) |
| join | Makes the current thread wait until the thread whose join() was called terminates | Called on a thread object |
3. wait, notify, notifyAll

Where does the thread block, and where does it wake up? Why must these methods be called inside a synchronized block?

Block:

Calling wait() blocks the current thread: it is placed in the Object's wait queue, relinquishes all synchronization claims it currently holds on that Object, and is excluded from thread scheduling until one of the following occurs:

(1) Another thread calls the Object's notify() method, and this thread happens to be the one that is woken up.

(2) Another thread calls the Object's notifyAll() method.

Wake up:

The thread is removed from the wait queue and becomes schedulable again. It competes with other threads for the object's monitor in the usual way; once it reacquires the monitor, its previous state is restored and it resumes execution at the point where wait() was called.

Why a synchronized block is required:

Because wait() is a method on Object that releases the lock on the current object, the current object's monitor must be held when wait() is called; otherwise an IllegalMonitorStateException is thrown. That is why wait() must be called inside a synchronized method or block.
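A minimal sketch of the pattern (names are illustrative): both the wait() and the notify() calls sit inside synchronized blocks on the same lock object, and the waiter re-checks its condition in a loop to guard against spurious wakeups.

```java
public class WaitNotifyDemo {
	private static final Object lock = new Object();
	private static boolean ready = false;

	public static void main(String[] args) {
		new Thread(() -> {
			synchronized (lock) {          // must hold lock's monitor before wait()
				while (!ready) {
					try {
						lock.wait();       // releases the monitor while waiting
					} catch (InterruptedException e) {
						Thread.currentThread().interrupt();
						return;
					}
				}
				System.out.println("woken up exactly where wait() was called");
			}
		}).start();

		synchronized (lock) {              // must hold lock's monitor before notify()
			ready = true;
			lock.notify();                 // wakes one thread waiting on this monitor
		}
	}
}
```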

4. Daemon threads and thread interruption

Daemon thread:

Calling t.setDaemon(true) marks t as a daemon thread (also called a service thread). If the main thread starts t as a daemon, t serves the main thread: once all non-daemon threads finish, the JVM exits and t is terminated with it.


```java
public class ThreadDaemon extends Thread {

	public ThreadDaemon(String name) {
		super(name);
	}

	@Override
	public void run() {
		while (true) {
			System.out.println(Thread.currentThread().getName() + " thread running...");
			try {
				Thread.sleep(1000);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
	}

	public static void main(String[] args) {
		Thread t1 = new ThreadDaemon("Thread one");
		Thread t2 = new ThreadDaemon("Thread two");
		// Set as daemon threads
		t1.setDaemon(true);
		t2.setDaemon(true);
		// Start the threads
		t1.start();
		t2.start();
		// Exit the main thread after 2s; the daemon threads die with it
		try {
			Thread.sleep(2000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
	}
}
```

Thread interrupt:

Calling t.interrupt() does not stop the thread immediately; it only sets the thread's interrupt status. The thread itself decides when to check the flag and stop.

```java
public class ThreadInterrupt extends Thread {

	public ThreadInterrupt(String name) {
		super(name);
	}

	@Override
	public void run() {
		while (!interrupted()) { // check the interrupt status
			System.err.println(Thread.currentThread().getName() + " thread running...");
			try {
				Thread.sleep(1000);
			} catch (InterruptedException e) {
				// note: a sleeping thread receives the interrupt as this exception,
				// which also clears the interrupt status
				e.printStackTrace();
			}
		}
	}

	public static void main(String[] args) {
		Thread t1 = new ThreadInterrupt("Thread one");
		Thread t2 = new ThreadInterrupt("Thread two");
		// Start both threads, then interrupt t1
		t1.start();
		t2.start();
		t1.interrupt();
	}
}
```

5. Java's optimistic locking mechanism; the idea behind CAS? Its weaknesses? Is it atomic, and how is that guaranteed?

Java's optimistic locking mechanism:

Optimistic locking is the opposite of pessimistic locking: it optimistically assumes the data will not be modified concurrently, so it does not lock. Instead, it checks at update time whether the data has been changed by someone else. Optimistic locking is generally implemented in two ways: a version-number mechanism and CAS. It suits read-heavy, write-light scenarios and increases system throughput. In Java, the atomic variable classes under java.util.concurrent.atomic are implemented with CAS, one form of optimistic locking.

Optimistic locking is mostly implemented with a data-version mechanism. Typically, in a database-table-based solution, a "version" field is added to the table. The version number is read together with the data, and whenever the data is updated the version number is incremented. At commit time, the version of the submitted data is compared with the current version in the table: the update is applied only if the submitted version is newer; otherwise the submitted data is considered stale and rejected.
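A plain-Java sketch of the version-number idea (the class and field names are illustrative; against a database, the check-and-bump below corresponds to an `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` statement, and synchronized here merely stands in for the atomicity of that single SQL update):

```java
public class VersionedRecord {
	private int version = 0;
	private String data = "";

	public synchronized int readVersion() { return version; } // read the version together with the data
	public synchronized String readData() { return data; }

	// The update succeeds only if nobody has bumped the version since we read it.
	public synchronized boolean tryUpdate(int expectedVersion, String newData) {
		if (version != expectedVersion) {
			return false; // stale: another writer got there first; the caller should re-read and retry
		}
		data = newData;
		version++;
		return true;
	}
}
```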

CAS ideas:

CAS stands for compare-and-swap. It is a well-known lock-free algorithm that synchronizes threads without using locks; because the thread is never blocked, it is also called non-blocking synchronization. A CAS operation involves three operands:

The memory value V to read and write;

The expected value A to compare against;

The new value B to write.

V is updated to B if and only if V equals A; otherwise nothing is done. The compare-and-replace is a single atomic operation (compare V with A; if equal, replace V with B). In practice CAS is usually wrapped in a spin, i.e. it retries continuously until it succeeds.
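CAS in practice, using the JDK's AtomicInteger: compareAndSet(expected, update) is the "compare V with A, replace with B" step, and the do/while loop is the spin that retries on failure.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
	private static final AtomicInteger value = new AtomicInteger(0);

	public static void increment() {
		int current;                      // A: the value we expect to find
		do {
			current = value.get();        // read the memory value V
		} while (!value.compareAndSet(current, current + 1)); // write B iff V == A
	}

	public static void main(String[] args) throws InterruptedException {
		Runnable r = () -> { for (int i = 0; i < 100_000; i++) increment(); };
		Thread t1 = new Thread(r);
		Thread t2 = new Thread(r);
		t1.start(); t2.start();
		t1.join(); t2.join();
		System.out.println(value.get()); // 200000, with no locks involved
	}
}
```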

Disadvantages:

Under high concurrency, conflicts are frequent; if CAS keeps failing, it keeps retrying, which wastes CPU resources.

Atomicity:

CAS guarantees atomicity only for operations on a single variable (in Java it is combined with volatile, which provides visibility, to achieve thread safety). CAS is helpless when an operation spans multiple variables. In addition, CAS requires hardware support; ordinary Java code cannot issue it directly and must go through the atomic classes under the atomic package, which limits its flexibility.

6. How is synchronized used? How is it implemented underneath?

Usage: there are three main forms

Modifying an instance method: locks the current object instance; a thread must acquire that instance's lock before entering the synchronized code

Modifying a static method: locks the current class (the Class object), which applies to all instances of the class, because static members do not belong to any particular instance. So if thread A calls a non-static synchronized method on an instance while thread B calls a static synchronized method of the class that instance belongs to, both are allowed and no mutual exclusion occurs: the static synchronized method takes the class lock, while the non-static synchronized method takes the instance lock.

Modifying a code block: specifies a lock object and locks that given object; a thread must acquire the given object's lock before entering the synchronized block.

Summary: synchronized locks only two kinds of resources: objects and classes.
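The three forms side by side (the class name is illustrative):

```java
public class SyncForms {
	private final Object mutex = new Object();

	public synchronized void instanceMethod() {
		// locks `this`, the current object instance
	}

	public static synchronized void staticMethod() {
		// locks SyncForms.class, shared by all instances
	}

	public void block() {
		synchronized (mutex) {
			// locks the specified object `mutex`
		}
	}
}
```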

Underlying implementation: the object header is what we need to focus on; it is the foundation on which synchronized implements its lock, because lock acquisition, holding, and release all involve the object header. The object header consists mainly of the Mark Word and the Class Metadata Address. The Mark Word stores the object's hashCode, lock information, generational age, and GC flags. The Class Metadata Address is a type pointer to the class metadata, which the JVM uses to determine which class the object is an instance of.

Locks also have different states. Before JDK 6 there were only two: unlocked and locked (heavyweight). Since JDK 6, synchronized locks have four states: no lock, biased lock, lightweight lock, and heavyweight lock. The lock's type and state are recorded in the object header's Mark Word, which the JVM reads when acquiring or upgrading the lock.

Each lock corresponds to a monitor object; in the HotSpot VM this is implemented by ObjectMonitor (written in C++). Every object has a monitor associated with it, and the association can be realized in several ways: the monitor may be created and destroyed together with the object, or generated lazily when a thread first tries to acquire the object's lock. While a monitor is held by a thread, the object is locked.

7. How is ReentrantLock used? How is it implemented? How does it differ from synchronized?

ReentrantLock is a mutual-exclusion lock provided under the java.util.concurrent.locks package. Compared with synchronized, the ReentrantLock class offers several advanced features, chiefly the following three:

1. Interruptible waiting: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting, avoiding the kind of indefinite blocking synchronized can cause. This is done with lock.lockInterruptibly().

2. Fair locking: when multiple threads wait for the same lock, they must acquire it in the order in which they requested it. Synchronized locks are unfair; ReentrantLock is unfair by default but can be constructed as a fair lock.

3. Multiple conditions bound to one lock: a single ReentrantLock can be bound to several Condition objects at once, so threads can be woken up in groups; with synchronized you can only wake either one arbitrary thread (notify) or all of them (notifyAll).

Usage:

API-level mutual exclusion: call lock() and unlock(), pairing unlock() with a try/finally block, as in the sketch below.
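A minimal sketch of that pattern (names are illustrative); unlock() goes in finally so the lock is released even if the critical section throws, and passing true to the constructor would make the lock fair:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
	private final ReentrantLock lock = new ReentrantLock(); // unfair by default; new ReentrantLock(true) is fair
	private int count = 0;

	public void increment() {
		lock.lock();        // acquire outside the try, per the usual convention
		try {
			count++;        // critical section
		} finally {
			lock.unlock();  // always release
		}
	}
}
```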

Underlying implementation:

ReentrantLock is implemented essentially as a spin lock: it acquires the lock by calling CAS operations in a loop. It performs well because it avoids the blocking that would force a thread into the kernel. Avoiding thread blocking into the kernel is a key point when analyzing and understanding lock design.

Difference from synchronized:

1. Level: synchronized is a Java keyword implemented at the JVM level through monitor objects (the monitorenter and monitorexit bytecode instructions), and an object's wait/notify can only be called inside synchronized blocks or methods; ReentrantLock is an API-level lock (java.util.concurrent.locks.Lock) provided since JDK 1.5.

2. Implementation: synchronized involves lock upgrading - no lock, biased lock, spin/lightweight lock, and finally a heavyweight lock that asks the OS to block the thread; ReentrantLock uses a CAS (compareAndSwap) spin mechanism to guarantee the atomicity of its lock-state operations, plus volatile to guarantee visibility.

3. Manual release: synchronized does not require the user to release the lock; the JVM releases it automatically once the synchronized code finishes. ReentrantLock requires the user to release the lock manually; failing to do so may lead to deadlock. The typical pattern is lock() and unlock() paired with a try/finally block, which also makes release more flexible.

4. Interruptibility: synchronized is uninterruptible; the lock is released only when the code throws an exception or completes normally. ReentrantLock can be interrupted, either by setting a timeout with tryLock(long timeout, TimeUnit unit), or by using lockInterruptibly() in the code block and calling the waiting thread's interrupt() method.

5. Fairness: a boolean passed to the ReentrantLock constructor selects the mode; with no argument or false it is an unfair lock (the default), and with true it is a fair lock.

8. What is the difference between fair and unfair locks? Why are fair locks less efficient?

Fair lock:

A fair lock follows the FIFO (first in, first out) principle: the thread that arrives first gets the resource first, and later arrivals queue up and wait.

Advantages: every thread in the queue eventually gets the resource; none starves.

Disadvantages: throughput drops; every thread except the first in the queue blocks, and waking blocked threads is expensive for the CPU.

Unfair lock:

When multiple threads try to acquire the lock, each first tries to grab it directly; a thread that fails then enters the waiting queue, while a thread that succeeds takes the lock immediately.

Advantages: reduces the overhead of waking threads, so overall throughput is higher; the CPU does not have to wake every waiting thread, and fewer threads are woken.

Disadvantages: as you may have noticed, threads in the queue may fail to acquire the lock for a long time, or even starve.

Why fair locks are less efficient:

Even when the lock is free, a fair lock must first check whether any other thread is waiting; if so, the current thread is suspended and appended to the queue, and the thread at the head of the queue is woken instead. Compared with an unfair lock, this costs one extra suspend-and-wake cycle.

This thread-switching overhead is the real reason unfair locks outperform fair ones: an unfair lock reduces the probability that a thread is suspended at all, since a later-arriving thread has some chance of grabbing the lock and escaping the suspension cost entirely.

9. Lock optimization: spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks, lightweight locks, and heavyweight locks explained

Lock optimization:

[1] Reduce lock time: code that does not need to run synchronized should be moved out of the synchronized block, so the lock is released as soon as possible;

[2] Reduce lock granularity: the idea is to split one physical lock into multiple logical locks to increase parallelism and reduce lock contention, trading space for time. Many data structures in Java use this approach to make concurrent operations more efficient. For example:

ConcurrentHashMap:

Before JDK 1.8, ConcurrentHashMap used segment locking: it maintains a Segment<K,V>[] segments array, and each Segment in turn holds a HashEntry<K,V>[] array. When put executes, only the Segment the key hashes into is locked; the other Segments are untouched. The length of the segments array therefore bounds how many threads can write at the same time, which raises the level of concurrency.

[3] Lock coarsening: in most cases we want to minimize lock granularity, but lock coarsening deliberately enlarges it;

The granularity of locks needs to be coarsened in the following scenarios:

If we have a loop whose body must be locked, we should put the lock outside the loop; otherwise every iteration enters and exits the critical region, which is very inefficient;

[4] Use read/write lock:

ReentrantReadWriteLock is a read/write lock: read operations take the read lock and can proceed concurrently, while write operations take the write lock and are exclusive (single-writer), as sketched below.
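A short sketch (names are illustrative): many readers may hold the read lock at once, while the write lock is exclusive.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
	private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
	private int value;

	public int get() {
		rwLock.readLock().lock();       // shared: concurrent reads are allowed
		try {
			return value;
		} finally {
			rwLock.readLock().unlock();
		}
	}

	public void set(int newValue) {
		rwLock.writeLock().lock();      // exclusive: only one writer at a time
		try {
			value = newValue;
		} finally {
			rwLock.writeLock().unlock();
		}
	}
}
```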

[5] Use CAS:

If the operation to be synchronized executes very quickly and thread contention is low, CAS is more efficient, because locking causes thread context switches; when a context switch costs more than the synchronized operation itself and contention is low, volatile + CAS is a very efficient choice;

Spin lock:

The principle of a spin lock is simple: if the thread holding the lock will release it within a very short time, the threads contending for it have no need to switch between kernel and user mode to enter a blocked, suspended state. They just wait a little (spin), and as soon as the holder releases the lock they acquire it immediately, avoiding the cost of switching between user and kernel threads.

Disadvantages: if another thread holds the lock for a long time, the spinning thread occupies the CPU without doing useful work, bringing heavy performance overhead. The default number of spins is 10.

Adaptive spin lock:

A further optimization of the spin lock above: the number of spins is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock's owner, which mitigates the drawbacks of the plain spin lock.

Lock elimination:

Lock elimination means the JIT (just-in-time) compiler removes locks on code that is written with synchronization but where it detects no possible contention for shared data, for example when the lock object never escapes the current thread.

Lock coarsening:

If we have a loop whose body must be locked, we should put the lock outside the loop; otherwise every iteration enters and exits the critical region, which is very inefficient (see the sketch below).
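What that looks like done by hand (names are illustrative; the JIT compiler can apply the same transformation automatically when it sees adjacent lock/unlock pairs on the same object):

```java
public class Coarsening {
	private final Object lock = new Object();
	private int sum = 0;

	public void fineGrained(int[] data) {
		for (int x : data) {
			synchronized (lock) {   // enters and exits the critical region every iteration
				sum += x;
			}
		}
	}

	public void coarsened(int[] data) {
		synchronized (lock) {       // one lock acquisition for the whole loop
			for (int x : data) {
				sum += x;
			}
		}
	}
}
```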

Biased locking:

The so-called bias means partiality: the lock favors the thread that currently holds it. When that same thread tries to acquire the lock again, it gets it by the fastest path, without any monitor operations, which improves performance. In most cases a lock is never actually contended, so the heavier machinery is wasted; a biased lock exploits this to improve performance.

Heavyweight locks:

Locking and unlocking a heavyweight lock resembles the lightweight case, except that a thread that loses the contention blocks, and blocked threads are woken only when the holder releases the lock. Because no spinning is involved, it does not burn CPU, so heavyweight locks suit synchronized blocks that execute for a long time.

10. Java memory model

The Java Memory Model (JMM) is a mechanism and specification that shields the memory-access differences between hardware platforms and operating systems, guaranteeing that Java programs see consistent memory effects on every platform.

JMM is a specification to solve the problems caused by inconsistencies in local memory data, compiler reordering of code instructions, and out-of-order execution of code by processors when multiple threads communicate through shared memory. The goal is to ensure atomicity, visibility, and order in concurrent programming scenarios.

Beyond defining this set of rules, the Java memory model provides primitives that encapsulate the underlying implementation for developers to use directly. As mentioned above, concurrent programming must address atomicity, visibility, and ordering.

Atomicity:

Java provides two bytecode instructions, monitorenter and monitorexit, to guarantee atomicity; their Java-level equivalent is synchronized. Therefore synchronized can be used in Java to ensure that the operations inside methods and code blocks are atomic.

Visibility:

The Java memory model uses main memory as the transfer medium: after a variable is modified, the new value is synchronized back to main memory, and before a variable is read its value is refreshed from main memory. A variable declared volatile is synchronized to main memory immediately after modification and re-read from main memory before each use, so volatile guarantees the visibility of variables in multithreaded code. Besides volatile, the Java keywords synchronized and final can also provide visibility; they are just implemented differently.

Ordering:

In Java, synchronized and volatile can both ensure ordering between threads. The difference: volatile forbids instruction reordering, while synchronized guarantees that only one thread at a time operates on the guarded code.

11. What does volatile do? How is it implemented underneath? What role does volatile play in the singleton pattern?

Function:

Ensure “visibility” of data: Volatile variables ensure that each thread gets the latest value of the variable, preventing dirty reads.

Disallow instruction reordering: Instruction reordering may result in inconsistent results in multi-threaded operations

Underlying implementation:

Comparing the assembly code generated with and without volatile, we find that the volatile version has an extra instruction carrying a lock prefix.

The LOCK prefix directive actually acts as a memory barrier (also known as a memory fence) that provides three functions:

1) It ensures that instruction reordering does not place subsequent instructions in front of the memory barrier, nor does it place previous instructions behind the memory barrier; That is, by the time the memory barrier instruction is executed, all operations in front of it have been completed;

2) It forces changes to the cache to be written to main memory immediately;

3) If it is a write operation, it invalidates the corresponding cache line in the other CPU.

The role of volatile in singleton mode:

It prevents other threads from reading instance as non-null while the object instance references has not yet finished initializing (which could otherwise happen because object allocation and constructor execution can be reordered).

```java
class Singleton {
    private volatile static Singleton instance = null; // volatile forbids instruction reordering

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, holding the class lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

12. The idea behind AQS; an introduction to the AQS-based Lock, CountDownLatch, CyclicBarrier, and Semaphore

AQS, in full AbstractQueuedSynchronizer, is an abstract queued synchronizer: a framework for building locks and synchronizers. With AQS, many synchronizers can be built simply and efficiently, for example the AQS-based Lock (ReentrantLock), CountDownLatch, CyclicBarrier, and Semaphore.

The core idea of AQS: if the requested shared resource is free, the requesting thread becomes the active worker thread and the resource is set to the locked state. If the resource is occupied, a mechanism is needed to block waiting threads and hand them the lock when they are woken. AQS implements this with a CLH queue (a virtual doubly linked queue): threads that cannot acquire the lock immediately are enqueued. A minimal mutex built on AQS is sketched below.
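A minimal, non-reentrant mutex built on AQS, following the pattern shown in the AbstractQueuedSynchronizer Javadoc: state 0 means free and 1 means held; a thread whose tryAcquire fails is queued and parked by the framework.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
	private static class Sync extends AbstractQueuedSynchronizer {
		@Override
		protected boolean tryAcquire(int arg) {
			return compareAndSetState(0, 1);  // CAS the shared state from free to held
		}

		@Override
		protected boolean tryRelease(int arg) {
			setState(0);                      // mark free; AQS then wakes a queued thread
			return true;
		}

		@Override
		protected boolean isHeldExclusively() {
			return getState() == 1;
		}
	}

	private final Sync sync = new Sync();

	public void lock()   { sync.acquire(1); }
	public void unlock() { sync.release(1); }
}
```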

Lock (ReentrantLock): a reentrant lock. Besides everything synchronized can do, it provides features such as responding to interruption while waiting for the lock, polling lock requests, and timed lock attempts, all of which help avoid multithreaded deadlock. It is an unfair lock by default but can be initialized as a fair one; lock() and unlock() acquire and release it.

CountDownLatch:

Through a countdown counter, it blocks one or more threads until other threads each complete their operations. It is commonly used to make one thread wait: that thread proceeds only after the countdown reaches zero.

Suppose we have this scenario: the class monitor and 6 other students are studying in a classroom. How do we ensure the monitor closes the classroom door only after the other 6 have left?

```java
public class CountDownLanchDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " left the classroom...");
            }, String.valueOf(i)).start();
        }
        System.out.println("The monitor closed the door and left the classroom...");
    }
}
```

Output:

```
0 left the classroom...
1 left the classroom...
2 left the classroom...
3 left the classroom...
The monitor closed the door and left the classroom...
5 left the classroom...
4 left the classroom...
```

Here the monitor may close the door before the others have left. We can use a CountDownLatch to control it:

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLanchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(6);
        for (int i = 0; i < 6; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " left the classroom...");
                countDownLatch.countDown(); // count down after leaving
            }, String.valueOf(i)).start();
        }
        countDownLatch.await();             // block until the count reaches 0
        System.out.println("The monitor closed the door and left the classroom...");
    }
}
```

Output:

```
0 left the classroom...
1 left the classroom...
2 left the classroom...
3 left the classroom...
4 left the classroom...
5 left the classroom...
The monitor closed the door and left the classroom...
```

CyclicBarrier:

Literally, a barrier that can be reused cyclically. It makes a group of threads block when they reach the barrier (also called a synchronization point); only when the last thread arrives does the barrier open, and all the threads blocked at it then continue. A thread enters the barrier by calling the CyclicBarrier's await() method.

Suppose each car seats exactly four people, and a car departs only when it is full.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierDemo {
    public static void main(String[] args) {
        CyclicBarrier cyclicBarrier = new CyclicBarrier(4, () -> {
            System.out.println("The bus is full, let's go...");
        });
        for (int i = 0; i < 8; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " starts boarding...");
                try {
                    cyclicBarrier.await(); // block until 4 threads have arrived
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}
```

Output result:

```
Thread-0 starts boarding...
Thread-1 starts boarding...
Thread-3 starts boarding...
Thread-4 starts boarding...
The bus is full, let's go...
Thread-5 starts boarding...
Thread-7 starts boarding...
Thread-2 starts boarding...
Thread-6 starts boarding...
The bus is full, let's go...
```

Semaphore:

Semaphore serves two main purposes: mutual exclusion over multiple copies of a shared resource, and controlling the number of threads that run concurrently.

Suppose there are 3 parking spaces and 6 cars compete for them; the semaphore allows a fixed number of threads to access the resource at the same time.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
  public static void main(String[] args) {
      Semaphore semaphore = new Semaphore(3); // 3 permits = 3 parking spaces
      for (int i = 0; i < 6; i++) {
          new Thread(() -> {
              try {
                  semaphore.acquire(); // take a permit
                  System.out.println(Thread.currentThread().getName() + " grabs a parking space...");
                  Thread.sleep(3000);
                  System.out.println(Thread.currentThread().getName() + " leaves the parking space");
              } catch (InterruptedException e) {
                  e.printStackTrace();
              } finally {
                  semaphore.release(); // return the permit
              }
          }).start();
      }
  }
}
```
Output:

```
Thread-1 grabs a parking space...
Thread-2 grabs a parking space...
Thread-0 grabs a parking space...
Thread-2 leaves the parking space
Thread-0 leaves the parking space
Thread-3 grabs a parking space...
Thread-1 leaves the parking space
Thread-4 grabs a parking space...
Thread-5 grabs a parking space...
Thread-3 leaves the parking space
Thread-5 leaves the parking space
Thread-4 leaves the parking space
```

13. The thread pool constructor's 7 parameters, how the pool processes tasks, and rejection policies

```java
/** The thread pool constructor's 7 parameters */
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
        TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory,
        RejectedExecutionHandler handler) {
    if (corePoolSize < 0 || maximumPoolSize <= 0 || maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}
```

Parameter description:

| Parameter | Role |
| --- | --- |
| corePoolSize | Core pool size |
| maximumPoolSize | Maximum pool size |
| keepAliveTime | Maximum time an idle thread beyond corePoolSize may stay alive; allowCoreThreadTimeOut(true) makes the timeout apply to core threads as well |
| unit | TimeUnit of keepAliveTime |
| workQueue | Blocking queue that holds waiting tasks |
| threadFactory | Factory used to create new threads |
| handler | Rejection policy (RejectedExecutionHandler): when submitted tasks exceed maximumPoolSize plus the workQueue capacity, the task is handed to the handler |
Thread rejection policy:

When the pool has no threads free to serve new tasks and the waiting queue is full, new tasks can no longer be accepted; we then need a rejection-policy mechanism to handle the situation reasonably.

The built-in rejection policy in the JDK is as follows:

AbortPolicy: throws a RejectedExecutionException directly, preventing the system from proceeding normally.

CallerRunsPolicy: as long as the pool is not shut down, runs the rejected task directly in the caller's thread. The task is not actually dropped, but the performance of the submitting thread will very likely degrade sharply.

DiscardOldestPolicy: discards the oldest request in the queue (the task that would execute next) and tries to resubmit the current task.

DiscardPolicy: This policy silently discards unprocessed tasks without any processing. This is the best solution if you allow task loss.

Thread processing task flow (a construction sketch follows the list):

  1. When the number of threads in the pool is smaller than corePoolSize, a newly submitted task triggers the creation of a new thread to run it, even if idle threads exist.
  2. When the pool has reached corePoolSize, a newly submitted task is put into the workQueue and waits to be scheduled.
  3. When the workQueue is full and maximumPoolSize is greater than corePoolSize, a new thread is created to run the newly submitted task.
  4. When the number of submitted tasks exceeds maximumPoolSize plus the queue capacity, new tasks are handled by the RejectedExecutionHandler.
  5. When the pool holds more than corePoolSize threads and a thread has been idle for keepAliveTime, that idle thread is shut down.
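Putting the seven parameters together (the sizes, queue capacity, and class name here are illustrative choices, not recommendations): with core size 2, max size 4, and a bounded queue of 8, submitting 20 tasks exercises every step above, including rejection, which CallerRunsPolicy handles by running overflow tasks in the submitting thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
	public static void main(String[] args) {
		ThreadPoolExecutor pool = new ThreadPoolExecutor(
				2,                                          // corePoolSize
				4,                                          // maximumPoolSize
				60L, TimeUnit.SECONDS,                      // keepAliveTime + unit
				new ArrayBlockingQueue<>(8),                // bounded workQueue (avoids OOM)
				Executors.defaultThreadFactory(),           // threadFactory
				new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

		for (int i = 0; i < 20; i++) {
			final int id = i;
			pool.execute(() -> System.out.println(
					Thread.currentThread().getName() + " runs task " + id));
		}
		pool.shutdown();
	}
}
```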

The Executors class provides factory methods for several thread pool types:

  • Executors.newSingleThreadExecutor(): a pool with a single thread, so all submitted tasks run sequentially; suited to scenarios where tasks must execute one at a time
  • Executors.newCachedThreadPool(): when many tasks need to run concurrently, idle old threads are reused for new tasks; a thread idle for more than 60 seconds is terminated and removed from the pool; suited to many short-lived asynchronous tasks or lightly loaded services
  • Executors.newFixedThreadPool(): a pool with a fixed number of threads; with no tasks to run, the threads simply wait; suited to long-running tasks, with better performance
  • Executors.newScheduledThreadPool(): a pool for running tasks on a schedule

However, all of the above factory methods have drawbacks:

FixedThreadPool and SingleThreadExecutor: the allowed request-queue length is Integer.MAX_VALUE, so requests can pile up until OOM. CachedThreadPool and ScheduledThreadPool: the allowed number of threads is Integer.MAX_VALUE, so a huge number of threads may be created, leading to OOM.

How to set the thread pool size?

CPU-intensive:

CPU-intensive means the task does heavy computation without blocking, so the CPU runs at full speed. CPU-intensive tasks call for as few threads as possible; a common choice is a pool of (number of CPU cores + 1) threads.

IO-intensive:

Since threads running IO-intensive tasks are not executing on the CPU the whole time, more threads can be allocated, for example 2 x CPU cores. Alternatively, use the formula: CPU cores / (1 - blocking coefficient), with the blocking coefficient between 0.8 and 0.9. A small sketch applying these formulas follows.
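Applying the two formulas (the 0.9 blocking coefficient is just an assumed value in the stated 0.8-0.9 range):

```java
public class PoolSizing {
	public static void main(String[] args) {
		int cores = Runtime.getRuntime().availableProcessors();

		int cpuBound = cores + 1;                                 // CPU-intensive: cores + 1
		double blockingCoefficient = 0.9;                         // assumed: tasks block 90% of the time
		int ioBound = (int) (cores / (1 - blockingCoefficient));  // IO-intensive: cores / (1 - blocking coefficient)

		System.out.println("CPU-bound pool size: " + cpuBound);   // e.g. 9 on an 8-core machine
		System.out.println("IO-bound pool size:  " + ioBound);    // e.g. 80 on an 8-core machine
	}
}
```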

Finally, I wish you all success soon: a satisfying offer, fast promotions and raises, and the peak of your career. If this helped, please give it a like, a comment, and a share; see you next time.
