Recommended reading

  • Learn more about Java exceptions
  • Java Basics: The Object class
  • Do you understand these common interview questions about the Object class?
  • Common Spring APIs: the Spring class and related interview points
  • Java concurrent programming: Threads

Why do we have thread pools

In practice, threads consume significant system resources, and poor thread management easily leads to system problems. For this reason, most concurrency frameworks use thread pools to manage threads. The main benefits of managing threads with a thread pool are:

1. Reduced resource consumption: a thread pool fully reuses threads and lets them process tasks continuously, avoiding the overhead of frequently creating and destroying threads.

2. Improved response time: when a task arrives it can be executed immediately by an existing thread, without paying the cost of creating (and later destroying) a thread.

3. Improved thread manageability: creating threads without limit not only consumes system resources but can also trigger `OutOfMemoryError` and reduce system stability. A thread pool allows unified allocation, tuning, and monitoring of threads.

At the same time, Alibaba also mandates in its Java Development Manual that thread resources must be provided through thread pools, and it is not allowed to explicitly create threads in applications.

  • The core workflow of a thread pool:
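As a minimal runnable sketch of that workflow (the pool size and task count here are arbitrary illustrations), tasks submitted to a pool are picked up by a small set of reused worker threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolWorkflowDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Submit three tasks to a pool of two threads: the third waits in the queue
        // until one of the two worker threads becomes free
        for (int i = 1; i <= 3; i++) {
            final int taskId = i;
            pool.execute(() ->
                System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for queued tasks to finish
    }
}
```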

Seven ways to create a thread pool

  • (1) Executors.newFixedThreadPool:

Creates a fixed-size thread pool to control the number of concurrent threads; excess tasks wait in the queue. It uses a LinkedBlockingQueue, which here has no capacity bound given, so it is effectively an unbounded queue (capacity Integer.MAX_VALUE).

public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
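A short usage sketch (the Callable and its return value are made up for illustration): tasks submitted with submit() return a Future that blocks on get() until the result is ready.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FixedPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        // submit() accepts a Callable and returns a Future for its result
        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println(result.get()); // blocks until the task completes; prints 42
        pool.shutdown();
    }
}
```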
  • (2) Executors.newCachedThreadPool:

Creates a cacheable thread pool. If there are more threads than needed, idle threads are reclaimed after a period of time; if threads are insufficient, new ones are created. The maximum number of threads it can hold is Integer.MAX_VALUE, so it is prone to out-of-memory failures under heavy load.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
  • (3) Executors.newSingleThreadExecutor:

Creates a thread pool with a single thread, which guarantees first-in, first-out execution order. All tasks are executed by one unique worker thread; if that thread terminates due to an exception, a new worker thread replaces it. Each task is guaranteed to finish before the next one starts.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
  • (4) Executors.newScheduledThreadPool:

Creates a thread pool that can execute deferred tasks. Commands can be scheduled to run after a given delay or to execute periodically. It uses a DelayedWorkQueue, which provides the delay capability.

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
          new DelayedWorkQueue());
}
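A usage sketch (the delays and messages are arbitrary illustrations): schedule() runs a task once after a delay, while scheduleAtFixedRate() runs it periodically.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // Run once after a 100 ms delay
        scheduler.schedule(() -> System.out.println("delayed task"),
                           100, TimeUnit.MILLISECONDS);
        // Run repeatedly every 200 ms, starting immediately
        scheduler.scheduleAtFixedRate(() -> System.out.println("periodic task"),
                                      0, 200, TimeUnit.MILLISECONDS);
        Thread.sleep(500);    // let a few periodic runs happen, then stop
        scheduler.shutdown();
    }
}
```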
  • (5) Executors.newSingleThreadScheduledExecutor:

Create a single threaded pool that can perform deferred tasks.

public static ScheduledExecutorService newSingleThreadScheduledExecutor(ThreadFactory threadFactory) {
    return new DelegatedScheduledExecutorService
        (new ScheduledThreadPoolExecutor(1, threadFactory));
}
  • (6) Executors.newWorkStealingPool:

Creates a work-stealing thread pool (new in JDK 1.8). It avoids leaving the cores of a multi-core CPU idle: there are always live threads available for the CPU to run.

public static ExecutorService newWorkStealingPool(int parallelism) {
    return new ForkJoinPool
        (parallelism,
         ForkJoinPool.defaultForkJoinWorkerThreadFactory,
         null, true);
}
  • (7) ThreadPoolExecutor:

The original way to create a thread pool; its constructor takes 7 parameters, covered below.

ThreadPoolExecutor

1. The seven parameters of ThreadPoolExecutor

The Alibaba Java Development Manual explicitly states that thread pools must not be created through Executors but through ThreadPoolExecutor directly. Creating them this way makes the pool's operating rules explicit and avoids the risk of resource exhaustion.

public ThreadPoolExecutor(int corePoolSize,                  // number of core threads
                          int maximumPoolSize,               // maximum number of threads
                          long keepAliveTime,                // maximum idle time
                          TimeUnit unit,                     // time unit of keepAliveTime
                          BlockingQueue<Runnable> workQueue, // blocking queue for waiting tasks
                          ThreadFactory threadFactory,       // thread factory used to create threads
                          RejectedExecutionHandler handler)  // rejection policy

1. int corePoolSize: the number of core threads in the pool. If allowCoreThreadTimeOut is false (the default), core threads are not reclaimed even when idle; if it is true, idle core threads can also time out and be destroyed.

2. int maximumPoolSize: the maximum number of threads the pool can hold.

3. long keepAliveTime: the maximum time an idle thread waits for a new task before terminating.

4. TimeUnit unit: the time unit of keepAliveTime.

5. BlockingQueue<Runnable> workQueue: the queue that holds tasks until they are executed.

6. ThreadFactory threadFactory: the factory the pool uses to create new threads.

7. RejectedExecutionHandler handler: the handler invoked when the pool rejects a task (4 rejection policies).
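Putting the seven parameters together, a minimal construction sketch (the specific sizes and the bounded ArrayBlockingQueue are illustrative choices, not mandated values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L,                                  // keepAliveTime
                TimeUnit.SECONDS,                     // unit
                new ArrayBlockingQueue<>(10),         // workQueue (bounded, to avoid OOM)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler
        );
        pool.execute(() ->
            System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```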

2. The four rejection policies of ThreadPoolExecutor

  • AbortPolicy: when the thread pool is saturated, throws a RejectedExecutionException.
/**
 * Rejection policy 1: AbortPolicy -> beyond the maximum capacity,
 * throws java.util.concurrent.RejectedExecutionException
 */
public class demo1 {
    public static void main(String[] args) {
        // Customize the thread pool
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
                2, 5, 0L,
                TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(3),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy());

        try {
            // Maximum load = capacity of LinkedBlockingDeque + maximumPoolSize = 8
            for (int i = 1; i <= 9; i++) {
                // Exceeding that load is rejected with a RejectedExecutionException
                threadPool.execute(() -> {
                    System.out.println(Thread.currentThread().getName());
                });
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}
  • CallerRunsPolicy: when the thread pool is saturated, the thread that submitted the task runs it itself.
/**
 * Rejection policy 2: CallerRunsPolicy -> the task falls back to the submitting
 * (here: main) thread
 */
public class demo2 {
    public static void main(String[] args) {
        // Customize the thread pool
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
                2, 5, 0L,
                TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(3),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        try {
            // Maximum load = capacity of LinkedBlockingDeque + maximumPoolSize = 8
            for (int i = 1; i <= 9; i++) {
                threadPool.execute(() -> {
                    System.out.println(Thread.currentThread().getName());
                });
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}
  • DiscardPolicy: silently discards the task.
/**
 * Rejection policy 3: DiscardPolicy -> when the load exceeds the maximum,
 * discard the task without throwing an exception
 */
public class demo3 {
    public static void main(String[] args) {
        // Customize the thread pool
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
                2, 5, 0L,
                TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(3),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.DiscardPolicy());

        try {
            // Maximum load = capacity of LinkedBlockingDeque + maximumPoolSize = 8
            for (int i = 1; i <= 9; i++) {
                threadPool.execute(() -> {
                    System.out.println(Thread.currentThread().getName());
                });
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}
  • DiscardOldestPolicy: Discards the earliest enqueued task in the blocking queue.
/**
 * Rejection policy 4: DiscardOldestPolicy -> beyond the maximum capacity,
 * discard the longest-waiting task in the queue, then retry the submission;
 * no exception is thrown
 */
public class demo4 {
    public static void main(String[] args) {
        // Customize the thread pool
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(
                2, 5, 0L,
                TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(3),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.DiscardOldestPolicy());

        try {
            // Maximum load = capacity of LinkedBlockingDeque + maximumPoolSize = 8
            for (int i = 1; i <= 9; i++) {
                threadPool.execute(() -> {
                    System.out.println(Thread.currentThread().getName());
                });
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}

3. Workflow of ThreadPoolExecutor

  1. Determine whether the core pool is full; if not, create a new worker thread to execute the task.

  2. Determine whether the work queue is full; if not, add the newly submitted task to the work queue.

  3. Determine whether the entire pool is full; if not, create a new worker thread to execute the task; if it is full, apply the saturation (rejection) policy.

The workflow is shown as follows:

  • When a task is submitted to a thread pool for execution, the pool handles it as follows:

(1) When the number of threads in the thread pool is less than the corePoolSize, the thread pool creates new threads to perform the tasks submitted to the thread pool.

(2) When the number of threads in the thread pool is greater than or equal to the corePoolSize, the task will enter the blocking queue and wait for the core thread to execute. If the blocking queue does not reach its maximum capacity, the task is successfully queued. Otherwise, go to Step 3.

(3) When the number of threads in the thread pool is greater than or equal to the corePoolSize and the blocking queue is full, new non-core threads are created in the thread pool to perform tasks.

(4) If the number of threads in the pool is smaller than maximumPoolSize, the creation of the non-core thread succeeds. Otherwise, go to step (5).

(5) When the number of threads in the thread pool equals maximumPoolSize (and the queue is full), the pool invokes the rejection handler.
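The five steps above can be observed with a deliberately tiny pool (core size 1, maximum 2, queue capacity 1 are illustrative values): the first three tasks are accepted in exactly the order described, and the fourth is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkflowDemo {
    public static void main(String[] args) throws InterruptedException {
        // core=1, max=2, queue capacity=1 -> at most 3 concurrent tasks accepted
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch latch = new CountDownLatch(1);
        Runnable blocking = () -> {
            try { latch.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocking);  // step (1): runs on the core thread
        pool.execute(blocking);  // step (2): enters the queue
        pool.execute(blocking);  // steps (3)(4): queue full -> non-core thread created
        try {
            pool.execute(blocking); // step (5): pool saturated -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: pool is saturated");
        }
        latch.countDown();   // release the blocked tasks
        pool.shutdown();
    }
}
```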

4. Five states of a ThreadPoolExecutor

  1. RUNNING: Can accept new submitted tasks and also process tasks in the blocking queue.

  2. SHUTDOWN: The state in which new tasks are not accepted, but tasks that have been saved in the blocking queue can continue to be processed. Calling the shutdown() method while the thread pool is in the RUNNING state causes the thread pool to enter that state.

  3. STOP: New tasks cannot be accepted and tasks in the queue will not be processed. The thread that is processing the task will be interrupted. Calling the shutdownNow() method when the thread pool is in the RUNNING or SHUTDOWN state causes the thread pool to enter that state.

  4. TIDYING: when all tasks have terminated and the workerCount (number of active threads) is 0, the pool enters the TIDYING state and then calls the terminated() hook method.

  5. TERMINATED: entered after the terminated() method completes; by default, terminated() does nothing. The transition happens when all of the following conditions hold:

    • The thread pool is not in the RUNNING state;

    • The thread pool state is not already TIDYING or TERMINATED;

    • If the thread pool state is SHUTDOWN, the workerQueue is empty;

    • workerCount is 0;

    • Setting the TIDYING state succeeds.

5. Shutting down a ThreadPoolExecutor

  • The thread pool is closed with two methods, shutdown and shutdownNow:
    • shutdownNow first sets the pool state to STOP, then attempts to stop all threads, whether they are executing tasks or idle, and returns the list of tasks still waiting to execute;

    • shutdown only sets the pool state to SHUTDOWN and then interrupts all threads that are not executing tasks.
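A small sketch contrasting the two methods (the task count and sleep duration are arbitrary): shutdownNow returns the tasks that never started.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // One worker, three slow tasks: the first is handed to the worker,
        // the other two wait in the queue
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            });
        }
        // shutdownNow interrupts running workers and returns the queued tasks
        List<Runnable> pending = pool.shutdownNow();
        System.out.println("tasks never started: " + pending.size()); // prints 2
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```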

6. Properly sizing a ThreadPoolExecutor

  1. CPU intensive task

    • Use a relatively small thread pool, typically the number of CPU cores + 1. CPU-intensive tasks keep CPU usage high; enabling too many threads causes excessive context switching.
  2. IO intensive task

    • A slightly larger thread pool can be used, typically 2 * the number of CPU cores. IO-intensive tasks do not use much CPU, so you can make full use of CPU time by letting other threads work on other tasks while one thread waits for IO.
  3. Hybrid task

    • You can divide the work into IO-intensive and CPU-intensive tasks and use a separate thread pool for each. As long as the execution times of the two groups are not too different, this is more efficient than sequential execution. But if, after partitioning, the execution times differ by an order of magnitude, splitting is pointless: the tasks that finish first simply wait for the slower ones, the total time is still determined by the slower group, and the cost of splitting and merging the tasks is added on top, so the gains are not worth the losses.
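The sizing rules above can be computed from the runtime's reported core count; these formulas are the rules of thumb from this section, not hard guarantees:

```java
public class PoolSizing {
    public static void main(String[] args) {
        // Number of logical cores available to the JVM
        int cpuCores = Runtime.getRuntime().availableProcessors();
        int cpuBoundSize = cpuCores + 1; // CPU-intensive: roughly cores + 1
        int ioBoundSize = cpuCores * 2;  // IO-intensive: roughly 2 * cores
        System.out.println("CPU-bound pool size: " + cpuBoundSize);
        System.out.println("IO-bound pool size: " + ioBoundSize);
    }
}
```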

References

The Art of Concurrent Programming in Java

zhuanlan.zhihu.com/p/337544553