A computer’s CPU resources are limited, and there is no positive correlation between the processing speed of a task and the number of threads. When the number of threads is too high, the CPU has to switch between different threads frequently, which can lead to a decrease in processing performance. The maximum number of threads in a thread pool is set in advance considering various factors, such as hardware conditions, type of business, and so on.

When we request a thread from a thread pool of a fixed size, how does the thread pool handle the request when there are no free resources in the pool? Reject the request or queue the request? How are the strategies implemented?

In fact, handling these cases is not complicated: the underlying data structure is a queue.

Introduction to Java thread pools

1. The role of thread pools

Thread pools limit the number of threads executing in the system, and they reduce thread creation and destruction by reusing threads.

2. Main classes

  • Executor: the basic task-execution interface
  • ExecutorService: the thread pool interface
  • ThreadPoolExecutor: the thread pool implementation
  • Executors: a factory for common thread pools

3. Common thread pools

Configuring a thread pool is a complex process, so you can use existing thread pool factories to generate common thread pools:

  1. newFixedThreadPool creates a fixed-size thread pool that controls the maximum number of concurrent threads; excess tasks wait in the queue. To make proper use of resources, the pool size is usually set to the number of CPU cores on the current machine, obtained via Runtime.getRuntime().availableProcessors().
  2. newCachedThreadPool creates a cacheable thread pool. If the pool grows beyond what the workload needs, idle threads are reclaimed; if no idle thread is available, a new one is created.
  3. newScheduledThreadPool creates a fixed-size thread pool that supports scheduled and periodic task execution.
  4. newSingleThreadExecutor creates a single-threaded pool that uses one worker thread to execute tasks, ensuring that all tasks run in the specified order (FIFO, LIFO, priority).
    import java.util.concurrent.Executor;
    import java.util.concurrent.Executors;

    public class MyThreadPool {
        public static void main(String[] args) {
            int num = Runtime.getRuntime().availableProcessors();
            Executor executor = Executors.newFixedThreadPool(num);
            for (int i = 0; i < num; i++) {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        System.out.println("I am a child thread!!");
                    }
                });
            }
        }
    }

If we step into Executors.newFixedThreadPool(num), we find that it creates a new LinkedBlockingQueue:

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
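Because that LinkedBlockingQueue is unbounded, a flood of tasks can grow it without limit and exhaust memory. A minimal sketch of the safer alternative, constructing ThreadPoolExecutor directly with a bounded ArrayBlockingQueue (the class name and sizes here are illustrative, not from the original article):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    // Same core behavior as newFixedThreadPool, but the work queue is
    // bounded, so a flood of tasks cannot grow it without limit.
    public static ThreadPoolExecutor create(int nThreads, int queueCapacity) {
        return new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(queueCapacity),
                // When both pool and queue are full, run the task on the
                // caller's thread, which applies natural back-pressure.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = create(2, 10);
        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With a full queue, CallerRunsPolicy slows the submitter down instead of throwing; an AbortPolicy would reject outright. Either way the queue stays bounded.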

Thread pools and queues combined: implementing log processing

Not only is the JDK's own thread pool implemented on top of a queue; we can also combine thread pools and queues ourselves to build useful functionality.

Usually we put the tasks to be executed into a queue and let a thread pool execute them, a pattern common in crawlers and logging. Let's start with a logging example that combines a thread pool and a queue.

    import com.swagger.demo.Entity.LogContentEntity;
    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.stereotype.Component;
    import javax.servlet.http.HttpServletRequest;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    @Configuration
    @Aspect
    @Component
    public class AopLogConfig implements Runnable {

        @Autowired
        private HttpServletRequest request;

        private LinkedBlockingQueue<LogContentEntity> logQueue;

        // Runs when Spring creates the bean at startup:
        // initialize the queue and the thread pool.
        public AopLogConfig() {
            logQueue = new LinkedBlockingQueue<LogContentEntity>(3000);
            int num = Runtime.getRuntime().availableProcessors();
            ExecutorService executor = Executors.newFixedThreadPool(num);
            for (int i = 0; i < num; i++) {
                executor.execute(this);
            }
        }

        @Before("execution(public * com.swagger.demo.controller..*.*(..))")
        public void doBefore(JoinPoint joinPoint) throws Exception {
            // The information recorded in the log can be adjusted as needed
            LogContentEntity log = new LogContentEntity();
            log.setHttpMethod(request.getMethod());
            log.setUrl(request.getRequestURL().toString());
            log.setIp(request.getRemoteAddr());
            log.setContent("test Log Content");
            // Enqueue the log entry; the pool threads process it asynchronously
            logQueue.put(log);
        }

        @Override
        public void run() {
            try {
                while (true) {
                    // take() blocks while the queue is empty
                    LogContentEntity entry = logQueue.take();
                    // Log-processing logic goes here
                    System.out.println(entry.toString());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Thread pool + queue: executing queued tasks by priority

    import java.util.concurrent.TimeUnit;

    public class MyPriorityTask implements Runnable, Comparable<MyPriorityTask> {

        private int priority;
        private String name;

        public MyPriorityTask(String name, int priority) {
            this.name = name;
            this.priority = priority;
        }

        public void run() {
            System.out.printf("MyPriorityTask: %s Priority: %d\n", name, priority);
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        // Higher priority values sort first in the PriorityBlockingQueue
        public int compareTo(MyPriorityTask o) {
            if (this.getPriority() < o.getPriority()) {
                return 1;
            }
            if (this.getPriority() > o.getPriority()) {
                return -1;
            }
            return 0;
        }

        public int getPriority() {
            return priority;
        }
    }

    import java.util.concurrent.PriorityBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class Main {
        public static void main(String[] args) {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                    2, 2, 1, TimeUnit.SECONDS,
                    new PriorityBlockingQueue<Runnable>());
            for (int i = 0; i < 100; i++) {
                MyPriorityTask task = new MyPriorityTask("Task " + i, 0);
                executor.execute(task);
                System.out.println(executor.getTaskCount());
            }
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // Submit a second batch with a higher priority; these jump the queue
            for (int i = 100; i < 108; i++) {
                MyPriorityTask task = new MyPriorityTask("Task " + i, 1);
                executor.execute(task);
                System.out.println(executor.getTaskCount());
            }
            executor.shutdown();
            try {
                executor.awaitTermination(1, TimeUnit.DAYS);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.printf("Main: End of the program.\n");
        }
    }

Some pitfalls of using thread pools

While thread pools are a powerful mechanism for building multithreaded applications, they are not without drawbacks. Applications built on thread pools face the same concurrency risks as any other multithreaded application, such as synchronization errors and deadlock, plus a few risks specific to thread pools: pool-induced deadlock, resource exhaustion, and thread leakage.

1. The deadlock

Any multithreaded application runs the risk of deadlock. A set of processes or threads is deadlocked when each is waiting for an event that only another member of the set can produce. The simplest case: thread A holds an exclusive lock on object X and is waiting for the lock on object Y, while thread B holds the exclusive lock on object Y and is waiting for the lock on object X. Unless there is some way to break out of the lock wait (which Java's locking mechanism does not support), the deadlocked pair will wait forever.

While deadlock is a risk in any multithreaded program, thread pools introduce another form of it: all threads in the pool are blocked waiting for the result of a task still sitting in the queue, but that task can never execute because no thread is free. This can occur when a thread pool is used to simulate interacting objects that send queries to one another and execute them as queued tasks, with the querying object waiting synchronously for a response.
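A minimal demonstration of this pool-induced deadlock (the class name is illustrative, and a bounded get() is used so the example terminates instead of hanging forever):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PoolDeadlockDemo {
    // A single-thread pool runs an outer task that waits for an inner task
    // submitted to the same pool. The inner task can never start, because
    // the only thread is busy waiting for it.
    public static boolean innerTaskStarved() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> outer = pool.submit(() -> {
                Future<String> inner = pool.submit(() -> "done"); // queued behind us
                try {
                    // Without the timeout this get() would block forever.
                    inner.get(200, TimeUnit.MILLISECONDS);
                    return false;
                } catch (TimeoutException e) {
                    return true; // starved: no free thread to run the inner task
                }
            });
            return outer.get();
        } catch (Exception e) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println("inner task starved: " + innerTaskStarved());
    }
}
```

With a larger pool the same pattern deadlocks as soon as every thread is simultaneously waiting on a queued task.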

2. Insufficient resources

One advantage of thread pools is that they generally perform well relative to other scheduling mechanisms, but this depends on the pool size being configured sensibly. Threads consume significant resources, including memory: besides the memory for the thread object itself, each thread needs execution call stacks, which can be large, and the JVM will typically create a native thread for each Java thread, consuming additional system resources. And although the cost of a single thread switch is small, heavy context switching across many threads can noticeably affect application performance.

If the pool is too large, the resources consumed by all those threads can significantly impact the system. Time is wasted switching between threads, and having more threads than you actually need can itself cause resource starvation, because resources occupied by idle pool threads could be put to better use by other tasks. Beyond the threads themselves, the work done while servicing a request may need further resources such as JDBC connections, sockets, or file handles. These are also limited, and too many concurrent requests for them can cause failures, such as an inability to allocate a JDBC connection.

3. Concurrency errors

Thread pools and other queuing mechanisms rely on wait() and notify(), which are tricky to use correctly. If coded improperly, notifications can be lost, leaving pool threads idle even though there are tasks in the queue waiting to be processed. Great care is needed with these primitives; even experts make mistakes with them. Fortunately, there are time-tested, off-the-shelf implementations, such as the java.util.concurrent package discussed below, so you do not have to write this code yourself.
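Instead of hand-rolling wait()/notify(), a BlockingQueue gives you the same handoff with the notification logic already correct. A small producer/consumer sketch (class and method names are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SafeHandoff {
    // put() and take() encapsulate the wait/notify handshake, so a
    // "lost notification" bug cannot occur in our code.
    public static int consumeAll(int n) {
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    queue.put(i); // blocks if the queue is full (unbounded here)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) {
                sum += queue.take(); // blocks until an element is available
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(consumeAll(100)); // sum of 0..99
    }
}
```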

4. Thread leakage

One of the major risks in thread pools is thread leakage, which occurs when a thread is removed from the pool to perform a task but is never returned when the task completes. One way this happens is when a task throws a RuntimeException or an Error: if the pool code does not catch it, the thread simply exits and the pool's size is permanently reduced by one. When this happens often enough, the pool eventually ends up empty and the system stalls because no threads are left to process tasks. Effectively stalled tasks, such as those waiting forever for a resource that may never appear or for input from a user who has gone home, have the same effect as a leak: if a thread is permanently occupied by such a task, it might as well have been removed from the pool. Tasks like these should either be given their own thread outside the pool or have their wait time bounded.
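One common defense is to wrap every task so an unexpected exception is caught and recorded instead of escaping the worker thread. (Modern ThreadPoolExecutor does replace a worker that dies from an uncaught exception, but guarding tasks keeps failures visible and avoids silent thread churn. Names below are illustrative.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LeakSafeRunner {
    // Wrap a task so any Throwable it throws is counted (in real code:
    // logged) rather than propagating out of the pool thread.
    public static Runnable guarded(Runnable task, AtomicInteger failures) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                failures.incrementAndGet();
            }
        };
    }

    public static int runFailingTasks(int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger failures = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(guarded(() -> {
                throw new RuntimeException("boom");
            }, failures));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return failures.get();
    }

    public static void main(String[] args) {
        System.out.println(runFailingTasks(10)); // all 10 failures caught
    }
}
```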

5. Request overload

Servers can be inundated with requests. In that case we may not want to queue every incoming request, because the queue of tasks waiting to execute also consumes system resources and, if it grows unchecked, can exhaust them. What to do then is an application decision; one option is to reject requests with a response indicating that the server is temporarily too busy.
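A sketch of that policy: a bounded queue plus a custom RejectedExecutionHandler that counts (in a real server: responds "too busy" to) the overflow. Pool and queue sizes are deliberately tiny here to force rejections; all names are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OverloadDemo {
    // One worker thread and a queue of two: submit more than three blocked
    // requests and the rest hit the rejection handler.
    public static int submitAndCountRejections(int requests) {
        AtomicInteger rejected = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                (task, executor) -> rejected.incrementAndGet()); // "server too busy"
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < requests; i++) {
            pool.execute(() -> {
                try {
                    release.await(); // hold the worker so the queue fills up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        release.countDown();
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return rejected.get();
    }

    public static void main(String[] args) {
        // 1 running + 2 queued are accepted; the other 7 are rejected
        System.out.println(submitAndCountRejections(10));
    }
}
```

The built-in AbortPolicy, DiscardPolicy, and CallerRunsPolicy handlers cover the common cases; a custom handler like this one lets you answer the client explicitly.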

Guidelines for using thread pools effectively

Thread pools can be a very effective way to build a service application if you just follow a few simple guidelines:

  • Do not queue tasks that are waiting for the results of other tasks. This results in the kind of deadlock described above, where all threads in the pool are waiting for the result of a task that cannot be executed because all threads in the queue are in use.
  • Be cautious about adding potentially long-running tasks to the pool. If the program must wait for a resource such as I/O to complete, specify a maximum wait time and then fail the task or requeue it for later execution. By freeing the thread from a task that might never finish, this guarantees that some other task can eventually complete successfully.
  • Understand your tasks. To size the pool effectively, you need to understand what the queued tasks are doing. Are they CPU-intensive? Do they block on I/O for long periods? The answers affect how you configure your application. If tasks come from different classes with very different characteristics, it may make sense to maintain separate work queues for different task types, so each pool can be tuned accordingly.
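The second guideline above, bounding waits on slow tasks, can be sketched with Future.get(timeout) plus cancellation (names are illustrative; the slow task just sleeps to stand in for I/O):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuard {
    // Wait at most `millis` for the task; on timeout, cancel it so the
    // pool thread is freed (the task must respond to interruption for
    // cancel(true) to actually stop it).
    public static String runWithTimeout(Callable<String> task, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(task);
        try {
            return future.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the worker
            return "timed out";
        } catch (Exception e) {
            return "failed";
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runWithTimeout(() -> "fast", 500));
        System.out.println(runWithTimeout(() -> {
            Thread.sleep(5_000); // stands in for a slow I/O call
            return "slow";
        }, 100));
    }
}
```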

Thread pool size configuration

Sizing a thread pool is largely a matter of avoiding two mistakes: too many threads or too few. Fortunately, for most applications the margin between too many and too few is fairly wide.

Recall the two main benefits of threads in an application: they let processing continue while waiting for slow operations such as I/O, and they exploit multiple processors. On an N-processor machine running a compute-bound workload, adding threads up to N can improve throughput, but adding more than that gains nothing; indeed, too many threads can degrade performance through extra context-switching overhead.

The optimal pool size depends on the number of available processors and the nature of the queued tasks. For a work queue holding purely compute-bound tasks on an N-processor system, a pool of N or N + 1 threads generally achieves maximum CPU utilization. For tasks that may wait for I/O, such as reading an HTTP request from a socket, you will want more threads than processors, because not every thread is doing useful work at all times. By profiling, you can estimate the ratio of wait time (WT) to service time (ST) for a typical request; for an N-processor system you then need approximately N * (1 + WT/ST) threads to keep the processors fully utilized.

Processor utilization is not the only consideration. As the pool grows, you may run into scheduler limits, memory constraints, or limits on other system resources such as sockets, open file handles, or database connections.
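The N * (1 + WT/ST) rule above, as a one-line helper; the WT and ST values in the usage are illustrative, not measurements:

```java
public class PoolSizing {
    // N * (1 + WT/ST): threads needed to keep N processors busy when each
    // task spends WT time waiting for every ST time of computation.
    public static int optimalThreads(int processors, double waitTime, double serviceTime) {
        return (int) Math.round(processors * (1 + waitTime / serviceTime));
    }

    public static void main(String[] args) {
        // Purely CPU-bound (WT = 0): one thread per core.
        System.out.println(optimalThreads(8, 0, 10));  // 8
        // I/O-heavy: 90 ms waiting per 10 ms of CPU work.
        System.out.println(optimalThreads(8, 90, 10)); // 80
    }
}
```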

Summary

  1. Pools created with the Executors factory methods risk OutOfMemoryError, mainly because they use an unbounded LinkedBlockingQueue.
  2. You can instead construct the pool directly, as the Alibaba Java guidelines recommend: a fixed thread count with a bounded ArrayBlockingQueue. It is somewhat less flexible, but it is protected against OOM.
  3. No need to write this code yourself. Doug Lea wrote an excellent open source concurrency toolkit, the basis of java.util.concurrent, containing mutexes, semaphores, collection classes such as queues and hash tables that perform well under concurrent access, and several work-queue implementations. Its ThreadPoolExecutor class (the successor of Lea's original PooledExecutor) is an efficient, widely used work-queue-based thread pool. Rather than hand-coding this, which is error-prone, consider using the tools in java.util.concurrent.
  4. Thread pools are a useful tool for building server applications. The concept is simple, but implementing or using one involves real issues: deadlock, resource exhaustion, and the subtleties of wait() and notify(). If your application needs a thread pool, consider one of the Executor implementations in java.util.concurrent, such as ThreadPoolExecutor, rather than writing one from scratch. And if you find yourself creating threads for short-lived tasks, you should consider using a thread pool.

