With the rapid development of the computer industry, Moore's Law is gradually failing and multi-core CPUs have become mainstream. Multithreaded parallel computing has gradually become a basic weapon for developers to improve server performance. The thread pool provided by J.U.C, the ThreadPoolExecutor class, helps developers manage threads and execute parallel tasks easily. Understanding and using thread pools properly is a basic skill that every developer must master.

This article begins with a brief description of the concept and usage of thread pools, then walks through the thread pool source code to help readers appreciate its design, and finally returns to practice, describing problems encountered when using thread pools through real cases and presenting a dynamic thread pool solution.

First, foreword

1.1 What is a thread pool

A thread pool is a tool for managing threads based on the idea of pooling, and it often appears in multithreaded servers such as MySQL.

Too many threads bring extra cost, including the cost of creating and destroying threads and the cost of scheduling them, and also reduce the overall performance of the computer. A thread pool maintains multiple threads waiting for a supervisor to assign tasks that can be executed concurrently. This approach avoids the cost of creating and destroying threads while processing tasks, and at the same time avoids the excessive scheduling caused by an unbounded number of threads, ensuring full utilization of the CPU cores.

The thread pool described in this article is the ThreadPoolExecutor class provided in the JDK.

Of course, there are a number of benefits to using thread pools:

  • Reduced resource consumption: Reuse of created threads through pooling techniques to reduce wastage from thread creation and destruction.
  • Improved response time: Tasks can be executed immediately when they arrive without waiting for threads to be created.
  • Improve manageability of threads: Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also cause resource scheduling imbalance due to unreasonable distribution of threads, which reduces system stability. Thread pools allow for uniform allocation, tuning, and monitoring.
  • More extensibility: Thread pools are extensible, allowing developers to add more functionality to them. For example, the delayed and periodic thread pool ScheduledThreadPoolExecutor allows tasks to be deferred or executed on a schedule.

1.2 What is the problem solved by thread pools

The core problem solved by thread pools is resource management. In a concurrent environment, the system cannot determine how many tasks need to be executed or how many resources need to be invested at any given time. This uncertainty raises several questions:

  1. The additional cost of applying/destroying resources and scheduling resources frequently can be significant.
  2. The lack of means to suppress unlimited resource applications may lead to the risk of system resource exhaustion.
  3. The system cannot properly manage internal resource distribution, which reduces system stability.

To solve the resource allocation problem, thread pools adopt the idea of pooling. Pooling, as the name suggests, is the idea of managing resources together in order to maximize returns and minimize risks.

Pooling is the grouping together of resources (assets, equipment, personnel, effort, etc.) for the purposes of maximizing advantage or minimizing risk to the users. The term is used in finance, Computing and equipment management. – wikipedia

The idea of “pooling” can not only be applied to the computer field, but also to finance, equipment, personnel management, work management and other fields.

In the computing field, pooling manifests as unified management of IT resources, including servers, storage, network resources, and so on. By sharing resources, users benefit from a low up-front investment. In addition to thread pools, several other typical pooling strategies include:

  1. Memory pooling: pre-allocate memory to speed up memory allocation and reduce fragmentation.
  2. Connection pooling: pre-create database connections to speed up connection acquisition and reduce system overhead.
  3. Object pooling: recycle objects to reduce the expensive cost of initialization and release.

With the “what” and “why” behind us, let’s take a closer look at the internal implementation of thread pools.

Second, thread pool core design and implementation

In the previous chapter, we learned that a thread pool is a tool that helps us manage threads to achieve concurrency through the idea of pooling, represented in Java by the ThreadPoolExecutor class. So what is its detailed design and implementation? We will cover this in this chapter.

2.1 Overall Design

The core implementation class of Java thread pool is ThreadPoolExecutor. This chapter analyzes the core design and implementation of Java thread pool based on JDK 1.8 source code. Let’s first look at the UML class diagram of ThreadPoolExecutor to understand the inheritance of ThreadPoolExecutor.

The top-level interface implemented by ThreadPoolExecutor is Executor, which provides the idea of decoupling task submission from task execution. Users do not need to worry about how to create a thread or how to schedule it to execute a task; they only need to provide a Runnable object, submitting the task's execution logic to the Executor, and the Executor framework takes care of thread allocation and task execution. The ExecutorService interface adds some capabilities: (1) it extends the ability to execute tasks, adding methods that generate Futures for one or a group of asynchronous tasks; (2) it provides methods to manage the thread pool, such as stopping it. AbstractExecutorService is a high-level abstract class that strings together the process of executing a task, ensuring that the lower-level implementation only needs to focus on a single method that executes the task. The lowest-level implementation class, ThreadPoolExecutor, implements the most complex part of the machinery: it maintains its own life cycle while managing threads and tasks in combination, so that parallel tasks are executed well.
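As a concrete illustration of this decoupling, here is a minimal sketch (the class name and pool size are arbitrary choices for the example): the caller only supplies the task's logic and receives a Future, without knowing how threads are created or scheduled.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    // submit a task to an ExecutorService and wait for its result via a Future
    public static int compute() {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> future = executor.submit(() -> 21 + 21);
            return future.get(); // blocks until the task completes
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            executor.shutdown();
        }
    }
}
```

The caller never touches Thread directly; swapping the executor for a different implementation changes the scheduling policy without changing the task code.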

How does ThreadPoolExecutor work, maintaining threads and executing tasks at the same time? Its operating mechanism is shown in the figure below:

Internally, the thread pool builds a producer-consumer model that decouples threads and tasks from each other, so that tasks are buffered well and threads are reused. The operation of a thread pool is mainly divided into two parts: task management and thread management. The task management part acts as the producer. After a task is submitted, the thread pool decides its subsequent flow: (1) directly apply for a thread to execute the task; (2) buffer it in a queue to wait for a thread to execute it; (3) reject the task. The thread management part is the consumer. Threads are uniformly maintained in the pool and allocated according to task demand; when a thread finishes a task, it goes on to fetch a new task to execute.

Next, we will explain the thread pool operation mechanism in detail in the following three parts:

  1. How thread pools maintain their state.
  2. How thread pools manage tasks.
  3. How thread pools manage threads.

2.2 Life cycle management

The running state of the thread pool is not explicitly set by the user but is maintained internally as the pool runs. A single variable is used internally to maintain two values: the running state (runState) and the number of threads (workerCount). In the implementation, the thread pool combines the maintenance of these two key parameters, as shown in the following code:

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

ctl, an AtomicInteger, is the field that controls the running state of the thread pool and the number of valid threads in it. It packs two conceptual fields: the runState of the pool and the number of valid threads (workerCount). The runState is stored in the high 3 bits and the workerCount in the low 29 bits, and the two values do not interfere with each other. Using one variable to store both values avoids inconsistency when making related decisions, and there is no need to occupy a lock to keep the two values consistent. As you can see when reading the thread pool source code, it is often necessary to judge the running state and the thread count together. The thread pool also provides several methods for obtaining the current running state and thread count; these are all bit operations, which are much faster than ordinary arithmetic.

The internal methods that extract the life-cycle state and the thread count from ctl are as follows:

private static int runStateOf(int c)     { return c & ~CAPACITY; } // extract the current run state
private static int workerCountOf(int c)  { return c & CAPACITY; }  // extract the current worker count
private static int ctlOf(int rs, int wc) { return rs | wc; }       // compose ctl from state and worker count

ThreadPoolExecutor can run in five states: RUNNING, SHUTDOWN, STOP, TIDYING, and TERMINATED.
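A minimal self-contained sketch of this encoding, with the state constants and bit operations copied from the JDK 8 ThreadPoolExecutor source (the wrapper class name is invented for the example), shows how one int carries both values and decodes them independently:

```java
public class CtlDemo {
    // constants as defined in the JDK 8 ThreadPoolExecutor source
    static final int COUNT_BITS = Integer.SIZE - 3;      // 29
    static final int CAPACITY   = (1 << COUNT_BITS) - 1; // low 29 bits: workerCount
    static final int RUNNING    = -1 << COUNT_BITS;      // high 3 bits: runState
    static final int SHUTDOWN   =  0 << COUNT_BITS;
    static final int STOP       =  1 << COUNT_BITS;
    static final int TIDYING    =  2 << COUNT_BITS;
    static final int TERMINATED =  3 << COUNT_BITS;

    static int runStateOf(int c)     { return c & ~CAPACITY; }
    static int workerCountOf(int c)  { return c & CAPACITY; }
    static int ctlOf(int rs, int wc) { return rs | wc; }

    public static void main(String[] args) {
        int ctl = ctlOf(RUNNING, 5);
        // the two packed fields decode independently
        System.out.println(runStateOf(ctl) == RUNNING);
        System.out.println(workerCountOf(ctl));
    }
}
```

One compare-and-set on the single ctl value updates state and count atomically, which is exactly why the JDK packs them together.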

Its lifecycle transformation is shown below:

2.3 Task Execution Mechanism

2.3.1 Task Scheduling

Task scheduling is the main entry point to the thread pool. When a user submits a task, how the task will be executed is determined by this stage. Understanding this section is equivalent to understanding the core workings of thread pools.

First of all, all tasks are scheduled through the execute method. This part of the work checks the current running state of the thread pool, the number of running threads, and the execution policy, and then determines the next step: directly apply for a thread to execute the task, buffer it in the queue for execution, or reject the task outright. The execution process is as follows:

  1. First, check the running state of the thread pool. If it is not RUNNING, reject the task directly; the thread pool must execute tasks only in the RUNNING state.
  2. If workerCount < corePoolSize, a thread is created and started to execute the newly submitted task.
  3. If workerCount >= corePoolSize and the blocking queue in the thread pool is not full, the task is added to the blocking queue.
  4. If workerCount >= corePoolSize && workerCount < maximumPoolSize and the blocking queue in the thread pool is full, a thread is created and started to execute the newly submitted task.
  5. If workerCount >= maximumPoolSize and the blocking queue in the thread pool is full, the task is processed according to the reject policy. The default is to throw an exception directly.

Its execution process is shown in the figure below:
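The five steps above can be condensed into a small decision function. This is a simplified sketch, not the JDK code (the class and method names are invented): the real execute() also rechecks the pool state after enqueueing, which is omitted here.

```java
public class DispatchDemo {
    // simplified version of the execute() decision: where does a new task go?
    static String decide(int workerCount, int corePoolSize,
                         int maximumPoolSize, boolean queueFull) {
        if (workerCount < corePoolSize)    return "create core thread";
        if (!queueFull)                    return "enqueue";
        if (workerCount < maximumPoolSize) return "create non-core thread";
        return "reject";
    }
}
```

Note the ordering: the queue is tried before non-core threads, which is why a long queue can keep maximumPoolSize from ever being reached (a failure mode discussed later in this article).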

2.3.2 Task buffering

The task buffering module is the core part that enables a thread pool to manage tasks. The essence of a thread pool is the management of tasks and threads, and the key to achieving this is decoupling tasks from threads so that subsequent allocation can be done. The thread pool implements the producer-consumer pattern through a blocking queue: the queue caches tasks, and worker threads take tasks from it.

A BlockingQueue is a queue that supports two additional operations: when the queue is empty, the thread that takes an element waits for the queue to become non-empty; when the queue is full, the thread that stores an element waits for the queue to become non-full. Blocking queues are often used in producer-consumer scenarios, where the producer is the thread adding elements to the queue and the consumer is the thread taking elements from it. The blocking queue is the container holding the elements: producers put elements in, and consumers take elements out.

The following figure shows thread 1 adding elements to the blocking queue and thread 2 removing elements from the blocking queue:
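That picture can be reproduced in a few lines. This sketch (class name invented) uses ArrayBlockingQueue: a producer thread puts one element, and the consuming thread takes it, blocking until it arrives.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static String roundTrip() {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        Thread producer = new Thread(() -> {
            try {
                queue.put("task"); // would block if the queue were full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            String taken = queue.take(); // blocks until the producer puts
            producer.join();
            return taken;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

In the thread pool, execute() plays the producer role (offering tasks) and worker threads play the consumer role (taking tasks in getTask).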

Different queues can implement different task access strategies. Here, we can revisit the blocking queue members:

2.3.3 Task Application

As can be seen from the task scheduling section above, a task may be executed in two ways: either it is directly executed by a newly created thread, or a thread takes the task from the task queue and executes it, and an idle thread that has finished a task goes back to the queue to apply for another. The first case occurs only when a thread is first created; threads obtain most of their tasks the second way.

The task fetching module helps worker threads obtain tasks from the blocking queue and realizes the communication between the thread management module and the task management module. This strategy is implemented by the getTask method, whose execution process is shown in the figure below:

The getTask operation makes several judgments in order to keep the number of threads matched to the state of the thread pool. If the pool should not hold that many threads at the moment, null is returned. A Worker thread continually receives new tasks to execute; once it can no longer receive a task, it starts to be reclaimed.

2.3.4 Task Rejection

The task rejection module is the protective part of the thread pool. A thread pool has a maximum capacity: when the task cache queue is full and the number of threads in the pool has reached maximumPoolSize, new tasks must be rejected, and the task rejection policy is applied to protect the pool.

A rejection policy is an interface designed as follows:

public interface RejectedExecutionHandler {
    void rejectedExecution(Runnable r, ThreadPoolExecutor executor);
}

Users can implement this interface to customize rejection policies or choose from the four existing rejection policies provided by the JDK, which have the following features:
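A sketch of a custom policy (the counting behavior is a hypothetical example, not one of the JDK's four built-in handlers): this handler records rejections instead of throwing, and the demo deliberately overflows a 1-thread, 1-slot pool to trigger it.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectDemo {
    // hypothetical policy: count rejections instead of throwing
    static class CountingHandler implements RejectedExecutionHandler {
        final AtomicInteger rejected = new AtomicInteger();
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            rejected.incrementAndGet();
        }
    }

    public static int overflow() {
        CountingHandler handler = new CountingHandler();
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1), handler);
        pool.execute(() -> {          // occupies the single worker thread
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        pool.execute(() -> { });      // fills the queue (capacity 1)
        pool.execute(() -> { });      // pool and queue are both full: rejected
        release.countDown();
        pool.shutdown();
        try { pool.awaitTermination(5, TimeUnit.SECONDS); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return handler.rejected.get();
    }
}
```

In production such a handler would typically also log or report the rejection; silently swallowing tasks is shown here only to make the counting observable.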

2.4 Worker thread management

2.4.1 The Worker thread

In order to master the state of threads and maintain the life cycle of threads, a Worker thread in the thread pool is designed. Let’s look at some of its code:

private final class Worker extends AbstractQueuedSynchronizer implements Runnable {
    final Thread thread;    // the thread held by this Worker
    Runnable firstTask;     // the initial task; may be null
}

The Worker implements the Runnable interface and holds a thread, thread, and an initial task, firstTask. The thread is created through the ThreadFactory when the constructor is called and is used to execute tasks. firstTask holds the first task passed in; it may be null. If it is non-null, the thread executes this task immediately after starting, which is what happens when core threads are created; if it is null, a thread is created to execute the tasks in the workQueue, which is how non-core threads are created.

The task execution model of Worker is shown in the figure below:

Thread pools need to manage the life cycle of threads and reclaim threads that have not been running for a long time. The thread pool uses a Hash set to hold references to threads, so it can control each thread's life cycle by adding and removing references. What matters is how to tell whether a thread is running.

Worker inherits AQS and uses it to implement an exclusive-lock function. ReentrantLock is deliberately not used; instead, AQS is used in order to reflect the thread's current execution state.

  1. Once the lock method acquires the exclusive lock, it indicates that the current thread is executing a task.
  2. If a task is being executed, the thread should not be interrupted.
  3. If the thread is not holding the exclusive lock, it is idle and not processing a task; in this case it may be interrupted.
  4. When the shutdown or tryTerminate method runs, the thread pool calls interruptIdleWorkers to interrupt idle threads. The interruptIdleWorkers method uses tryLock to judge whether a thread in the pool is idle; an idle thread can be safely reclaimed.

This feature is used in the thread collection process, as shown in the following figure:

2.4.2 Worker threads increase

Threads are added through the thread pool's addWorker method. The job of this method is simply to add one thread; it does not consider at which stage the pool adds the thread or what the thread allocation strategy is, since that decision is made in the preceding step. This step only adds a thread, starts it, and finally returns whether it succeeded. The addWorker method takes two parameters: firstTask and core. firstTask specifies the first task to be executed by the new thread and can be null. If core is true, the method checks whether the number of active threads is less than corePoolSize before adding the thread; if core is false, it checks against maximumPoolSize instead.

2.4.3 Worker thread reclamation

The thread pool’s job is to maintain a certain number of thread references based on the current state of the thread pool and prevent these threads from being reclaimed by the JVM. When the thread pool decides which threads need to be reclaimed, it simply removes the references. After Worker is created, it will poll continuously and then acquire tasks for execution. Core threads can wait indefinitely to acquire tasks, while non-core threads have to acquire tasks within a limited time. When the Worker fails to obtain the task, that is, the acquired task is empty, the loop will end and the Worker will actively eliminate its own reference in the thread pool.

try {
    while (task != null || (task = getTask()) != null) {
        // execute the task
    }
} finally {
    processWorkerExit(w, completedAbruptly); // if no task can be obtained, actively reclaim this worker
}

Thread recycling is done in the processWorkerExit method.

In fact, in this method, removing the thread reference from the thread pool completes the thread destruction part. However, since there are many possible causes of thread destruction, the thread pool also needs to determine what caused the destruction, whether to change the current state of the thread pool, and whether to reassign threads based on the new state.

2.4.4 Worker Thread Executing tasks

The run method in the Worker class calls the runWorker method to execute the task. The execution process of the runWorker method is as follows:

  1. The while loop keeps fetching tasks through the getTask() method.
  2. The getTask() method takes tasks from the blocking queue.
  3. If the thread pool is stopping, ensure that the current thread is interrupted; otherwise, ensure that it is not interrupted.
  4. Execute the task.
  5. If getTask returns null, break out of the loop and execute processWorkerExit() to destroy the thread.

The execution process is as follows:

Third, thread pool practice in business

3.1 Service Background

In today's Internet industry, parallel computing is indispensable for making the most of multi-core CPUs. Obtaining concurrency by managing threads through a thread pool is a very basic operation, so let's look at two typical scenarios in which thread pools are used to obtain concurrency.

Scenario 1: Quickly respond to user requests

Description: real-time requests initiated by users; the service pursues response time. For example, when a user wants to view a product, we need to aggregate a series of information along the product dimension, such as price, discount, inventory, pictures, and so on, and display it to the user.

Analysis: From the perspective of user experience, the faster the response, the better; if a page takes too long to appear, the user may give up viewing the product. However, user-facing feature aggregation is usually very complex. Faced with chains of calls and multiple levels of cascading between them, business developers often choose the simple approach of a thread pool, encapsulating the calls as tasks executed in parallel to shorten the overall response time. In this scenario, what matters most is minimizing response time to satisfy the user. Therefore, no queue should be set up to buffer concurrent tasks; instead, corePoolSize and maxPoolSize should be increased to create as many threads as possible and execute tasks quickly.
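A sketch of such a response-time-oriented configuration (the class name and sizes are illustrative, not recommendations): a SynchronousQueue holds no tasks, so every submission either reuses an idle thread or creates a new one, up to maximumPoolSize.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FastResponsePool {
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                8, 64,                     // illustrative corePoolSize / maximumPoolSize
                60L, TimeUnit.SECONDS,     // idle non-core threads are reclaimed after 60s
                new SynchronousQueue<>()); // direct hand-off: no task buffering
    }
}
```

If all 64 threads are busy, further submissions are rejected, so a rejection policy appropriate to the business (for example, a degraded response) still needs to be chosen.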

Scenario 2: Quickly processing batch tasks

Description: A large number of offline computing tasks that need to be performed quickly. For example, in the statistics of a report, it is necessary to calculate which commodities in each store in the country have certain attributes for subsequent marketing strategy analysis, so we need to query all commodities in all stores in the country, record the commodities with certain attributes, and then quickly generate reports.

Analysis: This scenario involves a large amount of work that we want completed as quickly as possible, so a multithreaded, parallel-computing strategy should also be used. However, unlike the response-time-first scenario, the task volume here is huge and does not need instantaneous completion; the focus is on how to use limited resources to process as many tasks as possible per unit time, which is a throughput-first problem. So a queue should be set up to buffer concurrent tasks, and corePoolSize should be adjusted appropriately to set the number of threads processing tasks. Setting too many threads here would cause frequent thread context switches, slow down task processing, and reduce throughput.
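A sketch of a throughput-oriented configuration (names and numbers illustrative): a fixed thread count with a bounded LinkedBlockingQueue, so bursts of tasks are buffered but cannot pile up without limit.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BatchPool {
    // throughput-first: fixed thread count, bounded buffer for pending tasks
    public static ThreadPoolExecutor create(int nThreads, int queueCapacity) {
        return new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,            // core == max, so keepAlive is moot
                new LinkedBlockingQueue<>(queueCapacity)); // bounded: tasks cannot accumulate forever
    }
}
```

Declaring the queue capacity matters: an unbounded queue would silently absorb any backlog, hiding overload until memory or latency problems surface, as the fault cases later in this article show.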

3.2 Practical problems and scheme thinking

The core problem with thread pool usage is that thread pool parameters are not easy to configure. On the one hand, the operation mechanism of thread pool is not well understood, and rational configuration depends on the personal experience and knowledge of developers. On the other hand, thread pool execution is highly dependent on the type of task, and IO intensive and CPU intensive tasks perform very differently, resulting in no mature empirical strategies for developers to refer to.

There are many internal records of failures caused by improper thread pool configuration. Here are some examples:

Case 1: In 2018, an XX page display interface saw a large number of degraded calls:

Accident description: the XX page display interface experienced a large number of call degradations, on the order of tens to hundreds.

Accident cause: the service interface used a thread pool internally for parallel computing. Because call traffic was not forecast well, the maximum number of core threads was set too small, large numbers of RejectedExecutionExceptions were thrown, and the interface's degradation condition was triggered. The schematic diagram is as follows:

Case 2: In 2018, an S2-level fault occurred in service XX:

Accident description: the service provided by XX took too long to execute, causing its upstream services to time out as a whole and a large number of service invocations to fail.

Accident cause: the service's request-handling logic used a thread pool for resource isolation. Because the queue was set too long, the maximum thread count setting became ineffective; as a result, when the number of requests increased, a large number of tasks piled up in the queue and task execution took too long. The schematic diagram is as follows:

Thread pools are widely used in business, and improper use can lead to failures. So how can we use thread pools better? To answer this question, we explored the following directions:

1. Can we not use thread pools?

Going back to the original question: businesses use thread pools to obtain concurrency. Is there an alternative way to obtain concurrency? We investigated some other options:

Taken together, these new approaches can improve parallel-task performance in some cases, but our focus this time is how to obtain concurrency more easily and safely. Moreover, the Actor model is actually rarely used and is widespread only in Scala, and coroutine frameworks in Java are poorly maintained. None of these options is easy enough to use at this stage, nor do they solve our current business problems.

2. Pursue rationality of parameter setting?

Is there a formula that makes it easy for developers to figure out what the thread pool parameters should be in a particular scenario?

With this in mind, we investigated some thread pool configuration options in the industry:

After investigating the above industry solutions, we did not obtain a general-purpose method for computing thread pool parameters. The execution of concurrent tasks is tied to the task type: IO-intensive and CPU-intensive tasks perform very differently, and the ratio between them is hard to estimate reasonably, which makes it difficult for any simple, effective general formula to compute the result for us directly.

3. Thread pool parameters dynamic?

After careful evaluation, there is no way to guarantee that the correct parameters are computed in one shot. So can we instead reduce the cost of modifying thread pool parameters, so that we can at least adjust quickly when a failure occurs and shorten the recovery time? With this in mind, can we migrate thread pool parameters out of the code into a distributed configuration center, so that they can be configured dynamically and take effect immediately? The parameter modification processes before and after making thread pool parameters dynamic are compared as follows:

Based on the comparison of the above three directions, we can see that the parameter dynamic direction is simple and effective.

3.3 Dynamic thread pools

3.3.1 Overall design

The core design of dynamic thread pools includes the following three aspects:

  1. Simplify thread pool configuration: ThreadPoolExecutor has eight construction parameters, but the core three are corePoolSize, maximumPoolSize, and workQueue, which most determine the pool's task allocation and thread allocation strategies. In practice, there are mainly two scenarios for obtaining concurrency: (1) executing sub-tasks in parallel to improve response speed, in which case a synchronous queue should be used so that no task is cached and every task is executed immediately; (2) executing large numbers of tasks in parallel to improve throughput, in which case a bounded queue should be used to buffer the large volume of tasks, with its capacity declared to prevent tasks from piling up without limit. Therefore, the thread pool only needs to provide these three key parameters plus a choice between the two queues to meet most business requirements. Less is More.
  2. Make parameters dynamically modifiable: to solve the problems of mismatched parameters and the high cost of modifying them, we encapsulate the thread pool on top of the Java thread pool's high extensibility, allowing it to listen for external configuration messages and modify its configuration accordingly. The thread pool configuration is placed on the platform side, so developers can easily view and modify it.
  3. Add thread pool monitoring: what cannot be observed cannot be improved. Adding the ability to monitor the life cycle of thread pool tasks helps developers understand the state of the pool.

3.3.2 Functional Architecture

Dynamic thread pools provide the following functions:

  • Dynamic parameter tuning: supports dynamic adjustment of thread pool parameters through an interface, including the core pool size, maximum pool size, queue length, and so on; modified parameters take effect immediately.
  • Task monitoring: transaction monitoring at application, thread pool, and task granularity; you can see a thread pool's task execution, the maximum and average task execution times, the 95th/99th percentiles, and so on.
  • Load alarm: when the backlog of tasks in the thread pool queue reaches a certain value, or the thread pool load reaches a certain threshold, the application development lead is notified through Elephant (Meituan's internal IM tool).
  • Operation monitoring: the application development lead is notified when thread pools are created, modified, or deleted.
  • Operation log: the modification records of thread pool parameters can be viewed, including who changed which parameters when and what the values were before the change.
  • Permission verification: only the application's developers can modify the application's thread pool parameters.

Making parameters dynamic

The JDK native thread pool ThreadPoolExecutor provides the following public setter methods, as shown in the figure below:

The JDK allows users to dynamically set a thread pool's core policies through an instance of ThreadPoolExecutor. Take setCorePoolSize as an example: after the user calls this method at runtime, the thread pool overwrites the original corePoolSize value and adopts a different handling strategy based on a comparison of the current and original values. If the new value is smaller than the current number of worker threads, there are surplus worker threads; the pool sends interrupt requests to the idle ones to reclaim them, and the surplus workers are also reclaimed the next time they become idle. If the new value is larger than the original and there are tasks waiting in the queue, the thread pool creates new worker threads to execute those queued tasks. The setCorePoolSize process is as follows:

Internally, the thread pool handles the current state appropriately so that the modification happens smoothly; the other setters are not introduced here for reasons of space. The key point is that with these public methods, we only need to maintain a reference to the ThreadPoolExecutor instance and change its parameters through it when needed. Based on this idea, we implemented dynamic thread pool parameters, configurable and modifiable on the management platform. The effect is shown below:
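The behavior can be observed directly on a live instance. A minimal sketch (class name and sizes arbitrary): the maximum is raised first so that the new core size stays valid, then setCorePoolSize takes effect immediately on the running pool.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DynamicResize {
    public static int[] resize() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
        int before = pool.getCorePoolSize();
        pool.setMaximumPoolSize(8); // raise the ceiling first so the new core size stays valid
        pool.setCorePoolSize(6);    // takes effect on the live pool immediately
        int after = pool.getCorePoolSize();
        pool.shutdown();
        return new int[] { before, after };
    }
}
```

A dynamic thread pool platform essentially wires these setters to a configuration-center listener instead of calling them by hand.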

You can find the specified thread pool by its name on the management platform and modify its parameters. After saving the settings, the parameters take effect in real time. The supported dynamic parameters include the core size, maximum size, and queue length. In addition, you can also configure whether to enable alarms, the alarm threshold for queued waiting tasks, and the activeness alarm. Regarding monitoring and alarms, we will cover them in the following sections.

Thread pool monitoring

In addition to making parameters dynamic, to use thread pools well we need to perceive the state of the pool. For example: what is the current load of the thread pool? Are the allocated resources adequate? How do the tasks perform? Are they long tasks or short tasks? With these questions in mind, the dynamic thread pool provides monitoring and alarming capabilities in multiple dimensions, including thread pool activeness, task execution Transactions (frequency and duration), Reject exceptions, and the thread pool's internal statistics. These help users analyze thread pool usage from multiple angles and notify them immediately when a problem occurs, so that failures are avoided or fault recovery is accelerated.

1. Load monitoring and alarm

The core question of thread pool load is whether the resources allocated under the current parameters are sufficient. We can approach this question both in advance and in the moment. In advance, the thread pool defines the concept of "activity" so that users can perceive load problems before a Reject exception occurs. The activity formula is: thread pool activity = activeCount / maximumPoolSize. The closer the number of active threads gets to maximumPoolSize, the higher the load. In the moment, an alarm is triggered either when a Reject exception occurs or when tasks are waiting in the queue (a custom threshold is supported). In both cases the alarm is pushed to the service owner via Daxiang (Meituan's internal IM tool).
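The activity formula and the two alarm conditions described above can be sketched as follows; the 80% threshold in the usage example is an illustrative choice, not a value from the article:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ActivityMonitor {
    // activity = activeCount / maximumPoolSize, as defined in the text.
    static double activity(ThreadPoolExecutor pool) {
        return (double) pool.getActiveCount() / pool.getMaximumPoolSize();
    }

    // Alarm when activity crosses the threshold or tasks are waiting in
    // the queue, mirroring the two conditions described above.
    static boolean shouldAlarm(ThreadPoolExecutor pool, double threshold) {
        return activity(pool) >= threshold || !pool.getQueue().isEmpty();
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(10));
        System.out.println(activity(pool));           // prints 0.0 for an idle pool
        System.out.println(shouldAlarm(pool, 0.8));   // prints false
        pool.shutdown();
    }
}
```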

2. Task-level fine monitoring

In a traditional thread pool, task execution is opaque to users. Consider a concrete scenario: a business uses one thread pool to execute two kinds of tasks at the same time, one that sends messages and one that sends SMS. The actual execution frequency and duration of these two kinds of tasks are not visible to the user. It may well be that the two tasks are not suited to sharing one thread pool, but the user cannot perceive this and therefore cannot optimize. The dynamic thread pool implements task-level instrumentation internally and allows different business tasks to be given names with business meaning; the pool then reports Transaction metrics per name. With this feature, users can see task-level execution inside the thread pool, broken down by business, as shown in the figure below:
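A minimal sketch of such task-level instrumentation: wrap each Runnable with a business name and record its execution time. The class and method names here are hypothetical, and the article's implementation reports to a Transaction-style monitoring system rather than printing to stdout:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class NamedTask implements Runnable {
    private final String bizName;
    private final Runnable delegate;

    public NamedTask(String bizName, Runnable delegate) {
        this.bizName = bizName;
        this.delegate = delegate;
    }

    @Override
    public void run() {
        long start = System.nanoTime();
        try {
            delegate.run();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // In the real system this would report a monitoring Transaction
            // keyed by bizName, so different business tasks are separable.
            System.out.println(bizName + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(new NamedTask("send-message", () -> {}));
        pool.submit(new NamedTask("send-sms", () -> {}));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```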

3. View the running status in real time

Using the public getters provided by the JDK's native ThreadPoolExecutor, users can read the running status and parameters of the current thread pool, as shown in the following figure:

Based on these interfaces, the dynamic thread pool encapsulates a real-time view of the runtime state. With this function, the user can see the live state of the thread pool, such as how many worker threads there currently are, how many tasks have been executed, the number of tasks waiting in the queue, and so on. The effect is shown below:
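A sketch of such a real-time view built on those public getters; the field labels are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolStatus {
    static String describe(ThreadPoolExecutor pool) {
        return String.format(
                "poolSize=%d active=%d queued=%d completed=%d largest=%d",
                pool.getPoolSize(),           // current worker threads
                pool.getActiveCount(),        // workers running a task now
                pool.getQueue().size(),       // tasks waiting in the queue
                pool.getCompletedTaskCount(), // tasks finished so far
                pool.getLargestPoolSize());   // historical peak worker count
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(10));
        // A fresh pool has no workers yet, so every counter starts at 0.
        System.out.println(describe(pool));
        pool.shutdown();
    }
}
```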

3.4 Practice Summary

Facing the problems businesses encounter with thread pools, we first looked back at schemes that replace the thread pool for handling concurrency, and also tried to pursue "correct" thread pool settings. But given the complexity of landing industry solutions, their maintainability, and the uncertainty of real operating environments, we struggled in both directions. In the end, we returned to making thread pool parameters dynamic and arrived at a solution to the business problem that, while not stepping outside the realm of thread pools themselves, achieves a good balance between cost and benefit. The cost lies in implementing the dynamic mechanism and the low-overhead monitoring; the benefit is that, without overturning the original way of using thread pools, the probability of failure is reduced both by lowering the cost of modifying thread pool parameters and by multidimensional monitoring. We hope the dynamic thread pool ideas presented in this article are helpful.


Author’s brief introduction

  • Zhiyuan joined Meituan-Dianping in 2018 and is a backend development engineer at the Meituan-Dianping In-store Integrated R&D Center.
  • Lu Chen joined Meituan-Dianping in 2015 and is a backend technical expert at the Meituan-Dianping In-store Integrated R&D Center.
