Why use thread pools
In practice, threads are an expensive resource, and poorly managed threads can easily cause system problems. For this reason, most concurrency frameworks use thread pools to manage threads. The main benefits of managing threads with a thread pool are as follows:
(1) Reduced resource consumption. Reusing existing threads cuts down the number of times threads are created and destroyed, minimizing the performance cost.
(2) Faster response. Because threads are reused, a submitted task does not have to wait for a new thread to be created, so the overall response speed of the system improves.
(3) Better manageability. Threads are a scarce resource; creating them without limit consumes system resources and reduces system stability, so a thread pool is needed to manage them.
How thread pools work
When a concurrent task is submitted to a thread pool, the pool allocates a thread to execute it as follows.
The thread pool executes a submitted task in the following stages (a construction sketch follows the list):
(1) First, check whether all threads in the core pool are busy executing tasks. If not, create a new core thread to execute the submitted task; otherwise, if every core thread is busy, go to (2).
(2) Check whether the blocking queue is full. If not, put the submitted task into the blocking queue; otherwise, go to (3).
(3) Check whether all threads in the pool (up to maximumPoolSize) are busy executing tasks. If not, create a new thread to execute the task; otherwise, hand the task over to the saturation policy.
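The following is a minimal sketch of these stages using ThreadPoolExecutor directly; the pool sizes, queue capacity and task count are hypothetical values chosen only to illustrate where each stage applies.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolStagesDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                       // corePoolSize: stage (1) creates up to 2 core threads
                4,                                       // maximumPoolSize: stage (3) may grow the pool to 4 threads
                60, TimeUnit.SECONDS,                    // idle timeout for non-core threads
                new ArrayBlockingQueue<>(10),            // bounded blocking queue used in stage (2)
                new ThreadPoolExecutor.AbortPolicy());   // saturation policy applied when queue and pool are full

        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}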
Thread pool execution logic
After a thread pool has been created with ThreadPoolExecutor, tasks are submitted and executed through its execute() method:
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // Fewer threads than corePoolSize: create a new worker thread to run the current task
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // Thread count is at least corePoolSize (or worker creation failed): put the task into the blocking queue workQueue
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (!isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    // Task cannot be queued: try to create a new (non-core) worker thread; if that also fails, apply the saturation policy
    else if (!addWorker(command, false))
        reject(command);
}
The execution logic of the execute method is as follows:
(1) If fewer than corePoolSize threads are currently running, a new thread is created to execute the new task, even if other threads in the pool are idle;
(2) If the number of running threads is equal to or greater than corePoolSize, the submitted task is placed in the blocking queue;
(3) If the blocking queue (workQueue) is full but the number of threads is still below maximumPoolSize, a new thread is created to execute the task;
(4) If the queue is full and the number of threads has reached maximumPoolSize, the task is handed to the saturation policy RejectedExecutionHandler.
Closing the thread pool
A thread pool can be shut down with shutdown or shutdownNow. Both work by traversing the worker threads in the pool and interrupting them one by one, but there are differences between shutdown and shutdownNow:
- shutdownNow first sets the state of the pool to STOP, then attempts to interrupt all threads, including those currently executing tasks, and returns the list of tasks that were still waiting in the queue;
- shutdown only sets the state of the pool to SHUTDOWN and interrupts the threads that are not currently executing a task.
In other words, shutdown lets tasks that are already running finish, while shutdownNow interrupts them directly. Once either method has been called, isShutdown returns true. The pool is fully closed only after every worker thread has terminated, at which point isTerminated returns true.
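A minimal sketch of the common shutdown pattern, assuming a pool created elsewhere in the application; the 5-second timeout is an arbitrary illustrative value:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.execute(() -> System.out.println("task running"));

        pool.shutdown();                                    // stop accepting new tasks, let running tasks finish
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {  // wait a bounded time for graceful termination
            List<Runnable> pending = pool.shutdownNow();    // interrupt remaining threads, collect queued tasks
            System.out.println("tasks never started: " + pending.size());
        }
        System.out.println("isTerminated = " + pool.isTerminated());
    }
}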
How to configure thread pool parameters properly
To configure a thread pool properly, you first need to analyze the characteristics of the tasks, which can be considered from the following perspectives:
- Nature of tasks: CPU intensive tasks, IO intensive tasks and hybrid tasks.
- Task priority: high, medium and low.
- Task execution time: long, medium and short.
- Task dependencies: Whether they depend on other system resources, such as database connections.
- CPU-intensive tasks should be given as few threads as possible, for example a pool of Ncpu + 1 threads, where Ncpu is the number of CPUs.
- IO-intensive tasks spend much of their time waiting for I/O rather than executing, so configure more threads, for example 2 * Ncpu.
- Hybrid tasks should, where possible, be split into a CPU-intensive task and an IO-intensive task. As long as the execution times of the two parts do not differ too much, executing them separately yields higher throughput than executing them serially; if the execution times differ greatly, splitting is not worth it.
- The number of CPUs on the current machine can be obtained with Runtime.getRuntime().availableProcessors(); see the sizing sketch after this list.
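A rough sizing sketch based on these guidelines; the multipliers are common heuristics rather than fixed rules:

// Query the number of available processors and derive hypothetical pool sizes
int nCpu = Runtime.getRuntime().availableProcessors();
int cpuBoundThreads = nCpu + 1;   // suggestion above for CPU-intensive tasks
int ioBoundThreads  = 2 * nCpu;   // suggestion above for IO-intensive tasks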
Tasks with different priorities can be processed using the PriorityBlockingQueue. It allows the higher-priority tasks to be executed first. Note that if higher-priority tasks are always submitted to the queue, the lower-priority tasks may never be executed.
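A minimal sketch of this approach. PrioritizedTask is a hypothetical wrapper (not part of the JDK) in which a lower priority value means earlier execution; tasks are submitted with execute(), because submit() would wrap them in a FutureTask that is not Comparable.

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolDemo {
    // Hypothetical wrapper: a lower priority value means the task is dequeued earlier
    static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        final int priority;
        final Runnable delegate;

        PrioritizedTask(int priority, Runnable delegate) {
            this.priority = priority;
            this.delegate = delegate;
        }

        @Override public void run() { delegate.run(); }

        @Override public int compareTo(PrioritizedTask other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    public static void main(String[] args) {
        // Single-threaded pool whose blocking queue orders waiting tasks by priority
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());
        pool.execute(new PrioritizedTask(5, () -> System.out.println("low priority")));
        pool.execute(new PrioritizedTask(1, () -> System.out.println("high priority")));
        pool.shutdown();
    }
}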
Tasks with different execution times can be assigned to thread pools of different sizes, or priority queues can be used to allow shorter tasks to be executed first.
For tasks that depend on a database connection pool: after a thread submits SQL it waits for the database to return the result, and the longer that wait, the longer the CPU sits idle; therefore the number of threads should be set larger so that the CPU is used more fully.
A bounded queue is the best choice for the blocking queue. With an unbounded queue, a backlog of tasks can consume too much memory and may even crash the system.
ScheduledThreadPoolExecutor
ScheduledThreadPoolExecutor inherits from ThreadPoolExecutor, so the overall mechanism is the same: the pool creates worker threads (the Worker class), and each worker keeps taking asynchronous tasks from the blocking queue until no more tasks are available. Compared with ThreadPoolExecutor, however, ScheduledThreadPoolExecutor adds the ability to execute tasks after a delay and to execute them periodically. It redefines the task class as ScheduledFutureTask, whose overridden run method gives tasks their delayed and periodic execution behaviour. In addition, its blocking queue, DelayedWorkQueue, is a priority queue built on a heap: the task whose scheduled time is closest to the current time sits at the head of the queue, so it is the one a worker thread acquires first.
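A minimal sketch of delayed and periodic scheduling with ScheduledThreadPoolExecutor; the pool size and delays are arbitrary illustrative values:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);

        // One-shot task executed after a 2-second delay (wrapped internally as a ScheduledFutureTask)
        scheduler.schedule(() -> System.out.println("delayed task"), 2, TimeUnit.SECONDS);

        // Periodic task: first run after 1 second, then every 3 seconds;
        // the scheduler keeps running these until shutdown() is called
        scheduler.scheduleAtFixedRate(() -> System.out.println("periodic task"), 1, 3, TimeUnit.SECONDS);
    }
}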
The design of both ThreadPoolExecutor and ScheduledThreadPoolExecutor revolves around three key elements: the task, the worker that executes it, and the task result. The idea is to decouple these three elements completely.
Workers
The execution of tasks is entirely delegated to the Worker class, which is a further wrapper around Thread. Whether a task is submitted through ThreadPoolExecutor's execute or submit methods or through ScheduledThreadPoolExecutor's schedule method, the task is first placed in the blocking queue. The addWorker method then creates a new Worker, whose thread runs the runWorker method; runWorker keeps fetching asynchronous tasks from the blocking queue and executing them until no more tasks can be obtained from the queue.
Tasks
In both ThreadPoolExecutor and ScheduledThreadPoolExecutor, a task is an implementation of the Runnable or Callable interface. ThreadPoolExecutor converts the task into a FutureTask, while ScheduledThreadPoolExecutor, in order to support delayed and periodic execution, converts the task into a ScheduledFutureTask, which inherits from FutureTask and overrides the run method.
Task results
After a task is submitted, its result can be obtained through the Future interface. In ThreadPoolExecutor the concrete class behind that Future is FutureTask, and in ScheduledThreadPoolExecutor it is ScheduledFutureTask.
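A minimal sketch of obtaining a task result through the Future returned by submit; the pool size and the computed value are arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit wraps the Callable in a FutureTask and returns it as a Future
        Future<Integer> result = pool.submit(() -> 21 + 21);

        System.out.println(result.get());   // blocks until the task completes, then prints 42
        pool.shutdown();
    }
}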
Status of the thread pool
The thread pool states are:
- RUNNING: Can accept newly submitted tasks and also process tasks in a blocking queue;
- SHUTDOWN: no longer accepts newly submitted tasks, but can process existing tasks (that is, tasks in the blocking queue);
- STOP: no longer accepts newly submitted tasks or processes existing tasks.
- TIDYING: all tasks have terminated and the number of worker threads is zero; the pool is about to run the terminated() hook;
- TERMINATED: the terminated() hook has finished running. The default terminated() implementation does nothing and serves only as a marker.
The state transition is shown below:
Life cycle of worker thread: