Previous article: Threads and Thread Pools in Java — Threads
The thread pool
What is a thread pool?
Put simply, a thread pool is a set of threads created in advance. When a task arrives, a thread from the pool handles it; when the task finishes, the thread is not destroyed but waits for the next task. Because creating and destroying threads consumes system resources, a thread pool improves performance in workloads that would otherwise create and destroy threads frequently.
What can thread pools do?
According to The Art of Java Concurrency Programming, using thread pools can:
- Reduce resource consumption. By reusing already created threads, you can reduce the cost of thread creation and destruction.
- Improve response speed. When a task arrives, it can execute immediately without waiting for a thread to be created.
- Improve thread manageability. Threads are scarce resources. If created without limit, they consume system resources and degrade system stability. Thread pools are used for uniform allocation, tuning, and monitoring.
How do I create a thread pool?
Start by creating a Runnable interface implementation class.
package demo;

import java.util.Date;

/**
 * @author yuanyiwen
 * @create 2020-02-28 16:05
 * @description
 */
public class DemoThread implements Runnable {

    private String command;

    public DemoThread(String command) {
        this.command = command;
    }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " Start time: " + new Date());
        processCommand();
        System.out.println(Thread.currentThread().getName() + " End time: " + new Date());
    }

    private void processCommand() {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public String toString() {
        return "DemoThread{" +
                "command='" + command + '\'' +
                '}';
    }
}
Let’s test this by creating a thread pool using ThreadPoolExecutor:
package demo;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * @author yuanyiwen
 * @create 2020-02-28
 * @description
 */
public class DemoThreadPoolExecutor {

    private static final int CORE_POOL_SIZE = 5;
    private static final int MAX_POOL_SIZE = 10;
    private static final int QUEUE_CAPACITY = 100;
    private static final long KEEP_ALIVE_TIME = 1L;

    public static void main(String[] args) {
        // Use a thread pool to create threads
        ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
                // Core pool size: 5
                CORE_POOL_SIZE,
                // Maximum pool size: 10
                MAX_POOL_SIZE,
                // Keep-alive time: 1
                KEEP_ALIVE_TIME,
                // Keep-alive time unit: seconds
                TimeUnit.SECONDS,
                // Task queue: ArrayBlockingQueue with a capacity of 100
                new ArrayBlockingQueue<>(QUEUE_CAPACITY),
                // Saturation policy: CallerRunsPolicy
                new ThreadPoolExecutor.CallerRunsPolicy()
        );

        for (int i = 0; i < 15; i++) {
            // Create a DemoThread object that implements the Runnable interface
            Runnable worker = new DemoThread("Task " + i);
            // Execute the Runnable through the thread pool
            threadPoolExecutor.execute(worker);
        }

        // Shut down the thread pool
        threadPoolExecutor.shutdown();
        while (!threadPoolExecutor.isTerminated()) {
        }
        System.out.println("All threads terminated");
    }
}
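Incidentally, instead of spinning in an empty loop until isTerminated() returns true, the pool can be asked to block until it finishes. A minimal sketch of that variant of the shutdown code in the main method above (the 60-second timeout and the shutdownNow() fallback are assumptions, not part of the original example):

        // Alternative shutdown: block for up to 60 seconds instead of busy-waiting
        threadPoolExecutor.shutdown();
        try {
            if (!threadPoolExecutor.awaitTermination(60, TimeUnit.SECONDS)) {
                // Force shutdown if the tasks do not finish in time
                threadPoolExecutor.shutdownNow();
            }
        } catch (InterruptedException e) {
            threadPoolExecutor.shutdownNow();
            Thread.currentThread().interrupt();
        }
        System.out.println("All threads terminated");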
Finally, let’s look at the results:
As you can see, with a core pool size of 5, even though 15 tasks are submitted, only 5 tasks execute at the same time; the remaining tasks wait in the queue and are picked up as core threads become idle. In general, task submission follows these steps: if fewer than corePoolSize threads are running, a new thread is created to run the task; otherwise the task is placed in the queue; if the queue is full and fewer than maximumPoolSize threads are running, a new non-core thread is created; and if the queue is full and the thread count has already reached maximumPoolSize, the saturation policy is applied.
Executor framework
The Executor framework was introduced in Java 5. Since then, it is better to run tasks through an Executor rather than calling Thread's start method directly. Besides being easier to manage and more efficient (it is backed by thread pools, which saves the overhead of creating threads), there is one more key point: it helps avoid the this-escape problem.
This escape
This escape occurs when another thread holds a reference to an object before its constructor has returned; calling methods on that not-yet-fully-constructed object may cause strange errors.
Triggering this escape usually requires two conditions: an inner class is created in the constructor, and that inner class is published from the constructor.
Because a published inner-class object implicitly holds a reference to the outer class's this, the outer class may not be fully constructed yet when it is accessed through the inner-class object, leading to unexpected problems.
A typical This escape scenario is as follows:
public class DemoThisEscape {

    private int a = 10;

    public DemoThisEscape() {
        // Call the inner class from the constructor of the outer class
        new Thread(new InnerClass()).start();
    }

    private class InnerClass implements Runnable {
        @Override
        public void run() {
            // The not-yet-constructed outer object is referenced here via DemoThisEscape.this, for example:
            System.out.println(DemoThisEscape.this.a);
        }
    }
}
By letting a thread pool schedule threads uniformly, the program no longer needs to start threads manually, which removes the temptation to start a thread inside a constructor and thus effectively sidesteps this escape.
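As an illustration of one way to avoid the problem (this pattern is not from the original article): finish construction first, then hand the task to an executor, for example via a static factory method. A minimal sketch, with hypothetical names:

import java.util.concurrent.ExecutorService;

public class SafeDemo {

    private final int a;

    // Private constructor: nothing escapes while the object is being built
    private SafeDemo() {
        this.a = 10;
    }

    // Hypothetical factory method: construct the object fully, then submit the task
    public static SafeDemo create(ExecutorService executor) {
        SafeDemo demo = new SafeDemo();
        executor.execute(() -> System.out.println(demo.a));
        return demo;
    }
}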
Common parameters of ThreadPoolExecutor
1. corePoolSize: the number of core threads
The minimum number of threads that can run simultaneously.
2. maximumPoolSize: the maximum number of threads
When the number of tasks in the queue reaches the queue capacity, the number of threads that can run simultaneously grows up to this maximum.
3. keepAliveTime: the keep-alive time
When the number of threads exceeds the core count, excess idle threads wait at most this long for new tasks before being terminated.
4. unit: the time unit
The time unit of the keepAliveTime parameter, such as TimeUnit.SECONDS, TimeUnit.MINUTES, TimeUnit.HOURS, or TimeUnit.DAYS.
5. workQueue: the task queue
The queue that holds tasks waiting to be executed.
6. threadFactory: the thread factory
The factory used to create threads; the default is usually fine.
7. handler: the rejection policy
Also called the saturation policy; when too many submitted tasks cannot be processed in time, a custom policy decides how to handle them (a constructor sketch using all of these parameters appears after the rejection-policy list below).
ThreadPoolExecutor saturation policy: the policy ThreadPoolExecutor applies when the number of running threads has reached the maximum pool size and the queue is full.
Common rejection policies include:
- ThreadPoolExecutor.AbortPolicy: throws a RejectedExecutionException to reject the new task. This is ThreadPoolExecutor's default policy, and also the default used in Spring.
- ThreadPoolExecutor.CallerRunsPolicy: runs the rejected task in the calling thread, i.e. the thread that invoked execute() runs the task's run() itself; if the executor has been shut down, the task is discarded instead. This policy provides a simple feedback-control mechanism that slows down the submission of new tasks, but it may introduce delays. Choose it when the application can tolerate delays and cannot afford to drop any task request.
- ThreadPoolExecutor.DiscardPolicy: does not handle the new task and silently discards it.
- ThreadPoolExecutor.DiscardOldestPolicy: discards the oldest pending task in the queue.
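To make these parameters concrete, here is a minimal sketch of the full ThreadPoolExecutor constructor with an explicit thread factory and rejection handler; the pool sizes, queue capacity, and thread-name prefix are arbitrary values chosen for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FullConstructorDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                5,                                  // corePoolSize
                10,                                 // maximumPoolSize
                1L,                                 // keepAliveTime
                TimeUnit.SECONDS,                   // unit
                new ArrayBlockingQueue<>(100),      // workQueue
                // threadFactory: name the threads so they are easy to spot in logs
                r -> new Thread(r, "demo-pool-" + counter.incrementAndGet()),
                // handler: fall back to running the task in the caller's thread
                new ThreadPoolExecutor.CallerRunsPolicy()
        );
        executor.execute(() -> System.out.println(Thread.currentThread().getName()));
        executor.shutdown();
    }
}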
Why is it recommended to use ThreadPoolExecutor to create threads?
Rule 1: Thread resources must be provided through a thread pool; explicitly creating threads in the application is not allowed.
The benefit of using a thread pool is that it reduces the time spent creating and destroying threads and the overhead on system resources, and it avoids running out of resources. Without a thread pool, creating large numbers of similar threads can exhaust memory or lead to excessive context switching.
Rule 2: Thread pools must not be created with Executors; use the ThreadPoolExecutor constructor instead. This makes the pool's operating rules explicit and avoids the risk of resource exhaustion.
The drawbacks of the thread pool objects returned by Executors are:
FixedThreadPool and SingleThreadExecutor: the allowed request queue length is Integer.MAX_VALUE, so requests may pile up and cause an OOM.
CachedThreadPool and ScheduledThreadPool: the allowed number of created threads is Integer.MAX_VALUE, so a large number of threads may be created and cause an OOM.
Several common thread pools
FixedThreadPool Fixed thread pool
FixedThreadPool is a reusable thread pool with a fixed number of threads.
    public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(),
                threadFactory);
    }
Execution mechanism:
- If the number of currently running threads is less than corePoolSize, a new thread is created to execute the incoming task.
- Once the number of running threads equals corePoolSize, any new task is added to the LinkedBlockingQueue.
- After a thread in the pool finishes its current task, it repeatedly takes the next task from the LinkedBlockingQueue and executes it.
FixedThreadPool uses the unbounded queue LinkedBlockingQueue (with capacity Integer.MAX_VALUE), which affects the thread pool as follows:
- Once the number of threads in the pool reaches corePoolSize, new tasks wait in the unbounded queue, so the number of threads never exceeds corePoolSize.
- Because the queue is unbounded, maximumPoolSize is effectively a meaningless parameter (the queue can never fill up), which is why FixedThreadPool sets corePoolSize and maximumPoolSize to the same value; keepAliveTime is likewise meaningless.
- A running FixedThreadPool (i.e. one on which shutdown() or shutdownNow() has not been called) never rejects tasks, so it may cause an OOM when many tasks pile up.
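For completeness, a minimal usage sketch (the pool size of 4, the number of tasks, and the task bodies are arbitrary choices for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolDemo {
    public static void main(String[] args) {
        // At most four threads are created, and they are reused for all submitted tasks
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            int taskId = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown();
    }
}

Only four distinct thread names should appear in the output, regardless of how many tasks are submitted.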
SingleThreadExecutor Single thread pool
SingleThreadExecutor is a thread pool with only one thread.
    public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(),
                threadFactory));
    }
This is basically the same as FixedThreadPool, except that there is only one thread in the pool.
CachedThreadPool Cached thread pool
CachedThreadPool is a thread pool that creates new threads as needed, but reuses previously built threads as they become available.
public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
return new ThreadPoolExecutor(
0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>(),
threadFactory);
}
corePoolSize is set to 0 and maximumPoolSize is set to Integer.MAX_VALUE, i.e. the pool is unbounded. Even so, it has a reclamation mechanism: a thread that has been idle for 60 seconds is destroyed, which saves a lot of resources.
However, if the main thread submits tasks faster than the threads in maximumPool can process them, CachedThreadPool keeps creating new threads and can still exhaust CPU and memory.
Execution mechanism:
- A submitted task first performs an offer() on the SynchronousQueue. If an idle thread in maximumPool happens to be executing poll() on the queue, the main thread's offer() pairs with that poll(), the task is handed to the idle thread, and the execute() call completes; otherwise, the next step applies.
- When maximumPool is initially empty, or has no idle thread executing poll(), there is no thread to pair with the offer(). In that case CachedThreadPool creates a new thread to run the task, and the execute() call completes.
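As an illustration (the sleep duration and task bodies are arbitrary), a short sketch showing that a cached pool reuses a thread once it becomes idle:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        // First task: the pool starts empty, so a new thread is created
        pool.execute(() -> System.out.println("first:  " + Thread.currentThread().getName()));
        // Give the first task time to finish so its thread becomes idle (polling the SynchronousQueue)
        TimeUnit.MILLISECONDS.sleep(100);
        // Second task: the offer() pairs with the idle thread's poll(), so the thread is reused
        pool.execute(() -> System.out.println("second: " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}

Both lines will usually print the same thread name.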
How do I size a thread pool?
Context switch
In multithreaded programming, the number of threads is generally greater than the number of CPU cores, and a CPU core can be used by only one thread at any given moment. To let all these threads run effectively, the CPU allocates a time slice to each thread and rotates among them. When a thread's time slice is used up, the thread returns to the ready state and yields the CPU to another thread. This process is a context switch.
In a nutshell, when its time slice runs out, the current task saves its own state before switching to another task, so that the state can be restored when the task is switched back to later. The process from saving to restoring is one context switch.
Context switching is usually computationally expensive; it consumes a significant amount of processor time. With tens to hundreds of switches per second, each switch takes time on the order of nanoseconds. Context switching therefore costs the system a lot of CPU time; in fact, it may be one of the most expensive operations in an operating system.
Compared with other operating systems (including other Unix-like systems), one of Linux's many advantages is that its context switches and mode switches take very little time.
A simple rule of thumb
CPU-intensive tasks (N + 1):
This type of task mainly consumes CPU resources. The number of threads can be set to N (the number of CPU cores) + 1. The one extra thread beyond the core count covers occasional page faults or other pauses in task execution: when a task is paused the CPU sits idle, and the extra thread can make use of that idle time.
I/O-intensive tasks (2N):
Such a system spends most of its time handling I/O. While a thread waits on I/O it does not occupy the CPU, so the CPU can be handed to another thread. Applications dominated by I/O-intensive tasks can therefore be configured with more threads, typically 2N.
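As a small illustration of these two heuristics (they are the rules of thumb quoted above, not exact laws):

public class PoolSizing {
    public static void main(String[] args) {
        // N: the number of CPU cores available to the JVM
        int n = Runtime.getRuntime().availableProcessors();
        // CPU-intensive workloads: roughly N + 1 threads
        int cpuBoundSize = n + 1;
        // I/O-intensive workloads: roughly 2N threads
        int ioBoundSize = 2 * n;
        System.out.println("cores = " + n
                + ", cpu-bound pool = " + cpuBoundSize
                + ", io-bound pool = " + ioBoundSize);
    }
}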
Reference article: JavaGuide