Basic concepts of thread pools
Concept: A thread pool controls the number of running threads, places pending tasks in a waiting queue, and executes them on its worker threads. Once the maximum number of threads is reached, additional tasks wait in the queue.
Advantages:
- 1. Thread reuse: threads that have already been created are reused instead of creating a new thread for every task, which reduces the overhead of thread creation and destruction and saves system resources.
- 2. Faster response: when a task arrives, it can run on an existing thread in the pool rather than waiting for a new thread to be created.
- 3. Thread management: the pool controls the maximum number of concurrent threads, how threads are created, and so on.
Class hierarchy: Executor → ExecutorService → AbstractExecutorService → ThreadPoolExecutor. ThreadPoolExecutor is the core class for creating thread pools. Just as Array has the Arrays utility class and Collection has Collections, Executor has its own utility class, Executors.
There are three common ways to create a thread pool:
newFixedThreadPool: a fixed-length thread pool, backed by a LinkedBlockingQueue.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
newSingleThreadExecutor: a pool with only one thread, backed by a LinkedBlockingQueue.
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
            new ThreadPoolExecutor(1, 1,
                                   0L, TimeUnit.MILLISECONDS,
                                   new LinkedBlockingQueue<Runnable>()));
}
newCachedThreadPool: an expandable thread pool, backed by a SynchronousQueue.
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
Seven parameters for thread pool creation
- 1. corePoolSize: the number of resident core threads in the pool.
- 2. maximumPoolSize: the maximum number of threads the pool can hold.
- 3. keepAliveTime: how long an idle (non-core) thread is kept alive.
- 4. unit: the time unit of keepAliveTime.
- 5. workQueue: the queue that holds tasks that have been submitted but not yet executed.
- 6. threadFactory: the factory used to create new threads.
- 7. handler: the rejection policy applied when the queue is full and the maximum number of threads has been reached.
Understanding: the thread pool creation parameters work like a bank.
corePoolSize is like the bank's regular "duty windows": say two tellers are on duty today handling customer requests (tasks). If more than two customers arrive, the extra customers wait in the waiting area (workQueue). When the waiting area is also full, the "overtime windows" open and three more tellers start working, so at most five windows (maximumPoolSize) are open. If all windows are open and the waiting area is still full, the rejection policy (handler) is activated and newly arriving customers are told not to come in. Once no new customers arrive and the existing ones are served, the windows start to sit idle; after an idle period of keepAliveTime the three extra overtime windows are closed, and the bank returns to its two regular duty windows.
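The bank analogy maps directly onto the constructor. The sketch below labels each of the seven parameters with its role in the analogy; the concrete values are illustrative only and mirror the custom pool used later in this article.
import java.util.concurrent.*;

public class BankPoolExample {
    public static void main(String[] args) {
        // Illustrative values only.
        ExecutorService bankPool = new ThreadPoolExecutor(
                2,                                     // corePoolSize: the two "duty windows"
                5,                                     // maximumPoolSize: at most five windows open
                60L,                                   // keepAliveTime: how long an idle overtime window stays open
                TimeUnit.SECONDS,                      // unit: time unit of keepAliveTime
                new LinkedBlockingQueue<>(3),          // workQueue: a waiting area with three seats
                Executors.defaultThreadFactory(),      // threadFactory: how new "tellers" are created
                new ThreadPoolExecutor.AbortPolicy()); // handler: turn new customers away when everything is full
        bankPool.shutdown();
    }
}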
The underlying principles of thread pools
Schematic: the bank example above is essentially how a thread pool works.
Flow: when a new task arrives →
- If the number of running threads is less than corePoolSize, a new core thread is created to run it; if it is greater than or equal to corePoolSize, the task is placed in the waiting queue.
- If the waiting queue is full but the number of running threads is less than maximumPoolSize, a non-core thread is created; if it has already reached maximumPoolSize, the rejection policy is triggered.
- When a thread has been idle for keepAliveTime and the number of running threads is greater than corePoolSize, that non-core thread is shut down.
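A simplified sketch of this decision flow is shown below. It is a hypothetical illustration, not the real ThreadPoolExecutor.execute() implementation (which also re-checks the pool state after enqueueing and manages worker threads that pull tasks from the queue).
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the task-submission flow described above.
class SubmissionFlowSketch {
    final int corePoolSize = 2;
    final int maximumPoolSize = 5;
    final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>(3);
    int runningThreads = 0;

    synchronized void submit(Runnable task) {
        if (runningThreads < corePoolSize) {
            runningThreads++;                        // 1. below corePoolSize: start a core thread
            new Thread(task).start();
        } else if (workQueue.offer(task)) {
            // 2. core threads busy: the task waits in the queue
        } else if (runningThreads < maximumPoolSize) {
            runningThreads++;                        // 3. queue full: start a non-core thread
            new Thread(task).start();
        } else {
            throw new RuntimeException("rejected");  // 4. saturated: apply the rejection policy
        }
    }
}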
Thread pool rejection policy
When the waiting queue is full and the maximum number of threads has been reached, newly arriving tasks trigger the rejection policy. The JDK provides four built-in rejection policies:
- 1. AbortPolicy: the default policy; it throws a RejectedExecutionException directly, interrupting normal processing.
- 2. CallerRunsPolicy: neither throws an exception nor discards the task; instead the task is handed back to the caller and runs on the calling thread.
- 3. DiscardOldestPolicy: discards the task that has waited longest in the queue, then tries to submit the current task again.
- 4. DiscardPolicy: silently discards the task without doing anything.
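As a small example of how the choice of policy changes behavior, the sketch below saturates a deliberately tiny pool configured with CallerRunsPolicy, so overflow tasks run on the submitting (main) thread instead of throwing an exception; the pool sizes here are arbitrary.
import java.util.concurrent.*;

public class RejectionPolicyDemo {
    public static void main(String[] args) {
        ExecutorService pool = new ThreadPoolExecutor(
                1, 2, 1L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow tasks run on the caller
        for (int i = 0; i < 6; i++) {
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + "\t Transact business"));
        }
        pool.shutdown();
    }
}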
Which thread pool is used in actual production?
None of them: single, cached, and fixed-length pools are all avoided! The reason is that FixedThreadPool and SingleThreadExecutor are both backed by a LinkedBlockingQueue whose default capacity is Integer.MAX_VALUE, so the queue can pile up tasks until it causes an OOM; CachedThreadPool allows up to Integer.MAX_VALUE threads, which can exhaust memory just as easily. In practice, therefore, a custom thread pool is usually built with the seven parameters of ThreadPoolExecutor, for example:
ExecutorService threadPool = new ThreadPoolExecutor(2, 5, 1L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(3), Executors.defaultThreadFactory(),
        new ThreadPoolExecutor.AbortPolicy());
The following demo exercises the three built-in pools and a custom pool:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MyThreadPoolDemo {
    public static void main(String[] args) {
        System.out.println("Fixed Thread Pool");
        fixedThreadPool();
        System.out.println("Single Thread Pool");
        singleThreadPool();
        System.out.println("Cached Thread Pool");
        cachedThreadPool();
        System.out.println("Custom Thread Pool");
        customThreadPool();
    }

    private static void customThreadPool() {
        // A custom pool built with all seven parameters
        ExecutorService threadPool = new ThreadPoolExecutor(
                2,
                5,
                1L,
                TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(3),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy());
        try {
            for (int i = 0; i < 9; i++) {
                threadPool.execute(() ->
                        System.out.println(Thread.currentThread().getName() + "\t Transact business"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }

    private static void cachedThreadPool() {
        // A pool with no fixed number of threads
        ExecutorService threadPool = Executors.newCachedThreadPool();
        try {
            for (int i = 0; i < 9; i++) {
                threadPool.execute(() ->
                        System.out.println(Thread.currentThread().getName() + "\t Transact business"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }

    private static void singleThreadPool() {
        // A pool with a single thread
        ExecutorService threadPool = Executors.newSingleThreadExecutor();
        try {
            for (int i = 0; i < 9; i++) {
                threadPool.execute(() ->
                        System.out.println(Thread.currentThread().getName() + "\t Transact business"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }

    private static void fixedThreadPool() {
        // A pool with five threads; each submitted task is a customer transacting business
        ExecutorService threadPool = Executors.newFixedThreadPool(5);
        try {
            for (int i = 0; i < 9; i++) {
                threadPool.execute(() ->
                        System.out.println(Thread.currentThread().getName() + "\t Transact business"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdown();
        }
    }
}
Custom thread pool parameter selection
For CPU-intensive tasks, set the maximum number of threads to the number of CPU cores + 1. For IO-intensive tasks, allocate more threads: common rules of thumb are the number of CPU cores × 2, or the number of CPU cores / (1 - blocking coefficient).
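A minimal sketch of these rules of thumb; the blocking coefficient of 0.9 is just an example value.
public class PoolSizeCalculator {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-intensive: roughly one thread per core, plus one
        int cpuIntensive = cores + 1;

        // IO-intensive, rule of thumb 1: twice the number of cores
        int ioIntensiveDoubled = cores * 2;

        // IO-intensive, rule of thumb 2: cores / (1 - blocking coefficient)
        double blockingCoefficient = 0.9; // example: tasks spend ~90% of their time blocked on IO
        int ioIntensive = (int) (cores / (1 - blockingCoefficient));

        System.out.println("cores = " + cores);
        System.out.println("CPU-intensive pool size = " + cpuIntensive);
        System.out.println("IO-intensive pool size (x2) = " + ioIntensiveDoubled);
        System.out.println("IO-intensive pool size (blocking coefficient) = " + ioIntensive);
    }
}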
Deadlock coding and locating
import java.util.concurrent.TimeUnit;

public class DeadLockDemo {
    private static Object obj1 = new Object();
    private static Object obj2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            while (true) {
                synchronized (obj1) {
                    System.out.println(Thread.currentThread().getName() + " Get obj1 lock");
                    try {
                        // Hold obj1 and pause so the other thread has time to grab obj2
                        TimeUnit.SECONDS.sleep(3);
                        synchronized (obj2) {
                            System.out.println(Thread.currentThread().getName() + " Get obj2 lock");
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }).start();

        new Thread(() -> {
            while (true) {
                synchronized (obj2) {
                    System.out.println(Thread.currentThread().getName() + " Get obj2 lock");
                    try {
                        // Hold obj2 and pause so the other thread has time to grab obj1
                        TimeUnit.SECONDS.sleep(3);
                        synchronized (obj1) {
                            System.out.println(Thread.currentThread().getName() + " Get obj1 lock");
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }).start();
    }
}
Two commands are used together to locate a deadlock.
jps command: jps -l lists the running Java processes. jstack command: jstack <pid> prints the stack information of that Java process, which can be analyzed to locate the deadlock.
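As an alternative to the command line, the JDK's ThreadMXBean can also detect deadlocks from inside the JVM. A minimal sketch, assuming it runs in the same JVM as the deadlocked threads (for example, from a monitoring thread started alongside DeadLockDemo):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    public static void main(String[] args) {
        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        long[] deadlockedIds = mxBean.findDeadlockedThreads(); // returns null when no deadlock exists
        if (deadlockedIds == null) {
            System.out.println("No deadlock detected in this JVM");
        } else {
            for (ThreadInfo info : mxBean.getThreadInfo(deadlockedIds)) {
                System.out.println("Deadlocked thread: " + info.getThreadName());
            }
        }
    }
}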