In the final analysis, running a Java system means running programs, running programs means executing code, and executing code means the Java virtual machine is doing the work. The virtual machine in turn runs on operating system threads and consumes system resources such as CPU, memory, disk, and network. How to use these resources efficiently is therefore something programmers should constantly work on in everyday coding, and thread pools are one way to make better use of the CPU.
Baidu Baike explains thread pools as follows:
Thread pooling is a form of multithreaded processing in which tasks are added to a queue and then started automatically once a thread is available. Thread pool threads are background threads. Each thread uses the default stack size, runs at the default priority, and is in the multithreaded apartment. If a thread is idle in managed code (for example, waiting for an event), the thread pool inserts another worker thread to keep all processors busy. If all thread pool threads stay busy but the queue still contains pending work, the thread pool will create another worker thread after some time, though the number of threads never exceeds the maximum. Tasks that exceed the maximum are queued and do not start until other tasks have finished.
The same pooling idea appears in many other places: HttpClient connection pools, database connection pools, memory pools, and so on.
Advantages of thread pools
Thread pools are the most widely used technology in the Java concurrent programming framework, and can be used by almost any program that needs to perform tasks asynchronously or concurrently. There are at least four benefits to using thread pools properly during development.
First: reduce resource consumption. Reduce thread creation and destruction costs by reusing created threads;
Second: improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created;
Third: improve thread manageability. Threads are scarce resources. If created without limit, they consume system resources and degrade system stability. Thread pools can be used for uniform allocation, tuning, and monitoring.
Fourth: provide more powerful features, such as delayed and periodic execution via a scheduled thread pool (a short sketch follows this list);
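As a quick illustration of that last point, here is a minimal sketch of delayed and periodic execution using the JDK's ScheduledExecutorService; the task bodies and delays are just placeholders.

// assumes import java.util.concurrent.*
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

// run once, 3 seconds from now
scheduler.schedule(() -> System.out.println("delayed task"), 3, TimeUnit.SECONDS);

// run every 5 seconds, after an initial 1-second delay
scheduler.scheduleAtFixedRate(() -> System.out.println("periodic task"), 1, 5, TimeUnit.SECONDS);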
Implementation principle of thread pool
When a task is submitted to a thread pool, how does the pool handle it? Let's look at the main processing flow, following the picture below step by step.
When a user submits a task to the thread pool, the thread pool performs this:
① First, check whether the number of core threads has reached corePoolSize; if not, create a new thread to execute the task. Otherwise, go to the next step.
② If the core threads are all in use, check whether the task queue is full; if not, put the task into the queue. Otherwise, go to the next step.
③ If the task queue is full, check whether the pool has reached maximumPoolSize; if not, create a new (non-core) thread to execute the task. Otherwise, go to the next step.
④ If the pool is full, the task is handled according to the rejection policy.
These four steps are enough to describe how a thread pool works. If anything is unclear, just keep reading: all of the thread pool terms mentioned above are covered in detail below.
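A minimal runnable sketch of this flow (the constructor parameters are explained in the next section; the sizes are chosen so that each of the four steps is triggered, and the exact task-to-thread mapping may vary slightly with timing):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SubmissionFlowDemo {
    public static void main(String[] args) {
        // core = 1, max = 2, queue capacity = 1: the 4th task hits the rejection policy
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));

        for (int i = 1; i <= 4; i++) {
            final int n = i;
            try {
                pool.execute(() -> {
                    System.out.println("task " + n + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + n + " rejected");   // step ④
            }
        }
        pool.shutdown();
    }
}

Task 1 gets a core thread (①), task 2 waits in the queue (②), task 3 gets a non-core thread (③), and task 4 is rejected (④).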
The ThreadPoolExecutor class
We will focus on this class, since thread pools in real use are built on ThreadPoolExecutor.
As for the other classes and interfaces in its inheritance hierarchy, I won't include screenshots here; they are not essential. If you really want to see them, open the source code and take a look.
ThreadPoolExecutor
The Alibaba Java Development Manual points out that thread resources must be provided through a thread pool, and explicitly creating threads in application code is not allowed. On the one hand, this makes thread creation more standardized and keeps the number of threads under reasonable control; on the other hand, it leaves the fine-grained management of threads to the thread pool, which optimizes resource costs.
The original description is as follows:
The ThreadPoolExecutor class provides four constructors, but all four ultimately call the same one; the other three simply fill in default values for the parameters you do not pass. So let's go straight to the full-argument constructor and untangle every parameter.
public class ThreadPoolExecutor extends AbstractExecutorService {

    ......

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 || maximumPoolSize <= 0 ||
                maximumPoolSize < corePoolSize || keepAliveTime < 0) {
            throw new IllegalArgumentException();
        }
        if (workQueue == null || threadFactory == null || handler == null) {
            throw new NullPointerException();
        }
        this.acc = System.getSecurityManager() == null ?
                null : AccessController.getContext();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }
}
The main parameters are as follows:
- corePoolSize: the number of core threads kept in the pool, even when they are idle;
- maximumPoolSize: the maximum number of threads the pool may hold; this limit cannot be exceeded;
- keepAliveTime: when the pool holds more than corePoolSize threads, idle threads beyond the core size are terminated after waiting this long without new tasks;
- unit: the time unit of keepAliveTime (often seconds);
- workQueue: the queue that holds tasks waiting to execute; it holds only the Runnable tasks submitted through the execute method;
- threadFactory: the factory used to create the pool's threads, mainly useful for giving threads meaningful names;
- handler: the rejection policy applied when a task cannot be accepted.
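Putting all seven parameters together, a hedged sketch of a fully specified pool might look like this; the names and sizes are purely illustrative.

// assumes import java.util.concurrent.*
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4,                                       // corePoolSize
        8,                                       // maximumPoolSize
        60, TimeUnit.SECONDS,                    // keepAliveTime + unit for idle non-core threads
        new LinkedBlockingQueue<>(100),          // workQueue: bounded queue of waiting tasks
        r -> {                                   // threadFactory: give threads a meaningful name
            Thread t = new Thread(r);
            t.setName("order-service-pool");
            return t;
        },
        new ThreadPoolExecutor.AbortPolicy());   // handler: rejection policy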
Blocking queue
There are several choices for workQueue. The JDK provides seven blocking queue implementations:
- ArrayBlockingQueue: a bounded blocking queue backed by an array. It orders elements first-in, first-out (FIFO). By default it does not guarantee fair access: fair access means blocked threads get to use the queue in the order in which they blocked, whereas in the default unfair mode all blocked threads compete for the queue as soon as it becomes available, which is unfair to the threads that have been waiting longest.
- LinkedBlockingQueue: an optionally bounded blocking queue backed by a linked list. Its default (and maximum) capacity is Integer.MAX_VALUE, and it orders elements FIFO.
- PriorityBlockingQueue: an unbounded blocking queue that orders elements by priority. (Although logically unbounded, an attempted add can still fail with an OutOfMemoryError once resources are exhausted.)
- DelayQueue: an unbounded blocking queue backed by a priority queue; an element can only be taken from it once its delay has expired.
- SynchronousQueue: a blocking queue that stores no elements. Every insert operation must wait for a corresponding remove by another thread, and vice versa.
- LinkedTransferQueue: an unbounded blocking queue backed by a linked list. Compared with other blocking queues it additionally provides the tryTransfer and transfer methods.
- LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
The queues most commonly used with thread pools are ArrayBlockingQueue, LinkedBlockingQueue, and SynchronousQueue.
Common methods in queues are as follows:
| Type | Method | Meaning | Behavior |
|---|---|---|---|
| Throws an exception | add | Adds an element | Throws IllegalStateException if the queue is full |
| Throws an exception | remove | Returns and removes the head of the queue | Throws NoSuchElementException if the queue is empty |
| Throws an exception | element | Returns the head of the queue | Throws NoSuchElementException if the queue is empty |
| No exception, no blocking | offer | Adds an element | Returns true on success, false on failure |
| No exception, no blocking | poll | Returns and removes the head of the queue | Returns null if the queue is empty |
| No exception, no blocking | peek | Returns the head of the queue | Returns null if the queue is empty |
| Blocks | put | Adds an element | Blocks if the queue is full |
| Blocks | take | Returns and removes the head of the queue | Blocks if the queue is empty |
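A short sketch of how the non-blocking and blocking variants behave on a bounded queue; the capacity of 2 is just for illustration.

// assumes import java.util.concurrent.*
BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

queue.offer("a");               // true: added without blocking
queue.offer("b");               // true
boolean ok = queue.offer("c");  // false: queue is full, no exception, no blocking
// queue.add("c");              // would throw IllegalStateException instead
// queue.put("c");              // would block until space becomes available

String head = queue.peek();     // "a": head returned but not removed
String out  = queue.poll();     // "a": head returned and removed
// queue.take();                // blocks if the queue is empty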
That’s pretty much it for blocking queues.
Thread pool factory
Thread pool factories, as described above, exist mainly to give threads meaningful names. Using one is easy: just implement the ThreadFactory interface.
public class CustomThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread thread = new Thread(r);
        thread.setName("I'm your own thread name");
        return thread;
    }
}
I won't belabor the specifics of how to use it.
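A common variation (not part of the original example) appends a counter so that each thread gets a distinct name, which makes thread dumps and logs much easier to read; a minimal sketch:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        // produces names like "order-pool-1", "order-pool-2", ...
        return new Thread(r, prefix + "-" + counter.getAndIncrement());
    }
}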
Rejection policies
Thread pools come with four built-in rejection policies:
- AbortPolicy: the default rejection policy. When a task can no longer be submitted, it throws an exception, giving timely feedback about the application's running state. This policy is recommended for critical services so that problems are discovered promptly when the system cannot handle a large amount of concurrency.
- DiscardPolicy: silently discards the task without throwing an exception. Once the queue is full, every subsequently submitted task is quietly dropped. Not recommended.
- DiscardOldestPolicy: discards the task at the head of the queue and then retries submitting the rejected task. Not recommended.
- CallerRunsPolicy: if the task cannot be accepted, the thread that called execute runs the task itself. Not recommended.
In other words, it is usually best to stick with the default rejection policy, so that problems surface as exceptions in time. If none of the built-in policies meets your needs, you can also implement the RejectedExecutionHandler interface yourself:
public class CustomRejection implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.out.println("do whatever you want with the rejected task here");
    }
}
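Wiring a custom handler into the pool is just a matter of passing it as the last constructor argument; the sizes here are illustrative.

// assumes import java.util.concurrent.*
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 4, 30, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(10),
        Executors.defaultThreadFactory(),
        new CustomRejection());    // invoked whenever a task cannot be accepted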
Here, let's draw another diagram to summarize how the thread pool executes:
The detailed execution process is all illustrated in the figure.
Submit tasks to the thread pool
In Java, there are two methods for submitting a task to a thread pool: submit and execute.
The execute method
The execute() method is used to submit tasks that do not require a return value, so there is no way to determine whether the task was successfully executed by the thread pool.
void execute(Runnable command);
As the following code shows, the task passed to the execute() method is an instance of Runnable.
executorService.execute(()->{
System.out.println("ThreadPoolDemo.execute");
});
The submit method
The submit() method is used to submit tasks that require a return value.
Future<?> submit(Runnable task);
The thread pool returns an object of type Future, which can be used to determine whether the task executed successfully; the return value can be retrieved with the Future's get() method. get() blocks the current thread until the task completes, while get(long timeout, TimeUnit unit) blocks for at most the given time and then returns, possibly before the task has finished.
Future<?> submit = executorService.submit(() -> {
    System.out.println("ThreadPoolDemo.submit");
});
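When the task actually produces a value, submit a Callable instead of a Runnable and read the result from the Future; a minimal sketch reusing the executorService from above (the timeout is just an example):

// assumes import java.util.concurrent.*
Future<Integer> future = executorService.submit(() -> {
    // some computation running inside the pool
    return 21 * 2;
});

try {
    Integer result = future.get(1, TimeUnit.SECONDS);  // wait at most 1 second
    System.out.println("result = " + result);
} catch (TimeoutException e) {
    System.out.println("task did not finish in time");
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}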
Closing the thread pool
Gracefully shutting down a thread pool is actually a bit of a headache: threads are easy to start but not nearly as easy to stop. In general, most programmers rely on the two methods the JDK provides for shutting down a pool: shutdown and shutdownNow.
The pool is shut down by calling its shutdown or shutdownNow method. Both work by iterating over the worker threads in the pool and interrupting them one by one via Thread.interrupt() (note that interrupt merely sets a flag on the thread; it does not force the thread to stop, and if the thread never responds to the flag, the interrupt has no effect), so a task that cannot respond to interruption may never terminate.
shutdownNow first sets the pool state to STOP, then tries to stop all threads that are executing or pausing on tasks and returns the list of tasks still waiting to be executed, whereas shutdown merely sets the pool state to SHUTDOWN and then interrupts all threads that are not currently executing tasks.
The isShutdown method returns true as soon as either of the two shutdown methods has been called. The pool is fully closed only when all tasks have terminated, at which point isTerminated returns true. Which method to call depends on the nature of the tasks submitted to the pool: usually shutdown is used, but if it is acceptable for submitted tasks not to finish, shutdownNow can be called instead.
The safer shutdown method is generally recommended for closing the thread pool; a more elegant approach, the two-phase termination pattern, will be covered in detail with the concurrent programming design patterns.
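A common shutdown pattern (assuming the tasks respond to interruption) combines shutdown(), awaitTermination() and shutdownNow(); a minimal sketch using the executorService from earlier:

// assumes import java.util.concurrent.*
executorService.shutdown();                       // stop accepting new tasks, let queued ones finish
try {
    if (!executorService.awaitTermination(30, TimeUnit.SECONDS)) {
        executorService.shutdownNow();            // time's up: interrupt running tasks, drop queued ones
    }
} catch (InterruptedException e) {
    executorService.shutdownNow();
    Thread.currentThread().interrupt();           // preserve the interrupt status
}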
Reasonable parameter
Why talk about "reasonable" parameters, and what would unreasonable parameters look like? When we create a thread pool, how should we set its parameters so that they are reasonable? There is in fact some basis for deciding this. First, look at the following way of creating a pool:
ExecutorService executorService = new ThreadPoolExecutor(
        5, 5, 5, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(5),
        r -> {
            Thread thread = new Thread(r);
            thread.setName("Thread Pool principles");
            return thread;
        });
Is this configuration reasonable? Hard to say, because we have nothing to compare it against. In actual development, the nature of the task (for example, how IO-heavy it is) determines the core size we should choose. We can analyze a task from the following perspectives:
- The nature of the task: CPU intensive task, IO intensive task and mixed task;
- Task priority: high, medium and low;
- Task execution time: long, medium and short;
- Task dependencies: Whether they depend on other system resources, such as database connections;
Tasks of different natures can be handled by separate thread pools of different sizes; the key distinction is CPU-intensive versus IO-intensive.
CPU-intensive tasks should be given as few threads as possible, for example a pool of Ncpu+1 threads. (Use Runtime.getRuntime().availableProcessors() to get the number of available processors.)
IO-intensive task threads are not executing all of the time, so configure more threads, for example 2*Ncpu.
If a mixed workload can be split into a CPU-intensive part and an IO-intensive part, and the execution times of the two parts are not too different, running them split will give higher throughput than running them serially. If their execution times differ greatly, there is no point in splitting them. Again, Runtime.getRuntime().availableProcessors() returns the number of processors on the current machine.
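A sketch of sizing pools from the processor count, following the Ncpu+1 and 2*Ncpu rules of thumb above; the queue capacity is arbitrary here.

// assumes import java.util.concurrent.*
int nCpu = Runtime.getRuntime().availableProcessors();

// CPU-intensive work: roughly one thread per core, plus one
ThreadPoolExecutor cpuBoundPool = new ThreadPoolExecutor(
        nCpu + 1, nCpu + 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

// IO-intensive work: threads spend much of their time waiting, so allow more of them
ThreadPoolExecutor ioBoundPool = new ThreadPoolExecutor(
        2 * nCpu, 2 * nCpu, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));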
Tasks with different priorities can be processed using the PriorityBlockingQueue. It allows higher priority tasks to be executed first (note: if higher priority tasks are always submitted to the queue, lower priority tasks may never be executed)
Tasks with different execution times can be assigned to thread pools of different sizes, or a priority queue can be used so that shorter tasks run first. For tasks that depend on a database connection pool: the longer a thread waits for the database to return results after submitting SQL, the longer the CPU sits idle, so the thread count should be set larger to make better use of the CPU.
A bounded queue is recommended. It increases the stability and early-warning capability of the system and can be sized as large as necessary, while avoiding the OOM that too many queued tasks would otherwise cause.
Summary
This article mainly introduced the implementation principle of thread pools and some usage techniques. In practical development, thread pools are an essential skill for any programmer moving past the basics, so mastering thread pool technology is a top priority!