Java is a multithreaded language, and almost every Java project in production uses multithreading. Threads are one of the most important system resources; used carelessly, they lead to poor efficiency or outright failures. Consider this scenario: you have a telephone dialing system with a pile of dialing tasks to execute, and the first thing to reach for is asynchronous, multithreaded execution. If you create a new Thread for every dialing task, then 10,000 concurrent tasks mean 10,000 threads, and the cost of creating and destroying all of them is enormous. In practice these tasks do not have to complete in real time; the goal is to raise concurrency as far as possible within a controllable bound. This is where thread pool technology comes in, and the most common thread pool implementation in Java is ThreadPoolExecutor. Let's look at ThreadPoolExecutor as a whole. The class carries a lot of important Javadoc, but rather than starting from the comments, let's start from how the class is used and the concepts behind it.
Basic usage
Define the necessary parameters, build a ThreadPoolExecutor object, and then call its execute() method; a minimal usage sketch follows the parameter list. Parameter descriptions:
- corePoolSize: the minimum number of threads the pool retains. If fewer threads are alive, new ones are created when execute() runs.
- maximumPoolSize: the maximum number of threads the pool may hold.
- keepAliveTime and unit: how long an idle thread is kept alive before being terminated.
- workQueue: stores submitted tasks that are waiting to run; the queue may have a size limit.
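Here is a minimal sketch of that usage, assuming Java 8+; the pool sizes, queue capacity, and the dialing tasks themselves are illustrative placeholders, not values from the original article:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        // Parameters are illustrative: 2 core threads, at most 4 threads,
        // idle threads beyond the core count die after 60s, and up to
        // 100 tasks can wait in the bounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                               // corePoolSize
                4,                               // maximumPoolSize
                60L, TimeUnit.SECONDS,           // keepAliveTime + unit
                new ArrayBlockingQueue<>(100));  // workQueue

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    "dialing task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // no new tasks; queued ones still run
    }
}
```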
Thread management mechanism
Many people misunderstand the relationship between corePoolSize, maximumPoolSize, and workQueue, assuming that corePoolSize and maximumPoolSize always matter regardless of which queue is chosen. They do not.
First, the basic thread-creation rules:
- By default the pool starts with 0 threads. When a task arrives, a new thread is created if the number of live threads is below corePoolSize.
- If all running core threads are busy, the executor prefers to queue the new task rather than create another thread.
- If the task cannot be queued, a new thread is created as long as the total stays within maximumPoolSize; beyond that, the task is rejected.
Alternatively, if you want core threads to exist right after initialization, call prestartCoreThread(), which starts one, or prestartAllCoreThreads(), which starts them all; a small sketch follows.
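A quick illustration of pre-starting core threads; the pool parameters are made up for the example:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        pool.prestartCoreThread();                    // start a single core thread now
        int started = pool.prestartAllCoreThreads();  // start the remaining core threads
        System.out.println("core threads started: " + (started + 1));

        pool.shutdown();
    }
}
```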
Next, look at the queuing strategies:
- Direct hand-off, using SynchronousQueue: nothing is stored; a task is handed straight to a thread, and if no thread is free a new one is created.
- Unbounded queueing, using an unbounded queue such as LinkedBlockingQueue: every task the core threads cannot handle is stored, the queue has no upper limit, and maximumPoolSize effectively never comes into play.
- Bounded queueing, using a bounded queue such as ArrayBlockingQueue: tasks beyond what the core threads can handle are stored up to the queue's capacity; once the queue is full, new threads are created up to maximumPoolSize, and beyond that the task is rejected. This protects resources best and is the most common strategy.
As these rules show, the core thread count, the maximum thread count, and the queue structure all affect one another: how tasks queue, how large the queue is, and whether maximumPoolSize even matters are not fixed in isolation. A sketch of the three configurations follows.
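A minimal sketch of the three queue choices, assuming Java 8+; the pool sizes and queue capacity are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueStrategies {
    public static void main(String[] args) {
        // Direct hand-off: nothing is queued; threads are created up to
        // maximumPoolSize, after which submissions are rejected.
        ThreadPoolExecutor direct = new ThreadPoolExecutor(
                2, 8, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        // Unbounded queue: everything beyond the core threads waits in the
        // queue, so maximumPoolSize is effectively never reached.
        ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                2, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Bounded queue: up to 100 tasks wait; beyond that, extra threads are
        // created up to maximumPoolSize, and only then are tasks rejected.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                2, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

        direct.shutdown();
        unbounded.shutdown();
        bounded.shutdown();
    }
}
```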
Now let's look at the keep-alive (survival) mechanism.
When the thread count exceeds corePoolSize, the pool keeps an idle thread alive for keepAliveTime and then destroys it. By default this applies only to non-core threads; if you want it to apply to core threads as well, call allowCoreThreadTimeOut(true). (Internally there is no such thing as a specially marked core thread; it is just a count.) A one-line sketch follows.
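For example (the pool parameters are again illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // By default only threads beyond corePoolSize die after 30s of idleness;
        // this lets the 2 core threads time out as well (requires keepAliveTime > 0).
        pool.allowCoreThreadTimeOut(true);

        pool.shutdown();
    }
}
```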
To sum up: the thread pool keeps some threads alive across executions even when they are idle, to avoid the overhead of repeatedly creating and destroying threads, so the next task can pick up a pooled thread directly. But too many core threads cannot be kept, because idle threads also have a cost. The maximum thread count protects the stability of the whole system and prevents threads from crowding out other resources under heavy concurrency. The work queue preserves task order, buffers work, and keeps the system reliable. The keep-alive rule serves a similar purpose to keeping core threads, just with a shorter lifetime.
There is also a rejection mechanism, which handles the abnormal cases; the built-in handlers are sketched below.
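A sketch of picking one of the four built-in RejectedExecutionHandler implementations; the pool parameters are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        // Built-in policies: AbortPolicy (the default, throws
        // RejectedExecutionException), CallerRunsPolicy (the submitting thread
        // runs the task itself), DiscardPolicy (silently drops the task), and
        // DiscardOldestPolicy (drops the oldest queued task and retries).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.shutdown();
    }
}
```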
ctl: thread pool state control
The ctl field is the core control state of the entire thread pool. It packs two values into a single int:
- workerCount: the number of effective threads. It is essentially the number of live threads, though at certain moments there can be a transient difference.
- runState: the running state of the thread pool. The low 29 bits of ctl (a 32-bit int) hold workerCount, so the maximum worker count is (2^29)-1; the high 3 bits hold runState. (The bit layout is sketched after this list.)
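The bit-packing, paraphrased from the OpenJDK 8 source and wrapped here in a standalone class so it can be run; the constant and helper names follow the JDK's own:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative reproduction of how ThreadPoolExecutor packs runState and
// workerCount into a single int.
public class CtlLayout {
    static final int COUNT_BITS = Integer.SIZE - 3;       // 29 bits for workerCount
    static final int CAPACITY   = (1 << COUNT_BITS) - 1;  // max worker count: 2^29 - 1

    // runState lives in the high-order 3 bits
    static final int RUNNING    = -1 << COUNT_BITS;
    static final int SHUTDOWN   =  0 << COUNT_BITS;
    static final int STOP       =  1 << COUNT_BITS;
    static final int TIDYING    =  2 << COUNT_BITS;
    static final int TERMINATED =  3 << COUNT_BITS;

    static int runStateOf(int c)     { return c & ~CAPACITY; } // keep the high 3 bits
    static int workerCountOf(int c)  { return c & CAPACITY; }  // keep the low 29 bits
    static int ctlOf(int rs, int wc) { return rs | wc; }       // pack both into one int

    public static void main(String[] args) {
        AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
        System.out.println("workers: " + workerCountOf(ctl.get()));
        System.out.println("running: " + (runStateOf(ctl.get()) == RUNNING));
    }
}
```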
runState has the following states:
- RUNNING: accepts new tasks and processes queued tasks.
- SHUTDOWN: accepts no new tasks, but still processes queued tasks.
- STOP: accepts no new tasks, does not process queued tasks, and interrupts tasks in progress.
- TIDYING: all tasks have terminated and the effective thread count is 0; the terminated() hook is invoked.
- TERMINATED: terminated() has completed.
When shutdown() is called, the state moves from RUNNING to SHUTDOWN: no new tasks are accepted, but every task already in the queue is still processed. If shutdownNow() is called, the state jumps straight to STOP. Once the pool has no worker threads and the queue is empty, the state becomes TIDYING, and when terminated() finishes it becomes TERMINATED. A small usage sketch follows.
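A sketch of driving these transitions from user code; the timeout and the task body are illustrative:

```java
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.execute(() -> System.out.println("task running"));

        pool.shutdown();   // RUNNING -> SHUTDOWN: finish queued tasks, accept nothing new
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            // -> STOP: drain the queue and interrupt running workers
            List<Runnable> neverStarted = pool.shutdownNow();
            System.out.println("tasks never started: " + neverStarted.size());
        }
    }
}
```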
execute()
With the rules and mechanisms above in mind, let's look at the source code starting from this entry point to see exactly how the process is implemented.
- If fewer than corePoolSize threads are running, try to create a new thread with this task as its first task.
- If the task is queued successfully, check the pool state again to decide whether it should stay queued, because the state may have changed while enqueueing. If it stays queued and there are no live threads, create an empty worker (one with no first task).
- Otherwise, try to create a new (non-core) thread; if that also fails, reject the task. Why does the second point end by creating an empty worker? Because workers poll the queue for their next task, so the queued task will be picked up by that idle thread directly. (An annotated sketch of execute() follows.)
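For reference, here is the body of execute(), lightly annotated and paraphrased from the OpenJDK 8 source; it is an excerpt rather than a standalone program, since ctl, workQueue, addWorker() and friends are ThreadPoolExecutor internals:

```java
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // 1. Fewer than corePoolSize workers: try to start a new core worker
    //    with this task as its first task.
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // 2. Otherwise try to queue the task, then re-check the pool state,
    //    because it may have been shut down while we were enqueueing.
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (!isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false); // ensure at least one thread exists to drain the queue
    }
    // 3. Queue full (or pool not running): try a non-core worker; if that
    //    also fails, reject the task.
    else if (!addWorker(command, false))
        reject(command);
}
```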
The comments basically explain the code, and there is nothing especially tricky in it. The main piece is the addWorker() method, which we'll look at next.
addWorker()
First, the overall idea: judging from the method's description, addWorker() fails when the thread pool is in the wrong state, when the thread count has already hit its bound, or when the thread factory fails to create a thread.
This is a longer method, so let's look at it in two parts. The first part:
retry: may look unfamiliar if you have never read the source before. It is a loop label, part of standard Java syntax: retry: marks the outer for loop, and the break retry and continue retry statements refer to it, so both operate on the outer loop rather than the inner one. The label can be any legal Java identifier, not just retry; a small example follows.
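A tiny self-contained illustration of labeled break and continue (the loop bounds and output are made up for the example):

```java
public class LabelDemo {
    public static void main(String[] args) {
        retry:
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                if (i == 2) break retry;     // exit the OUTER loop entirely
                if (j == 2) continue retry;  // next iteration of the OUTER loop
                System.out.println("i=" + i + " j=" + j);
            }
        }
    }
}
```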
Then look at the if statement in the outer for loop.
This if decides when to return false, and the queue being empty is a necessary part of that condition. addWorker() is called not only to accept new tasks but also to make sure queued tasks get processed: the SHUTDOWN state still processes queued tasks, so if the queue is not empty, the method keeps going.
Now the inner for loop:
It first checks whether the worker count respects the corePoolSize or maximumPoolSize bound (whichever applies); if not, it returns failure. Then it tries a CAS to increment workerCount, spinning until it succeeds, unless the state of the thread pool has changed, in which case control goes back to the outer for loop to re-check the pool state.
So the point of the first piece of code is to increment workerCount once the conditions are met; an annotated version is sketched below.
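The first half, lightly annotated and paraphrased from the OpenJDK 8 source (an excerpt, not a standalone program; firstTask and core are the method's parameters):

```java
retry:
for (;;) {
    int c = ctl.get();
    int rs = runStateOf(c);

    // Fail unless the pool is RUNNING, or it is SHUTDOWN and we are only
    // adding a worker (firstTask == null) to help drain a non-empty queue.
    if (rs >= SHUTDOWN &&
        ! (rs == SHUTDOWN &&
           firstTask == null &&
           ! workQueue.isEmpty()))
        return false;

    for (;;) {
        int wc = workerCountOf(c);
        // Respect corePoolSize or maximumPoolSize depending on the 'core' flag.
        if (wc >= CAPACITY ||
            wc >= (core ? corePoolSize : maximumPoolSize))
            return false;
        if (compareAndIncrementWorkerCount(c))
            break retry;            // workerCount reserved: go create the Worker
        c = ctl.get();              // CAS failed: re-read ctl
        if (runStateOf(c) != rs)
            continue retry;         // pool state changed: re-check from the top
        // else the CAS failed because workerCount changed; retry the inner loop
    }
}
```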
The second part:
It creates a Worker object, which wraps a thread produced by the thread factory and the task (possibly null) to be executed.
If the thread was created successfully, a reentrant lock is taken so the new Worker can be added to the workers member variable; before adding it, the state of the thread pool and the state of the new thread are checked again.
If the Worker is added to workers, the newly created thread is started.
If the add or the start fails, addWorkerFailed(w) runs: under the lock it rolls back workers and workerCount, and then checks whether the thread pool needs to be terminated. An abridged version of this second half is sketched below.
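The second half, abridged and annotated from the OpenJDK 8 source (an excerpt; the largestPoolSize bookkeeping is omitted here):

```java
boolean workerStarted = false;
boolean workerAdded = false;
Worker w = null;
try {
    // A Worker wraps the (possibly null) first task and a thread from the ThreadFactory.
    w = new Worker(firstTask);
    final Thread t = w.thread;
    if (t != null) {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            // Re-check the pool state while holding the lock before publishing the worker.
            int rs = runStateOf(ctl.get());
            if (rs < SHUTDOWN ||
                (rs == SHUTDOWN && firstTask == null)) {
                if (t.isAlive())   // the factory must not have started the thread already
                    throw new IllegalThreadStateException();
                workers.add(w);
                workerAdded = true;
            }
        } finally {
            mainLock.unlock();
        }
        if (workerAdded) {
            t.start();             // the worker thread begins pulling tasks
            workerStarted = true;
        }
    }
} finally {
    if (! workerStarted)
        addWorkerFailed(w);        // roll back the workers set and workerCount
}
return workerStarted;
```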
This is the general process of addWorker().
Conclusion
The other methods hold nothing special, so I won't go into them here; if you are interested, read through the source yourself. To review the key points:
- When the core threads are all busy, the thread pool prefers to queue submitted tasks rather than create new threads.
- Depending on the queue chosen, maximumPoolSize may or may not matter; there are three distinct strategies.
- ctl is the core control state of the thread pool; it packs runState (the pool's running state) together with workerCount (the number of effective threads).
- retry: is label syntax for marking a loop, and the label can be any legal Java identifier.
If you found this useful, please follow my official account: Zack said code.