This is the 12th day of my participation in the August Text Challenge. More challenges in August.
Let's start with creating a thread pool, which involves the following topics.

First, the constructor. A thread pool's constructor takes a surprisingly complicated set of parameters. Most classes are simple to construct; a Map, for instance, can often be created without passing any arguments at all. A thread pool, however, has several constructors with many parameters, and the meaning of each parameter needs to be understood before a pool can be created properly.

Second, whether we should create thread pools manually or automatically. Automatic creation is certainly convenient, and we generally value automation, but for thread pools it has some drawbacks, and in some cases manual creation is the better choice.

Third, how many threads should a pool contain? Say there are 1,000 tasks: how many threads should execute them?

Fourth, how to stop a thread pool. Stopping is the opposite of creating: with thread pools, creation is the hard part and stopping is comparatively easy. There is a lot to think about when creating one, but shutting one down is much simpler.
Thread pool parameters
Let’s start by learning the parameters of the thread pool constructor. Here we present six parameters that we need to know when creating a thread pool.
Parameter name | Type | Meaning |
---|---|---|
corePoolSize | int | Number of core threads |
maxPoolSize | int | Maximum number of threads |
keepAliveTime | long | How long idle threads beyond the core count are kept alive |
workQueue | BlockingQueue | The queue that holds waiting tasks |
threadFactory | ThreadFactory | Factory used to create new threads when the pool needs them |
handler | RejectedExecutionHandler | Rejection policy applied when the pool cannot accept a submitted task |
I can't explain everything at once, so let me first outline what these six parameters are for; afterwards we will go through them one by one in detail.

The first is corePoolSize, the number of core threads. The second is maxPoolSize, the maximum number of threads. The third is keepAliveTime, the keep-alive time for idle threads. The fourth is workQueue, the queue that stores tasks, usually some kind of blocking queue. The fifth is threadFactory, which is used to create new threads. The last is handler, the processor for the rejection policy: if the pool eventually cannot accept any more work, this handler carries out the rejection.
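To make these six parameters concrete, here is a minimal sketch of constructing a pool by hand (all values are illustrative). Note that the constructor takes keepAliveTime as a long together with a TimeUnit:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// All six parameters passed explicitly (values are illustrative):
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5,                                    // corePoolSize: resident threads
        10,                                   // maxPoolSize: upper limit when busy
        60L, TimeUnit.SECONDS,                // keepAliveTime for idle non-core threads
        new ArrayBlockingQueue<>(100),        // workQueue: bounded task queue
        Executors.defaultThreadFactory(),     // threadFactory: creates new threads
        new ThreadPoolExecutor.AbortPolicy()  // handler: rejection policy
);

int core = pool.getCorePoolSize();   // 5
int max  = pool.getMaximumPoolSize(); // 10
pool.shutdown();
```

AbortPolicy (throw an exception on rejection) is also what the pool uses when no handler is passed at all.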
corePoolSize and maxPoolSize

If you haven't worked with thread pools before, the relationship between corePoolSize and maxPoolSize may be surprising; the design is actually quite clever.
Let's look at how the two parameters differ and what each means. corePoolSize is the number of core threads. Right after the pool is initialized there are no threads in it at all; the thread count starts at 0. When a new task is submitted, a new thread is created to execute it. With no special settings, once the thread count reaches corePoolSize it usually does not drop below it again, because core threads are not destroyed even when there are no further tasks to execute. Suppose corePoolSize is set to 5 and five tasks are suddenly submitted: the pool creates five threads, which makes perfect sense.
But there is also a parameter called maxPoolSize. What is it for? The number of tasks is often neither uniform nor fixed: the same thread pool may have a huge number of tasks to run today and very few tomorrow, and this unevenness is the norm. maxPoolSize was introduced to handle exactly this situation. Continuing the earlier example, with corePoolSize set to 5 and five threads created, those five threads normally stay alive because they are core threads, so the count stays at 5 even if no new tasks arrive for a while (barring exceptions, which we will ignore). But sometimes five threads are not enough: a burst of tasks arrives and we need more threads.

That extra thread count also needs an upper limit, since it cannot be infinite. That upper limit is maxPoolSize.
Let’s take a look at this picture, as shown below:
Think of it from the left: on the left is the core capacity we start with. As tasks come in, the pool creates threads until it reaches corePoolSize — say, five core worker threads.

Now suppose more tasks arrive. The pool puts them into its work queue. Recall that one of the constructor parameters is the work queue, where pending tasks are stored; the pool prefers not to break through corePoolSize. So with corePoolSize at 5 and all five threads busy, five more incoming tasks are placed in the queue rather than run immediately; whenever a thread finishes its current task, it takes the next one from the queue. At this point the current pool size has not grown beyond corePoolSize.
But the queue can fill up, because a queue can be given a capacity. Say its capacity is 10: the five core threads are all busy, the queue already holds 10 tasks, and still more tasks are being submitted to the pool. This is where the pool's elasticity comes in.

At this point the pool starts adding new threads to help execute the backlog, because the existing threads clearly cannot keep up with this volume of tasks. The pool grows beyond corePoolSize and gradually expands.

How far can it expand? At most to maxPoolSize. So the current pool size is the number of threads we have right now, somewhere between corePoolSize and maxPoolSize, and maxPoolSize is the overall ceiling covering all of it.
Add thread rules
Let’s look at the add thread rule in detail.
The first rule: if the number of threads is smaller than the number of core threads, the pool creates a new thread to run the task as soon as it is submitted — even if some existing threads happen to be idle.
Second, suppose the thread count is equal to or greater than the core count. Take the equals case: the count equals corePoolSize, say 5, but is less than the maximum, say 10. In this situation a new task is queued rather than triggering a new thread, because there is still room in the queue; tasks normally go into the queue first. Only when the queue truly cannot accept any more, and the thread count is still 5 (below the limit of 10), does the pool break through corePoolSize and create new threads to run tasks — and even that has a ceiling.
Third, if the queue is full and the thread count has already reached the maximum of 10, the pool cannot grow any further. If yet another task arrives at this point, the pool has to execute the rejection policy — the handler parameter we saw at the beginning.
So we can now see the parameters involved. The first is the core thread count, the number of resident threads in the pool. The second is the maximum thread count, the ceiling the pool may grow to when there are too many tasks. workQueue is the work queue that holds pending tasks, and when the queue is full and the thread count has reached maxPoolSize, the handler parameter executes the rejection policy.
So let’s use a picture to make this a little bit clearer. As shown below:
When a task comes in, the pool first checks whether the core threads are all occupied. If not, a core thread is created to run the task. If the core threads are full, the task is placed on the queue. If the queue is also full, more threads are created. If even maxPoolSize has been reached, there is no choice but to reject the task. This flow chart illustrates the entire process.
So at this point we can sum up:
The add-thread rules are evaluated one by one, in this order: first the pool checks the core thread count (corePoolSize), then whether the queue is full, and finally, if the queue is full, whether the maximum thread count has been reached. If everything is full, the task is rejected.
Now that we understand the rules, here is an analogy. Imagine going out for barbecue in autumn, when it is neither too cold nor too hot, so we prefer to eat inside the restaurant. Suppose there are five tables inside; those five tables are always there, like our corePoolSize. If more guests arrive than the inside tables can seat, the restaurant expands by setting up temporary tables outside — that is our maxPoolSize. The capacity outside is also limited, say another five tables. When the night ends, the tables outside are taken back in, while the tables inside stay, because they always exist.
Let's take another practical example with real parameter values: a core size of 5, a maximum of 10, and a queue capacity of 100. As tasks come in, the pool first creates the five core threads. Further tasks are added to the queue, which keeps filling until it holds 100. If still more tasks arrive, the pool creates new threads, up to the maximum of 10. Once there are 10 threads and the queue is full, any further task is refused.
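That 5/10/100 example can be observed directly in code. The sketch below submits tasks that block on a latch, so the pool's state at each stage is deterministic:

```java
import java.util.concurrent.*;

CountDownLatch gate = new CountDownLatch(1);
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5, 10, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(100));

// Each task parks until the gate opens, so submitted tasks never finish early.
Runnable blocker = () -> {
    try { gate.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
};

for (int i = 0; i < 5; i++) pool.execute(blocker);
int afterCore = pool.getPoolSize();      // 5 core threads created, one per task

for (int i = 0; i < 100; i++) pool.execute(blocker);
int queued    = pool.getQueue().size();  // queue is now full: 100
int stillCore = pool.getPoolSize();      // still 5 — the queue absorbed the work

for (int i = 0; i < 5; i++) pool.execute(blocker);
int expanded  = pool.getPoolSize();      // queue full, so the pool grew to max: 10

boolean rejected = false;
try {
    pool.execute(blocker);               // nowhere left to put this one
} catch (RejectedExecutionException e) {
    rejected = true;                     // default AbortPolicy throws
}

gate.countDown();                        // release everything and clean up
pool.shutdown();
```

Because new workers are created synchronously inside execute(), the counters above are stable rather than timing-dependent.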
Characteristics of adding and removing threads

Having worked through the process above, let's summarize the characteristics of how the pool adds and removes threads. In fact, once the rules and flow are understood, the characteristics become much clearer on a second look.
First, what if we create a pool with corePoolSize and maximumPoolSize equal to each other? It becomes a fixed-size thread pool. After the initial ramp-up phase the thread count never changes; even if the queue fills up, the pool will not expand.
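This is exactly how the JDK's own Executors.newFixedThreadPool(n) is defined: corePoolSize equal to maximumPoolSize, with an unbounded queue behind them.

```java
import java.util.concurrent.*;

// core == max gives a fixed-size pool; the JDK's newFixedThreadPool(n)
// is built with precisely this configuration.
int n = 4;
ThreadPoolExecutor fixed = new ThreadPoolExecutor(
        n, n,
        0L, TimeUnit.MILLISECONDS,          // keepAliveTime is irrelevant: no non-core threads exist
        new LinkedBlockingQueue<Runnable>());

int c = fixed.getCorePoolSize();   // 4
int m = fixed.getMaximumPoolSize(); // 4
fixed.shutdown();
```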
Second, the thread pool actually wants to keep the thread count small, and only increases it when the load becomes heavy. This is evident from the rules: the pool does not add threads beyond the core count until the queue is full. It avoids involving more threads than necessary.
Third, suppose we set the maximum thread count to a very high value, such as Integer.MAX_VALUE — a value so large we can treat it as effectively unreachable. In that case the pool can accommodate almost any amount of concurrency. Why? Because the queue still exists and still has finite capacity. Say the queue holds 100 tasks, it is full, and two or three thousand more tasks pour in. By the rules, once the queue is full the next step is to create more threads, up to the maximum — and since the maximum is effectively unreachable, the pool may spin up thousands of threads to handle those tasks simultaneously. With this setting, the pool's capacity is nearly unlimited: the thread count can expand arbitrarily.
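A minimal sketch of such a configuration (the 5 and 100 follow the example above; only the bounded queue stands between the load and unlimited thread creation). Incidentally, the JDK's Executors.newCachedThreadPool() uses the same Integer.MAX_VALUE ceiling, paired with a SynchronousQueue instead of a bounded one.

```java
import java.util.concurrent.*;

// maxPoolSize is effectively unlimited; the 100-slot queue is the only
// brake before the pool spawns as many threads as the load demands.
ThreadPoolExecutor elastic = new ThreadPoolExecutor(
        5, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(100));

int ceiling = elastic.getMaximumPoolSize();
elastic.shutdown();
```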
Fourth, the pool only creates extra threads when the queue is full — but what if the queue can never fill? In the third point the queue had a limit; here it does not. If we use an unbounded queue such as LinkedBlockingQueue, which has no capacity limit, the queue can never be full, so the pool can never create more threads than corePoolSize: by rule, new threads beyond the core count are created only after the queue fills. Excess tasks simply accumulate in the queue, and the thread count never swells past corePoolSize. It does not matter how large you set maxPoolSize; it will never come into play.
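We can confirm this with a short sketch: with an unbounded LinkedBlockingQueue, the pool never grows past its core size no matter how many tasks back up (the values 2, 10, and 50 are illustrative):

```java
import java.util.concurrent.*;

CountDownLatch gate = new CountDownLatch(1);
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 10,                          // maxPoolSize of 10 will never be used...
        60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>());   // ...because this queue can never fill up

Runnable blocker = () -> {
    try { gate.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
};
for (int i = 0; i < 50; i++) pool.execute(blocker);

int poolSize = pool.getPoolSize();      // stays at corePoolSize: 2
int backlog  = pool.getQueue().size();  // the other 48 tasks just pile up

gate.countDown();
pool.shutdown();
```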
keepAliveTime
keepAliveTime means keeping threads alive for a period of time. As we have just seen, the core thread count can be exceeded under certain circumstances — but breaking through it and then never reclaiming the extra threads would be wasteful, since only the core threads are meant to be permanent. keepAliveTime is the keep-alive period that lets the pool reclaim redundant threads once things quiet down.

If the thread count exceeds the core count, and the extra threads sit idle for longer than the specified keepAliveTime, they are terminated and reclaimed. This mechanism reduces resource consumption when there are too many redundant threads. By default only the threads beyond the core count are reclaimed; threads within the core count are kept, since they are the core. That is, unless you change the setting — for example by enabling allowCoreThreadTimeOut, in which case even core threads can be reclaimed, though this is uncommon. In effect we give threads a buffer period: once a redundant thread has been idle past the buffer with no task to execute, it is no longer needed and is reclaimed.
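The opt-in for reclaiming core threads is the allowCoreThreadTimeOut method on ThreadPoolExecutor (it requires a positive keepAliveTime):

```java
import java.util.concurrent.*;

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5, 10, 1L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(100));

// By default only threads above corePoolSize are reclaimed after
// keepAliveTime; opt in to reclaiming idle core threads as well:
boolean before = pool.allowsCoreThreadTimeOut();  // false by default
pool.allowCoreThreadTimeOut(true);
boolean after = pool.allowsCoreThreadTimeOut();   // now true
pool.shutdown();
```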
ThreadFactory
A ThreadFactory is literally a thread factory: it produces the threads that execute tasks.

New threads are created by the ThreadFactory. If you don't specify one, the default is used: Executors.defaultThreadFactory(). By default, all created threads belong to the same thread group, have the same NORM_PRIORITY priority, and are not daemon threads.
If we want to supply our own thread factory, we can set the thread name, thread group, priority, daemon status, and so on according to our needs. Usually, though, we don't have to: the default factory is basically enough. The following source code makes this clear.
```java
static class DefaultThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup()
                            : Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" + poolNumber.getAndIncrement() + "-thread-";
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r,
                              namePrefix + threadNumber.getAndIncrement(),
                              0);
        if (t.isDaemon())
            t.setDaemon(false);
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}
```
Looking at the source makes it clear. DefaultThreadFactory implements the ThreadFactory interface, and the class is very simple. When the pool needs a thread, it calls newThread(), which in turn invokes the Thread constructor with several arguments: the thread group, the runnable, a generated thread name, and a stack size of 0. It then makes sure the thread is not a daemon and has normal priority. That is all a thread factory is. In general, the default factory covers most needs.
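If you do need a custom factory — most commonly to give threads meaningful names for debugging — a minimal sketch looks like this (the "order-worker" prefix is an arbitrary example):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal custom factory: named threads, non-daemon, normal priority.
class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger(1);
    private final String prefix;

    NamedThreadFactory(String prefix) { this.prefix = prefix; }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
        t.setDaemon(false);                   // keep the JVM alive while tasks run
        t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}

ThreadFactory factory = new NamedThreadFactory("order-worker");
Thread t1 = factory.newThread(() -> {});
Thread t2 = factory.newThread(() -> {});
String n1 = t1.getName();   // "order-worker-1"
String n2 = t2.getName();   // "order-worker-2"
```

Pass an instance of this factory as the threadFactory constructor argument and every pool thread will carry the prefix in stack traces and thread dumps.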
The parameters we haven't covered in detail yet are workQueue and handler.
workQueue
A workQueue is exactly that: a work queue, and there are usually three queue types to choose from. The first is the direct-handoff queue, SynchronousQueue, which we can use when we don't have many tasks and simply want to hand each one straight to a thread. This queue has no capacity at all: unlike the earlier example of a queue holding 10 tasks, a SynchronousQueue has nowhere to store anything. So if we use it, we should be aware that maxPoolSize may need to be larger, because with no queue to buffer tasks, the pool will create new threads readily.
The second type is the unbounded queue, LinkedBlockingQueue. As mentioned earlier, a LinkedBlockingQueue never fills up, so when all corePoolSize core threads are busy, new tasks go into the queue. In this case setting maxPoolSize higher is useless: tasks are always put straight into the queue, and the queue always has room. A queue like this can genuinely absorb traffic surges — when processing can't keep up, tasks simply wait in the queue. But there is a risk: if processing consistently lags behind submission, the queue grows without bound, which may waste memory or even cause an OOM error.
The third and most typical type is the bounded queue. ArrayBlockingQueue is the classic example; it is given a fixed size, say 10. With a bounded queue, the pool's maxPoolSize parameter becomes meaningful: when the queue fills up, new threads are created.
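The difference between the three queue types is easy to see through offer(), which returns false instead of blocking when there is no room:

```java
import java.util.concurrent.*;

BlockingQueue<Runnable> direct    = new SynchronousQueue<>();     // no capacity at all
BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();  // practically unlimited
BlockingQueue<Runnable> bounded   = new ArrayBlockingQueue<>(10); // fixed capacity of 10

boolean d = direct.offer(() -> {});     // false: no consumer is waiting to take it
boolean u = unbounded.offer(() -> {});  // true: effectively always accepts
boolean b = bounded.offer(() -> {});    // true: 9 slots remain

int directCap  = direct.remainingCapacity();   // always 0 for SynchronousQueue
int boundedCap = bounded.remainingCapacity();  // 9 after one element
```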
We can choose the type of queue that suits us according to our needs.