The last few days have been pretty busy, so I've been away for a while; today I'm taking a break from the rush to post an update. A few days ago some friends on CSDN asked me questions about multithreading, so that's what we're going to talk about today;

Speaking of multithreading, everyone can rattle off a thing or two, such as the familiar ways to create a thread: extend the Thread class, implement the Runnable interface, implement the Callable interface, and so on; but times keep moving forward, because

training classes churn out lots of Java developers every year, and it's a little dangerous if all we can recite is those three ways of creating threads.

So what are we talking about today? The fourth way to create threads -> the thread pool; don't act like a stranger, it's used in almost every project nowadays, and if your project doesn't use it, that just shows your project is rather old-fashioned. Why talk about thread pools? Because interviewers love to ask about them; if you don't believe me,

Take a look at the screenshot below:

You can see that companies big and small like to ask about thread pools, so that's what we'll cover today. First, what is a thread pool?

As the name implies, it's a pool that holds reusable threads, much like a database connection pool. So why use one? Without a thread pool, would you really create a new thread for every single task? If you dare to new a Thread every time, all I can say is that you're bold, and you'd better not let your boss find out;

The advantages of thread pools are as follows:

(1) Lower resource consumption: reusing existing threads avoids the overhead of repeatedly creating and destroying them;
(2) Faster response: when a task arrives it can run immediately on an already-existing thread, with no wait for a new thread to be created;
(3) Easier control over concurrency: creating threads without limit can eat up memory and cause an OOM, and it also leads to excessive CPU context switching (each switch costs time, since the current thread's state must be saved and another thread's state restored);
(4) More powerful features, such as delayed and periodic execution with a scheduled thread pool.

To create a thread pool:

There are four common factory methods on the Executors class: newFixedThreadPool(), newSingleThreadExecutor(), newCachedThreadPool(), and newScheduledThreadPool(). For example, if I create a pool with newFixedThreadPool, all I have to pass in is an int for the number of threads, pretty simple, right?
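For reference, here's a minimal sketch of what each of those factory calls looks like (the variable names are just my placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorsFactoryDemo {
    public static void main(String[] args) {
        // Fixed number of threads (2 here); extra tasks wait in an unbounded queue
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        // A single worker thread; tasks run one after another
        ExecutorService single = Executors.newSingleThreadExecutor();
        // Creates threads on demand and reuses idle ones (60s keep-alive)
        ExecutorService cached = Executors.newCachedThreadPool();
        // Supports delayed and periodic execution
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        fixed.shutdown();
        single.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}
```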

Here I use it to simulate two threads handling ten tasks, as follows:
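A minimal sketch of that simulation (class and variable names are my placeholders): two fixed threads chew through ten tasks, and the console output should show only two distinct thread names.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolDemo {
    public static void main(String[] args) {
        // A pool with exactly two worker threads
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 1; i <= 10; i++) {
            final int taskNo = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " handles task " + taskNo));
        }
        pool.shutdown(); // no new tasks accepted; already-submitted tasks still run
    }
}
```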

See? All ten tasks are handled by just the two threads in my pool. The other three factory methods create pools in much the same way, so I won't go into detail; the bottom line is that none of these four approaches is what really matters, so I'm not going to dwell on them;

When you're asked which of these thread pool creation methods you use in your project, the answer is none of them; in real development we build the pool directly with ThreadPoolExecutor. Why that one? Because

the Alibaba Java development manual explicitly recommends it, and that recommendation comes out of a lot of real-world practice at Alibaba, so it's hard to argue with;

Don't doubt it: the screenshot above really is from the Alibaba development manual, and it clearly points out the shortcomings of the factory methods above.

So the focus is ThreadPoolExecutor. It has four constructors; let's look at the fullest one, which takes seven parameters. They're very important, and interviewers will ask about them:

corePoolSize: the number of core threads in the pool; it determines whether a newly added task is run on a new thread or placed in the workQueue;

maximumPoolSize: the maximum number of threads the pool may hold; whether this limit is ever reached depends on the kind of workQueue you use;

keepAliveTime: when the number of threads exceeds corePoolSize, how long an excess idle thread waits before it is destroyed;

unit: the time unit of keepAliveTime;

workQueue: the queue holding tasks that have been submitted to the pool but not yet executed. It generally comes in four flavors: a direct hand-off queue, a bounded task queue, an unbounded task queue, and a priority task queue.
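In JDK terms, those four kinds typically map to SynchronousQueue, ArrayBlockingQueue, LinkedBlockingQueue, and PriorityBlockingQueue respectively; a quick sketch (the field names are mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class WorkQueueKinds {
    // Direct hand-off: no capacity, every offer must be matched by a taking thread
    BlockingQueue<Runnable> direct = new SynchronousQueue<>();
    // Bounded task queue: holds at most 3 waiting tasks
    BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<>(3);
    // Unbounded task queue: maximumPoolSize effectively never kicks in
    BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();
    // Priority task queue: tasks must be Comparable (or you supply a Comparator)
    BlockingQueue<Runnable> priority = new PriorityBlockingQueue<>();
}
```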

threadFactory: the factory used to create new threads; the default is usually fine;

handler: the rejection policy, i.e. how to deal with new tasks when there are too many to handle;

There are four built-in rejection policies: AbortPolicy (the default), CallerRunsPolicy, DiscardOldestPolicy, and DiscardPolicy, as shown below:
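Here's a small sketch of the four built-in handlers and what each one does (the field names are mine, the comments summarize the JDK behavior):

```java
import java.util.concurrent.ThreadPoolExecutor;

public class RejectionPolicies {
    // Throws RejectedExecutionException (the default policy)
    ThreadPoolExecutor.AbortPolicy abort = new ThreadPoolExecutor.AbortPolicy();
    // Runs the rejected task in the thread that called execute()
    ThreadPoolExecutor.CallerRunsPolicy callerRuns = new ThreadPoolExecutor.CallerRunsPolicy();
    // Drops the oldest queued task, then retries the new one
    ThreadPoolExecutor.DiscardOldestPolicy discardOldest = new ThreadPoolExecutor.DiscardOldestPolicy();
    // Silently drops the rejected task
    ThreadPoolExecutor.DiscardPolicy discard = new ThreadPoolExecutor.DiscardPolicy();
}
```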

Theory needs practice to back it up, otherwise it's just empty talk, so let's try it in code: set the core thread count to 2, the maximum thread count to 5, and the blocking queue capacity to 3, then submit 6 tasks. Two tasks should go straight to the core threads, three should land in the blocking queue, and the remaining one should force one extra (non-core) thread to be created, which means only three threads will actually be running tasks. Is that really what happens? Take a look below:
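Here is a minimal sketch of that experiment, assuming AbortPolicy as the rejection handler (class and variable names are my placeholders):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                      // corePoolSize
                5,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime + unit
                new ArrayBlockingQueue<>(3),            // bounded workQueue, capacity 3
                new ThreadPoolExecutor.AbortPolicy());  // reject by throwing

        // 6 tasks: 2 go to core threads, 3 wait in the queue,
        // and the 6th forces one extra (non-core) thread to be created
        for (int i = 1; i <= 6; i++) {
            final int taskNo = i;
            pool.execute(() -> {
                System.out.println(Thread.currentThread().getName() + " runs task " + taskNo);
                try {
                    Thread.sleep(1000); // keep the thread busy so the behavior is visible
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
    }
}
```

With the one-second sleep in each task, the console output should show exactly three distinct thread names, matching the reasoning above.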

What's the difference between the thread pool's two submission methods, execute and submit?

execute() runs a task and returns nothing; submit() submits a task and returns a Future you can get a result from.

submit(Callable<T> task) lets you get the task's return value through future.get(), which blocks until the task completes; FutureTask and Callable are often used together. submit(Runnable task, T result) lets you obtain a result indirectly through the result object you pass in. submit(Runnable task) carries no real result: future.get() simply returns null when the task finishes;

future.get() blocks the calling thread; once the task finishes executing, the caller is woken up and the result is returned.
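A small sketch of the difference between execute and submit, assuming a trivial Callable that just adds two numbers (the class name is my placeholder):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecuteVsSubmit {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, no handle to the result
        pool.execute(() -> System.out.println("executed, nothing to return"));

        // submit(Callable): returns a Future carrying the task's result
        Callable<Integer> sum = () -> 1 + 2;
        Future<Integer> future = pool.submit(sum);
        System.out.println("callable result = " + future.get()); // blocks until done, prints 3

        // submit(Runnable): also returns a Future, but get() yields null on completion
        Future<?> done = pool.submit(() -> System.out.println("runnable via submit"));
        System.out.println("runnable result = " + done.get()); // prints null

        pool.shutdown();
    }
}
```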

I'd also like to dig into Future and FutureTask, but that's too much for this post, so I'll cover them next time.

Think that's the end of it? Not quite: there are still blocking queues, rejection policies, and the question of how the number of core threads should be chosen.

Blocking queues are a topic of their own, so I won't go into them here. There are four rejection policies, and you can look into them on your own if you're interested. Now let's talk about how the number of core threads is determined;

CPU-intensive tasks should use as small a thread pool as possible, typically the number of CPU cores + 1. CPU-intensive tasks keep the CPU busy, so opening too many threads only causes excessive context switching.

I/O-intensive tasks can use a somewhat larger pool, typically 2 * the number of CPU cores. I/O-intensive tasks use little CPU, so while one thread waits on I/O, other threads can keep the CPU busy with other tasks.
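A hedged sketch of those rules of thumb, sizing the pools from Runtime.availableProcessors() (treat the multipliers as starting points rather than hard rules):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: keep the pool small, roughly cores + 1
        ExecutorService cpuBound = Executors.newFixedThreadPool(cores + 1);

        // I/O-bound work: threads spend much of their time waiting, so roughly 2 * cores
        ExecutorService ioBound = Executors.newFixedThreadPool(2 * cores);

        cpuBound.shutdown();
        ioBound.shutdown();
    }
}
```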

How about it, folks? Not so hard to understand after all, pretty simple, right?

If you're interested, scan the QR code to follow my WeChat official account, where I'll keep sharing high-quality technical articles.