In Java, threads are used to execute tasks asynchronously. Creating and destroying a Java thread has real overhead, so spawning a new thread for every task wastes computing resources. Worse, the thread-per-task strategy can crash an application under heavy load, because the number of threads grows without bound.
A Java thread is both a unit of work and a unit of execution. Since JDK 1.5, the two have been separated: the units of work are Runnable and Callable, while the execution mechanism is provided by the Executor framework.
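As a minimal sketch of the two kinds of work units: a Runnable returns no result, while a Callable returns a value and may throw a checked exception (the class and task names here are illustrative):

```java
import java.util.concurrent.Callable;

public class TaskKinds {
    // A Runnable is a unit of work with no return value.
    static final Runnable PRINT_TASK = () -> System.out.println("running");

    // A Callable is a unit of work that produces a result and may throw.
    static final Callable<Integer> SUM_TASK = () -> 1 + 2;

    public static void main(String[] args) throws Exception {
        PRINT_TASK.run();
        System.out.println(SUM_TASK.call()); // prints 3
    }
}
```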
Introduction to the Executor Framework
A two-level scheduling model for the Executor framework
In HotSpot VM’s threading model, Java threads map one-to-one to native operating system threads. When a Java thread starts, a native operating system thread is created; when the Java thread terminates, the native thread is reclaimed. The operating system schedules all of these threads and assigns them to the available CPUs.
This pattern can be divided into two layers. At the upper level, Java multithreaded programs typically break the application into tasks and then use a user-level Executor framework to map those tasks to a fixed number of threads. Underneath, the operating system kernel maps these threads to the hardware processor.
Schematic diagram of two-level scheduling model:
As you can see from the figure, the framework is used to control the upper level scheduling of the application (the lower level scheduling is controlled by the operating system kernel, not the application).
The structure and members of the Executor framework
Structure of the Executor framework
1. Tasks
Includes the Runnable interface and the Callable interface.
2. Task execution
Includes Executor, the core task-execution interface, and the ExecutorService interface that extends Executor.
The Executor framework has two key classes that implement the ExecutorService interface: ThreadPoolExecutor and ScheduledThreadPoolExecutor.
3. Result of asynchronous calculation
Includes the Future and the FutureTask class that implements the Future interface.
Executor framework classes and interfaces
Schematic diagram
- Executor is an interface that forms the basis of the Executor framework, separating the submission of tasks from their execution.
- ThreadPoolExecutor is the core implementation class of the thread pool used to execute the submitted task.
- ScheduledThreadPoolExecutor is an implementation class that can run commands after a given delay or execute them periodically. ScheduledThreadPoolExecutor is more flexible and more powerful than Timer.
- The Future interface and its implementation, the FutureTask class, represent the result of asynchronous computation.
- Implementation classes of the Runnable and Callable interfaces can be executed by a ThreadPoolExecutor or a ScheduledThreadPoolExecutor.
Use of the Executor framework
Let’s start with a picture:
- The main thread first creates a task object that implements either the Runnable or the Callable interface. The Executors utility class can also wrap a Runnable object into a Callable object:
```java
Executors.callable(Runnable task);
// or
Executors.callable(Runnable task, Object result);
```
- The Runnable object can then be handed over directly to the ExecutorService for execution
```java
ExecutorService.execute(Runnable command);
```

Alternatively, you can submit Runnable objects or Callable objects to the ExecutorService for execution:

```java
ExecutorService.submit(Runnable task);
ExecutorService.submit(Callable<T> task);
```
If executorService.submit(…) is called, the ExecutorService returns an object that implements the Future interface (so far, the JDK actually returns a FutureTask object). Since FutureTask also implements the Runnable interface, you can create a FutureTask yourself and hand it directly to the ExecutorService.
- Finally, the main thread can call futureTask.get() to wait for the task to complete, or cancel the task by calling futureTask.cancel(boolean mayInterruptIfRunning).
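The steps above can be put together in a minimal sketch (the class name and task are illustrative): a Callable is submitted, submit() returns a Future, and get() blocks until the result is ready.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static int compute() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // submit() returns a Future (a FutureTask in current JDKs).
            Future<Integer> future = pool.submit(() -> 21 * 2);
            // get() blocks the calling thread until the task completes.
            return future.get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // prints 42
    }
}
```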
ThreadPoolExecutor
The core class of the Executor framework is ThreadPoolExecutor
Components of ThreadPoolExecutor
- CorePool: Size of the core thread pool
- MaximumPool: size of the maximum thread pool
- BlockingQueue: A work queue used to temporarily save tasks
- RejectedExecutionHandler: the handler invoked by the execute() method when the ThreadPoolExecutor has been shut down or is saturated (the maximum pool size is reached and the work queue is full).
Through the Executors factory class, three kinds of ThreadPoolExecutor can be created:
1. FixedThreadPool
Creates a thread pool of fixed size: a new thread is created for each submitted task until the number of threads reaches the fixed limit, after which the pool size no longer changes and the threads are reused. Its construction source code is as follows:
```java
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
```
- Both corePoolSize and maximumPoolSize of FixedThreadPool are set to nThreads, the parameter specified when creating the FixedThreadPool.
- The keepAliveTime of 0L means that when the number of threads in the pool exceeds the number of core threads, the excess idle threads are terminated immediately.
- The last parameter shows that FixedThreadPool uses the unbounded queue LinkedBlockingQueue as its work queue. Because the queue is unbounded, new tasks simply wait in it once the number of threads reaches corePoolSize. As a result, the number of threads never exceeds corePoolSize, maximumPoolSize is effectively an unused parameter, and a running FixedThreadPool never rejects tasks.
A FixedThreadPool diagram is shown below
The execution process is as follows:
1. If the number of threads currently working is less than the number of corePools, a new thread is created to execute the task.
2. Add tasks to LinkedBlockingQueue when the number of threads working in the thread pool reaches corePool.
3. After a thread finishes the task in step 1, it repeatedly takes the next task from the queue.
Note that LinkedBlockingQueue is an unbounded queue, so you can always add new tasks to the thread pool.
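The steps above can be sketched as follows: with 2 worker threads and 5 submitted tasks, the surplus tasks wait in the unbounded queue until a worker becomes free (the pool size and task count are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    public static int runTasks() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2); // at most 2 worker threads
        AtomicInteger done = new AtomicInteger();
        // Submit more tasks than threads: the surplus waits in the unbounded queue.
        for (int i = 0; i < 5; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();                          // accept no new tasks; queued ones still run
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks()); // prints 5
    }
}
```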
2. SingleThreadExecutor
SingleThreadExecutor is an Executor that uses a single worker thread to execute tasks. Its construction source code is as follows:
```java
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
```
Both corePoolSize and maximumPoolSize of SingleThreadExecutor are set to 1.
All other parameters are the same as FixedThreadPool. The diagram is as follows:
The execution process is as follows:
1. If the number of threads currently working is less than the number of corePools, a new thread is created to execute the task.
2. Add tasks to LinkedBlockingQueue when the number of threads working in the thread pool reaches corePool.
3. After the thread finishes the task in step 1, it repeatedly takes the next task from the queue.
Note: Since there is only one worker thread in the pool, tasks are executed in the order in which they were submitted.
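The ordering guarantee can be sketched as follows: three tasks are submitted, and the single worker runs them strictly in submission order (class and list names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadOrder {
    public static List<Integer> run() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<Integer> order = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int n = i;
            pool.execute(() -> order.add(n)); // one worker: tasks run in submission order
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints [0, 1, 2]
    }
}
```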
3. CachedThreadPool
CachedThreadPool is an “unbounded” thread pool that creates new threads as needed to execute tasks; it has no core pool. Here is how it is constructed:
```java
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
```
The corePoolSize of CachedThreadPool is set to 0, i.e. the core pool is empty. maximumPoolSize is set to Integer.MAX_VALUE, i.e. the maximum pool is effectively unbounded. The keepAliveTime is set to 60 seconds, meaning an idle thread waits at most 60 seconds for a new task before being reclaimed.
CachedThreadPool uses SynchronousQueue, a blocking queue with no capacity, as its work queue: each insert operation must wait for a corresponding remove operation from another thread. This means that if the main thread submits tasks faster than the pool can process them, CachedThreadPool keeps creating new threads. In extreme cases this can exhaust CPU and memory resources. Its operation diagram is as follows:
The execution process is as follows:
1. synchronousQueue.offer(Runnable task) is executed first. If an idle thread in the pool is currently executing synchronousQueue.poll(), the offer performed by the main thread pairs with the poll performed by the idle thread, the main thread hands the task to the idle thread, and execute() completes. Otherwise, go to step 2.
2. When the pool is empty (the initial maximumPool is empty) or has no idle threads, the pairing fails because no thread is performing synchronousQueue.poll(). In this case, the pool creates a new thread to execute the task.
3. After the new thread from step 2 finishes the task, it performs a poll operation and waits up to 60 seconds. If the main thread submits a new task within that time, the idle thread executes it; otherwise the thread is reclaimed. Therefore, a CachedThreadPool that receives no tasks for a long time ends up holding no threads and consumes no system resources.
SynchronousQueue is a blocking queue that stores no elements: every offer operation must wait for a matching poll operation, otherwise the element cannot be added.
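The hand-off behavior can be sketched directly against SynchronousQueue (the class and method names here are illustrative): a non-blocking offer fails when no consumer is waiting in poll, and succeeds once a consumer is blocked there.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {
    public static boolean offerWithoutConsumer() {
        SynchronousQueue<String> queue = new SynchronousQueue<>();
        // No thread is waiting in poll(), so a non-blocking offer fails.
        return queue.offer("task");
    }

    public static boolean pairedHandoff() throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                queue.poll(5, TimeUnit.SECONDS); // waits for a producer
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        // With a consumer blocked in poll(), the timed offer pairs with it.
        boolean accepted = queue.offer("task", 5, TimeUnit.SECONDS);
        consumer.join();
        return accepted;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(offerWithoutConsumer()); // prints false
        System.out.println(pairedHandoff());        // prints true
    }
}
```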