Pooling technology
The essence of running a program is consuming system resources. In a multi-threaded environment, creating and destroying a thread for every task consumes resources each time.
If we instead manage threads with a container: keep idle threads in the container rather than releasing them, and take an existing thread out of the container instead of creating a new one when needed, we greatly reduce resource consumption. This is the idea behind pooling technology.
Pooling technology is widely used; common examples include JDBC connection pools, memory pools, object pools, and constant pools.
Benefits of pooling:
- Reduce resource consumption
- Improved response speed (no create-and-destroy overhead)
- Easier management (thread reuse, a cap on maximum concurrency, centralized control of thread usage)
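These benefits can be observed directly. A minimal sketch (the class name and the counts are my own choices): 100 tasks submitted to a fixed pool of 2 threads are executed by at most 2 distinct worker threads, showing that threads are reused rather than created per task.

```java
import java.util.Set;
import java.util.concurrent.*;

// Sketch: submit more tasks than threads and count how many distinct
// worker threads actually ran them (thread reuse in action).
public class PoolReuseDemo {
    public static int distinctWorkerThreads(int poolSize, int taskCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<String> names = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < taskCount; i++) {
            pool.submit(() -> names.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return names.size();
    }

    public static void main(String[] args) throws Exception {
        // 100 tasks run on at most 2 threads: the pool reuses its workers
        System.out.println(distinctWorkerThreads(2, 100));
    }
}
```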
A container for pooling technology
Pooling implementations typically use a queue as the container, because queues have first-in, first-out (FIFO) semantics.
A thread pool must bound its concurrency, so a BlockingQueue is used.
Common implementations include ArrayBlockingQueue, LinkedBlockingQueue, LinkedBlockingDeque, ConcurrentLinkedQueue, etc.
1. ArrayBlockingQueue
ArrayBlockingQueue is implemented on top of an array used as a circular queue, with a ReentrantLock and its Conditions ensuring thread safety.
It has two index fields: putIndex (the tail of the queue) and takeIndex (the head).
putIndex is the array position where the next element will be inserted. After inserting at position length - 1, putIndex wraps around to 0; that is the idea of a circular queue.
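The wrap-around can be sketched in a few lines (the class and method names are mine, not from the JDK source):

```java
// Sketch of the circular-index arithmetic behind ArrayBlockingQueue:
// after storing at the last slot, the index wraps back to 0.
public class CircularIndexDemo {
    static int advance(int index, int length) {
        // equivalent to (index + 1) % length, written as the JDK-style branch
        return (index + 1 == length) ? 0 : index + 1;
    }

    public static void main(String[] args) {
        int length = 4;
        int putIndex = 0;
        for (int i = 0; i < 5; i++) {
            putIndex = advance(putIndex, length);
        }
        System.out.println(putIndex); // 0 -> 1 -> 2 -> 3 -> 0 -> 1, prints 1
    }
}
```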
2. LinkedBlockingQueue
LinkedBlockingQueue is based on a linked list and uses lock splitting for thread safety.
It uses two locks, takeLock and putLock, which are independent of each other.
When adding an element, the thread acquires putLock; if the queue is full, it calls await() to block itself, and since it still holds putLock, other producer threads must also wait.
When removing an element, the thread acquires takeLock; if the queue is empty, it calls await() to block itself, making the other consumer threads wait. A thread that adds an element wakes up consumers waiting for an element; likewise, a thread that removes an element wakes up producers waiting for free space.
Why can these two locks be separated without interfering with each other?
Because producers only touch the tail of the list and consumers only touch the head. You can think of the two sides as working on opposite ends of the same list: they only reach the same node when the queue is empty, in which case one side blocks; otherwise, the two threads never operate on the same node.
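A minimal sketch of the two-lock idea (my own simplified class; the real JDK version additionally blocks and signals via notEmpty/notFull Conditions, which are omitted here to keep the sketch short):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Two-lock queue sketch: putLock guards the tail, takeLock guards the
// head, so an enqueue and a dequeue never contend with each other.
public class TwoLockQueue<E> {
    static class Node<T> { T item; Node<T> next; Node(T x) { item = x; } }

    private final ReentrantLock putLock = new ReentrantLock();
    private final ReentrantLock takeLock = new ReentrantLock();
    private final AtomicInteger count = new AtomicInteger();
    private Node<E> head = new Node<>(null); // dummy head, as in the JDK
    private Node<E> tail = head;

    public void offer(E e) {
        putLock.lock();            // only producers compete for this lock
        try {
            tail.next = new Node<>(e);
            tail = tail.next;
            count.incrementAndGet();
        } finally {
            putLock.unlock();
        }
    }

    public E poll() {
        takeLock.lock();           // only consumers compete for this lock
        try {
            Node<E> first = head.next;
            if (first == null) return null; // empty queue
            head = first;          // old dummy node is discarded
            count.decrementAndGet();
            E item = first.item;
            first.item = null;
            return item;
        } finally {
            takeLock.unlock();
        }
    }

    public int size() { return count.get(); }
}
```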
3. SynchronousQueue
SynchronousQueue is also a queue, but it is unusual in that it has no internal storage capacity. When a producer thread produces an item and no one is waiting to consume it, the producer must block: every put() waits for a matching take(). It is a direct hand-off rather than a buffered queue.
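A small demonstration of the hand-off behavior (class and method names are mine): a timed offer() fails when no consumer is waiting, while a put()/take() pair meets directly.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// SynchronousQueue has no capacity: an offer with no waiting consumer
// fails, while a put()/take() pair hands the element off directly.
public class HandoffDemo {
    public static boolean tryOfferWithoutConsumer() throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        // Nobody is blocked in take(), so the element cannot be handed off.
        return q.offer("item", 100, TimeUnit.MILLISECONDS);
    }

    public static void handOff() throws Exception {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + q.take());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        q.put("work");            // blocks until the consumer's take() meets it
        consumer.join();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tryOfferWithoutConsumer()); // false
        handOff();
    }
}
```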
Queue methods
function | Throws exception | Returns special value | Blocks | Times out
---|---|---|---|---
Insert | add | offer | put | offer(e, time, unit)
Remove | remove | poll | take | poll(time, unit)
Examine head | element | peek | – | –
- Throws exception: add and remove throw an exception when the queue is full or empty, respectively.
- Returns special value: offer returns false when the queue is full, and poll returns null when it is empty, instead of throwing an exception.
- Blocking wait: if the queue is full (for put) or empty (for take), the thread waits until the operation can proceed.
- Timeout wait: like blocking wait, but with a time limit; if the timeout elapses, the wait is abandoned.
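The four families can be seen side by side on an ArrayBlockingQueue of capacity 1 (the demo class is mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

// The four method families on a full/empty bounded queue:
// add/remove throw, offer/poll return a status value, and the timed
// variants give up after the deadline instead of blocking forever.
public class QueueApiDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        System.out.println(q.offer(1));   // true, the queue is now full
        System.out.println(q.offer(2));   // false, no exception thrown
        try {
            q.add(2);                     // full queue -> IllegalStateException
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage());
        }
        System.out.println(q.offer(2, 50, TimeUnit.MILLISECONDS)); // times out, false
        System.out.println(q.poll());     // 1, the queue is now empty
        System.out.println(q.poll());     // null, no exception thrown
    }
}
```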
Thread pool classification
1. newFixedThreadPool(corePoolSize)
Initializes a thread pool with a fixed number of threads, using LinkedBlockingQueue as the blocking queue.
Feature: threads are not released even when there are no runnable tasks in the pool.
2. newCachedThreadPool()
Initializes a thread pool that caches threads. The default keep-alive time is 60 seconds, the number of threads in the pool can grow up to Integer.MAX_VALUE, and SynchronousQueue is used as the blocking queue.
Features: When no task is executed, thread resources are automatically released when the idle time of the thread exceeds keepAliveTime. When a new task is submitted, if there are no free threads, a new thread is created to execute the task, which incurs some system overhead.
Therefore, control the number of concurrent tasks to prevent performance degradation caused by creating a large number of threads.
3. newSingleThreadExecutor()
Initialize a thread pool with only one thread, using LinkedBlockingQueue as a blocking queue.
Features: If the thread ends abnormally, a new thread will be created to continue executing the task. The unique thread can ensure that the submitted tasks are executed in sequence.
4. newScheduledThreadPool(corePoolSize)
Initializes a thread pool that can run submitted tasks after a specified delay or at a fixed period; in real business scenarios it can be used to synchronize data periodically.
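A sketch of periodic scheduling (the class name and the stub task are mine; in a real scenario the task body would be the data-sync logic):

```java
import java.util.concurrent.*;

// newScheduledThreadPool runs tasks after a delay or at a fixed period,
// e.g. a periodic data-sync job. Here the "sync" is just a latch tick.
public class ScheduleDemo {
    public static int runThreeTicks() throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        CountDownLatch threeRuns = new CountDownLatch(3);
        ScheduledFuture<?> job = scheduler.scheduleAtFixedRate(
                threeRuns::countDown,      // pretend this synchronizes data
                0, 20, TimeUnit.MILLISECONDS);
        threeRuns.await();                 // wait until the task fired 3 times
        job.cancel(false);
        scheduler.shutdown();
        return 3 - (int) threeRuns.getCount();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runThreeTicks()); // 3
    }
}
```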
The underlying implementation class for thread pools
The thread pools above are conveniences wrapped for us by the JDK; you can create them by calling factory methods on the Executors class.
However, the Alibaba Java development convention forbids creating thread pools via Executors, requiring the underlying ThreadPoolExecutor class instead. This approach makes developers more aware of the pool's running rules and avoids the risk of resource exhaustion.
The thread pools described above are themselves created with ThreadPoolExecutor:
```java
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
        new ThreadPoolExecutor(1, 1,
                               0L, TimeUnit.MILLISECONDS,
                               new LinkedBlockingQueue<Runnable>()));
}

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
```
ThreadPoolExecutor 7 parameters
- corePoolSize: the number of core threads in the pool.
- maximumPoolSize: the maximum number of threads the pool may hold.
- keepAliveTime: how long an idle thread beyond the core size is kept alive before being released.
- unit: the time unit of keepAliveTime.
- workQueue: the blocking queue used to store waiting tasks (tasks, not threads).
- threadFactory: the factory used to create new worker threads.
- handler: the rejection policy (what to do when new tasks arrive while the queue is full and the pool is at maximumPoolSize):
- AbortPolicy: Directly throws an exception. Default policy.
- CallerRunsPolicy: Executes the task with the caller’s thread.
- DiscardOldestPolicy: Discards the oldest task in the blocking queue, then tries to submit the current task again.
- DiscardPolicy: Discards tasks directly.
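Putting the seven parameters together, a sketch of manual construction (the sizes and the latch choreography are my own, chosen so the rejection policy fires deterministically): with corePoolSize 1, a 1-slot queue, and maximumPoolSize 2, the fourth concurrent task is rejected, and CallerRunsPolicy makes the submitting thread run it.

```java
import java.util.concurrent.*;

// Building a pool with all 7 ThreadPoolExecutor parameters and watching
// the rejection policy trigger on the 4th concurrent task.
public class ManualPoolDemo {
    public static boolean runDemo() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1,                                  // corePoolSize
                2,                                  // maximumPoolSize
                30L, TimeUnit.SECONDS,              // keepAliveTime + unit
                new ArrayBlockingQueue<>(1),        // workQueue (holds tasks)
                Executors.defaultThreadFactory(),   // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler

        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker);            // runs on the core thread
        pool.execute(blocker);            // sits in the 1-slot queue
        pool.execute(blocker);            // queue full -> 2nd worker created
        pool.execute(release::countDown); // rejected -> caller runs it,
                                          // which unblocks everything else
        pool.shutdown();
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // true: all tasks finished
    }
}
```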