1. Why use a thread pool (what’s wrong with creating threads the old way?)

The most typical scenario is an Android RecyclerView (a list similar to the WeChat message page) in which we start a child thread for each item to load its image. The traditional ways to start a thread include Runnable, Thread, and so on; creating a thread is very easy.

However, the declarations of these threads are scattered across the Adapter of each RecyclerView, so it is difficult for us to manage them centrally. If the images themselves are large, other referenced resources may not be released in time. If we fling through the RecyclerView quickly, threads are constantly being created, and the end result is an OOM.

Faced with this problem, we borrow a practice from the operating system. The operating system is essentially an intermediate layer of software that provides system calls to upper-level user apps and manages the lower-level hardware uniformly. A thread is also a kind of resource, and we also need a way to manage it centrally, so a new structure, the thread pool, was proposed.

2. A few questions

There are two main ways to create threads in the past (a minimal sketch of both follows this list):

  • Inheriting the Thread class
  • Implementing the Runnable interface
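
As a quick refresher, here is a minimal sketch of the two traditional approaches (the class names are placeholders of my own):

```java
// Way 1: inherit the Thread class and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Way 2: implement the Runnable interface and hand it to a Thread
class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

public class TraditionalThreadDemo {
    public static void main(String[] args) {
        new MyThread().start();
        new Thread(new MyRunnable()).start();
    }
}
```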

If our thread pool were still implemented around the Thread class, we would still have to build individual threads outside the pool for each specific task, which makes unified management difficult. So what we are going to do is narrow the unit of work down to the smallest granularity: Runnable.

We originally handed a Runnable to a Thread; in thread pool mode, who do we hand the Runnable to? The answer, of course, is no longer a Thread but the thread pool.

In fact, in an operating system, requesting and returning resources also goes through the operating system: the application performs the underlying operations through system calls. It is obviously impossible for our app to release memory by dealing with physical memory directly.

Similarly, this pooling idea is used in many places in computing, such as connection pools for databases, the connection pool in OkHttp, and so on.

Thread pools have the following advantages:

  1. Reduced thread creation time.
  2. Reduced resource consumption (threads from the pool are reused instead of being created anew every time).
  3. Easier thread management, via parameters such as:
    • corePoolSize: the number of core threads in the thread pool
    • maximumPoolSize: the maximum number of threads; once the pool is full, no more threads are created
    • keepAliveTime: how long an idle thread may hold on without a task before it terminates

3. Related structures

3.1 ExecutorService

The real thread pool interface; a common implementation class is ThreadPoolExecutor.

3.2 Executors

This is a utility class for thread pool executors, with a number of static factory methods that help us create different thread pools.

Such as:

1. newCachedThreadPool()

Used to create a thread pool that can create new threads as needed.

2. newFixedThreadPool(n)

Used to build a reusable thread pool with a fixed number of threads.

3. newSingleThreadExecutor()

Used to create a thread pool with only one thread.

4. newScheduledThreadPool(n)

Used to create a thread pool that can schedule commands to run after a given delay or to execute them periodically.

Note that any thread pool object we build with the static methods above can be held in an ExecutorService reference, because these static methods return instances of implementation classes of that interface.
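
A minimal sketch of these factory methods in use (the task bodies and the delay are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsDemo {
    public static void main(String[] args) {
        // All of these can be held in an ExecutorService reference
        ExecutorService cached = Executors.newCachedThreadPool();
        ExecutorService fixed  = Executors.newFixedThreadPool(3);
        ExecutorService single = Executors.newSingleThreadExecutor();

        cached.execute(() -> System.out.println("cached pool task"));
        fixed.execute(() -> System.out.println("fixed pool task"));
        single.execute(() -> System.out.println("single thread task"));

        // The scheduled pool additionally supports delayed / periodic execution
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);
        scheduled.schedule(() -> System.out.println("runs after 1s"), 1, TimeUnit.SECONDS);

        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
        scheduled.shutdown();
    }
}
```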

4. Callable and Future

(This is basic review material; if you are already familiar with it, you can skip straight to section 5.) Let’s look at Runnable first. A traditional Runnable cannot return a value, because the overridden method has a fixed signature:

```java
@Override
public void run() {}
```

So if we want our task to return a value, that seems impossible with Runnable. Callable solves that problem.

4.1.1 Callable

To use Callable, we implement the Callable interface and override the call method:

```java
import java.util.concurrent.Callable;

class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        Thread.sleep(5000);
        return 12580;
    }

    public static void main(String[] args) throws Exception {
        MyCallable myCallable = new MyCallable();
        int x = myCallable.call(); // blocks for 5 seconds because we call it directly
        System.out.println(x);
        System.out.println(75);
        /*
         * output:
         * 12580
         * 75
         */
    }
}
```

It is worth noting that calling call() directly like this blocks the current thread until the method finishes.

4.1.2 Future

If you have ever written asynchronous network requests, you are probably familiar with Future. Future means exactly that: something that has not happened yet. For example, you have not finished your homework but it has to be handed in, so you first hand a blank notebook to the teacher and hope that yours is not graded first; then you frantically catch up on the homework in class and finish it after class.

The blank notebook handed to the teacher is the FutureData, the data of the future, which is naturally blank. The work submitted later is the RealData, the real data produced by the computation. This is in fact a design pattern for handling asynchronous requests: when the C end (Client) sends a request to the S end (Server) and the S end, for various reasons, cannot respond quickly, it first returns a FutureData that does not yet contain the data we want; when we really need to use the data, the S end delivers the real data to the C end.

Take another example with multiple sub-tasks. The completion of main task A depends on the completion of sub-threads 1, 2, and 3; as soon as one of them fails, task A is declared failed. We want to detect the failure and obtain the result as quickly as possible.

This is actually harder to do with Runnable, because we want a return value, so Callable is the better choice, and we combine Callable and Future to implement the requirement. However, although we can promptly receive a null or a flag when a child thread returns an error, it is still difficult to stop the other two threads immediately when that error occurs. Here another design pattern comes in: the Observer pattern. In this pattern the event producer delivers events through callback methods, so that event receivers learn about changes in time. The event producer keeps a list of all observers and notifies every registered listener; a receiver only needs to register and unregister. Through the callback passed in at registration, the producer can immediately tell the other threads to stop executing.

Third-party libraries have related implementations, such as Guava’s ListeningExecutorService (which is also a thread pool) paired with ListenableFuture. Java itself also provides a related implementation: CompletableFuture. Both use callback methods to handle results and errors.
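
As a small illustration of this callback style (the pipeline below is my own sketch, not the Guava API), CompletableFuture lets us attach a callback that fires when the task completes or fails:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletableFutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        CompletableFuture<Integer> task = CompletableFuture.supplyAsync(() -> {
            if (Math.random() < 0.5) {
                throw new IllegalStateException("sub-task failed");
            }
            return 12580;
        }, pool);

        // whenComplete is the callback: exactly one of (result, error) is non-null
        task.whenComplete((result, error) -> {
            if (error != null) {
                System.out.println("failed, tell the other threads to stop: " + error);
            } else {
                System.out.println("result: " + result);
            }
        });

        Thread.sleep(100); // give the callback a moment to run in this demo
        pool.shutdown();
    }
}
```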

Like a Runnable, a FutureTask will not execute unless it is handed to a thread or a thread pool:

```java
Callable<Integer> callable = new Callable<Integer>() {
    @Override
    public Integer call() throws Exception {
        Thread.sleep(1500);
        return 12580;
    }
};
FutureTask<Integer> futureData = new FutureTask<>(callable);
new Thread(futureData).start();

System.out.println("1");
System.out.println(futureData.get()); // blocks for about 1.5s, then prints 12580
System.out.println("2");              // "2" is printed after 12580
```

FutureTask implements the Future interface, which defines five related methods (exercised in the sketch after the table):

The method name role
get This method is used to return results after the task is over
get(long timeout,TImeUnit unit) Long wait time returns results, time-bound tasks
cancel Can be used to stop a task, returning True on success
isDone Whether or not complete
isCancel Check whether the method is cancelled
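
A minimal sketch that exercises these methods (the sleep time and timeout are arbitrary, chosen so that the timeout actually fires):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureMethodsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(() -> {
            Thread.sleep(3000);
            return 42;
        });
        try {
            System.out.println(future.get(1, TimeUnit.SECONDS)); // time-bound get
        } catch (TimeoutException e) {
            System.out.println("cancel: " + future.cancel(true)); // try to stop the task
            System.out.println("isCancelled: " + future.isCancelled());
            System.out.println("isDone: " + future.isDone());
        }
        pool.shutdown();
    }
}
```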

5. Use of thread pools

There are two methods for handing tasks to a thread pool: execute and submit.

execute is used to run a Runnable task (no return value), while submit is used to run a Callable task and returns a Future.

Example:

```java
// 1. Build a thread pool with ten threads
ExecutorService service = Executors.newFixedThreadPool(10);

// 2. Submit tasks
service.execute(() -> System.out.println("task via execute")); // execute: Runnable, no return value
Future<Integer> future = service.submit(() -> 12580);          // submit: Callable, return value received via a Future

// 3. Close the pool
service.shutdown();
```

In addition, a pool created with Executors.newFixedThreadPool() is actually a ThreadPoolExecutor under the hood, so we can cast it and adjust its parameters, for example the maximum pool size:

```java
ExecutorService executorService = Executors.newFixedThreadPool(5);
((ThreadPoolExecutor) executorService).setMaximumPoolSize(500);
System.out.println(((ThreadPoolExecutor) executorService).getMaximumPoolSize()); // 500
```

6. ThreadPoolExecutor

We start from the ExecutorService interface; thread pools are implementations of this interface. The most common thread pool is ThreadPoolExecutor, whose constructor takes the following parameters:

```java
public ThreadPoolExecutor(int corePoolSize,                  // number of core threads
                          int maximumPoolSize,               // maximum number of threads
                          long keepAliveTime,                // idle timeout for non-core threads
                          TimeUnit unit,                     // time unit of keepAliveTime
                          BlockingQueue<Runnable> workQueue, // blocking queue for tasks waiting to be processed
                          ThreadFactory threadFactory,       // thread factory
                          RejectedExecutionHandler handler)  // rejection policy
```

6.1 corePoolSize and maximumPoolSize

It is easy to mistake corePoolSize and maximumPoolSize for the minimum and maximum of one and the same pool size, but that is not how they work.

  1. First, when the number of threads in the pool is less than corePoolSize, a new thread is created to handle the request.
  2. When the pool size equals corePoolSize, new tasks are put into the workQueue; when a thread in the pool becomes free, it takes a task from the workQueue and executes it.
  3. When the workQueue can no longer hold new tasks, a new thread is added to the pool to handle the request; if the pool has already reached maximumPoolSize, the RejectedExecutionHandler is used to reject the task.
  4. When the number of threads in the pool exceeds corePoolSize, the redundant threads wait for keepAliveTime and then destroy themselves if no request arrives in that time.

So corePoolSize is really just the number of base/core threads; when the current threads cannot keep up, new threads are created until maximumPoolSize is reached, after which rejection handling begins. keepAliveTime is the hold time of the non-core threads: if they stay idle that long, they are killed.
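
A minimal sketch of constructing a ThreadPoolExecutor by hand (all parameter values are arbitrary, for illustration only):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize: keep 2 threads around
                4,                                    // maximumPoolSize: at most 4 threads
                30, TimeUnit.SECONDS,                 // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(10),         // bounded work queue of 10 tasks
                Executors.defaultThreadFactory(),     // thread factory
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy (the default)
        );

        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
    }
}
```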

Figuratively speaking, the core threads are the regular employees: when there is little work, a fixed headcount is enough, and more would be a waste. The non-core threads are the temporary workers: normally they are not needed, but when there is too much work a batch of temporary workers is brought in, and once the work is done they are let go. The number of temporary workers cannot be unlimited either, or the office would be full; if there is still more work than can be accepted, a rejection policy is applied.

Note that for Executors.newCachedThreadPool(), the default core and max sizes are 0 and Integer.MAX_VALUE.
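
A quick check of those defaults (a small sketch of my own):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolDefaultsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println(cached.getCorePoolSize());    // 0
        System.out.println(cached.getMaximumPoolSize()); // 2147483647 (Integer.MAX_VALUE)
        cached.shutdown();
    }
}
```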

6.2 ThreadFactory

The factory pattern was proposed to unify the construction process of a class and to separate the use of a class from its concrete creation. Here we do this with a factory class called a thread factory; for example, we can give threads a uniform naming scheme:

```java
class MyThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "thread name " + String.valueOf(Math.random()));
        return t;
    }
}
```

Then, at the call site:

```java
ExecutorService executorService = Executors.newFixedThreadPool(5, new MyThreadFactory());
```

By passing in the factory, we build threads with custom names.

6.3 RejectedExecutionHandler (rejection policy)

As we saw above, when the thread pool is saturated (the work queue is full and the number of threads has reached maximumPoolSize), a certain policy must be adopted to reject new tasks.

There are several strategies:

  1. AbortPolicy (the default)

This is the thread pool’s default strategy: if the pool is saturated, the task is discarded and a RejectedExecutionException is thrown.

  2. DiscardPolicy

The silent version of AbortPolicy: the task is simply discarded and no exception is thrown.

  3. DiscardOldestPolicy

Discards the "oldest" task, i.e., the task that entered the queue earliest, without throwing an exception, consistent with queue order (tail in, head out).

  4. CallerRunsPolicy

If the blocking queue is full, the calling thread "does it itself": it executes the rejected Runnable’s run method directly.

  5. A custom policy

We can define our own implementation class that implements the RejectedExecutionHandler interface and overrides the rejectedExecution method; the rejection logic goes inside rejectedExecution.

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class MyRejectPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.out.println("I'm full......");
    }
}
```
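
A minimal sketch of wiring this handler into a pool (pool and queue sizes are arbitrary, chosen so that the handler actually fires; MyRejectPolicy is the class above):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectPolicyDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                            // core = max = 1 thread
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),     // the queue holds only 1 waiting task
                new MyRejectPolicy());           // our custom rejection handler

        // The 3rd task finds the single thread busy and the queue full,
        // so rejectedExecution() runs and "I'm full......" is printed.
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
    }
}
```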