Preface

This project is a flag I set for myself at the end of 2020. After several years of study, I found I had a new understanding of many topics when I looked back. So, while preparing for internships, I combined my earlier notes to build an open-source Java knowledge project aimed mainly at fresh graduates and beginners, focusing on Java back-end interview questions + analysis + key knowledge details + selected articles. I hope it can accompany you and me as we keep making progress!

Note: the content of this project is compiled from many bloggers (sources indicated), study materials, and a number of books, together with my own understanding, redrawn diagrams, reorganized wording, and so on. One person's effort is inevitably limited in places, but there will always be updates and improvements. Every Star is an encouragement to me! I hope you enjoy it.

Note: none of the images use third-party image hosting; they are provided open source along with the articles.

Project name: Java-Ideal-interview

Github address: Java-Ideal-interview-github

Gitee address: Java-Ideal-interview-Gitee

The project is continuously updated, and online reading will be provided later. If you find reading on Gitee or GitHub inconvenient, you can clone the repository locally and read it comfortably with Typora or another editor.

If cloning from GitHub is too slow, you can use the Gitee repository in mainland China.

A summary of multithreading and concurrency knowledge

1. Basics

1.1 What is a process? What is a thread? (Conceptual level)

A process is a running instance of a program. It is the basic unit of program execution in the system and the independent unit for allocating and scheduling system resources.

  • In other words, a running program is a process, which goes from creation, through running, to termination.
  • Multi-process: multiple tasks can be executed within the same time period, increasing CPU utilization.

A thread is a smaller execution unit than a process. It is an execution unit within a process, and multiple threads can be created during the execution of a process.

  • Multithreading: an application has multiple execution paths, improving the application's CPU utilization.

1.1.1 What are the relationship and differences between threads and processes?

Connection: threads are smaller running units that a process is divided into; that is, a process can contain multiple threads. From the JVM's perspective, multiple threads share the process's heap and method area (implemented as Metaspace since JDK 1.8), but each thread has its own private program counter, virtual machine stack, and native method stack.

Differences: processes exist independently of each other, while threads in the same process are likely to affect one another. Thread switching has far lower overhead than process switching (hence threads are called lightweight processes), but threads are less conducive to resource management and protection. Processes are the opposite: the overhead is large, but they are conducive to management and protection.

Note: details about program counters, virtual machine stacks, native method stacks, and more will be covered in the JVM section.

1.2 What are serial, concurrent and parallel?

Serial: multiple tasks are executed one after another, in sequence.

  • Example: the car is running low on gas. I'll fill it up first, and then pick you up.

Concurrency: multiple programs run within the same time period, taking turns on the processor.

  • Example: you send your sister a WeChat message asking her to pick up a package from the courier station, and then call your mother to ask her to pick up another one.

Parallelism: multiple programs are executing at the same instant.

  • Example: this is your braised chicken with rice, and this is my plate of dumplings. Let's eat at the same time.

So what is the difference between concurrency and parallelism? Concurrency means multiple tasks are handled within the same time period (they may alternate on a single core), while parallelism means multiple tasks are literally executing at the same instant on multiple cores.

1.3 Why use multithreaded/concurrent programming?

To improve resource utilization, and to improve the efficiency and speed of the program.

Note: the following content is quoted from GitHub@JavaGuide. I find this answer really concise. Respect the original, and note the source.

Let’s start with the overall picture:

  • From the computer's underlying perspective: a thread can be compared to a lightweight process; it is the smallest unit of program execution, and the cost of switching and scheduling between threads is far less than between processes. In addition, the multi-core CPU era means multiple threads can run simultaneously, which reduces the overhead of thread context switching.
  • From the trend of modern Internet development: today's systems routinely require millions or even tens of millions of concurrent requests, and multithreaded concurrent programming is the foundation for developing high-concurrency systems; the multithreading mechanism can greatly improve a system's overall concurrency capability and performance.

Going deeper into the computer's underlying layers:

  • Single-core era: on a single core, multithreading mainly improves the combined utilization of the CPU and I/O devices. For example, with only one thread, the I/O device is idle while the CPU computes, and the CPU is idle during I/O operations; roughly speaking, each is utilized about 50% of the time. With two threads this changes: while one thread performs CPU calculations, the other can perform I/O, and in the ideal case both reach 100% utilization.
  • Multi-core era: on multiple cores, multithreading mainly improves CPU utilization. For example, if we compute a complex task using only one thread, only one CPU core is used. Creating multiple threads allows multiple CPU cores to be used, thus improving CPU utilization.

1.4 Problems with multithreaded/concurrent programming

In a word: deadlocks, thread-safety issues, memory leaks, etc.

  • Concurrency leads to resource sharing and competition, which changes the execution speed of programs and breaks their original timing relationships

  • If concurrent programs do not share and compete for resources according to specific rules and methods, their execution results inevitably lose closure and reproducibility

    • Loss of closure: a shared resource may be affected by the control logic of other programs; for example, data one program writes to memory may be modified by another program
    • Loss of reproducibility: when other factors interfere, identical initial conditions may still produce different results

1.4.1 What is a deadlock?

A deadlock is a situation in which two or more threads (or processes) block during execution, either by competing for resources or by waiting on each other. Because the threads are blocked indefinitely, the program cannot terminate normally.

  • For example, thread 1 holds resource 1 and thread 2 holds resource 2; each wants to request the other's resource, yet neither releases the resource it already holds. Without outside intervention, they fall into a deadlock, waiting for each other forever.

Code simulating the deadlock state:

public class DeadLockDemo {
    /** Resource 1 */
    private static Object resource1 = new Object();
    /** Resource 2 */
    private static Object resource2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            // Thread 1 acquires the monitor lock of resource 1
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + " obtained resource 1");
                try {
                    // Sleep for 1s so that thread 2 can acquire resource 2
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + " waiting for resource 2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + " obtained resource 2");
                }
            }
        }, "Thread 1").start();

        new Thread(() -> {
            // Thread 2 acquires the monitor lock of resource 2
            synchronized (resource2) {
                System.out.println(Thread.currentThread() + " obtained resource 2");
                try {
                    // Sleep for 1s so that thread 1 can try to acquire resource 2
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + " waiting for resource 1");
                synchronized (resource1) {
                    System.out.println(Thread.currentThread() + " obtained resource 1");
                }
            }
        }, "Thread 2").start();
    }
}

Running results:

Thread[Thread 1,5,main] obtained resource 1
Thread[Thread 2,5,main] obtained resource 2
Thread[Thread 1,5,main] waiting for resource 2
Thread[Thread 2,5,main] waiting for resource 1
// ... the program hangs here in a deadlock

In the code above, resource 1 and resource 2 are held by thread 1 and thread 2 respectively; each wants the other's resource, so they fall into a state of waiting for each other, that is, a deadlock occurs.

If you want to resolve this, one way is to break the circular-wait condition between them.

The code is as follows:

public class DeadLockDemo {
    /** Resource 1 */
    private static Object resource1 = new Object();
    /** Resource 2 */
    private static Object resource2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            // Thread 1 acquires the monitor lock of resource 1
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + " obtained resource 1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + " waiting for resource 2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + " obtained resource 2");
                }
            }
        }, "Thread 1").start();

        new Thread(() -> {
            // Thread 2 now also acquires the monitor lock of resource 1 first
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + " obtained resource 1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + " waiting for resource 2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + " obtained resource 2");
                }
            }
        }, "Thread 2").start();
    }
}

Running results:

Thread[Thread 1,5,main] obtained resource 1
Thread[Thread 1,5,main] waiting for resource 2
Thread[Thread 1,5,main] obtained resource 2
Thread[Thread 2,5,main] obtained resource 1
Thread[Thread 2,5,main] waiting for resource 2
Thread[Thread 2,5,main] obtained resource 2

Now, while thread 1 holds resource 1, thread 2 also requests resource 1; since it is occupied, thread 2 can only wait. Thread 1 then requests resource 2, obtains it, and after finishing releases both resource 1 and resource 2, at which point thread 2 can acquire them in turn and run to completion.

1.4.1.1 Necessary conditions for a deadlock

  • Mutual exclusion condition: a resource can be occupied by only one thread at any time.
  • Hold-and-wait condition: a thread blocks while requesting a resource held by another, yet keeps hold of the resources it already has.
  • No-preemption condition: a thread releases its acquired resources only after it finishes using them; they cannot be forcibly taken away by other threads.
  • Circular-wait condition: several threads form a circular chain in which each waits for a resource held by the next.
    • Consider the relationships in the code above

1.4.2 What is Thread Safety?

Thread safety means that a class always behaves correctly when executed by multiple threads.

For example, Spring beans are singletons by default, so defining a stateful variable as a member of such a bean is unsafe:

public class AccountDaoImpl implements AccountDao {
    // Define a class member (shared state in a singleton bean)
    private int i = 1;

    public void addAccount() {
        System.out.println("User added successfully!");
        System.out.println(i);
        i++;
    }
}

1.4.2.2 How to think about or solve the thread safety problem?

Most thread-safety issues rarely need to be handled explicitly by ourselves, because frameworks such as Spring MVC, Druid, and so on do this work behind the scenes.

A simple way to judge: are multiple threads accessing the same shared variable at the same time?

Directions to consider are as follows (see the sketch after this list):

  • Ensure atomicity: the classes under the atomic package
  • Visibility: the volatile keyword
  • Thread coordination: CountDownLatch / Semaphore
  • Collections: the classes under the java.util.concurrent package
  • Beyond synchronized, the classes under the locks package can also be considered
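As a minimal sketch, assuming the same AccountDao interface as in the snippet above, the unsafe counter could be made thread-safe with AtomicInteger, one of the options just listed:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeAccountDaoImpl implements AccountDao {
    // AtomicInteger makes the read-and-increment atomic, so concurrent calls cannot lose updates
    private final AtomicInteger i = new AtomicInteger(1);

    public void addAccount() {
        System.out.println("User added successfully!");
        System.out.println(i.getAndIncrement());
    }
}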

1.5 What are the states of threads?

  • NEW: the thread has been constructed but not yet started; calling the start() method moves it toward the ready state.

  • RUNNABLE: the running state. The JVM folds the operating system's ready (READY) and running (RUNNING) states into the single RUNNABLE state.

    • READY: entered from NEW by calling the start() method.
    • RUNNING: entered from READY once the thread obtains a CPU time slice.
  • BLOCKED: a thread that calls into a synchronized method or block without acquiring the lock enters the BLOCKED state.

  • WAITING: after executing wait(), a thread enters the WAITING state. In this state, the current thread must wait for another thread to perform some specific action (a notification or an interrupt) before it can return to the runnable state.

  • TIMED_WAITING: basically the same as WAITING, but it automatically returns to the runnable state after the specified time. You can enter this state via sleep(long millis) or wait(long millis).

  • TERMINATED: the thread has finished execution.

1.5.1 The run() method is executed when start() is called. Why not just call run() directly?

The start() method starts the newly created thread, moving it from NEW into the ready state, where it waits for a time slice to begin running; run() is then invoked inside that new thread. If run() is called directly, it is treated as an ordinary method and executes in the current thread; no new thread is started.
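A minimal sketch that makes the difference visible (illustrative class):

public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println(Thread.currentThread().getName()));
        t.run();   // plain method call in the current thread: prints "main"
        t.start(); // starts a new thread, where run() executes: prints "Thread-0"
    }
}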

1.5.2 Differences and similarities between the sleep() and wait() methods

  • Both can pause the execution of a thread.
  • The sleep() method does not release the lock, while the wait() method does.
  • The sleep() method is typically used to pause a thread, while the wait() method is typically used for communication between threads.
  • sleep(long millis) and wait(long timeout) are similar in that the thread wakes up automatically after the time elapses. A plain wait() returns only when another thread calls notify() or notifyAll() on the same object.

1.6 What is Context Switching?

Even in the single-core era, operating systems could already process tasks with multiple threads (on a multi-core CPU, a single core can still only be used by one thread at a time). The processor allocates a time slice to each thread, and a thread can run only within its own slice; because a slice lasts only a few tens of milliseconds, the switching is invisible and everything looks simultaneous. When the slice runs out, the processor is taken away and the thread is suspended (it is "switched out"), while the next thread is selected to occupy the processor and start or resume running (it is "switched in"). During this process, the current task saves its own state before switching to another task, so that its state can be restored the next time it is switched back in. This save-and-restore cycle of a task is one context switch.

1.7 Five ways to implement multithreading

1.7.1 Inheriting the Thread class

  • Define a custom MyThread class that extends the Thread class

  • Override the run() method

public class MyThread extends Thread {
    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(getName() + ":" + i);
        }
    }
}

Create two threads, set their names, and start them:

public class ThreadTest {
    public static void main(String[] args) {
        // Create two threads
        MyThread thread1 = new MyThread();
        MyThread thread2 = new MyThread();
        // Set the thread names
        thread1.setName("Thread-1");
        thread2.setName("Thread-2");
        // Start the threads
        thread1.start();
        thread2.start();
    }
}

1.7.2 Implementing the Runnable Interface

  • Define a MyRunnable class that implements the Runnable interface
  • Override the run() method

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread().getName() + ":" + i);
        }
    }
}

  • Create an object of the MyRunnable class
  • Create Thread objects, passing the MyRunnable object as a constructor argument

public class RunnableTest {
    public static void main(String[] args) {
        // Create an object of the MyRunnable class
        MyRunnable myRunnable = new MyRunnable();
        // Create Thread objects, passing the myRunnable object as a constructor argument
        Thread thread1 = new Thread(myRunnable, "Thread-1");
        Thread thread2 = new Thread(myRunnable, "Thread-2");
        // Start the threads
        thread1.start();
        thread2.start();
    }
}

Benefits of the interface approach

  • It avoids the limitation of Java single inheritance

  • It suits situations where multiple threads run the same code over the same resource, effectively separating the thread from the program code and data, which better reflects object-oriented design

How to understand "avoiding the limitation of Java single inheritance"?

  • For example, a class that already has a parent class and also wants to support multithreading can no longer extend Thread directly (a class can implement multiple interfaces, but extend only one class), and we would not want its parent class to extend Thread either, since the parent may not need multithreading at all

1.7.3 Implementing the Callable Interface

Note: in contrast to implementing the Runnable interface, a Callable can return a value and throw an exception.

public class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        int result = 0;
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread().getName() + ":" + i);
            result += i;
        }
        return result;
    }
}

In the test we can extract the return value:

public class CallableTest {
    public static void main(String[] args) {
        // Use FutureTask to receive the result of the computation
        FutureTask<Integer> futureTask1 = new FutureTask<>(new MyCallable());
        FutureTask<Integer> futureTask2 = new FutureTask<>(new MyCallable());
        // Start the threads
        new Thread(futureTask1).start();
        new Thread(futureTask2).start();
        try {
            // Retrieve the results of the computation
            Integer integer1 = futureTask1.get();
            Integer integer2 = futureTask2.get();
            // Print the results
            System.out.println(integer1);
            System.out.println(integer2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}

1.7.4 Creating a Thread Pool by Using Executors

Note: thread pools are an important topic, so some of the methods and details touched on here will be covered separately later. This is only the basic usage.

public class MyRunnable implements Runnable {
    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread().getName() + ":" + i);
        }
    }
}

execute() is used to submit tasks that do not require a return value, as shown below:

public class ExecutorsTest {
    public static void main(String[] args) {
        // Create a thread pool through Executors
        ExecutorService executorService = Executors.newFixedThreadPool(8);
        MyRunnable myRunnable = new MyRunnable();
        // Submit the task 5 times
        for (int i = 0; i < 5; i++) {
            executorService.execute(myRunnable);
        }
        executorService.shutdown();
    }
}

1.7.5 Creating a ThreadPool using ThreadPoolExecutor (Recommended)

public class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        int result = 0;
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread().getName() + ":" + i);
            result += i;
        }
        return result;
    }
}

Use submit() to submit a task that requires a return value:

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        // Create a thread pool using ThreadPoolExecutor directly
        ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
                8, 16, 100, TimeUnit.MINUTES, new LinkedBlockingDeque<Runnable>(10));
        try {
            // Submit the task
            Future<?> future = threadPoolExecutor.submit(new MyCallable());
            // Get the return value
            System.out.println(future.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}

1.8 Which thread scheduling model does Java use?

A: Java uses a preemptive scheduling model.

If the computer has only one CPU, the CPU can execute only one instruction at a time, and a thread can execute instructions only when it is given a CPU time slice, i.e., the right to use the CPU. So how does Java schedule its threads?

There are two scheduling models for threads:

Time-sharing scheduling model: all threads take turns using the CPU, and each thread is allocated an equal CPU time slice

Preemptive scheduling model: threads with higher priority are given the CPU first; if several threads have the same priority, one is chosen at random. Threads with higher priority obtain relatively more CPU time slices.

Related methods:

// Returns the priority of the thread object
public final int getPriority()
// Changes the priority of the thread
public final void setPriority(int newPriority)
  • The default thread priority is 5.

  • Thread priorities range from 1 to 10.

  • A high priority only means the thread has a higher chance of obtaining CPU time slices; the effect only becomes visible statistically, over many threads or many runs (see the sketch below).
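A minimal sketch of the priority API (illustrative; the output order is not guaranteed, which is exactly the point):

public class PriorityTest {
    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + ":" + i);
            }
        };
        Thread low = new Thread(task, "low");
        Thread high = new Thread(task, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        // "high" is merely more likely to be scheduled first; nothing is guaranteed
        low.start();
        high.start();
    }
}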

1.9 The wait/notify mechanism (producer-consumer problem)

Introductions to multithreading often use cases such as multiple cinema windows selling tickets to demonstrate multithreading problems. That example has a limitation: it assumes the number of tickets is fixed. In real life, supply and demand usually coexist. Take buying breakfast: consumers buy some, and the shop makes more. To explore this scenario, what we need to learn is Java's wait/notify mechanism.

The producer-consumer problem, also known as the bounded-buffer problem, is a classic case of multi-process synchronization. It describes what happens when two processes that share a fixed-size buffer, the so-called "producer" and "consumer", actually run. The producer's primary job is to generate data, put it into the buffer, and repeat the process. Meanwhile, the consumer consumes the data in the buffer. The key to the problem is ensuring that the producer does not add data when the buffer is full and the consumer does not consume data when the buffer is empty.

Let's explain the problem in more everyday terms.

Since Java uses a preemptive scheduling model, either side may get the CPU first:

  • A: If the consumer gets CPU execution first, it will try to consume the data, but the current data is still the default value, which is meaningless; it should wait until the data is meaningful before consuming. Just like a buyer who enters the shop before the food is made: the buyer can only wait for it to be made before consuming.
  • B: If the producer gets CPU execution first, it will produce data. But if it keeps the CPU afterwards, it will keep producing, so after producing it should wait until the data is consumed before producing more. Likewise, the shop does not make endless breakfast: it sells some, then makes more, to avoid waste.

Organizing the approach:

  • A: Producer: check whether data exists; if it does, wait; if it does not, produce, and after producing, notify (wake up) the consumer to consume the data

  • B: Consumer: check whether data exists; if it does, consume; if it does not, wait, and notify (wake up) the producer to produce data

    • Waking up makes a waiting thread eligible for execution again

The Object class provides three methods:

// Wait
wait()
// Wake up a single thread
notify()
// Wake up all threads
notifyAll()

Note: all three methods must be executed inside synchronized code (such as a synchronized block), and must be called on the object serving as the lock, so that it is clear which monitor the methods operate on.

Let's write some simple code to demonstrate.

Student: the shared data class whose data is produced and consumed

public class Student {
    private String name;
    private int age;
    private boolean flag; // false (the default) means no data; true means data is available
    // Add the no-args constructor, getters/setters, and toString yourself
}

Producer: the producer class, produces data when none is available

public class Producer implements Runnable {
    private Student student;
    private int n = 0;

    public Producer(Student student) {
        this.student = student;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (student) {
                // Check whether there is data; if there is, wait
                if (student.isFlag()) {
                    try {
                        // t1 waits and releases the lock
                        student.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                // Produce only one kind of data at a time, alternating between two
                if (n % 2 == 0) {
                    student.setName("Zhang");
                    student.setAge(22);
                } else {
                    student.setName("BWH");
                    student.setAge(25);
                }
                System.out.println(Thread.currentThread().getName() + " produced data: " + student);
                n++;
                // The data now exists, so update the flag
                student.setFlag(true);

                // Wake up the other thread
                // Waking t2 does not mean it runs immediately; it still has to compete for the CPU
                student.notify();
            }
        }
    }
}

Consumer: the consumer class, consumes data when it is available

public class Consumer implements Runnable {
    private Student student;

    public Consumer(Student student) {
        this.student = student;
    }

    @Override
    public void run() {
        while (true) {
            synchronized (student) {
                // If there is no data, wait
                if (!student.isFlag()) {
                    try {
                        student.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                System.out.println(Thread.currentThread().getName() + " consumed data: " + student);
                // Update the flag
                student.setFlag(false);
                // Wake up thread t1
                student.notify();
            }
        }
    }
}

Test it out:

public class StudentTest {
    public static void main(String[] args) {
        Student student = new Student();

        Producer producer = new Producer(student);
        Consumer consumer = new Consumer(student);

        Thread thread1 = new Thread(producer);
        Thread thread2 = new Thread(consumer);
        thread1.start();
        thread2.start();
    }
}

Running results:

Thread-0 produced data: Student{name='Zhang', age=22, flag=false}
Thread-1 consumed data: Student{name='Zhang', age=22, flag=true}
Thread-0 produced data: Student{name='BWH', age=25, flag=false}
Thread-1 consumed data: Student{name='BWH', age=25, flag=true}
...

Note: this is only the simplest form, that is, produce one piece of data, notify the consumer to consume it, and after consuming, the consumer notifies the producer to produce again. The code can also be optimized by moving the locking and notification into Student, which simplifies the two run() methods. (In production code, wait() should also be called inside a while loop rather than an if, to guard against spurious wakeups.)

2. Advanced

2.1 CAS-related

2.1.1 What is CAS

2.1.1.1 The idea of lock-free programming (pessimistic vs. optimistic strategies)

When it comes to CAS, we have to mention lock-free thinking, because the most common concurrency-control mechanism is locking: a lock guarantees that only its current holder can access the critical-section resource, so threads are naturally safe. This is essentially a pessimistic strategy: it assumes that every access to the critical section will conflict, so as long as one thread is accessing the resource, all other threads block and wait.

The optimistic strategy instead assumes that threads accessing the resource will not conflict, so no thread needs to wait; CAS is used to detect conflicts when they do occur, and on conflict the operation simply retries until it succeeds.

2.1.1.2 Concept and understanding of CAS

CAS stands for compare-and-swap, which means Compare and Swap.

It takes three arguments: V, A, and B

  • V: the memory value
  • A: the expected value (old value)
  • B: the new value to write

When CAS executes, the value of V is set to B only if V and A are equal. If V and A differ, another thread may have already performed an update, and the current thread does nothing. In either case, CAS returns the value of V.

In the multi-threaded case, when multiple threads use CAS to manipulate the same variable, only one will succeed, and the others will continue to try again.

It is this mechanism that enables CAS to be secure without locks, and it is also efficient in many cases.

Java provides a series of CAS operation classes in the java.util.concurrent.atomic package. AtomicInteger, for example, can be seen as an Integer with CAS operations; for accumulation, you can just use it directly.
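A minimal sketch of these atomic classes in action (illustrative class name):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        // incrementAndGet() loops internally: read the old value, then CAS(old, old + 1), retrying on failure
        counter.incrementAndGet(); // counter is now 1
        // compareAndSet exposes CAS directly: the write happens only if the current value equals the expected one
        boolean swapped = counter.compareAndSet(5, 6);
        System.out.println(counter.get() + " " + swapped); // prints "1 false": the value was 1, not 5
    }
}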

2.1.2 ABA problems caused by CAS

ABA problems are easy to understand. For example, thread 1 reads the current value as 66, thread 2 changes it to 666, and thread 3 changes it back to 66. Thread 1 only sees that the current value 66 matches the value it read earlier, so by its mechanism the modification is allowed; in its eyes the value was never changed, yet it was changed by thread 2 and thread 3. That is the ABA problem.

Solution: use AtomicStampedReference, which essentially adds a version (stamp) mechanism, so the comparison covers not just the memory value but also the version number.

We’ll talk about this later with respect to Atomic.
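A minimal sketch of how AtomicStampedReference detects the ABA change described above (the class and values are illustrative; since it compares references, the same boxed Integer objects are reused):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        Integer v66 = 66, v666 = 666;
        // Value 66 paired with an initial stamp (version) of 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(v66, 0);
        int stamp = ref.getStamp();

        // Another thread performs 66 -> 666 -> 66, bumping the stamp each time
        ref.compareAndSet(v66, v666, stamp, stamp + 1);
        ref.compareAndSet(v666, v66, stamp + 1, stamp + 2);

        // The value is 66 again, but the stamp is now 2, so a CAS expecting stamp 0 fails
        boolean success = ref.compareAndSet(v66, 67, stamp, stamp + 1);
        System.out.println(success); // false: the ABA change was detected
    }
}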

2.1.3 Why does JDK 1.8 recommend the LongAdder object?

13. [Reference] volatile solves the problem of memory visibility across multiple threads. One writer with multiple readers can keep a variable synchronized, but multiple writers cannot be made thread-safe this way. For count++, use the following class instead: AtomicInteger count = new AtomicInteger(); count.addAndGet(1); On JDK 8, the LongAdder object is recommended; it performs better than AtomicLong (it reduces the number of optimistic-lock retries).

As quoted above from point 13 of Chapter 1, Section 6 "Concurrency Control" of the Alibaba Java Development Manual: on JDK 8, LongAdder is recommended, as it performs better than AtomicLong (fewer optimistic-lock retries).

When AtomicLong accumulates, multiple threads operate on the same value; only one can succeed at a time, and the threads that fail spin and retry. This spinning becomes a performance problem.

LongAdder spreads the contention across an array of cells, so each thread mostly operates on its own slot and the number of failed attempts drops.
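A minimal sketch of LongAdder accumulation (the thread and iteration counts are arbitrary):

import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                adder.increment(); // under contention, each thread mostly hits its own cell
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(adder.sum()); // 20000: sum() aggregates the base value plus all cells
    }
}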

2.2 synchronized ※

2.2.1 What is synchronized?

synchronized is a mutual-exclusion lock that guarantees at most one thread at a time can enter the locked method or code block. It solves the problem of synchronizing access to resources among multiple threads.

2.2.2 Where can synchronized locks be added?

Instance methods lock the current object instance:

synchronized void method() {
    // ...
}

Static methods lock the current Class object and apply to all instances of that class:

static synchronized void method() {
    // ...
}

Synchronized blocks lock an object or a class, depending on the argument:

synchronized (this) {
    // ...
}

This is related to the double-checked-locking principle in the singleton pattern and to the volatile problem discussed below.

2.2.4 Are synchronized locks heavyweight?

In earlier versions of Java, synchronized was a heavyweight lock and inefficient. This is because it relied on the operating system's mutex primitives to implement locking, and Java threads are mapped onto the operating system's native threads, so every lock request had to go through the kernel as a system call. Each thread switch therefore required a user-mode to kernel-mode transition, which is a relatively large time cost.

After JDK 1.6, synchronized was officially optimized by introducing biased locks and lightweight locks; the locking logic is implemented at the JVM level without relying on the operating system, so there is no cost of switching between user mode and kernel mode.

2.2.4.1 Can you introduce the optimization of synchronized after JDK 1.6?

After JDK 1.6, biased locking, lightweight locking, spin locking, lock elimination, lock coarsening and other technologies were introduced to reduce the overhead of locking

Accordingly, four lock states are recorded: lock-free, biased lock, lightweight lock, and heavyweight lock. As contention intensifies, the lock is upgraded. Be aware that locks can only be upgraded, not downgraded.

Lock upgrade process (covering the concepts of biased lock, lightweight lock, and heavyweight lock):

  • In many cases, a lock is not contended by multiple threads and is repeatedly acquired by the same thread; performing a CAS on every acquisition would seriously waste performance. Biased locking optimizes this case: when a thread accesses an object and acquires the lock, its thread ID is stored in the object header's Mark Word. Afterwards, each acquisition only needs to check whether the current thread ID matches the one stored in the Mark Word; if it matches, the lock is taken directly, with no CAS needed.
  • If the thread ID in the biased lock does not match, a CAS attempts to replace it with the current thread ID. If that succeeds, the lock can still be held in biased mode; if it fails, there is contention, and the lock is upgraded to a lightweight lock. Under a lightweight lock, the current thread creates a Lock Record in its stack frame, copies the Mark Word into that Lock Record, and points the Lock Record's owner pointer at the lock object. When executing the synchronized code, a CAS attempts to point the Mark Word at the Lock Record in the thread's stack frame; if this CAS succeeds, the lightweight lock is acquired.
  • If the CAS fails, the thread spins, and once it spins beyond a certain number of times, the lock is upgraded to a heavyweight lock. A heavyweight lock blocks all threads except the one that currently owns the lock.

Addendum: an object's in-memory layout has three areas: object header + instance data + alignment padding

The object header contains two parts: the Mark Word + a type pointer (array objects additionally have a part storing the array length)

  • Mark Word is used to store runtime data about the object itself, such as HashCode, GC generation age, lock status flags, thread-held locks, bias thread ids, and so on.

  • A type pointer points to the object’s class metadata, and the virtual machine uses this pointer to determine which class the object is an instance of.

Spin lock: when a thread tries to acquire a lock that is already held by another thread, it loops, repeatedly checking whether the lock can now be acquired, and it exits the loop once acquisition succeeds.

Lock coarsening: normally, to keep multithreaded concurrency effective, each thread should hold a lock for as short a time as possible. In some cases, however, a program requests, synchronizes on, and releases the same lock continuously at high frequency; the lock requests, synchronization, and releases themselves cause performance loss and consume system resources, even though each individual synchronized operation may be short, so high-frequency lock requests hurt performance. Lock coarsening teaches that everything has its degree: in such cases, merging multiple lock requests into a single larger one reduces the cost of a large number of lock requests, synchronizations, and releases within a short period.

Lock elimination: during JIT compilation, the Java virtual machine scans the running context and removes locks on objects that cannot possibly be contended as shared resources. Lock elimination saves the time of meaningless lock requests.

Reference: "Java 6 and above optimizations for synchronized: biased locking, lightweight and heavyweight locks, lock elimination, and lock coarsening"

2.2.4 Principle of synchronized lock

The JVM implements method synchronization and code block synchronization based on entering and exiting Monitor objects, but the implementation details are different.

synchronized applied to a synchronized statement block:

public class Demo1 {
    public void method() {
        synchronized (this) {
            System.out.println("synchronized");
        }
    }
}

Disassemble with javap -c -s -v -l Demo1.class to view the bytecode:

As you can see, block synchronization is implemented with monitorenter at the start of the synchronized block and monitorexit at the end. When monitorenter executes, the thread attempts to acquire the lock, that is, ownership of the object's monitor.

  • When acquiring the lock, if the lock counter is 0, the lock can be acquired, and the counter is then set to 1
  • When the lock is released, the lock counter is set back to 0, indicating that the lock is free

synchronized applied to a method:

public class Demo2 {
    public synchronized void method() {
        System.out.println("synchronized");
    }
}

Disassemble with javap -c -s -v -l Demo2.class to view the bytecode:

A synchronized method is marked with the ACC_SYNCHRONIZED flag instead of monitorenter and monitorexit, indicating that the method is a synchronized method.

2.2.5 The relation and difference between synchronized and ReentrantLock

2.2.5.1 Similarities

  • Both are locking-based, blocking synchronization (once one thread acquires the object lock and enters the block, all other threads that want to access the block are blocked outside)

  • Both are reentrant locks, meaning the thread that holds the lock can acquire it again; if locks were not reentrant, this would cause deadlock

2.2.5.2 Differences

  • synchronized is a Java keyword: mutual exclusion at the native syntax level that depends on the JVM; for example, the optimizations made after JDK 1.6 all happen at the virtual-machine level. ReentrantLock is an API-level (JDK-level) mutex that requires explicit lock() and unlock() calls paired with a try/finally block (see the sketch after this list)
  • Flexibility and granularity: with synchronized, the compiler guarantees that the lock is acquired and released, whereas with ReentrantLock you manage acquisition and release yourself; that is more flexible, but carries the risk of human error. ReentrantLock is the more flexible and fine-grained of the two.
  • Several features ReentrantLock adds:
    • Interruptible waiting: if the thread holding the lock does not release it for a long time, a waiting thread can choose to give up and do something else instead. This is done with lock.lockInterruptibly()
    • synchronized is a non-fair lock, and ReentrantLock is also non-fair by default, but a fair lock can be created at construction time (pass true). Fair means first come, first served.
    • synchronized controls waiting and waking via the lock object's wait(), notify(), and notifyAll(); ReentrantLock controls them via a Condition's await(), signal(), and signalAll() methods
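A minimal sketch of the lock()/unlock() plus try/finally pattern referenced in the list above (illustrative class):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final Lock lock = new ReentrantLock(); // pass true here for a fair lock
    private int count = 0;

    public void increment() {
        lock.lock(); // unlike synchronized, acquisition and release are explicit
        try {
            count++;
        } finally {
            lock.unlock(); // always release in finally, or the lock may never be freed
        }
    }
}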

2.2.5.3 When to use ReentrantLock

Use ReentrantLock when you need a feature that synchronized does not provide; otherwise, synchronized is generally preferred.

2.3 volatile

2.3.1 What problems does volatile solve

2.3.1.1 Preventing instruction reordering

First, regarding the instruction reordering problem we encountered in the singleton pattern, I'll quote part of my earlier article directly.

Double-checked locking code:

With only a single if (lazy1 == null) check before the lock, two threads can both pass the check before either creates the instance. For example, thread A enters first, but thread B, which has also passed the check, can then create a new instance as well, defeating the purpose of the singleton. You can try it yourself.

The solution is to make a second check inside the lock:

// Gets the unique global access point for an instance of this class
public static Lazy1 getLazy1() {
    // If the instance does not exist, create a new one; otherwise return the existing instance
    if (lazy1 == null) {
        // Lock
        synchronized (Lazy1.class) {
            // Check for null a second time
            if (lazy1 == null) {
                lazy1 = new Lazy1();
            }
        }
    }
    return lazy1;
}

The instruction reordering problem:

Locking at the right place like this minimizes the performance impact of the lock while achieving the desired effect.

But under certain conditions this code still has a problem, and that is instruction reordering.

Instruction reordering is a process by which the JVM optimizes instructions and improves program efficiency by maximizing parallelism without affecting the execution results of a single-threaded program.

What does that mean?

lazy1 = new Lazy1(); is not an atomic operation, which means it breaks down into several steps:

  • ① Allocate memory space for the object
  • ② Execute the constructor to initialize the object
  • ③ Point the reference at the newly allocated memory

But the JVM may reorder these steps for efficiency, for example:

  • ① Allocate memory space for the object
  • ③ Point the reference at the newly allocated memory; the object has not been initialized yet
  • ② Execute the constructor to initialize the object

If thread A has executed ③ but not yet ②, thread B will see lazy1 != null, but lazy1 has not been initialized yet, and that is where the problem occurs.

Time | Thread A | Thread B
t1 | ① Allocate memory space for the object |
t2 | ③ Point the reference at the newly allocated memory (object not yet initialized) |
t3 | | Check whether lazy1 is null
t4 | | lazy1 != null, so return the uninitialized object
t5 | ② Initialize the object |

The solution is simple: add the volatile keyword to the field declaration to prevent instruction reordering.
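A minimal sketch of the fixed declaration, assuming the same Lazy1 class as above:

public class Lazy1 {
    // volatile forbids reordering of "allocate -> initialize -> publish",
    // so other threads can never see a non-null but uninitialized instance
    private static volatile Lazy1 lazy1;

    private Lazy1() { }

    public static Lazy1 getLazy1() {
        if (lazy1 == null) {
            synchronized (Lazy1.class) {
                if (lazy1 == null) {
                    lazy1 = new Lazy1();
                }
            }
        }
        return lazy1;
    }
}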

2.3.1.2 Ensure visibility of variables

This brings us to the JMM, the Java Memory Model. Before JDK 1.2, Java's memory model always read variables from main memory. Under the current Java memory model, a thread may keep variables in local working memory, such as registers, instead of reading and writing main memory directly. This can result in one thread modifying the data in main memory while another thread keeps using its stale local copy, so the two see inconsistent data.

Declaring a variable volatile tells the JVM that the variable is shared and unstable: every read must come from main memory.
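A minimal sketch of visibility in action (illustrative class):

public class StopFlagDemo {
    // Without volatile, the worker may cache the flag and never see the update made by main
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // busy work
            }
            System.out.println("Worker saw the flag and stopped");
        });
        worker.start();
        Thread.sleep(100);
        stopped = true; // visible to the worker because the field is volatile
    }
}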

2.3.2 The difference between synchronized and volatile

  • Volatile addresses the visibility of variables across multiple threads, while synchronized addresses the synchronization of access to resources across multiple threads
  • Volatile is a lightweight implementation of thread synchronization that performs better
  • Volatile can only be used with variables, while synchronized can modify methods and code blocks
  • The volatile keyword guarantees visibility, but not atomicity. The synchronized keyword guarantees both.

Reference: GitHub@JavaGuide

2.4 AQS related

2.4.1 What is AQS

AQS is short for AbstractQueuedSynchronizer, located in java.util.concurrent.locks. It is a framework for building locks and synchronizers; ReentrantLock, Semaphore, CountDownLatch, and others are all built on AQS.

2.4.2 Please explain the principle of AQS

The essence of AQS is that it provides a template: it maintains a first-in, first-out CLH queue (a doubly linked list) and a state variable. AQS wraps each thread requesting the shared resource into a node of the queue; the node records the thread's current state, such as shared or exclusive mode, and holds links to its predecessor and successor nodes.

When a resource is requested and the resource is free, the requesting thread is set as the active worker thread and the shared resource is set to the locked state. If the requested resource is occupied, a mechanism of blocking, waiting, and wake-up-based lock allocation is needed, and that is what the CLH queue is for.

2.4.2.1 What are shared mode and exclusive mode?

AQS supports two ways of sharing resources:

  • Exclusive mode: only one thread can hold the lock, for example ReentrantLock, which can further be divided into fair and unfair locks
    • Fair lock: under contention, the thread that arrived first gets the lock first; threads acquire the lock in queue order
    • Unfair lock: whichever thread grabs the lock gets it, regardless of arrival order
    • The difference between fair and unfair is whether a newly arriving thread tries to grab the lock immediately: if it tries to grab, it is unfair; if it simply queues up and waits, it is fair
  • Shared mode: multiple threads can execute simultaneously, for example CountDownLatch and Semaphore

2.4.3 Customization and common implementation of synchronizer

First, what are the general steps for building a custom synchronizer

  • Extend the AbstractQueuedSynchronizer class and override the specified methods (those for acquiring and releasing the resource)
  • Call its template methods (the synchronizer design is based on the template method pattern), which in turn call the methods you overrode

PS: different synchronizers contend for shared resources in different ways, so a custom synchronizer only needs to implement resource acquisition and release. Thread waiting, queue maintenance, and so on are already implemented by AQS behind the scenes.

The methods to override:

Exclusive mode:

// Try to acquire the resource; returns true on success, false on failure.
tryAcquire(int)
// Try to release the resource; returns true on success, false on failure.
tryRelease(int)

Shared mode:

// Try to acquire the resource; a positive number means success with resources left,
// a negative number means failure, and 0 means success with no resources left.
tryAcquireShared(int)
// Try to release the resource; returns true on success, false on failure.
tryReleaseShared(int)

Additional:

// Whether the current thread is holding the resource exclusively.
// You only need to implement this if you use Condition.
isHeldExclusively()
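As a sketch of the steps above, here is a minimal exclusive-mode synchronizer built on AQS, patterned after the example in the AbstractQueuedSynchronizer Javadoc (the Mutex class name is illustrative):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal exclusive lock: state 0 = unlocked, 1 = locked
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS state 0 -> 1: only one thread can win
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int arg) {
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // template method: calls tryAcquire, queues the thread on failure
    public void unlock() { sync.release(1); }  // template method: calls tryRelease, wakes a queued successor
}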

2.4.3.1 Explain the process of ReentrantLock locking and unlocking

With ReentrantLock, state is initialized to 0, meaning unlocked. When thread 1 calls lock(), tryAcquire(int) is invoked to exclusively acquire the lock and set state to 1; threads that request the lock afterwards, while it is still held, fail and are queued. When thread 1 calls unlock(), state returns to 0, and other threads get a chance to acquire the lock.

  • However, because the lock is reentrant, thread 1 can acquire it repeatedly while holding the resource, incrementing state each time; it must release the lock the same number of times to bring state back to 0.

2.4.3.2 Explain the process of CountDownLatch locking and releasing

With CountDownLatch, a task is divided among several sub-threads. Its state is initialized to a count N rather than 0. Each sub-thread executes countDown() once, decrementing state by 1 via CAS. Once state reaches 0, the caller thread is unpark()ed; it then returns from await() and continues with its remaining work.

2.4.4 Usage scenarios of CountDownLatch

CountDownLatch lets a thread block in one place until all of the counted threads have finished executing.

For example, when we need to process several tasks that have no required ordering, and only after all of them complete do we need to do the next thing, we can use CountDownLatch: once the last task finishes, the waiting thread returns from await() and continues with the rest of the business logic.

Note: It is recommended to use the ThreadPoolExecutor constructor to create a thread pool, as discussed below.

import com.google.common.util.concurrent.ThreadFactoryBuilder;
import java.util.concurrent.*;

public class Test {

    private static final int CORE_POOL_SIZE = 8;
    private static final int MAX_POOL_SIZE = 16;
    private static final int BLOCKING_QUEUE_SIZE = 10;
    private static final long KEEP_ALIVE_TIME = 10L;

    private static final ThreadFactory guavaThreadFactory =
            new ThreadFactoryBuilder().setNameFormat("thread-pool-%d").build();

    private static final ExecutorService exec = new ThreadPoolExecutor(CORE_POOL_SIZE,
            MAX_POOL_SIZE, KEEP_ALIVE_TIME, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>(BLOCKING_QUEUE_SIZE), guavaThreadFactory);

    /** Number of tasks to process */
    private static final int threadCount = 5;

    public static void main(String[] args) throws InterruptedException {

        final CountDownLatch countDownLatch = new CountDownLatch(threadCount);
        for (int i = 0; i < threadCount; i++) {
            final int currentNum = i;
            exec.execute(() -> {
                try {
                    System.out.println(Thread.currentThread().getName() + " working on task: " + currentNum);
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    // Mark one task as finished
                    countDownLatch.countDown();
                }
            });
        }
        countDownLatch.await();
        exec.shutdown();
        System.out.println("That's all!");
    }
}

Running results:

thread-pool-0 working on task: 0
thread-pool-1 working on task: 1
thread-pool-2 working on task: 2
thread-pool-3 working on task: 3
thread-pool-4 working on task: 4
That's all!

2.4.5 Components

  • CountDownLatch: used to coordinate synchronization between multiple threads, as in the code above; generally used to make one thread wait for others

  • CyclicBarrier: similar to CountDownLatch in that it lets threads wait for each other, but more powerful. Literally, threads block when they reach the barrier, and only when the last thread arrives does the barrier open so all of them continue working. You can see it feels very similar to CountDownLatch above (but the barrier can be reused)

  • Semaphore: allows multiple threads to access a resource simultaneously, in contrast to synchronized and ReentrantLock, which allow only one thread to access a resource at a time (a sketch follows)
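A minimal sketch of Semaphore limiting concurrent access (the permit count and sleep are arbitrary):

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        // At most 3 threads may hold a permit at the same time
        Semaphore semaphore = new Semaphore(3);
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire(); // blocks while all 3 permits are taken
                    System.out.println(Thread.currentThread().getName() + " acquired a permit");
                    Thread.sleep(500);   // simulate using the shared resource
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    semaphore.release();
                }
            }).start();
        }
    }
}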

2.5 ThreadLocal related

Normally, the variables we create can be accessed and modified by any thread. ThreadLocal, however, gives each thread its own private local copy, so each thread can access its own value, achieving isolation of per-thread data.

2.5.1 Have you used ThreadLocal? In what situations?

Example 1:

A practical example first. In a Shiro + JWT permission framework, we create a custom Filter that intercepts all HTTP requests, extracting the token string and checking its validity, and then processes it according to the token-refresh mechanism we designed. Whenever a new token is created, it is stored both in Redis and in a custom ThreadLocalToken class.

We do this because the goal is to return the new token to the front end inside the response. Our custom Filter does provide a doFilterInternal() method (since it extends AuthenticatingFilter) that could put the token into the response, but that is cumbersome: we would have to read the response data through an I/O stream, parse it into JSON, and then insert the new token.

If instead we define an AOP aspect class, we can intercept all of our custom return objects, ServerResponse, with around advice and add the new token there. That is fairly straightforward, but there is no call relationship between the custom Filter and the AOP aspect, so we need a way to pass the new token between them.

This is where ThreadLocal comes in, because data read and written through a ThreadLocal is exclusive and private to the owning thread. The custom Filter and the AOP aspect class execute on the same thread, without switching threads, so the Filter can safely put the token into the ThreadLocal, and the aspect class can fetch it and add it to the ServerResponse.

Example 2:

SimpleDateFormat is not thread-safe when formatting time, so you can use ThreadLocal to hold the SimpleDateFormat objects; each thread then has its own private SimpleDateFormat.
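A minimal sketch of the per-thread SimpleDateFormat idea from Example 2 (the holder class is illustrative):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // Each thread lazily gets its own SimpleDateFormat, so no instance is ever shared
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static String format(Date date) {
        return FORMATTER.get().format(date);
    }
}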

2.5.2 Simple Principle of ThreadLocal

The Thread class:

/* ThreadLocal values pertaining to this thread. This map is maintained
 * by the ThreadLocal class. */
ThreadLocal.ThreadLocalMap threadLocals = null;

/* InheritableThreadLocal values pertaining to this thread. This map is
 * maintained by the InheritableThreadLocal class. */
ThreadLocal.ThreadLocalMap inheritableThreadLocals = null;

As you can see, each Thread stores its ThreadLocal values in a field named threadLocals of type ThreadLocal.ThreadLocalMap. ThreadLocalMap is a specially customized hash map. Getting or setting a value through ThreadLocal's get/set is, in essence, calling the get/set of the current thread's ThreadLocalMap.

public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T) e.value;
            return result;
        }
    }
    return setInitialValue();
}
// -----------------------------------------------------------
ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}
// -----------------------------------------------------------
// ThreadLocalMap's getEntry
private Entry getEntry(ThreadLocal<?> key) {
    int i = key.threadLocalHashCode & (table.length - 1);
    Entry e = table[i];
    if (e != null && e.get() == key)
        return e;
    else
        return getEntryAfterMiss(key, i, e);
}

A ThreadLocalMap can be understood as storing key-value pairs whose key is the ThreadLocal and whose value is an Object.

2.5.3 ThreadLocal memory Leaks

In a ThreadLocalMap, the key is a weak reference while the value is a strong reference. If the ThreadLocal has no strong reference from outside, the key will be collected during garbage collection, but the strongly referenced value will not. This leaves entries whose key is null; if these accumulate, a memory leak can occur.

A memory leak refers to a program that, for some reason, fails to release dynamically allocated heap memory, resulting in wasted system memory, a slower program, and even system crashes. — Baidu Encyclopedia

However, ThreadLocalMap takes this into account: entries with a null key are cleaned up when the get(), set(), or remove() methods are called. Even so, it is recommended to call remove() manually once you are done with a ThreadLocal, as shown in the sketch below.
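A minimal usage sketch of this recommendation (the class and variable names are illustrative):

```java
public class ContextDemo {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void handleRequest() {
        CONTEXT.set("per-thread state");
        try {
            System.out.println(CONTEXT.get()); // business logic would read it here
        } finally {
            // Always clean up: pooled threads outlive a single task, so a
            // forgotten entry lingers in that thread's ThreadLocalMap.
            CONTEXT.remove();
        }
    }

    public static void main(String[] args) {
        handleRequest();
    }
}
```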

2.5.3.1 The four reference types

Prior to JDK 1.2, the concept of a reference was simply that a reference type stored the starting address of a block of memory, representing a reference to that block of memory.

Since JDK 1.2, references are subdivided into strong, soft, weak, and phantom (virtual) references.

  • Strong references: the garbage collector never collects them. When memory runs out, the JVM would rather throw an OutOfMemoryError than reclaim a strongly referenced object.
  • Soft references: collected only when memory is about to run out.
  • Weak references: have a shorter life cycle than soft references. A weakly referenced object is reclaimed as soon as the garbage collector discovers it while scanning its memory area, regardless of whether memory is tight; but because the GC thread has a low priority, such objects are generally not discovered and reclaimed immediately. See the sketch after this list.
  • Phantom (virtual) references: the weakest reference type; an object holding only a phantom reference can be collected by the garbage collector at any time.
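A minimal sketch of weak-reference behavior (note that System.gc() is only a request to the JVM, so the second output is very likely, but not strictly guaranteed, to be null):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        System.out.println(weak.get()); // referent is still strongly reachable

        strong = null; // drop the only strong reference
        System.gc();   // request a GC; weak references are cleared once discovered
        System.out.println(weak.get()); // typically null after the GC runs
    }
}
```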

2.7 Thread pools

2.7.1 What is a thread pool? Why use it?

In the HotSpot JVM, Java threads map one-to-one to kernel threads, which means that creating and recycling a thread in Java depends on kernel operations in the underlying operating system. That is a lot of overhead, possibly more than the time and resources needed to execute the task itself.

This problem is not unique to threads; database connections, for example, face the same issue, which is why the idea of pooling has long been applied in many places.

Thread pools provide a platform for managing and reusing threads, and also keep some basic statistics. The benefits are as follows:

  • Reduce resource consumption: Reduce the cost of each thread creation and destruction by reusing threads.
  • Improved response time: Tasks no longer need to wait for a thread to be created, so the speed is greatly improved.
  • Improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.

2.7.2 What is the difference between implementing the Runnable Interface and the Callable Interface

  • Runnable has existed since JDK 1.0, while Callable was introduced in JDK 1.5.
  • The main difference is that Runnable’s run() cannot return a result or throw a checked exception, while Callable’s call() can do both.

This can be seen from the definitions and comments of the two interfaces:

```java
@FunctionalInterface
public interface Runnable {
    /**
     * When an object implementing interface <code>Runnable</code> is used
     * to create a thread, starting the thread causes the object's
     * <code>run</code> method to be called in that separately executing
     * thread.
     * <p>
     * The general contract of the method <code>run</code> is that it may
     * take any action whatsoever.
     *
     * @see     java.lang.Thread#run()
     */
    public abstract void run();
}
```
```java
@FunctionalInterface
public interface Callable<V> {
    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return computed result
     * @throws Exception if unable to compute a result
     */
    V call() throws Exception;
}
```

2.7.3 Difference between execute() and submit()

  • The execute() method is used to submit tasks that do not require a return value; there is also no way to tell whether the task was executed successfully by the thread pool.
  • The submit() method is used to submit tasks that need a return value. It returns a Future object, which can be used to determine whether the task executed successfully.
    • The result can be obtained with the get() method, which blocks until the task finishes.
    • You can also use get(long timeout, TimeUnit unit), which blocks for at most the specified time and then returns, even though the task may not have completed yet.

You can jump to the two ways to start multithreading in the thread pool (# 1.7 Five Ways to Implement Multithreading) for a demonstration of both methods.
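As a supplement, here is a minimal sketch contrasting the two methods (the task bodies are placeholders, and Executors is used only for brevity — see 2.7.4 for why ThreadPoolExecutor is preferred in real code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, no handle to the result
        pool.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));

        // submit(): returns a Future we can block on
        Future<Integer> future = pool.submit(() -> 1 + 1);
        System.out.println("result = " + future.get()); // blocks until the task finishes

        // get with a timeout: throws TimeoutException if not done in time
        Future<Integer> slow = pool.submit(() -> {
            Thread.sleep(100);
            return 42;
        });
        System.out.println("slow = " + slow.get(1, TimeUnit.SECONDS));

        pool.shutdown();
    }
}
```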

2.7.4 How to Create a Thread Pool

Alibaba’s Java Development Manual (Chapter 1 Programming Specification, Section 6 Concurrency, Article 4) gives a mandatory rule: do not create thread pools with Executors; use ThreadPoolExecutor instead.

【Mandatory】 Thread pools must not be created with Executors. Use ThreadPoolExecutor instead, which makes the running rules of the thread pool explicit and avoids the risk of resource exhaustion.

The thread pool objects returned by Executors have the following disadvantages:

  • FixedThreadPool and SingleThreadExecutor: use a task queue whose allowed length is Integer.MAX_VALUE, so requests can pile up until an OOM occurs.
  • CachedThreadPool and ScheduledThreadPool: allow up to Integer.MAX_VALUE threads to be created, which may produce a huge number of threads and lead to OOM.

2.7.4.1 Creating a thread pool using ThreadPoolExecutor

Note: [2.4.4 CountDownLatch usage scenario](# 2.4.4 CountDownLatch usage scenario), [1.7 Five ways to implement multithreading](# 1.7 Five ways to implement multithreading) have two simple examples

First, the constructor. Let’s go straight to the longest, most complete overload:

```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) { ... }
```
  • corePoolSize: number of core threads

    • Core threads stay alive even when there are no tasks to execute. While the number of threads is smaller than corePoolSize, the pool creates a new thread for each incoming task, even if idle threads exist.

  • maximumPoolSize: maximum number of threads

    • When the number of threads >= corePoolSize and the task queue is full, the pool creates new threads to handle tasks
    • When the number of threads equals maximumPoolSize and the task queue is full, the pool handles new tasks according to the rejection policy

  • keepAliveTime: idle time for threads

    • When the number of threads is greater than corePoolSize and no new tasks are submitted, threads beyond the core count are not destroyed immediately; they wait until their idle time exceeds keepAliveTime before being recycled

  • unit: the time unit of keepAliveTime

  • workQueue: when all core threads are busy, new tasks are placed in this queue to wait for execution. How much data the queue can hold depends on the queue type and the available memory

  • threadFactory: used when the pool creates threads, for example to give them meaningful names (Google’s ThreadFactoryBuilder from Guava is a common choice)

  • handler (RejectedExecutionHandler): the rejection handler for tasks

    • When the number of threads has reached maximumPoolSize and the workQueue is full, new tasks are rejected

    • When shutdown() is called on the pool, it waits for the tasks already in the pool to finish before shutting down. If a task is submitted between the call to shutdown() and the actual shutdown of the pool, the new task is rejected

    • The built-in policies are as follows (see the sketch after this list):

      • AbortPolicy: rejects the new task and throws a RejectedExecutionException (this is the default)

      • CallerRunsPolicy: as long as the pool is not shut down, runs the rejected task in the caller’s own thread. The task is not actually discarded, but the caller’s performance degrades significantly

      • DiscardPolicy: silently discards the task without processing it

      • DiscardOldestPolicy: discards the oldest task at the head of the queue, then tries to submit the current task again
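Putting the parameters together, a minimal sketch of creating a pool by hand (the sizes and queue capacity are arbitrary examples):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),        // bounded workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy (the default)
        );

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() ->
                    System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // no new tasks accepted; queued tasks still run
    }
}
```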

2.7.4.2 Using Executors to Create a Thread Pool

Executors is a utility class for Executor and offers another way to create thread pools, but ThreadPoolExecutor is still the recommended approach.

The following thread pool types are provided by Executors

  • newCachedThreadPool: creates an elastic thread pool (the number of threads adjusts as needed).

    • Features: elastic management. If an idle thread exists, it is reused first; if not, a new thread is created to handle the task.
    • Disadvantages: the number of threads can grow without bound, with a risk of memory overflow.

  • newFixedThreadPool: creates a thread pool with a fixed number of threads.

    • Features: fixed-size pool. When a new task is submitted, an idle thread executes it if one exists; otherwise the task is stored in the task queue and processed once a thread becomes free.
    • Disadvantages: does not support a custom rejection policy, and being limited to a fixed number of threads is not always a good thing.

  • newScheduledThreadPool: creates a thread pool with a fixed number of threads that can execute tasks periodically.

    • Disadvantages: if a task fails, its subsequent scheduled executions can be affected.

  • newSingleThreadExecutor: creates a thread pool with only one thread.

    • Features: single-threaded pool. If additional tasks are submitted, they are stored in the task queue and executed in first-in, first-out order whenever the thread becomes idle.
    • Disadvantages: no concurrency; rarely used in general.

    Note that these factory methods are also implemented on top of ThreadPoolExecutor, just with preset (and restrictive) parameters.

2.8 Atomic classes

2.8.1 What is an atomic class?

In Chinese, "atom" (原子) means the smallest unit that cannot be divided. Atomic classes, accordingly, are classes whose operations have this atomic property.

2.8.2 What atomic classes are in the JUC package?

Basic types

  • AtomicInteger: integer atomic class
  • AtomicLong: long integer atomic class
  • AtomicBoolean: boolean atomic class

Array types

  • AtomicIntegerArray: integer array atomic class
  • AtomicLongArray: long integer array atomic class
  • AtomicReferenceArray: reference type array atomic class

Reference types

  • AtomicReference: reference type atomic class

  • AtomicStampedReference: atomically updates a reference type together with a version number (stamp).

    • [2.1.2 ABA problems caused by CAS](# 2.1.2 ABA problems caused by CAS)
  • AtomicMarkableReference: reference type with a marker bit

Object field modification types

  • AtomicIntegerFieldUpdater: atomic updater for int fields

  • AtomicLongFieldUpdater: atomic updater for long fields

  • AtomicReferenceFieldUpdater: atomic updater for reference type fields
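As a quick illustration, a minimal sketch using AtomicInteger as a thread-safe counter (class and variable names are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {

    private static final AtomicInteger COUNT = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                COUNT.incrementAndGet(); // atomic i++ via CAS, no lock needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(COUNT.get()); // always 20000, unlike a plain int++
    }
}
```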