Hello everyone, I'm Old Three. The interview counterattack series continues. This time we'll take on another must-ask interview topic: Java concurrency.

This article is a bit long, about 40,000 words, covering 60 Java concurrency questions with illustrations. It took a lot of grinding to write, so feel free to like and bookmark it and read it slowly. Your support keeps me going!

Basics

1. What is the difference between parallelism and concurrency?

From an operating system perspective, threads are the smallest unit of CPU allocation.

  • Parallelism means two threads are executing at the same instant; this requires two CPUs (or cores) to run the two threads separately.
  • Concurrency means only one thread executes at any given instant, yet over a window of time multiple threads all make progress. Concurrency relies on the CPU switching between threads; each switch is extremely short, so it is basically imperceptible to the user.

It's like going to the canteen for lunch. Parallelism is when we queue at multiple windows and several servers dish out food at the same time; concurrency is when we all crowd around one window while the server gives a scoop to this person, then a scoop to that one.

2. What are processes and threads?

To talk about threads, you have to talk about processes first.

  • Process: a process is a single run of a program over a data set; it is the basic unit of system resource allocation and scheduling.
  • Thread: A thread is the execution path of a process. There is at least one thread in a process. Multiple threads in a process share the resources of the process.

The operating system allocates resources to processes, but CPU resources are allocated to threads, because threads are the basic unit of CPU allocation.

In Java, for example, when we run the main method, we start a JVM process, and the main thread is one of the threads inside that process.

There are multiple threads in a process that share the heap and method area resources of the process, but each thread has its own program counter and stack.

3. How many ways can threads be created?

There are three main ways to create threads in Java: inheriting the Thread class, implementing the Runnable interface, and implementing the Callable interface.

  • Inheriting the Thread class, overriding the run() method and calling the start() method to start the Thread
public class ThreadTest {

    /** Inherits the Thread class */
    public static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("This is child thread");
        }
    }

    public static void main(String[] args) {
        MyThread thread = new MyThread();
        thread.start();
    }
}
  • Implement the Runnable interface, overriding the run() method
public class RunnableTask implements Runnable {
    public void run() {
        System.out.println("Runnable!");
    }

    public static void main(String[] args) {
        RunnableTask task = new RunnableTask();
        new Thread(task).start();
    }
}

Neither of these returns a value, but what if we need to get the result of the thread’s execution?

  • Implement the Callable interface and override the call() method, getting the return value of the task execution from a FutureTask
public class CallerTask implements Callable<String> {
    public String call() throws Exception {
        return "Hello,i am running!";
    }

    public static void main(String[] args) {
        // Create an asynchronous task
        FutureTask<String> task = new FutureTask<String>(new CallerTask());
        // Start the thread
        new Thread(task).start();
        try {
            // Wait for execution to complete and get the return result
            String result = task.get();
            System.out.println(result);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}

4. Why does calling start() execute the run() method? Why not call run() directly?

When the JVM executes the start method, it creates a thread that executes the thread’s run method.

**Why can't we just call the run() method directly?** If you call Thread's run() method directly, the run method still executes in the main thread, which is equivalent to sequential execution and gives you no multithreading at all.
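A minimal sketch of the difference (the class and message names here are just for illustration):

public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("running in: " + Thread.currentThread().getName()));

        t.run();   // Prints "running in: main", an ordinary method call
        t.start(); // Prints "running in: Thread-0", a new thread executes run()
    }
}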

5. What are the common thread scheduling methods?

Thread waiting and notification

The Object class has several functions for waiting and notifying threads.

  • wait(): when thread A calls the wait() method of a shared variable, thread A is blocked, and it only returns when:

    • (1) another thread calls notify() or notifyAll() on the shared object;

    • (2) another thread calls thread A's interrupt() method and thread A returns by throwing an InterruptedException.

  • wait(long timeout): this method takes one more timeout parameter than wait(). The difference is that if thread A calls wait(long timeout) on the shared object and is not woken up by another thread within the specified timeout in milliseconds, the method still returns because of the timeout.

  • wait(long timeout, int nanos): internally calls wait(long timeout).

Those are the waiting methods; waking up a thread mainly involves the following two methods:

  • notify(): a thread calling notify() on a shared object wakes up one thread that was suspended after calling a wait-series method on that shared variable. Multiple threads may be waiting on the same shared variable, and which waiting thread gets woken up is random.
  • notifyAll(): unlike notify(), which wakes up one thread blocked on the shared variable, notifyAll() wakes up all threads suspended on that shared variable due to calls to the wait series.

The Thread class also provides a waiting method:

  • join(): if thread A executes threadB.join(), thread A waits until threadB terminates, and only then returns from threadB.join().

Thread sleeping

  • sleep(long millis): a static method of the Thread class. When an executing thread A calls Thread's sleep method, thread A temporarily cedes its execution right for the specified time, but monitor resources owned by thread A, such as locks, are still held. When the sleep time is up, the method returns, the thread takes part in CPU scheduling again, and it continues running once it obtains CPU resources.

Yielding priority

  • yield(): a static method of the Thread class. When a thread calls yield(), it is actually hinting to the thread scheduler that the current thread is willing to give up the CPU, but the thread scheduler may ignore the hint unconditionally.

Thread interruption

Thread interruption in Java is a cooperative mechanism between threads. Setting a thread's interrupt flag does not directly terminate the thread's execution; the interrupted thread handles it on its own according to the interrupt state (see the sketch after the method list below).

  • void interrupt(): interrupts a thread. For example, while thread A is running, thread B can call thread A's interrupt() method to set thread A's interrupt flag to true and return immediately. Setting the flag is just setting the flag; thread A is not actually interrupted and continues to run.
  • boolean isInterrupted(): checks whether the current thread has been interrupted.
  • boolean interrupted(): checks whether the current thread has been interrupted. Unlike isInterrupted(), this method clears the interrupt flag if it detects that the current thread has been interrupted.
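A small sketch of this cooperative style, with hypothetical names; the worker polls the flag itself and decides when to exit:

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Keep working until the interrupt flag is seen
            while (!Thread.currentThread().isInterrupted()) {
                // ... do some work ...
            }
            System.out.println("Interrupt flag seen, exiting cleanly");
        });
        worker.start();

        Thread.sleep(100);
        worker.interrupt(); // Only sets the flag; the worker stops on its own
    }
}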

6. How many states do threads have?

In Java, threads have six states:

  • NEW, the initial state: the thread has been created, but start() has not been called yet
  • RUNNABLE, the runnable state: Java lumps the operating system's ready and running states together under the single name "runnable"
  • BLOCKED, the blocked state: the thread is blocked on a lock
  • WAITING, the waiting state: the thread is waiting for another thread to perform some specific action (a notification or an interrupt)
  • TIMED_WAITING, the timed waiting state: unlike WAITING, this state can return on its own after a specified time
  • TERMINATED, the terminated state: the thread has finished executing

Threads are not in a fixed state during their life cycle, but switch between different states as the code executes. Java thread states change as shown in the figure below:

7. What is thread context switching?

The purpose of multithreading is to make full use of the CPU, but as we know, with concurrency it is really one CPU serving multiple threads in turn.

To make users feel that multiple threads execute simultaneously, CPU resources are allocated by round-robin time slices: each thread is assigned a time slice and occupies the CPU to perform its task during that slice. When a thread's time slice runs out, it returns to the ready state and yields the CPU to another thread. That switch is a context switch.

8. Do you know about daemon threads?

There are two types of threads in Java: daemon threads and user threads.

When the JVM starts, it calls the main function, and the thread running main is a user thread. Many daemon threads, such as garbage collection threads, are also started inside the JVM.

So what's the difference between a daemon thread and a user thread? One difference: the JVM exits normally once the last non-daemon thread ends, regardless of whether any daemon threads are still running; that is, whether daemon threads have finished has no effect on JVM exit. Put another way, the JVM will not exit normally as long as any user thread is still running.
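A minimal sketch (the class name is illustrative): marking a thread as a daemon before starting it lets the JVM exit even while that thread is still looping.

public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                // Background housekeeping, like a GC thread
            }
        });
        daemon.setDaemon(true); // Must be called before start()
        daemon.start();
        // main, the last user thread, ends here, so the JVM exits
        // even though the daemon thread never finishes its loop
    }
}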

9. What are the communication methods between threads?

  • The volatile and synchronized keywords

The keyword volatile can be used to modify a field (member variable). It tells the program that any read of that variable must fetch it from shared memory, and any modification of it must be flushed back to shared memory synchronously, which guarantees the variable's visibility to all threads.

The keyword synchronized can be used to modify methods or in the form of synchronized blocks. It mainly ensures that multiple threads can only have one thread in a method or synchronized block at the same time. It ensures the visibility and exclusivity of thread access to variables.

  • Wait/notification mechanism

Java’s built-in wait()/notify() mechanism allows one thread to modify the value of an object, while another thread senses the change and acts accordingly.
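A minimal sketch of this mechanism, with hypothetical names; one thread waits on a shared lock object until another thread changes the state and notifies it:

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {        // Guard against spurious wakeups
                    try {
                        lock.wait();    // Releases the lock and suspends
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
                System.out.println("Change observed, moving on");
            }
        });
        waiter.start();

        synchronized (lock) {
            ready = true;
            lock.notify();              // Wakes one thread waiting on lock
        }
    }
}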

  • Pipeline input/output streams

The piped I/O stream differs from normal file I/O streams or network I/O streams in that it is mainly used for data transfer between threads, and the transfer medium is memory.

Piped input/output streams mainly include the following four concrete implementations: PipedOutputStream, PipedInputStream, PipedReader and PipedWriter. The first two are byte oriented and the last two are character oriented.

  • Using Thread.join()

If thread A executes threadB.join(), thread A waits until threadB terminates before returning from threadB.join(). Besides join(), Thread also provides join(long millis) and join(long millis, int nanos).

  • Using ThreadLocal

ThreadLocal, or thread variable, is a storage structure with ThreadLocal objects as keys and arbitrary objects as values. This structure is attached to the thread, meaning that a thread can query a value bound to the thread based on a ThreadLocal object.

A value can be set using the set(T) method and retrieved from the current thread using the get() method.

By the way, multithreading coding questions come up quite often in interviews, such as alternating printing, bank transfers, and the producer-consumer model; Old Three will cover the common multithreading coding questions in a separate article later.

ThreadLocal

ThreadLocal isn't used in that many scenarios, but it's an interview regular that has been asked a thousand times, touching multithreading, data structures, and the JVM.

10. What is ThreadLocal?

ThreadLocal, also known as thread-local variables. If you create a ThreadLocal variable, each thread accessing the variable will have a local copy of the variable. When multiple threads manipulate the variable, they are actually manipulating the variable in their local memory, thus providing thread isolation and avoiding thread-safety issues.

  • create

Create a ThreadLocal variable localVariable, which can be accessed concurrently by any thread.

// Create a ThreadLocal variable
public static ThreadLocal<String> localVariable = new ThreadLocal<>();
  • write

A thread can write to the variable anywhere via localVariable.

localVariable.set("Humble Three");
  • read

The thread can read the value it wrote anywhere.

localVariable.get();

11. Have you used ThreadLocal in your work?

Yes, I have. I've used it to store the user-information context.

Our application is a typical MVC architecture. Each time a logged-in user calls an interface, the request header carries a token, and the control layer parses the user's basic information from it. But what if the user information is also needed at the service layer and the persistence layer, for example for RPC calls, user-related updates, and so on?

One way is to pass user-related parameters explicitly, such as account ID, username, and so on. But that means changing code extensively, and it is more or less intrusive.

In this case, we can use a ThreadLocal: intercept the request at the control layer and store the user information into the ThreadLocal, so that the user data can be retrieved from the ThreadLocal anywhere afterwards.
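A sketch of this pattern, assuming a hypothetical UserContextHolder class and interceptor; none of these names come from a real framework:

public class UserContextHolder {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void set(String user) { CURRENT_USER.set(user); }
    public static String get()          { return CURRENT_USER.get(); }
    public static void clear()          { CURRENT_USER.remove(); }
}

// In an interceptor at the control layer (pseudocode-style sketch):
// UserContextHolder.set(parseUserFromToken(request));
// try { chain.proceed(); } finally { UserContextHolder.clear(); }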

Data isolation for cookies, sessions, and so on in many other scenarios can also be implemented using ThreadLocal.

ThreadLocal is also used in database connection pooling:

  • Connections in a database connection pool can be managed with a ThreadLocal, ensuring that all operations on the current thread use the same Connection.

12. How is ThreadLocal implemented?

If we look at the set(T) method of ThreadLocal, we see that the current thread is fetched, the ThreadLocalMap is fetched, and the elements are stored in the map.

    public void set(T value) {
        // Get the current thread
        Thread t = Thread.currentThread();
        // Get the ThreadLocalMap
        ThreadLocalMap map = getMap(t);
        // Save the current element into the map
        if (map != null)
            map.set(this, value);
        else
            createMap(t, value);
    }

The secret of ThreadLocal's implementation lies in this ThreadLocalMap: the Thread class defines a member variable threadLocals of type ThreadLocal.ThreadLocalMap.

public class Thread implements Runnable {
   // ThreadLocalMap is an attribute of Thread
   ThreadLocal.ThreadLocalMap threadLocals = null;
}

ThreadLocalMap, since it is called a map, is certainly a <key, value> data structure. We all know a map is essentially an array of <key, value> nodes, so what do ThreadLocalMap's nodes look like?

        static class Entry extends WeakReference<ThreadLocal<?>> {
            /** The value associated with this ThreadLocal. */
            Object value;

            // The node class
            Entry(ThreadLocal<?> k, Object v) {
                // Key assignment
                super(k);
                // Value assignment
                value = v;
            }
        }

Here the key can be loosely regarded as the ThreadLocal itself, and the value is what the code puts in. Strictly speaking, the key is not the ThreadLocal itself but a weak reference to it; as you can see, Entry extends WeakReference. Let's look at how the key is assigned:

    public WeakReference(T referent) {
        super(referent);
    }

The key is assigned through WeakReference's constructor.

So, how do you answer the ThreadLocal principle? Answer these questions:

  • The Thread class has an instance variable threadLocals of type ThreadLocal.ThreadLocalMap; each thread has its own ThreadLocalMap.
  • ThreadLocalMap maintains an array of Entry objects. Each Entry represents one complete mapping: the key is a weak reference to a ThreadLocal, and the value is the value set for that ThreadLocal.
  • When a thread sets a value on a ThreadLocal, it stores the value into its own ThreadLocalMap; reading from a ThreadLocal means using that ThreadLocal as the key to look up the value in the thread's own map, which is what achieves thread isolation.
  • ThreadLocal itself does not store values; it merely acts as the key for a thread to store values into its ThreadLocalMap (the get() path is sketched below).
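For reference, the get() side mirrors set(). The following is roughly what the JDK 8 source looks like (check your own JDK version for the exact code):

    public T get() {
        // Get the current thread
        Thread t = Thread.currentThread();
        // Get the thread's own ThreadLocalMap
        ThreadLocalMap map = getMap(t);
        if (map != null) {
            // Look up the entry with this ThreadLocal as the key
            ThreadLocalMap.Entry e = map.getEntry(this);
            if (e != null) {
                @SuppressWarnings("unchecked")
                T result = (T) e.value;
                return result;
            }
        }
        // No value yet: set and return the initial value (null by default)
        return setInitialValue();
    }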

13. What is a ThreadLocal memory leak?

Let's first look at memory when using ThreadLocal. We all know that in the JVM, stack memory is thread-private and stores references to objects, while heap memory is shared between threads and stores object instances.

So, references to ThreadLocal and Thread are stored on the stack, and specific instances of them are stored on the heap.

The key used in ThreadLocalMap is a weak reference to ThreadLocal.

“Weak references: Whenever garbage collection is running, the memory used by this object is reclaimed, regardless of whether the JVM has sufficient memory.”

Weak references are easily collected. If a ThreadLocal (the key of the ThreadLocalMap) is collected by the garbage collector, while the ThreadLocalMap itself lives as long as the Thread does, then the map's key is gone but its value remains, causing a memory leak.

So how do you fix the memory leak?

Once you're done with a ThreadLocal, call the remove() method to free the memory.

ThreadLocal<String> localVariable = new ThreadLocal<>();
try {
    localVariable.set("Humble Three");
    // ...
} finally {
    localVariable.remove();
}

So why are keys designed to be weak references?

The key is designed as a weak reference precisely in order to mitigate memory leaks.

If the key were designed as a strong reference, then even after the external ThreadLocal reference is destroyed, the map's key would still strongly reference the ThreadLocal object, so the ThreadLocal could never be garbage collected. That is where a memory leak would come in.

14. What is the structure of ThreadLocalMap?

ThreadLocalMap, although called a Map, does not implement the Map interface, but is similar in structure to HashMap, focusing on two elements: the array of elements and the hash method.

  • Elements in the array

    A table array that stores elements of type Entry. Entry is a structure with a weak reference to a ThreadLocal as the key and an Object as the value.

 private Entry[] table;
  • Hash method

    The hash method is how a key is mapped to an index in the table array: the key's threadLocalHashCode is ANDed with (table array length - 1) to get the corresponding index.

int i = key.threadLocalHashCode & (table.length - 1);

There's something interesting about how threadLocalHashCode is computed. Every time a ThreadLocal object is created, the hash code is incremented by 0x61C88647, a special value related to the Fibonacci sequence and the golden ratio. Using this number as the hash increment has the advantage of spreading hash values very uniformly.

    private static final int HASH_INCREMENT = 0x61c88647;

    private static int nextHashCode() {
        return nextHashCode.getAndAdd(HASH_INCREMENT);
    }

15. How does ThreadLocalMap resolve Hash collisions?

We probably all know that HashMap resolves conflicts with linked lists, known as separate chaining (the chained-address method).

ThreadLocalMap does not use a linked list, so it does not resolve conflicts with separate chaining; instead it uses another approach: open addressing. What does open addressing mean? Simply put, if this slot is occupied, keep looking until you find an empty slot.

As shown in the figure above, if we insert an entry with value=27, it should fall into slot 4 after hashing. But slot 4 already holds Entry data, and that Entry's key is not the same as the current one. In this case, the search proceeds linearly backward until it finds a slot whose Entry is null, then stops and puts the element into that empty slot.

During get, the position in the table is likewise located from the hash of the ThreadLocal object; the key of the Entry in that slot is then compared with the key being looked up, and if they differ, the next position is checked.

16. Do you understand ThreadLocalMap's expansion mechanism?

At the end of ThreadLocalMap.set(), if no stale data was cleaned by the heuristic scan and the number of entries in the hash array has reached the expansion threshold (len * 2 / 3), the rehash() logic runs:

if (!cleanSomeSlots(i, sz) && sz >= threshold)
    rehash();

rehash() first cleans up expired entries, and then uses size >= threshold - threshold / 4, i.e. size >= threshold * 3 / 4, to decide whether to actually expand.

private void rehash() {
    // Clear expired entries
    expungeStaleEntries();

    // Expand the capacity
    if (size >= threshold - threshold / 4)
        resize();
}

// Clear expired entries
private void expungeStaleEntries() {
    Entry[] tab = table;
    int len = tab.length;
    for (int j = 0; j < len; j++) {
        Entry e = tab[j];
        if (e != null && e.get() == null)
            expungeStaleEntry(j);
    }
}

resize() creates a new array twice the size of the old one, iterates over the old table, recalculates each entry's position with the hash, resolves conflicts with open addressing, and places the entries into the new table. Once all entries from oldTab have been moved into newTab, the table reference is pointed at newTab.

Specific code:
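The original snippet is not reproduced here, but the following is a simplified sketch of resize(), close to the JDK 8 implementation; entries whose key has already been collected are dropped along the way:

private void resize() {
    Entry[] oldTab = table;
    int oldLen = oldTab.length;
    int newLen = oldLen * 2;          // Double the capacity
    Entry[] newTab = new Entry[newLen];
    int count = 0;

    for (int j = 0; j < oldLen; ++j) {
        Entry e = oldTab[j];
        if (e != null) {
            ThreadLocal<?> k = e.get();
            if (k == null) {
                e.value = null;       // Stale entry: help the GC
            } else {
                // Recompute the slot and resolve conflicts by open addressing
                int h = k.threadLocalHashCode & (newLen - 1);
                while (newTab[h] != null)
                    h = nextIndex(h, newLen);
                newTab[h] = e;
                count++;
            }
        }
    }

    setThreshold(newLen);
    size = count;
    table = newTab;                   // Point the table reference at newTab
}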

17. How do parent and child threads share data?

Can a parent thread pass values to a child thread with an ordinary ThreadLocal? Of course not. So what do we do?

Use another class called InheritableThreadLocal.

It’s easy to use. Set values in the InheritableThreadLocal instance of the main thread and get them in the child thread.

public class InheritableThreadLocalTest {

    public static void main(String[] args) {
        final ThreadLocal<String> threadLocal = new InheritableThreadLocal<>();
        // The main (parent) thread sets the value
        threadLocal.set("Not good at technology.");
        // The child thread reads it
        Thread t = new Thread() {
            @Override
            public void run() {
                super.run();
                System.out.println("Yours truly, " + threadLocal.get());
            }
        };
        t.start();
    }
}

What’s the mechanism?

The principle is simple. There is another variable in the Thread class:

ThreadLocal.ThreadLocalMap inheritableThreadLocals = null;

In Thread.init(), if the parent thread's inheritableThreadLocals is not null, it is copied into the current (child) thread's inheritableThreadLocals:

        if (inheritThreadLocals && parent.inheritableThreadLocals != null)
            this.inheritableThreadLocals =
                ThreadLocal.createInheritedMap(parent.inheritableThreadLocals);

Java memory model

18. Describe your understanding of the Java Memory Model (JMM).

The Java Memory Model (JMM) is an abstract model defined to mask the differences in memory access across different hardware and operating systems.

JMM defines an abstract relationship between threads and Main Memory: Shared variables between threads are stored in Main Memory, and each thread has a private Local Memory where it stores copies of shared variables to read/write.

Abstract diagram of the Java memory model:

Local memory is an abstraction of the JMM and does not really exist. It actually covers caches, write buffers, registers, and other hardware and compiler optimizations.

Each CPU core has its own controller and arithmetic unit; the controller contains registers and an operation controller, and the arithmetic unit performs arithmetic and logic operations. Each core has its own L1 cache, and in some architectures there is also an L2 cache shared by all cores. The working memory of the Java memory model corresponds here to the L1 cache, the L2 cache, or the CPU registers.

19. What is your understanding of atomicity, visibility and order?

Atomicity, order, and visibility are fundamental concepts in concurrent programming, and many of the JMM’s technologies are built around these three characteristics.

  • Atomicity: Atomicity refers to the fact that an operation is indivisible and uninterruptible, either performed completely and without interruption by any factor, or not performed at all.
  • Visibility: Visibility means that when one thread changes the value of a shared variable, other threads immediately know about the change.
  • Orderliness: Orderliness refers to the sequential execution of code from front to back for a single thread. The program can be considered to be ordered under a single thread, but instruction rearrangement may occur when concurrent.

Analyze the atomicity of the following lines of code.

int i = 2;
int j = i;
i++;
i = i + 1;
  • The first statement is a basic-type assignment and is atomic.
  • The second statement first reads the value of i and then assigns it to j; this two-step operation is not guaranteed to be atomic.
  • i++ and i = i + 1 each read the value of i, add 1, and write the result back to i; these are three steps, so atomicity is not guaranteed.

How can atomicity, visibility, and order be guaranteed?

  • Atomicity: the JMM only guarantees basic atomicity; to guarantee the atomicity of a block of code, synchronized is needed.
  • Visibility: Java uses the volatile keyword to guarantee visibility; in addition, final and synchronized can also guarantee visibility.
  • Ordering: both synchronized and volatile can guarantee the ordering of operations between multiple threads.

20. What is reordering?

When executing a program, the compiler and processor often reorder instructions to improve performance. There are three types of reordering.

  1. Compiler optimized reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
  2. Instruction – level parallel reordering. Modern processors use instruction-level Parallelism (ILP) to overlap multiple instructions. If there is no data dependency, the processor can change the execution order of the machine instructions corresponding to the statement.
  3. Memory system reordering. Because the processor uses caching and read/write buffers, this makes the load and store operations appear to be out of order.

The sequence of instructions from the Java source code to the actual execution goes through the following three reorders, as shown below:

A classic case: Singleton instance = new Singleton(); corresponds to three JVM steps: allocate memory space, initialize the object, and point the reference at the allocated memory. After compiler reordering, steps 2 and 3 may be swapped.
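This is exactly why double-checked locking needs volatile. A minimal sketch of the classic fix:

public class Singleton {
    // volatile forbids reordering "initialize the object" after
    // "publish the reference", so no thread can see a half-built instance
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // First check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {          // Second check, with the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}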

The JMM is a language-level memory model that ensures consistent memory visibility for programmers across compilers and processor platforms by disallowing certain types of compiler reordering and processor reordering.

21. Are there restrictions on order reordering? Happens-before?

Instruction reordering is also limited by two rules, happens-before and AS-if-serial.

Definition of happens-before:

  • If one action happens-before the other, the execution result of the first action will be visible to the second action, and the execution order of the first action precedes the second action.
  • The existence of a happens-before relationship between two operations does not mean that the Java platform's specific implementation must execute them in that order. The reordering is not illegal if its result is the same as the result required by the happens-before relationship.

There are six rules that are relevant to all of us:

  • Program order rule: every action in a thread happens-before any subsequent action in that thread.
  • Monitor lock rule: unlocking a lock happens-before subsequently locking that same lock.
  • Volatile variable rule: a write to a volatile field happens-before any subsequent read of that volatile field.
  • Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
  • start() rule: if thread A performs ThreadB.start() (starting thread B), then the ThreadB.start() action happens-before any action in thread B.
  • join() rule: if thread A performs ThreadB.join() and it returns successfully, then any action in thread B happens-before thread A's successful return from ThreadB.join().

22. What is as-if-serial? Is a single-threaded program necessarily sequential?

The as-if-serial semantics mean that the execution result of a single-threaded program cannot be changed no matter how much reordering is done (by the compiler and processor to improve parallelism). The compiler, runtime, and processor must comply with the AS-IF-Serial semantics.

To comply with the as-if-serial semantics, the compiler and processor do not reorder operations that have data dependencies because such reordering changes the execution result. However, if there are no data dependencies between the operations, they can be reordered by the compiler and processor. To illustrate, see the following code example that calculates the area of a circle.

double pi = 3.14;   // A
double r = 1.0;   // B
double area = pi * r * r;   // C

Data dependencies for the three operations above:

There is A data dependency relationship between A and C, and there is A data dependency relationship between B and C. Therefore, C cannot be reordered before A and B in the final instruction sequence. But there is no data dependency between A and B, and the compiler and processor can reorder the execution order between A and B.

So in the end, the program might be executed in one of two ways:

The as-if-serial semantics protect single-threaded programs. Compilers, runtimes, and processors that comply with as-if-serial jointly weave a "Truman Show" for the programmer: a single-threaded program appears to execute in program order. The as-if-serial semantics spare us from worrying about reordering or visibility problems in the single-threaded case.

23. How does volatile work?

Volatile serves two purposes, ensuring visibility and order.

How does volatile guarantee visibility?

Volatile is a lighter alternative to synchronized locking to address the memory visibility of shared variables without the added overhead of context switching.

Volatile ensures that an update to a variable is immediately visible to other threads. When a variable is declared volatile, a thread writing to it does not cache the value in a register or elsewhere; instead it flushes the value back to main memory. When another thread reads the shared variable, it fetches the latest value from main memory rather than using the value in its own local memory.

For example, suppose we declare volatile int x = 0. When thread A changes x to 1, the new value is flushed back to main memory; when thread B reads x, it invalidates the copy in its local memory and fetches the new value from main memory.
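A minimal sketch of this in practice, with hypothetical names; if the stop flag were not volatile, the worker thread could spin forever on a stale cached value:

public class VolatileFlag {
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // Busy work
            }
            System.out.println("Saw stop == true, exiting");
        });
        worker.start();

        Thread.sleep(100);
        stop = true; // Flushed to main memory and immediately visible to the worker
    }
}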

How does volatile guarantee order?

Reordering can be divided into compiler reordering and processor reordering, and volatile guarantees ordering by restricting both kinds separately.

To implement volatile's memory semantics, when generating bytecode, the compiler inserts memory barriers into the instruction sequence to prevent particular types of processor reordering:

  1. Insert a StoreStore barrier before each volatile write
  2. Insert a StoreLoad barrier after each volatile write
  3. Insert a LoadLoad barrier after each volatile read
  4. Insert a LoadStore barrier after each volatile read

Locks

24. What about synchronized? How is it used?

Synchronized is used to keep code atomic.

Synchronized has three main uses:

  • Modified instance method: lock the current object instance, before entering the synchronization code to obtain the current object instance lock
synchronized void method() {
  // Business code
}
  • Modify static methods: That is, lock the current class on all object instances of the class and obtain the lock of the current class before entering the synchronization code. Static members do not belong to any instance object, they are members of the class (static means that this is a static resource of the class, no matter how many objects are new, there is only one copy).

    If thread A calls the non-static synchronized method of an instance object, and thread B calls the static synchronized method of the class that the instance object belongs to, this is allowed. Mutual exclusion does not occur. Because a lock used to access a static synchronized method is the current class lock, and a lock used to access a non-static synchronized method is the current instance object lock.

static synchronized void method() {
 // Business code
}
  • Modifies a code block: specify the lock object, locking the given object/class. synchronized(this|object) means the lock on the given object must be acquired before entering the synchronized block; synchronized(SomeClass.class) means the lock on the given class must be acquired before entering the synchronized block.
synchronized (this) {
 // Business code
}

25. The realization principle of synchronized?

How does synchronized lock?

When we use synchronized, we realize that we don’t have to lock and unlock ourselves because the JVM does it for us.

  1. When synchronized modifies a code block, the JVM uses two instructions: monitorenter, which points to the start of the synchronized block, and monitorexit, which points to its end.

    You can write a demo class, compile it, and decompile SynchronizedDemo.class with javap to see these two instructions.

  2. When synchronized modifies a synchronized method, the JVM uses the ACC_SYNCHRONIZED flag, which marks the method as a synchronized method.

    You can also write some code to decompile and look at that.

What does synchronized lock?

monitorenter, monitorexit, and ACC_SYNCHRONIZED are all implemented based on Monitor.

The instance object structure has the object header, and inside the object header there’s a structure called Mark Word, and the Mark Word pointer points to the Monitor.

Monitor is a synchronization tool, or a synchronization mechanism. In the Java Virtual Machine (HotSpot), Monitor is implemented by ObjectMonitor and can be called an internal lock, or Monitor lock.

How ObjectMonitor works:

  • ObjectMonitor has two queues, _WaitSet and _EntryList, to hold the list of ObjectWaiter objects.
  • _owner points to the thread holding the Monitor. When a thread acquires the Monitor object, it enters the _owner area and _count increases by 1. If the thread calls wait(), it releases the Monitor object, _owner is reset to null and _count decreases by 1, and at the same time the waiting thread enters _WaitSet waiting to be woken up.
ObjectMonitor() {
    _header       = NULL;
    _count        = 0; // Count the number of times the thread acquires the lock
    _waiters      = 0,
    _recursions   = 0;  // The number of lock reentrant times
    _object       = NULL;
    _owner        = NULL;  // point to the thread holding the ObjectMonitor object
    _WaitSet      = NULL;  // Threads in wait state are added to _WaitSet
    _WaitSetLock  = 0 ;
    _Responsible  = NULL ;
    _succ         = NULL ;
    _cxq          = NULL ;
    FreeNext      = NULL ;
    _EntryList    = NULL ;  // Threads in the waiting block state are added to the list
    _SpinFreq     = 0 ;
    _SpinClock    = 0 ;
    OwnerIsThread = 0 ;
  }

An analogy can be drawn with a visit to the hospital [18]:

  • First, patients register at the front desk of the outpatient hall or the self-service registration machine;

  • Then, after registering, the patient goes to the corresponding consultation room:

    • Only one patient can be treated in the consultation room at a time;
    • If the consultation room is free, the patient goes straight in;
    • If another patient is in the consultation room, the current patient enters the waiting room and waits to be called.
  • When the treatment ends, the patient leaves the consultation room, and the next patient from the waiting room goes in.

This process is similar to Monitor:

  • Outpatient hall: all threads that want to enter must first register at the Entry Set to be eligible;
  • Consultation room: only one thread, the _owner, can be treated in the consultation room at a time; the thread leaves when treatment ends;
  • Waiting room: when the consultation room is busy, threads enter the Wait Set; when it frees up, a new thread is called in from the Wait Set.

So we know what synchronization is locking up:

  • monitorenter (or the ACC_SYNCHRONIZED flag for methods): the thread that enters first becomes the owner of the Monitor, and the counter increases to 1.
  • monitorexit: when executed on exit, the counter decreases by 1; once it returns to 0, the lock can be acquired by other incoming threads.

26. Besides atomicity, how does synchronized achieve visibility, ordering, and reentrancy?

How does synchronized guarantee visibility?

  • Before the thread locks, it will empty the value of the shared variable in the working memory, so that when using the shared variable, it needs to read the latest value from the main memory.
  • After a thread is locked, other threads cannot obtain the shared variables in main memory.
  • The thread must flush the latest value of the shared variable to main memory before it can be unlocked.

How does synchronized guarantee order?

Synchronized is an exclusive block of code that can only be owned by one thread at a time, so synchronized ensures that code is executed on a single thread at the same time.

Because of the as-if-serial semantics, single-threaded programs can guarantee that the end result is ordered, but not that instructions will not be rearranged.

Therefore, the order guaranteed by synchronized is the order of execution results, rather than the order to prevent instruction rearrangement.

How does synchronized implement reentrant?

Synchronized is a reentrant lock: a thread that already holds an object lock is allowed to acquire the same lock's critical resource again.

The reason it can be reentrant is that a synchronized lock object has a counter that records how many times a thread has acquired the lock. The counter increases by 1 each time the thread acquires the lock and decreases by 1 when the corresponding code block finishes, until the counter drops to 0 and the lock is released.
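A minimal sketch of reentrancy, with hypothetical method names; without it, the inner call would deadlock on the lock the thread already holds:

public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("outer: lock held, count = 1");
        inner(); // Re-acquires the same lock; the counter goes to 2 instead of deadlocking
    }

    public synchronized void inner() {
        System.out.println("inner: same lock re-entered, count = 2");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}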

27. What about lock escalation? How has synchronized been optimized?

To understand lock escalation, we need to know what the state of different locks is. What does this state refer to?

In the Java object header, there is a structure called the Mark Word tag field that changes with the state of the lock.

On a 64-bit virtual machine, the Mark Word is 64 bits long.

Mark Word stores the running data of the object itself, such as hash code, GC generation age, lock status flag, bias timestamp (Epoch), etc.

What optimizations did Synchronized make?

Prior to JDK 1.6, synchronized was implemented by directly calling ObjectMonitor's enter and exit; this kind of lock is known as a heavyweight lock. Starting with JDK 6, the HotSpot virtual machine development team optimized Java's locks, adding adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks to improve synchronized performance.

  • Biased locking: in the case of no contention, just store the current thread pointer in Mark Word, no CAS operation.

  • Lightweight locks: Reduce the performance cost of operating system mutex compared to heavyweight locks when there is no multi-thread contention. However, if there is lock contention, there is an additional CAS operation overhead in addition to the mutex itself.

  • Spin locks: Reduce unnecessary CPU context switches. When a lightweight lock is upgraded to a heavyweight lock, spin locks are used

  • Lock coarsening: Multiple consecutive lock and unlock operations are linked together to expand a wider range of locks.

  • Lock elimination: When the virtual machine just-in-time compiler runs, it removes locks that require synchronization on code but detect that there is no possibility of competing for shared data.

What is the process of lock escalation?

Lock escalation direction: no lock -> biased lock -> lightweight lock -> heavyweight lock; this direction is basically irreversible.

Let’s look at the upgrade process:

Biased locking:

Partial lock acquisition:

  1. Check whether the object is in the biased state: whether the lock flag in the Mark Word is '01' and the biased-lock bit is '1'.
  2. If it is biased, check whether the stored thread ID is the current thread; if so, go to step 5, otherwise go to step 3.
  3. Compete for the lock with a CAS operation. If the competition succeeds, set the thread ID in the Mark Word to the current thread's ID and go to step 5; if it fails, go to step 4.
  4. A failed CAS means there is contention. When the safepoint is reached, the thread that obtained the biased lock is suspended, the biased lock is upgraded to a lightweight lock, and the thread blocked at the safepoint then continues executing the synchronized block.
  5. Execute the synchronized code.

Partial lock undo:

  1. A biased lock is not actively released (revoked); revocation happens only when another thread competes for it. Since revocation needs to know the stack state of the thread holding the biased lock, it is performed at a safepoint. For the thread T holding the biased lock, there are two cases, as in steps 2 and 3.
  2. Revoke: if T has exited the synchronized block or is no longer alive, the biased lock is revoked directly and the object becomes lock-free. If revocations reach the threshold of 20, batch rebiasing is performed.
  3. Upgrade: if T is still inside the synchronized block, T's biased lock is upgraded to a lightweight lock, and the current thread performs the lock-acquisition steps of the lightweight-lock state. If revocations reach the threshold of 40, batch revocation is performed.

Lightweight lock:

Lightweight lock acquisition:

  1. When locking, the JVM first checks whether the object is already a heavyweight lock. If not, it marks out a space in the current thread's stack frame as the lock record for this lock, and copies the lock object's Mark Word into the lock record.
  2. After the copy succeeds, the JVM uses a CAS operation to update the object header's Mark Word to a pointer to the lock record, and points the owner pointer in the lock record at the object header's Mark Word. If this succeeds, go to step 3; otherwise go to step 4.
  3. If the update succeeds, the current thread holds the object lock, and the object's Mark Word lock flag is set to '00', meaning the object is in the lightweight-lock state.
  4. If the update fails, the JVM first checks whether the object's Mark Word points to a lock record in the current thread's stack frame. If so, go to step 5; otherwise go to step 6.
  5. This means lock reentry: another lock record is added to the current thread's stack frame, with its Displaced Mark Word set to null.
  6. This means the lock object has been preempted by another thread. The thread spin-waits (10 times by default); if the spin count reaches the threshold without acquiring the lock, the lock is upgraded to a heavyweight lock.

In general, the upgrade process is simple:

Complete upgrade process:

28. Describe the difference between synchronized and ReentrantLock.

This question can be answered from the implementation, functional characteristics, performance and other dimensions of lock:

  • Lock implementation: synchronized is a Java keyword implemented by the JVM, while ReentrantLock is implemented at the JDK API level (typically lock() and unlock() combined with a try/finally block).
  • Performance: before the lock optimizations in JDK 1.6, synchronized performed much worse than ReentrantLock, but starting with JDK 6, with adaptive spinning, lock elimination, and so on, their performance is about the same.
  • Features: ReentrantLock adds some advanced features over synchronized, such as interruptible waits, fair locking, and selective notification.
    • ReentrantLock provides a mechanism to interrupt a thread waiting for the lock, via lock.lockInterruptibly().
    • ReentrantLock can specify whether the lock is fair or unfair, while synchronized can only be an unfair lock. A fair lock means the thread that waited first acquires the lock first.
    • synchronized implements the wait/notify mechanism with the wait() and notify()/notifyAll() methods; ReentrantLock implements it with the Condition interface and the newCondition() method (see the sketch after this list).
    • ReentrantLock requires explicit calls to acquire and release the lock, usually releasing it in a finally block, whereas synchronized acquires and releases the lock automatically.
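A sketch of the Condition-based equivalent of wait/notify, under hypothetical names:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private boolean hasData = false;

    public void consume() throws InterruptedException {
        lock.lock();
        try {
            while (!hasData) {
                notEmpty.await();  // The equivalent of wait(): releases the lock and suspends
            }
            hasData = false;
        } finally {
            lock.unlock();
        }
    }

    public void produce() {
        lock.lock();
        try {
            hasData = true;
            notEmpty.signal();     // The equivalent of notify()
        } finally {
            lock.unlock();
        }
    }
}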

The following table lists the differences between the two locks:

29. What do you know about AQS?

AbstractQueuedSynchronizer, the abstract queued synchronizer, AQS for short, is the foundation of the java.util.concurrent package: the locks in the concurrency package are built on top of AQS.

  • AQS is based on a FIFO doubly linked queue. It defines an inner Node class; the SHARED marker in a Node indicates that the thread was blocked, suspended, and placed into the AQS queue while trying to acquire a shared resource.
  • AQS uses a volatile int member variable, state, to represent the synchronization state: modifying the synchronization state successfully means acquiring the lock. volatile guarantees the variable's visibility across threads.
  • state can be acquired in two modes, exclusive and shared. When one thread acquires the resource in exclusive mode, other threads that fail to modify state are blocked; in shared mode, after one thread acquires the resource, other threads can still acquire it through CAS.
  • If a shared resource is occupied, a blocking/wake-up mechanism is needed to guarantee lock allocation. AQS adds the threads that fail to compete for the shared resource to a variant of a CLH queue.

A brief introduction to CLH: the Craig, Landin, and Hagersten queue is a queue implemented as a singly linked list. A thread applying for the lock spins only on a local variable, continuously polling its predecessor's state; when it finds that the predecessor has released the lock, it ends the spin.

The queue in AQS is a virtual doubly linked queue, a variant of CLH; it allocates locks by wrapping each thread that requests the shared resource into a node:

CLH variant wait queues in AQS have the following characteristics:

  • The AQS queue is a doubly linked list with FIFO (first-in, first-out) semantics.
  • The queue is framed by the Head and Tail nodes, whose visibility is guaranteed by volatile.
  • Head points to the node that has acquired the lock; it is a virtual node that does not itself hold a specific thread.
  • A node that fails to obtain the synchronization state first spins for the lock; if spinning fails a certain number of times, the thread is blocked. This performs better than the plain CLH queue.

PS: there are a lot of details in the AQS source code that can be asked about; it's recommended to read the AQS source carefully.

30. How does ReentrantLock work?

A ReentrantLock is a reentrant exclusive lock. Only one thread can acquire the lock. Other threads that acquire the lock are blocked and placed in the lock’s blocking queue.

Take a look at ReentrantLock’s locking operations:

    // Create an unfair lock
    ReentrantLock lock = new ReentrantLock();
    // Acquire the lock
    lock.lock();
    try {
        // Execute code logic
    } catch (Exception ex) {
        // ...
    } finally {
        // Release the lock
        lock.unlock();
    }

The new ReentrantLock() constructor creates an unfair lock NonfairSync by default.

Fair lock FairSync

  1. A fair lock means that multiple threads acquire locks in the order in which they apply for locks. The thread directly enters the queue to queue, and the first thread in the queue can obtain the lock
  2. The advantage of a fair lock is that the thread waiting for the lock does not starve. The disadvantage is that the overall throughput efficiency is lower than that of the unfair lock. All threads except the first thread in the waiting queue will block, and the cost of CPU waking up the blocked thread is higher than that of the unfair lock

Non-fair lock NonfairSync

  • An unfair lock is a process in which multiple threads attempt to acquire the lock directly and wait at the end of the queue if they fail to acquire the lock. But if the lock is available, the thread can get it without blocking
  • The advantage of unfair locking is that it reduces the overhead of invoking threads, and the overall throughput is high, because threads have a chance to acquire the lock without blocking and the CPU does not have to wake up all threads. The downside is that threads in a waiting queue might starve to death or wait too long to acquire locks

When lock() is called on the lock created by default (an unfair lock):

  • If the lock is not currently occupied by another thread and the current thread has not acquired it before, the current thread acquires the lock, sets itself as the owner of the lock, sets the AQS state to 1, and returns directly. If the current thread has already acquired the lock before, it simply increments the AQS state by 1 and returns.
  • If the lock is already held by another thread, the unfair lock attempts to acquire it; if the acquisition fails, the thread calling the method is put into the AQS queue and blocked.

31. How does ReentrantLock implement fair locking?

The new ReentrantLock() constructor creates an unfair lock NonfairSync by default

public ReentrantLock() {
    sync = new NonfairSync();
}

You can also create FairSync by passing in a specific argument to the create lock constructor

ReentrantLock lock = new ReentrantLock(true);

// Inside ReentrantLock:
// true indicates a fair lock, false indicates an unfair lock
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

FairSync and NonfairSync represent fair and unfair locks, and both are ReentrantLock static inner classes that implement different lock semantics.

There are two differences between unfair locks and fair locks:

  1. When an unfair lock calls lock(), it first tries a CAS to grab the lock; if the lock happens to be free at that moment, it acquires the lock directly and returns.
  2. After the CAS fails, the unfair lock enters the tryAcquire method just like a fair lock does. In tryAcquire, if it finds the lock has just been released (state == 0), the unfair lock grabs it directly with CAS; a fair lock, however, checks whether any thread is waiting in the queue, and if so it does not grab the lock but queues up obediently at the back.

Relatively speaking, unfair locks have better performance because of their higher throughput. Of course, unfair locking makes the timing of lock acquisition more uncertain and can lead to chronically hungry threads in the blocking queue.

32. CAS? What do you know about CAS?

CAS stands for Compare And Swap. It mainly relies on processor instructions to guarantee the atomicity of the operation.

A CAS instruction involves three parameters: the memory address A of the shared variable, the expected value B, and the new value C for the shared variable.

The value at address A in memory can be updated to the new value C only when the value at address A in memory is equal to B. As a CPU instruction, the CAS instruction itself is atomically guaranteed.

33. What problems does CAS have? How are they solved?

The three classic problems of CAS:

ABA problem

In a concurrent environment, suppose the initial condition is A: when we see the value is A, we modify the data. But even though what we see is A, the value may have changed from A to B and then back to A. In that case, A is no longer the original A, and even if the modification succeeds, there may be problems.

How to solve the ABA problem?

  • Add the version number

Each time the variable is modified, increment its version number by 1, so A->B->A becomes 1A->2B->3A. Although the value is A again, the version number has changed, so checking the version number reveals that A has been modified. This borrows the version-number idea from optimistic locking to provide a check on the data.

Java provides the AtomicStampedReference class. Its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp; only if both are equal does it atomically set the reference and the stamp to the given new values.
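A small sketch of how AtomicStampedReference detects ABA; the values and stamps here are purely illustrative:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value "A" with an initial stamp (version) of 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int stamp = ref.getStamp();

        // Another thread does A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // Our CAS fails: the value is "A" again, but the stamp has moved on
        boolean success = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(success); // false: the ABA change was detected
    }
}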

Cyclic performance overhead

Spin CAS, if executed in a loop that never succeeds, can impose a very high execution overhead on the CPU.

How to solve the problem of cyclic performance overhead?

In Java, many places that use spin CAS have a limit on how many spins they can spin, after which they stop spinning.

Atomic operations of only one variable can be guaranteed

CAS guarantees atomicity of operations performed on one variable. CAS cannot directly guarantee atomicity of operations performed on multiple variables.

How to solve the atomic operation problem that can only guarantee one variable?

  • Consider using locks to ensure atomicity of operations
  • Consider merging variables, encapsulating variables into an object, and ensuring atomicity through an AtomicReference.

34. How does Java ensure atomicity? How do we make sure i++ gives the correct result under multiple threads?

  • Use an atomic class such as AtomicInteger, which makes i++ an atomic operation via a CAS spin loop (see the sketch after this list)
  • Use a lock from the juc package, such as ReentrantLock, and wrap the i++ operation in lock.lock()/unlock() to achieve atomicity
  • Use synchronized to lock the i++ operation
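A minimal sketch of the first option, using a hypothetical SafeCounter class:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger count = new AtomicInteger();

    public void increment() {
        count.incrementAndGet(); // An atomic i++ via CAS, no lock needed
    }

    public int get() {
        return count.get();
    }
}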

35. What do you know about atomic operation classes?

When a program updates a variable, if multiple threads update it at the same time, you might get an unexpected value. For example, with i = 1, thread A updates i + 1 and thread B also updates i + 1; after both threads run, i may not be 3 but 2, because both A and B read i as 1 when updating it. That is an unsafe update operation. The usual fix is synchronized, which guarantees that multiple threads do not update i at the same time.

There is actually a more lightweight option: since JDK 1.5, Java provides the java.util.concurrent.atomic package, whose atomic operation classes offer an efficient, easy-to-use, thread-safe way to update a variable.

Since there are many kinds of variables, the Atomic package contains 13 classes in total, falling into 4 groups of atomic update methods: atomically updating basic types, atomically updating arrays, atomically updating references, and atomically updating properties (fields).

The classes in the Atomic package are mostly wrapper classes implemented with Unsafe.

For atomically updating basic types, the Atomic package provides the following three classes:

  • AtomicBoolean: atomically updates boolean values.

  • AtomicInteger: atomically updates int values.

  • AtomicLong: atomically updates long values.

For atomically updating an element in an array, the Atomic package provides the following three classes:

  • AtomicIntegerArray: atomically updates elements of an int array.

  • AtomicLongArray: atomically updates elements of a long array.

  • AtomicReferenceArray: atomically updates elements of a reference-type array.

The AtomicIntegerArray class mainly provides an atomic way to update the integers in an array.

Basic-type atomic classes like AtomicInteger can only update a single variable. If multiple variables need to be updated atomically, the reference-type classes must be used. The Atomic package provides the following three classes:

  • AtomicReference: atomically updates references.

  • AtomicReferenceFieldUpdater: atomically updates a reference-type field.

  • AtomicMarkableReference: atomically updates a reference type with a mark bit; a boolean mark bit and a reference can be updated atomically together. The constructor is AtomicMarkableReference(V initialRef, boolean initialMark).

If you need to atomically update a field of some class, you need the atomic field updater classes. The Atomic package provides the following three classes for atomic field updates:

  • AtomicIntegerFieldUpdater: an updater that atomically updates int fields.
  • AtomicLongFieldUpdater: an updater that atomically updates long fields.
  • AtomicStampedReference: atomically updates a reference type with a version number. This class associates an integer value with a reference and can atomically update both the data and its version number, which solves the ABA problem that may occur with CAS-based atomic updates.

36. How does AtomicInteger work?

In a word: implement using CAS.

Take AtomicInteger as an example:

    public final int getAndIncrement() {
        // Atomically adds 1 to the value at valueOffset and returns the old value
        return unsafe.getAndAddInt(this, valueOffset, 1);
    }
Copy the code

The addition is delegated to an instance of the Unsafe class. Take a look at the actual CAS operation:

    public final int getAndAddInt(Object var1, long var2, int var4) {
        int var5;
        do {
            // Read the current value with volatile semantics
            var5 = this.getIntVolatile(var1, var2);
            // Retry until the CAS succeeds: expect var5, set var5 + var4
        } while(!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

        return var5;
    }
Copy the code

compareAndSwapInt is a native method that performs a CAS operation on an int variable. The other atomic operation classes work in much the same way.

37. Thread deadlock understood? How can it be avoided?

Deadlock refers to the phenomenon that two or more threads are waiting for each other during execution due to competing for resources. Without external force, these threads will wait for each other and cannot continue to run.

So why do deadlocks occur? The following four conditions must be met for a deadlock to occur:

  • Mutual exclusion: a resource can be occupied by only one thread at a time. If another thread requests the resource, the requester must wait until the thread holding it releases it.
  • Request and hold: a thread already holds at least one resource but requests a new one that is occupied by another thread; the requesting thread blocks without releasing the resources it already holds.
  • No preemption: a resource cannot be taken away from the thread holding it; the thread releases it only after it finishes using it.
  • Circular wait: when deadlock occurs, there must be a thread-resource circular chain, i.e. in the thread set {T0, T1, T2, ..., Tn}, T0 waits for a resource held by T1, T1 waits for a resource held by T2, ..., and Tn waits for a resource held by T0.

How do you avoid deadlocks? The answer is to break at least one condition for deadlocks to occur.

  • The mutual exclusion condition cannot be broken, because locking is mutually exclusive by nature. But the other three conditions can be broken. How?

  • For the "request and hold" condition: request all resources at once.

  • For the "no preemption" condition: a thread that holds some resources and fails to acquire the others releases what it holds voluntarily, which breaks the condition.

  • For the "circular wait" condition: acquire resources in a fixed order, i.e. give resources a linear ordering and always request lower-numbered resources before higher-numbered ones. After linearization no loop can form; a minimal sketch follows.
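A minimal sketch of breaking the circular-wait condition by always locking in the same order (lockA and lockB are illustrative):

```java
public class OrderedLocking {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    // Both tasks acquire lockA before lockB, so no circular wait can form
    public static void task1() {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println("task1 holds A and B");
            }
        }
    }

    public static void task2() {
        synchronized (lockA) { // same order as task1, never B then A
            synchronized (lockB) {
                System.out.println("task2 holds A and B");
            }
        }
    }
}
```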

38. How to check deadlocks?

You can use the JDK command-line tool to check whether:

  1. Use jps to find the running Java process: jps -l
  2. Use jstack to print the thread stack: jstack -l <pid>

You can basically see the deadlock information.

You can also use graphical tools, such as JConsole. After a thread deadlock occurs, click the Deadlock detected button in the JConsole thread panel to see the thread deadlock information.

Concurrent utility class

39. CountDownLatch?

CountDownLatch, essentially a countdown counter, has two common application scenarios [18]:

Scenario 1: Coordinating child thread termination: Wait for all child threads to finish

CountDownLatch allows one or more threads to wait for other threads to complete an operation.

For example, honor of Kings, which many of us like to play, has to wait until everyone is online before it can be played.

CountDownLatch mimics this scenario (see [18]) :

Create five players: Daqiao, Lanling King, Angela, Nezha, and Kai. The main thread must wait for all of them to confirm before it can continue.

In this code, new CountDownLatch(5) creates the latch with an initial count of 5; each player confirms readiness with countDownLatch.countDown(), and the main thread waits with countDownLatch.await().

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(5);
        Thread daQiao = new Thread(countDownLatch::countDown);
        Thread lanLingWang = new Thread(countDownLatch::countDown);
        Thread anQiLa = new Thread(countDownLatch::countDown);
        Thread neZha = new Thread(countDownLatch::countDown);
        Thread kai = new Thread(() -> {
            try {
                // Just a moment, bathroom break, I'll be right there...
                Thread.sleep(1500);
                countDownLatch.countDown();
            } catch (InterruptedException ignored) {
            }
        });
        daQiao.start();
        lanLingWang.start();
        anQiLa.start();
        neZha.start();
        kai.start();
        countDownLatch.await();
        System.out.println("All players in position!");
    }
Copy the code

Scenario 2. Coordinating the start time of child threads: Unify the start time of each thread

A similar scenario exists in the game: at the start of a match every player's initial state must be the same. You can't have some players already out of the base while others have just spawned.

So everyone has to "be born" at the same time.

In this scenario, five threads still represent the five players: Daqiao, Lanling King, Angela, Nezha, and Kai. Note that although every player's thread has called start(), at run time each waits on the countDownLatch and will not proceed until the signal arrives.

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(1);
        Thread daQiao = new Thread(() -> waitToFight(countDownLatch));
        Thread lanLingWang = new Thread(() -> waitToFight(countDownLatch));
        Thread anQiLa = new Thread(() -> waitToFight(countDownLatch));
        Thread neZha = new Thread(() -> waitToFight(countDownLatch));
        Thread kai = new Thread(() -> waitToFight(countDownLatch));
        daQiao.start();
        lanLingWang.start();
        anQiLa.start();
        neZha.start();
        kai.start();
        Thread.sleep(1000);
        countDownLatch.countDown();
        System.out.println("Hostiles have five seconds to reach the battlefield. All hands on deck!");
    }

    private static void waitToFight(CountDownLatch countDownLatch) {
        try {
            countDownLatch.await(); // Wait here for the signal before continuing
            System.out.println("Roger that, attack!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
Copy the code

CountDownLatch also has a few core methods:

  • await(): waits for the latch to count down to 0;
  • boolean await(long timeout, TimeUnit unit): waits for the latch to reach 0 with a timeout; if a player takes too long to confirm, re-match rather than wait forever;
  • countDown(): decreases the count by 1;
  • getCount(): gets the current count.

40. CyclicBarrier?

CyclicBarrier literally means CyclicBarrier. What it does is allow a group of threads to block when they reach a barrier (also known as a synchronization point), and the barrier will not open until the last thread reaches the barrier, and all threads blocked by the barrier will continue to run.

It is similar to CountDownLatch in that both can coordinate the completion of multiple threads and run a specific action when they all finish, but CyclicBarrier and CountDownLatch differ in important ways.

Have you heard of the young upstart John? He suffered two great heartbreaks in his life. Here is the story: John fell for Niigaki Yui, confessed to her, and waited for her answer. But she chose to marry Hoshino Gen instead. Heartbroken, John swore he would never marry, until he met Suzuki Ai; he decided to try again, confessed, and waited once more for a response.

We simulate this scenario in code and find that CountDownLatch cannot handle it: a CountDownLatch is one-shot and cannot be reused, yet here we have to wait twice. A CyclicBarrier can be reused, so it fits.
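A minimal sketch of that simulation (the names and messages are illustrative):

```java
import java.util.concurrent.CyclicBarrier;

public class ProposalDemo {
    public static void main(String[] args) throws Exception {
        // Two parties per round: John and the girl; the barrier action runs when both arrive
        CyclicBarrier barrier = new CyclicBarrier(2,
                () -> System.out.println("Both sides have responded."));

        propose(barrier, "Niigaki Yui"); // first wait
        propose(barrier, "Suzuki Ai");   // the barrier resets itself, so a second wait works
    }

    private static void propose(CyclicBarrier barrier, String girl) throws Exception {
        new Thread(() -> {
            try {
                System.out.println(girl + " is thinking it over...");
                barrier.await(); // the girl reaches the barrier
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
        System.out.println("John confesses to " + girl + " and waits...");
        barrier.await(); // John reaches the barrier; both arrived, it opens and resets
    }
}
```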


The core method of CyclicBarrier is still await():

  • If the current thread is not the last one to reach the barrier, it waits until all the others arrive, unless the barrier is broken by interruption, timeout, or reset.


41. What is the difference between CyclicBarrier and CountDownLatch?

The core difference between the two [18] :

  • A CountDownLatch is one-shot, while a CyclicBarrier can be reset and reused many times.
  • With CountDownLatch, the child threads only complete their own task and count down; they do not wait for each other. With CyclicBarrier, each thread waits for the others at the barrier.

They are distinguished by a table:

| CyclicBarrier | CountDownLatch |
| --- | --- |
| Reusable: threads wait for all threads to finish their tasks, then the barrier opens and an optional barrier action can run | One-time: different threads count down the same counter until it reaches zero |
| Oriented to a number of threads | Oriented to a number of tasks |
| The number of participating threads must be specified in the constructor, and those threads must call await() | The number of tasks must be specified; which threads perform them doesn't matter |
| Can be reused after all threads are released | No longer usable once the counter reaches 0 |
| If one waiting thread is interrupted, times out, etc., every thread in await() fails | If one thread has a problem, the other threads are unaffected |

42. Understand Semaphore?

Semaphore is used to control the number of threads accessing a particular resource at the same time. It coordinates threads to ensure proper use of common resources.

As abstract as it sounds, consider parking: cars are everywhere these days, and parking is one of the hardest parts of driving. A lot has a limited number of spaces and can only let a certain number of vehicles in. When spaces are free, the display board shows a green light and the number of remaining spaces, and cars may enter. When the lot is full, the board shows a red light and the number zero, and cars must wait. If a car leaves the full lot, the board turns green again and shows the free count, and a waiting car can enter.

A car entering the lot is like a thread starting to execute; a car leaving is a thread finishing; a car stopped at the red light is a thread blocked from executing.

Let’s look at one more use of Semaphore: It can be used for flow control, especially in applications where common resources are limited, such as database connections.

Suppose we need to read the data of tens of thousands of files. Since this is an IO-intensive task, we can start dozens of threads to read concurrently. But if the data must then be saved to a database, and there are only 10 database connections, we have to make sure that only 10 threads at a time obtain a connection to save data; otherwise connection-acquisition errors occur. Semaphore can do this flow control, as follows:

public class SemaphoreTest {
    private static final int THREAD_COUNT = 30;
    private static ExecutorService threadPool = Executors.newFixedThreadPool(THREAD_COUNT);
    private static Semaphore s = new Semaphore(10);

    public static void main(String[] args) {
        for (int i = 0; i < THREAD_COUNT; i++) {
            threadPool.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        s.acquire();          // Take one of the 10 permits
                        System.out.println("save data");
                        s.release();          // Return the permit
                    } catch (InterruptedException e) {
                    }
                }
            });
        }
        threadPool.shutdown();
    }
}
Copy the code

In the code, 30 threads are executing, but only 10 may run concurrently. Semaphore(int permits) takes an integer: the number of permits available. Semaphore(10) means 10 threads can obtain a permit, i.e. the maximum concurrency is 10. Using Semaphore is simple: a thread first obtains a permit with acquire(), then returns it with release(). You can also try to grab a permit with tryAcquire().

43. Do you understand Exchanger?

Exchanger is a tool class for collaboration between threads, used to exchange data between two threads. It provides a synchronization point at which the two threads can hand each other their data.

The threads exchange data through the exchange() method. If the first thread calls exchange() first, it waits until the second thread also calls exchange(); once both threads reach the synchronization point, they swap data, each passing what it produced to the other.

Exchanger can be used in genetic algorithms, where two individuals are selected for crossover: their data is exchanged and two offspring are produced using crossover rules. It can also be used for proofreading: for example, paper bank statements need to be typed into an electronic system by hand. To avoid mistakes, two people, A and B, each enter the data into an Excel sheet; the system then loads both sheets and compares them to check that the entries agree.

public class ExchangerTest {
    private static final Exchanger<String> exgr = new Exchanger<String>();
    private static ExecutorService threadPool = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    String A = "Bank statement A"; // A enters the bank statement data
                    exgr.exchange(A);
                } catch (InterruptedException e) {
                }
            }
        });
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    String B = "Bank statement B"; // B enters the bank statement data
                    String A = exgr.exchange(B);
                    System.out.println("Are data A and B consistent? " + A.equals(B)
                            + ", A typed: " + A + ", B typed: " + B);
                } catch (InterruptedException e) {
                }
            }
        });
        threadPool.shutdown();
    }
}
Copy the code

If one of the threads never calls exchange(), the other waits forever. To guard against that, use exchange(V x, long timeout, TimeUnit unit) to set a maximum waiting time.

The thread pool

44. What is a thread pool?

Thread pool: Simply put, it is a pool that manages threads.

  • It lowers resource consumption by reusing threads instead of constantly creating and destroying them. A thread is an object: creating one involves class loading, and destroying one goes through garbage collection, both of which cost resources.
  • It improves response time: when a task arrives, taking an existing thread from the pool is much faster than creating a new one.
  • It enables reuse: a thread that finishes its work goes back into the pool to be used again, saving resources.

45. Can you talk about the use of thread pools at work?

Previously we had a requirement to integrate with a third party and push data to them. Multithreading was introduced to speed up the push, and a thread pool was used to manage the threads.

The main code is as follows:

Full executable code address: gitee.com/fighter3/th…

The thread pool parameters are as follows:

  • corePoolSize: the core thread count, set to CPU count x 2

  • maximumPoolSize: the maximum thread count, set equal to the core thread count

  • keepAliveTime: the keep-alive time of idle non-core threads, set to 0

  • unit: the keep-alive time unit, TimeUnit.SECONDS

  • workQueue: the thread pool wait queue, using a LinkedBlockingQueue as the blocking queue
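A minimal sketch of a pool built with those parameters (the class and method names are illustrative; the real code is at the Gitee link above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PushPoolConfig {
    public static ExecutorService buildPushPool() {
        int core = Runtime.getRuntime().availableProcessors() * 2;
        return new ThreadPoolExecutor(
                core, core,                    // core and max thread count: CPU count x 2
                0, TimeUnit.SECONDS,           // non-core keep-alive: 0 seconds
                new LinkedBlockingQueue<>());  // wait queue
    }
}
```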

synchronized is also used to lock the data, making sure nothing is pushed twice:

  synchronized (PushProcessServiceImpl.class) {}
Copy the code

Ps: This example is just a simple data push, in fact, can also be combined with other services, such as data cleaning, data statistics, can be applied.

46. Can you briefly describe the thread pool workflow?

To use a popular metaphor:

There is a business hall with six windows in total; three are open right now, each staffed by a teller.

What might happen when the third brother goes in to handle some business?

  1. The third brother finds a free window and goes straight to the teller.

  2. The third brother finds no free window and waits in line.

  3. The third brother finds no free window and the waiting area is full too. The manager asks the resting tellers to hurry back to work and calls customers with earlier queue numbers over to the new windows. If after a while the extra tellers find there is no more business for them, the manager lets them go back to rest.

  4. All six windows are busy and the waiting area has no seats left. The third brother loses his patience and wants to make a scene, so the manager rushes out. What can the manager do?

  1. Our banking system has crashed

  2. Go ask whoever sent you here

  3. Look how impatient you are, let you jump the queue

  4. Can't serve you today, come back another day

The flow above is pretty much the general flow of a JDK thread pool:

  1. The three open windows correspond to the core thread count: corePoolSize
  2. The total of six windows corresponds to: maximumPoolSize
  3. How long an idle temporary window stays open before it closes corresponds to: keepAliveTime and unit
  4. The waiting area is the wait queue: workQueue
  5. What the manager does when everything is full corresponds to the rejection policy: RejectedExecutionHandler
  6. threadFactory corresponds to the JDK's thread factory, used to create thread objects

So our thread pool workflow is easier to understand:

  1. When a thread pool is created, there are no threads in it. The task queue is passed in as a parameter. However, even if there are tasks in the queue, the thread pool will not execute them immediately.
  2. When the execute() method is called to add a task, the thread pool makes the following judgments:
  • If the number of running threads is less than corePoolSize, create a thread to run the task immediately.
  • If the number of running threads is greater than or equal to corePoolSize, the task is queued;
  • If the queue is full and the number of running threads is smaller than maximumPoolSize, create a non-core thread to run the task immediately.
  • If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool will process accordingly according to the rejection policy.

  3. When a thread completes a task, it takes the next task from the queue and executes it.

  4. When a thread has been idle longer than keepAliveTime, the pool checks: if the number of running threads is greater than corePoolSize, the thread is stopped. So after all tasks finish, the pool eventually shrinks back to corePoolSize threads.

47. What are the main parameters of the thread pool?

The seven parameters of a thread pool are corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, and handler.

  1. corePoolSize

This value initializes the number of core threads in the pool. When the thread count is below corePoolSize, a new thread is created for each submitted task. Once the thread count reaches corePoolSize, newly submitted tasks are appended to the workQueue.

  2. maximumPoolSize

maximumPoolSize is the maximum number of threads allowed (core threads + non-core threads). When the BlockingQueue is full but the total thread count is still below maximumPoolSize, a new (non-core) thread is created.

  3. keepAliveTime

The longest time a non-core thread (there can be up to maximumPoolSize - corePoolSize of them) may stay idle before being terminated.

  4. unit

The time unit for the keep-alive time of non-core threads:

  • TimeUnit.DAYS: days
  • TimeUnit.HOURS: hours
  • TimeUnit.MINUTES: minutes
  • TimeUnit.SECONDS: seconds
  • TimeUnit.MILLISECONDS: milliseconds
  • TimeUnit.MICROSECONDS: microseconds
  • TimeUnit.NANOSECONDS: nanoseconds

  5. workQueue

The wait queue holds Runnable tasks waiting to execute. When the running thread count equals corePoolSize, new tasks are added to the workQueue; if the workQueue is also full, the pool tries to run tasks with non-core threads. The wait queue should be bounded whenever possible.

  6. threadFactory

Factory used when creating a new thread, which can be used to specify the thread name, whether it is a daemon thread, etc.

  7. handler

The saturation (rejection) policy executed when the corePoolSize threads are busy, the workQueue is full, and maximumPoolSize has been reached.
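A minimal sketch putting the seven parameters together (all values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),        // workQueue: bounded, as recommended
                Executors.defaultThreadFactory(),    // threadFactory
                new ThreadPoolExecutor.AbortPolicy() // handler: throw on saturation
        );
        pool.execute(() ->
                System.out.println("task runs on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```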

48. What are the denial policies for thread pools?

To help you remember:

  • AbortPolicy: Directly throws an exception. This policy is used by default
  • CallerRunsPolicy: Executes the task with the caller’s thread
  • DiscardOldestPolicy: Discards the oldest task in the blocking queue, i.e. the task at the top of the queue
  • DiscardPolicy: Discards the current task

To implement your own rejection policy, implement the RejectedExecutionHandler interface.
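A minimal sketch of a custom policy that just logs and drops the task (the logging is illustrative):

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LogAndDiscardPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Invoked by the pool when the queue is full and maximumPoolSize is reached
        System.err.println("Task rejected: poolSize=" + executor.getPoolSize()
                + ", queueSize=" + executor.getQueue().size());
        // The task is simply dropped here; a real policy might persist or retry it
    }
}
```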

49. What kinds of work queues do thread pools have?

Commonly used blocking queues are as follows:

  • ArrayBlockingQueue: a bounded blocking queue backed by an array, FIFO ordered.
  • LinkedBlockingQueue: a blocking queue backed by a linked list, FIFO ordered. If no capacity is given, it is effectively unbounded with a maximum length of Integer.MAX_VALUE. Throughput is usually higher than ArrayBlockingQueue. The newFixedThreadPool pool uses this queue.
  • DelayQueue: a queue for tasks that run after a delay or on a schedule. Elements are ordered by their scheduled execution time, earliest first, otherwise by insertion order. The newScheduledThreadPool pool uses this kind of queue.
  • PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
  • SynchronousQueue: a blocking queue that stores no elements. Every insert must wait for another thread's remove, otherwise the insert blocks. Throughput is usually higher than LinkedBlockingQueue. The newCachedThreadPool pool uses this queue.

50. What is the difference between execute and submit?

  1. execute() is used to submit tasks that do not need a return value
threadsPool.execute(new Runnable() { 
    @Override 
    public void run() { 
        // TODO Auto-generated method stub
    } 
});
Copy the code
  2. submit() is used to submit tasks that need a return value. The pool returns a Future object, which tells you whether the task ran successfully, and the return value can be fetched with the Future's get() method
Future<Object> future = executor.submit(harReturnValuetask); 
try { 
    Object s = future.get(); 
} catch (InterruptedException e) { 
    // Handle the interrupt exception
} catch (ExecutionException e) { 
    // Handle the exception when the task cannot execute
} finally { 
    // Close the thread pool
    executor.shutdown();
}
Copy the code

51. Do you know how to close the thread pool?

A thread pool can be shut down by calling its shutdown or shutdownNow method. Both work by iterating over the pool's worker threads and interrupting them one by one via the thread's interrupt() method, so tasks that never respond to interruption may never terminate.

shutdown() sets the thread pool state to SHUTDOWN and does not stop it immediately:

  1. It stops accepting externally submitted tasks
  2. Tasks already running and tasks waiting in the queue run to completion
  3. Only after step 2 finishes does the pool really stop

shutdownNow() sets the thread pool state to STOP. It usually stops right away, but not always:

  1. As with shutdown(), stop receiving externally submitted tasks first
  2. Ignore tasks waiting in the queue
  3. Try to interrupt a running task
  4. Returns a list of unexecuted tasks

The simple differences between Shutdown and ShutdownNow are as follows:

  • shutdownNow() stops the pool immediately: both running and waiting tasks are stopped. It takes effect at once, but is risky;
  • shutdown() merely closes the submission gate: submit() no longer works, while tasks already inside run to completion before the pool fully stops.

52. How should the number of threads in the thread pool be configured?

Threads are a scarce resource in Java, and a thread pool is not better the bigger or the smaller it is. Tasks fall into compute-intensive (CPU-intensive), IO-intensive, and mixed types.

  1. Compute-intensive: mostly CPU and memory work, such as encryption, logical operations, and business processing.
  2. IO-intensive: database connections, network communication and transfers, etc.

As a general rule of thumb, the parameter configuration for the different types:

  1. Compute-intensive: the pool generally should not be too large, usually CPU count + 1. The extra thread covers page faults (some data may still be on disk, and one more thread can run while another waits for the read). If the pool is too large, frequent thread context switching and task scheduling hurt performance. Get the current CPU core count like this:
Runtime.getRuntime().availableProcessors();
Copy the code
  2. IO-intensive: the thread count can be appropriately larger, commonly 2 x the machine's CPU core count.
  3. Mixed: depending on the situation, it can be split into a CPU-intensive part and an IO-intensive part. If their execution times do not differ much, splitting improves throughput; if they differ a lot, splitting is unnecessary.

Of course, there is no fixed formula in practical application, which needs to be adjusted in combination with testing and monitoring.
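Still, the rules of thumb above can be written down; a minimal sketch (the 2x multiplier is a common heuristic, not a law):

```java
public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Compute-intensive: cores + 1, the extra thread covers page-fault stalls
        int cpuBound = cores + 1;

        // IO-intensive: threads spend most of their time waiting, so oversubscribe
        int ioBound = cores * 2;

        System.out.println("cores=" + cores
                + ", cpuBound=" + cpuBound + ", ioBound=" + ioBound);
    }
}
```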

53. What are the common thread pools?

There are four main kinds of thread pool created via the utility class Executors. Note that Alibaba's Java Development Manual forbids creating thread pools this way.

  • newFixedThreadPool (a pool with a fixed number of threads)

  • newCachedThreadPool (a pool of cacheable threads)

  • newSingleThreadExecutor (a single-threaded pool)

  • newScheduledThreadPool (a pool for scheduled and periodic execution)

54. Can you explain how four common thread pools work?

The construction of the first three thread pools directly calls the constructor of ThreadPoolExecutor.

newSingleThreadExecutor

  public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>(),
                                    threadFactory));
    }
Copy the code

Thread pool characteristics

  • The number of core threads is 1
  • The maximum number of threads is also 1
  • The blocking queue is the unbounded queue LinkedBlockingQueue, which may result in OOM
  • KeepAliveTime of 0

Workflow:

  • Submit a task
  • If there is no thread in the pool, create a new thread to execute the task
  • If there is, add the task to the blocking queue
  • The single thread keeps taking tasks from the queue: it finishes one, then fetches the next, one thread executing tasks one at a time.

Applicable scenario

This mode applies to the scenario where tasks are executed sequentially, one by one.

newFixedThreadPool

  public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>(),
                                      threadFactory);
    }
Copy the code

Thread pool features:

  • The number of core threads is the same size as the maximum number of threads
  • There is no idle timeout to speak of: keepAliveTime is 0
  • Blocking queue is unbounded queue LinkedBlockingQueue and may result in OOM

Workflow:

  • Submit a task
  • If the thread count is below the core count, a core thread is created to execute the task
  • If the thread count equals the core count, the task is added to the LinkedBlockingQueue
  • When a thread finishes a task, it takes another from the blocking queue and continues executing.

Usage scenarios

FixedThreadPool is suitable for CPU-intensive tasks: it caps the thread count so that long-lived worker threads keep the CPU busy without over-allocating it. In other words, it suits long-running tasks.

newCachedThreadPool

   public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>(),
                                      threadFactory);
    }
Copy the code

Thread pool features:

  • The number of core threads is 0
  • The maximum number of threads is Integer.MAX_VALUE
  • The blocking queue is SynchronousQueue
  • Non-core threads have an idle lifetime of 60 seconds

When tasks are submitted faster than they are processed, each submission creates a new thread; in extreme cases so many threads are created that CPU and memory are exhausted. On the other hand, because threads idle for 60 seconds are terminated, a CachedThreadPool that stays idle for a long time holds no resources at all.

Workflow:

  • Submit a task
  • Because there are no core threads, the task is added directly to the SynchronousQueue.
  • Determine if there are idle threads, and if there are, fetch the task and execute it.
  • If there are no free threads, a new thread is created to execute.
  • A thread that finishes its task can stay alive for 60 seconds; if it receives a new task within that window it keeps running, otherwise it is destroyed.

Applicable scenario

Used to perform a large number of small, short-term tasks concurrently.

newScheduledThreadPool

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }
Copy the code

Thread pool characteristics

  • The maximum thread count is Integer.MAX_VALUE, so there is an OOM risk here too
  • The blocking queue is a DelayedWorkQueue
  • keepAliveTime is 0
  • scheduleAtFixedRate(): executes a task at a fixed rate
  • scheduleWithFixedDelay(): executes a task with a fixed delay between runs

Working mechanism

  • A thread takes a due ScheduledFutureTask from the DelayQueue (DelayQueue.take()), i.e. one whose scheduled time has arrived (current time >= the task's time).
  • The thread executes the ScheduledFutureTask.
  • The thread updates the ScheduledFutureTask's time variable to its next execution time.
  • The thread puts the updated ScheduledFutureTask back into the DelayQueue (DelayQueue.add()).

Usage scenarios

Scenarios where tasks are executed periodically and the number of threads needs to be limited
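A minimal sketch of periodic execution with a scheduled pool (the delay, period, and pool size are illustrative; the demo runs until killed):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // Run every 2 seconds after an initial 1-second delay
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick " + System.currentTimeMillis()),
                1, 2, TimeUnit.SECONDS);
    }
}
```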

Do thread pools with unbounded queues cause any problems?

It can. For example, newFixedThreadPool uses the unbounded LinkedBlockingQueue. If tasks take long to execute, more and more tasks pile up in the queue, the machine's memory usage keeps climbing, and eventually OOM.

Do you know how to handle the thread pool exception?

When a task runs in a thread pool, the task code may throw a RuntimeException. The exception can end up swallowed, and the pool may replace the broken thread with a new one, so we might never notice that the task failed. That is why thread pool exceptions deserve attention.

Common exception handling methods:
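A minimal sketch of two common approaches, catching inside the task and checking the Future (names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Approach 1: catch exceptions inside the task itself
        pool.execute(() -> {
            try {
                throw new RuntimeException("boom in execute");
            } catch (Exception e) {
                System.err.println("caught in task: " + e.getMessage());
            }
        });

        // Approach 2: submit() wraps the exception; Future.get() rethrows it
        Callable<String> failing = () -> {
            throw new RuntimeException("boom in submit");
        };
        Future<String> future = pool.submit(failing);
        try {
            future.get();
        } catch (ExecutionException e) {
            System.err.println("caught via Future: " + e.getCause().getMessage());
        }

        pool.shutdown();
    }
}
```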

56. Can you tell us how many states thread pools have?

A thread pool has these states: RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED.

   // Thread pool status
   private static final int RUNNING    = -1 << COUNT_BITS;
   private static final int SHUTDOWN   =  0 << COUNT_BITS;
   private static final int STOP       =  1 << COUNT_BITS;
   private static final int TIDYING    =  2 << COUNT_BITS;
   private static final int TERMINATED =  3 << COUNT_BITS;
Copy the code

Thread pool state switching diagram:

RUNNING

  • The thread pool in this state receives new tasks and processes tasks in the blocking queue;
  • Call the shutdown() method of the thread pool to switch to the shutdown state;
  • Call the thread pool shutdownNow() method to switch to the STOP state;

SHUTDOWN

  • A thread pool in this state does not accept new tasks but keeps processing tasks in the blocking queue;
  • When the queue is empty and no tasks are running in the pool, it enters the TIDYING state.

STOP

  • A thread pool in this state does not accept new tasks, does not process tasks in the blocking queue, and interrupts running tasks;
  • When no tasks are running in the pool, it enters the TIDYING state.

TIDYING

  • This state means all tasks have terminated and the recorded task count is 0;
  • After the terminated() hook method runs, the state becomes TERMINATED

TERMINATED

  • This status indicates that the thread pool is completely terminated

57. How does a thread pool implement dynamic parameter modification?

ThreadPoolExecutor provides several setter methods for adjusting its parameters at runtime.

There are two main ideas:

  • Under our microservices architecture, you can use configuration centers such as Nacos, Apollo, etc., or you can develop your own configuration centers. The business service reads the thread pool configuration and obtains the corresponding thread pool instance to modify the thread pool parameters.

  • If a configuration center is not an option, you can also extend ThreadPoolExecutor yourself, listening for parameter changes and updating them dynamically, as sketched below.
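A minimal sketch of the setter approach; setCorePoolSize, setMaximumPoolSize, and setKeepAliveTime are real ThreadPoolExecutor methods, while the values here are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DynamicPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

        // Suppose these values were just read from a config center (Nacos, Apollo, ...)
        pool.setMaximumPoolSize(8);
        pool.setCorePoolSize(4);
        pool.setKeepAliveTime(30, TimeUnit.SECONDS);

        System.out.println("core=" + pool.getCorePoolSize()
                + ", max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```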

Thread pool tuning?

There is no fixed formula for thread pool configuration. Thread pools are usually evaluated in advance. Common evaluation schemes are as follows:

Test thoroughly before going live, and establish a sound thread pool monitoring mechanism after going live.

When problems occur, analyze them with the monitoring and alerting mechanism, find the optimization points, and adjust the configuration through the dynamic-parameter mechanism.

Afterwards, keep observing carefully and tune whenever needed.

Specific tuning cases can be found on the Meituan technology blog [7].

Can you design and implement a thread pool?

⭐ this question appears frequently in Ali’s interview

Of course we can implement one ourselves; we just need to grasp the core flow of a thread pool (see [6]):

Our own implementation completes this core process:

  • There are N worker threads in the thread pool
  • Submit tasks to a thread pool to run
  • If the thread pool is full, queue the task
  • Finally, when there is free time, the task in the queue is acquired to execute

Implementation code [6] :
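A minimal sketch of this core flow, with illustrative class and method names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MyThreadPool {
    private final BlockingQueue<Runnable> workQueue;
    private final List<Worker> workers = new ArrayList<>();

    public MyThreadPool(int poolSize, BlockingQueue<Runnable> workQueue) {
        this.workQueue = workQueue;
        // Start a fixed number of worker threads up front
        for (int i = 0; i < poolSize; i++) {
            Worker worker = new Worker();
            worker.start();
            workers.add(worker);
        }
    }

    // Submit a task: it goes to the queue and blocks if the queue is full
    public void execute(Runnable task) throws InterruptedException {
        workQueue.put(task);
    }

    // Worker threads loop forever, taking tasks from the queue when idle
    private class Worker extends Thread {
        @Override
        public void run() {
            while (true) {
                try {
                    Runnable task = workQueue.take(); // blocks while the queue is empty
                    task.run();
                } catch (InterruptedException e) {
                    break; // exit on interrupt
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MyThreadPool pool = new MyThreadPool(2, new LinkedBlockingQueue<>(10));
        for (int i = 0; i < 5; i++) {
            int taskNo = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " runs task " + taskNo));
        }
    }
}
```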

That gives us a class implementing the main flow of a thread pool.

59. What should I do if a single machine thread pool fails?

We can apply transaction management or persistence to the task being processed and the tasks blocked in the queue. When execution cannot continue because of a power failure or system crash, the operations that already succeeded can be rolled back by replaying an undo log, and then the whole blocking queue can be re-executed.

In short: persist the blocking queue; wrap task processing in transaction control; after the power failure, roll back the in-flight task using the log so the operation can be recovered; reload the blocking queue data after the server restarts.

Concurrent containers and frameworks

For concurrent containers, see the Java collections installment of this counterattack series (30 questions), which includes Q&As on the two thread-safe containers CopyOnWriteArrayList and ConcurrentHashMap.

60. Understand the Fork/Join framework?

The Fork/Join framework, introduced in Java 7, executes tasks in parallel: it splits a big task into several small tasks and finally combines the results of the small tasks into the result of the big task.

To master the Fork/Join framework, you first need to understand two things: divide and conquer, and the work-stealing algorithm.

Divide and conquer

In fact, the definition of Fork/Join framework reflects the idea of divide and conquer: a problem with a scale of N is decomposed into K smaller sub-problems, which are independent from each other and have the same nature as the original problem. The solution of the original problem can be obtained by solving the subproblem.

Work-stealing algorithm

A large task is split into smaller tasks, which are placed in different queues, and a separate thread is created for each queue to execute its tasks.

A problem then arises: some threads finish the tasks in their own queue early while others are still grinding away. A finished thread shouldn't sit idle; it helps the unfinished ones by stealing a task from another thread's queue to execute. This is called work stealing.

When work stealing happens, two threads access the same queue. To reduce contention between the stealing thread and the queue's owner, the queue is usually a double-ended queue (deque): the owner always takes tasks from the head, while the stealing thread takes tasks from the tail.

Look at a Fork/Join example that computes the sum 1+2+3+...+n:

  • Set a threshold for splitting: if a task is larger than the threshold, split it
  • The task produces a result, so it extends RecursiveTask
public class CountTask extends RecursiveTask<Integer> {
    private static final int THRESHOLD = 16; // threshold
    private int start;
    private int end;

    public CountTask(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        int sum = 0;
        // If the task is small enough, compute it directly
        boolean canCompute = (end - start) <= THRESHOLD;
        if (canCompute) {
            for (int i = start; i <= end; i++) {
                sum += i;
            }
        } else {
            // If the task is larger than the threshold, split it into two subtasks
            int middle = (start + end) / 2;
            CountTask leftTask = new CountTask(start, middle);
            CountTask rightTask = new CountTask(middle + 1, end);
            // Execute the subtasks
            leftTask.fork();
            rightTask.fork();
            // Wait for the subtasks to finish and get their results
            int leftResult = leftTask.join();
            int rightResult = rightTask.join();
            // Merge the subtask results
            sum = leftResult + rightResult;
        }
        return sum;
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        // Create a task that computes 1+2+...+100
        CountTask task = new CountTask(1, 100);
        // Execute the task
        Future<Integer> result = forkJoinPool.submit(task);
        try {
            System.out.println(result.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
Copy the code

A ForkJoinTask differs from an ordinary task mainly in that it must implement the compute() method. The task first checks whether it is small enough; if so, it executes directly. If it is larger than the threshold, it splits into two subtasks; when each subtask calls fork(), compute() runs again to decide whether the current subtask needs further splitting. The join() method waits for a subtask to finish and returns its result.



Reference:

[1]. The Art of Java Concurrent Programming

[2]. Java Development Programming Practice

[3]. A personal collection of 80+ multithreading and concurrency questions (answers to 1-10): juejin.cn/post/685457…

[4]. Ai Xiaoxian, I Want to Enter Dachang

[5]. Java concurrency basics, organized with a mind map: fighter3.blog.csdn.net/article/det…

[6]. Geek Time, Java Concurrent Programming in Action

[7]. The Beauty of Concurrent Programming in Java

[8]. Illustrated: a chat about locks and AQS (if you don't get it, come find me): juejin.cn/post/689627…

[9]. Understanding the Java Virtual Machine

[10]. How to implement a blocking queue: juejin.cn/post/697794…

[11]. Seriously, this will absolutely make learning thread pools easy: mp.weixin.qq.com/s/dTMH1Tdxi…

[12]. Java thread pools: juejin.cn/post/684490…

[13]. The interviewer asks, "Have you ever used multithreading in a project?" Just tell him this case!: juejin.cn/post/693645…

[14]. Java Interview Manual

[15]. Java thread pool implementation principles and practice in Meituan's business: tech.meituan.com/2020/04/02/…

[16]. Interviewer: Young man, have you read the ThreadLocal source code? (a ten-thousand-word deep dive into ThreadLocal): juejin.cn/post/684490…

[17]. The interviewer asked me what the JMM is: zhuanlan.zhihu.com/p/258393139

[18]. The King of Glory Concurrency Class: juejin.cn/column/6963…

[19]. The synchronized lock upgrade process in detail: www.cnblogs.com/suixing123/…