This article pulls together several recent posts on concurrent programming; after reading them, the concurrency questions asked in big-company interviews should be much less of a headache. Below is a summary of several multi-threaded interview questions that come up very frequently, which I hope helps with both study and interviews. Note: typing out the code in the article yourself works even better!

  • Synchronized keyword usage, underlying principles, underlying optimizations after JDK1.6, and comparison with ReenTrantLock
  • Summary of JUC’s Atomic classes
  • Prerequisite for concurrent programming interview: AQS principle and AQS synchronization component summary

This article has been added to the open source document JavaGuide (a document that covers the core knowledge that most Java programmers need to know). Address: github.com/Snailclimb/… .


Synchronized in an interview

1.1 Say what you know about synchronized

The synchronized keyword addresses the synchronization of access to resources between multiple threads. The synchronized keyword ensures that only one thread can execute a method or code block modified by it at any time.

In addition, synchronized was a heavyweight lock in earlier versions of Java. It was inefficient because the monitor lock relies on the underlying operating system's mutex, and Java threads are mapped onto native OS threads: suspending or waking a thread requires help from the operating system, and switching between threads means crossing from user mode into kernel mode, which takes relatively long and is relatively costly. That is why early synchronized was slow. The good news is that since Java 6 the JVM has optimized synchronized, so its lock efficiency is now much better. JDK 1.6 introduced a number of lock optimizations, such as spin locks, adaptive spinning, lock elimination, lock coarsening, biased locking, and lightweight locking, all aimed at reducing the overhead of lock operations.

1.2 How do you use the keyword synchronized in the project

The three main ways of using synchronized:

  • Modifying an instance method: locks the current object instance; a thread must acquire the instance's lock before entering the synchronized code.
  • Modifying a static method: locks the current Class object. A static member of a class does not belong to any instance, so if thread A calls a non-static synchronized method on an instance while thread B calls a static synchronized method of the same class, both are allowed and no mutual exclusion occurs: the lock for a static synchronized method is the class lock, while the lock for a non-static synchronized method is the instance lock.
  • Modifying a code block: locks the object given in parentheses; a thread must acquire that object's lock before entering the synchronized block. Like a synchronized instance method, a synchronized(this) block locks the current object; static synchronized methods and synchronized(SomeClass.class) blocks lock the Class object; adding synchronized to a non-static method locks the object instance. Also note: avoid synchronized(String a), because the JVM caches String literals in the string constant pool, so unrelated code may end up sharing the same lock.
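The three usages above can be sketched in one small class (the class and method names here are made up for illustration):

```java
class SyncForms {
    private static int classCount = 0;      // guarded by the SyncForms.class lock
    private int instanceCount = 0;          // guarded by `this` or by `lock`
    private final Object lock = new Object();

    // 1. Instance method: acquires the lock on the current instance (this)
    public synchronized void incInstance() { instanceCount++; }

    // 2. Static method: acquires the lock on the Class object (SyncForms.class)
    public static synchronized void incClass() { classCount++; }

    // 3. Code block: acquires the lock on the explicitly given object
    public void incWithBlock() {
        synchronized (lock) { instanceCount++; }
    }

    public int instanceCount() { return instanceCount; }
    public static int classCount() { return classCount; }
}
```

Note that incInstance() and incClass() can run concurrently on different threads, since they take different locks.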

Here’s an example of synchronized in a typical interview.

Often in interviews, the interviewer will say: "You know the singleton pattern? Write one for me! And explain how double-checked locking implements the singleton pattern!"

Double-checked locking implementation of a singleton (thread-safe)

public class Singleton {

    private volatile static Singleton uniqueInstance;

    private Singleton() {}

    public static Singleton getUniqueInstance() {
        // Check whether the object has already been instantiated before entering the locked block
        if (uniqueInstance == null) {
            // Lock on the Class object
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }
}

Also note that it is necessary to mark uniqueInstance with the volatile keyword.

The statement uniqueInstance = new Singleton(); is not atomic — it actually executes in three steps:

  1. Allocate memory space for uniqueInstance
  2. Initialize uniqueInstance
  3. Point uniqueInstance to the allocated memory address

However, because the JVM may reorder instructions, the actual order of execution can become 1->3->2. Instruction reordering causes no problem in a single-threaded environment, but in a multi-threaded environment it can let a thread obtain an uninitialized instance. For example, if thread T1 has executed steps 1 and 3 and thread T2 then calls getUniqueInstance() and finds uniqueInstance non-null, T2 returns uniqueInstance even though it has not yet been initialized.

Using volatile prevents the JVM from reordering instructions, ensuring that they can run properly in multithreaded environments.

1.3 The basic principle of synchronized keyword

The underlying principles of the synchronized keyword belong at the JVM level.

① Synchronized code blocks

public class SynchronizedDemo {
	public void method() {
		synchronized (this) {
			System.out.println("Synchronized code block");
		}
	}
}

Use the javap command to inspect the bytecode of the SynchronizedDemo class: first compile with javac to produce the .class file, then run javap -c -s -v -l SynchronizedDemo.class.

From the above we can see:

Synchronized blocks are implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. When monitorenter executes, the thread attempts to acquire the lock, that is, the monitor. (A monitor object exists in the object header of every Java object, and this is how synchronized acquires its lock — which is why any Java object can be used as a lock.) If the monitor's counter is 0, acquisition succeeds and the counter is set to 1. Correspondingly, after monitorexit the counter is set back to 0, indicating that the lock has been released. If acquiring the object lock fails, the current thread blocks and waits until another thread releases the lock.

② Synchronized modified methods

public class SynchronizedDemo2 {
	public synchronized void method() {
		System.out.println("A synchronized method");
	}
}

A synchronized method produces no monitorenter or monitorexit instructions. Instead, the method carries the ACC_SYNCHRONIZED flag, which identifies it as a synchronized method. The JVM checks the ACC_SYNCHRONIZED access flag to decide whether a method is declared synchronized and, if so, performs the corresponding synchronized invocation.

1.4 Talk about the underlying optimizations of the synchronized keyword after JDK1.6 — can you introduce these optimizations in detail?

JDK1.6 introduced a number of optimizations to the lock implementation, such as biased locking, lightweight locking, spin locks, adaptive spinning, lock elimination, and lock coarsening, to reduce the overhead of lock operations.

A lock exists mainly in four states, in order: no-lock, biased lock, lightweight lock, and heavyweight lock. The lock is gradually upgraded as contention intensifies. Note that locks can be upgraded but not downgraded; this strategy is intended to improve the efficiency of acquiring and releasing locks.

See detailed information on these optimizations: synchronized keyword usage, underlying principles, underlying optimizations after JDK1.6, and comparison with ReentrantLock

1.5 Talk about the difference between synchronized and ReentrantLock

① Both are reentrant locks

Both are reentrant locks. A "reentrant lock" means a thread can acquire a lock it already holds: if a thread holds a lock on an object and has not yet released it, it can still acquire that lock when it requests it again. If the lock were not reentrant, this would deadlock. Each time the same thread acquires the lock, its counter increases by 1, and the lock is only fully released when the counter drops back to 0.
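A minimal sketch of reentrancy with synchronized (class and method names are made up for illustration): the thread already holds the lock on `this` when outer() calls inner(), and the second acquisition succeeds instead of deadlocking.

```java
class ReentrantDemo {
    private int depth = 0;

    // outer() acquires the lock on `this`, then calls inner(), which
    // acquires the same lock again; synchronized is reentrant, so this
    // succeeds rather than deadlocking.
    public synchronized int outer() {
        depth++;
        return inner();
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }
}
```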

② Synchronized relies on the JVM while ReentrantLock relies on the API

Synchronized is implemented by the JVM. As mentioned earlier, the virtual machine team made many optimizations to the synchronized keyword in JDK1.6, but those optimizations live at the VM level and are not exposed to us directly. ReentrantLock is implemented at the JDK level (that is, at the API level — you call lock() and unlock(), typically paired with a try/finally block), so we can read its source code to see how it is implemented.

③ ReentrantLock adds some advanced features over synchronized

ReentrantLock adds some advanced features compared with synchronized. There are three main points: ① waiting can be interrupted; ② it can implement a fair lock; ③ it can implement selective notification (a lock can be bound to multiple conditions).

  • ReentrantLock provides a mechanism to interrupt a thread waiting for a lock, via lock.lockInterruptibly(). This means a waiting thread can choose to give up waiting and do something else instead.
  • ReentrantLock can be either a fair or an unfair lock, while synchronized can only be unfair. A fair lock means the thread that has waited longest gets the lock first. ReentrantLock is unfair by default; fairness is chosen through the ReentrantLock(boolean fair) constructor.
  • The synchronized keyword, combined with wait() and notify()/notifyAll(), implements a wait/notify mechanism; the ReentrantLock class does the same with the help of the Condition interface and newCondition(). Condition, introduced in JDK1.5, is very flexible: multiple Condition instances (object monitors) can be created on one Lock object, and thread objects can register with a specific Condition, so threads can be notified selectively and scheduling becomes more flexible. With notify()/notifyAll(), the notified thread is chosen by the JVM, whereas using ReentrantLock together with Condition instances implements "selective notification", which is very important. The synchronized keyword is equivalent to a single Condition instance on the lock object with which all threads are registered: notifyAll() wakes all waiting threads, which is inefficient, whereas a Condition instance's signalAll() wakes only the waiting threads registered with that Condition.
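As a sketch of selective notification, here is the classic bounded-buffer pattern (the class name and the capacity of 16 are illustrative): two Condition instances on one Lock let put() wake only waiting takers and take() wake only waiting putters.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // putters wait here
    private final Condition notEmpty = lock.newCondition(); // takers wait here
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await(); // buffer full: wait
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal(); // selectively wake a taker, not every waiter
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await(); // buffer empty: wait
            Object x = items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal(); // selectively wake a putter
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

With a single synchronized monitor you would have to notifyAll() and let putters and takers all wake up and recheck; here each group sleeps on its own Condition.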

If you need any of the above features, then ReentrantLock is a good choice.

④ Performance is no longer a selection criterion. Since the JDK1.6 optimizations to synchronized, the two perform roughly the same in most scenarios, so the choice should be based on the features needed rather than on performance.

The volatile keyword in the interview

2.1 Talk about the Java memory model

Prior to JDK1.2, Java's memory model implementation always read variables from main memory (that is, shared memory), requiring no special attention. Under the current Java memory model, threads can keep variables in local memory (such as machine registers) rather than reading or writing main memory directly. This can cause one thread to change the value of a variable in main memory while another thread continues to use its stale copy in a register, resulting in data inconsistency.

To solve this problem, declare the variable volatile, which indicates to the JVM that the variable is volatile and that it is read into main memory each time it is used.

In plain English, the primary purpose of the volatile keyword is to ensure visibility of variables and then to prevent instruction reordering.

2.2 Differences between synchronized and volatile

A comparison of the synchronized keyword with the volatile keyword:

  • Volatile is a lightweight implementation of thread synchronization, so volatile certainly performs better than synchronized. However, volatile can only modify variables, while synchronized can modify methods and code blocks. After JavaSE1.6, the efficiency of synchronized improved significantly — mainly through biased locking and lightweight locking, introduced to reduce the cost of acquiring and releasing locks, among other optimizations — so in actual development synchronized is used in more scenarios.
  • Multithreaded access to the volatile keyword does not block, while the synchronized keyword may block
  • The volatile keyword guarantees visibility, but not atomicity. The synchronized keyword guarantees both.
  • The volatile keyword addresses the visibility of variables across multiple threads, while the synchronized keyword addresses the synchronization of access to resources across multiple threads.
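A small illustrative sketch of the visibility point (class and method names are made up): the worker thread spins on a volatile flag, so the writer's update is guaranteed to become visible and the loop terminates.

```java
class StopFlagDemo {
    private volatile boolean running = true;

    // Returns true if the worker observed the flag flip and exited.
    public boolean run() {
        Thread worker = new Thread(() -> {
            while (running) {
                // volatile forces a fresh read of `running` every iteration;
                // without it, the JIT could hoist the read and spin forever
            }
        });
        worker.start();
        try {
            Thread.sleep(50);        // let the worker start spinning
            running = false;         // this write is visible to the worker
            worker.join(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return !worker.isAlive();
    }
}
```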

The thread pool in the interview

3.1 Why thread pools?

Thread pools provide a way to limit and manage resources (including performing a task). Each thread pool also maintains some basic statistics, such as the number of completed tasks.

Here are some of the benefits of using thread pools, borrowed from The Art of Concurrent Programming in Java:

  • Reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.
  • Improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.
  • Improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.

3.2 Difference between Runnable interface and Callable interface

If you want the thread pool to execute a task, the task needs to implement the Runnable or Callable interface. Implementation classes of either interface can be executed by ThreadPoolExecutor or ScheduledThreadPoolExecutor. The difference is that the Runnable interface does not return a result while the Callable interface does.

Note: the Executors utility class can convert a Runnable object into a Callable (Executors.callable(Runnable task) or Executors.callable(Runnable task, T result)).

3.3 What is the difference between execute() and submit()?

1) execute() is used to submit tasks that do not require a return value, so it is impossible to determine whether the thread pool executed the task successfully;

2) submit() is used to submit tasks that do require a return value. The thread pool returns a Future object, through which you can determine whether the task executed successfully, and the return value can be retrieved with the Future's get() method. get() blocks the current thread until the task completes, while get(long timeout, TimeUnit unit) blocks the current thread for at most the given time and then returns, possibly before the task has finished.
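A sketch of the difference (class and method names are illustrative): execute() takes a Runnable and gives nothing back, while submit() with a Callable returns a Future whose get() blocks for the result.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class SubmitDemo {
    // execute(): fire-and-forget; submit(): returns a Future we can query.
    static String runOnce() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            pool.execute(() -> System.out.println("no result to inspect"));
            Future<String> future = pool.submit(() -> "done"); // Callable<String>
            return future.get();     // blocks until the task completes
        } catch (InterruptedException | ExecutionException e) {
            return "error";
        } finally {
            pool.shutdown();
        }
    }
}
```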

3.4 How to Create a Thread Pool

Alibaba's Java Development Manual mandates that thread pools be created through the ThreadPoolExecutor constructor rather than through Executors.

Drawbacks of the thread pool objects returned by Executors:

  • FixedThreadPool and SingleThreadExecutor: allow a request queue length of Integer.MAX_VALUE, so requests can pile up and cause OOM.
  • CachedThreadPool and ScheduledThreadPool: allow the number of created threads to be Integer.MAX_VALUE, which may create a huge number of threads and cause OOM.

Method 1: through the constructor

Method 2: Use the Executor framework tool Executors to implement the configuration

  • FixedThreadPool: this method returns a thread pool with a fixed number of threads, which never changes. When a new task is submitted, it is executed immediately if there is an idle thread in the pool; otherwise the task is temporarily stored in a task queue and processed once a thread becomes idle.
  • SingleThreadExecutor: this method returns a thread pool with only one thread. If additional tasks are submitted, they are stored in a task queue and executed in first-in, first-out order once the thread is idle.
  • CachedThreadPool: this method returns a thread pool that adjusts the number of threads as needed. The pool size is not fixed: if there are idle reusable threads, they are used first; if all threads are working when a new task is submitted, a new thread is created for it. All threads return to the pool for reuse after completing their current task.
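Method 1 above — direct use of the ThreadPoolExecutor constructor, as the Alibaba manual recommends — can be sketched as follows. The pool sizes, queue capacity, and rejection policy below are illustrative choices, not prescriptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class PoolFactory {
    // A bounded pool: a capped queue and an explicit rejection policy
    // avoid the OOM pitfalls of the Executors factory methods.
    static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                5,                                   // corePoolSize
                10,                                  // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keep-alive for non-core threads
                new ArrayBlockingQueue<>(100),       // bounded queue: no unbounded pile-up
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy() // reject loudly when saturated
        );
    }
}
```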

The following figure shows the method in the tool class of Executors:

The Atomic classes in the interview

4.1 Introduction to the Atomic Atomic class

Atomic means "of an atom". In chemistry, we know that atoms are the smallest units of ordinary matter and are indivisible in chemical reactions. In our context, atomic means that an operation is uninterruptible: even when multiple threads execute together, once an operation starts it cannot be disturbed by other threads.

So, atomic classes are simply classes with atomic/atomic manipulation characteristics.

The atomic classes of the concurrency package java.util.concurrent all live in java.util.concurrent.atomic, as shown in the figure below.

4.2 What are the four atomic classes in the JUC package?

Basic types

Update base types atomically

  • AtomicInteger: Integer atomic class
  • AtomicLong: long integer atomic class
  • AtomicBoolean: Boolean atomic class

Array types

Update an element in an array atomically

  • AtomicIntegerArray: Integer array atomic class
  • AtomicLongArray: Long integer array atomic class
  • AtomicReferenceArray: Reference type array atomic class

Reference types

  • AtomicReference: Reference type atomic class
  • AtomicStampedReference: Reference type atomic class with a version stamp
  • AtomicMarkableReference: Atom updates reference types with marker bits

Object property update types

  • AtomicIntegerFieldUpdater: updater for atomically updating integer fields
  • AtomicLongFieldUpdater: updater for atomically updating long fields
  • AtomicStampedReference: atomically updates a reference type with a version number. This class pairs an integer stamp with a reference, can be used to atomically update both the data and its version number, and can solve the ABA problem that may occur when using CAS for atomic updates.

4.3 Explain the use of AtomicInteger

Common methods of the AtomicInteger class

public final int get() // Get the current value
public final int getAndSet(int newValue) // Get the current value and set a new value
public final int getAndIncrement() // Get the current value and then increment
public final int getAndDecrement() // Get the current value and then decrement
public final int getAndAdd(int delta) // Get the current value and then add delta
boolean compareAndSet(int expect, int update) // If the current value equals the expected value, atomically set it to the input value (update)
public final void lazySet(int newValue) // Eventually set to newValue; other threads may briefly read the old value after a lazySet.

Example use of the AtomicInteger class

With AtomicInteger, the increment() method is thread-safe without any locking.

class AtomicIntegerTest {
    private AtomicInteger count = new AtomicInteger();

    // With AtomicInteger, this method is thread-safe without locking.
    public void increment() {
        count.incrementAndGet();
    }

    public int getCount() {
        return count.get();
    }
}

4.4 Could you please briefly introduce the principle of AtomicInteger class to me

A simple analysis of how AtomicInteger achieves thread safety

Part of the source code of the AtomicInteger class:

    // Set up to use Unsafe.compareAndSwapInt for updates
    private static final Unsafe unsafe = Unsafe.getUnsafe();
    private static final long valueOffset;

    static {
        try {
            valueOffset = unsafe.objectFieldOffset
                (AtomicInteger.class.getDeclaredField("value"));
        } catch (Exception ex) { throw new Error(ex); }
    }

    private volatile int value;

AtomicInteger mainly uses CAS (compare and swap) together with volatile and native methods to guarantee atomic operations, thereby avoiding the high overhead of synchronized and greatly improving execution efficiency.

CAS works by comparing an expected value with the original value and updating to the new value only if they are identical. The Unsafe class's objectFieldOffset() method is a native method that retrieves the memory offset of the "original value" and returns it as valueOffset. In addition, value is a volatile variable, so it is visible across threads, and the JVM can ensure that any thread sees the latest value at any time.
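The read-compute-swap-retry loop can be sketched by hand on top of AtomicInteger's public compareAndSet. This mirrors what getAndAdd() does internally, but it is an illustration, not the JDK's actual implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasDemo {
    // Read the original value, compute the new one, and swap only if the
    // value is still unchanged; on conflict with another thread, retry.
    static int addWithCas(AtomicInteger value, int delta) {
        while (true) {
            int current = value.get();                 // read the original value
            int next = current + delta;
            if (value.compareAndSet(current, next)) {  // swap if unchanged
                return next;
            }
            // another thread modified the value in between; loop and retry
        }
    }
}
```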

You can learn more about the Atomic class in my article: Summary of the Atomic class in JUC

Five AQS

5.1 introduce AQS

AQS is short for AbstractQueuedSynchronizer. This class lives in the java.util.concurrent.locks package.

AQS is a framework for building locks and synchronizers. Using AQS it is easy and efficient to construct a wide range of synchronizers, such as ReentrantLock and Semaphore; others like ReentrantReadWriteLock, SynchronousQueue, and FutureTask are also based on AQS. Of course, we can also use AQS to construct synchronizers for our own needs very easily.

5.2 AQS Principle analysis

This section on the Principles of AQS references some blogs, with links at the end of section 5.2.

When concurrency comes up in an interview, most candidates will be asked "Please tell me your understanding of the principle of AQS". Below is an example you can build on, but do not just memorize it: form your own understanding, so that even if you forget the exact wording you can still explain it in plain language rather than recite it.

Most of the following content is in fact already given in the AQS class's Javadoc comments, though reading it in English is a bit harder; if you are interested in the source code, take a look.

5.2.1 AQS Principle Overview

The core idea of AQS: if the requested shared resource is free, the thread currently requesting it is set as the effective worker thread and the shared resource is set to the locked state. If the shared resource is occupied, a mechanism is needed for blocking threads while they wait and allocating the lock to them when they are woken. AQS implements this mechanism with a CLH queue lock, which enqueues the threads that cannot immediately acquire the lock.

The CLH (Craig, Landin, and Hagersten) queue is a virtual doubly linked queue ("virtual" meaning there is no queue instance, only the association between nodes). AQS wraps each thread requesting the shared resource into a node (Node) of a CLH lock queue to implement lock allocation.

Here is a schematic diagram of AQS (AbstractQueuedSynchronizer):

AQS uses an int member variable to represent the synchronization state and completes the queuing of resource-acquiring threads through a built-in FIFO queue. AQS uses CAS to atomically modify the value of the synchronization state.

private volatile int state; // The shared variable, marked volatile to ensure visibility across threads

The state is manipulated through three protected methods: getState, setState, and compareAndSetState


// Returns the current value of the synchronization state
protected final int getState() {
        return state;
}
// Sets the synchronization state
protected final void setState(int newState) {
        state = newState;
}
// Atomically sets the state to update if the current state equals expect (the expected value)
protected final boolean compareAndSetState(int expect, int update) {
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}

5.2.2 AQS resource sharing method

AQS defines two resource sharing modes

  • Exclusive: only one thread can execute at a time, such as ReentrantLock. Exclusive locks can further be divided into fair and unfair locks:
    • Fair lock: threads acquire the lock in the order in which they queued — first come, first served
    • Unfair lock: when a thread wants the lock, it simply grabs it regardless of queue order
  • Shared: multiple threads can execute simultaneously, such as Semaphore/CountDownLatch. Semaphore, CountDownLatch, CyclicBarrier, and ReadWriteLock are all covered later.

ReentrantReadWriteLock can be seen as a combination of the two modes, because its read lock allows multiple threads to read a resource simultaneously.

Different custom synchronizers compete for shared resources in different ways. A custom synchronizer only needs to implement the acquisition and release of the shared resource's state; the maintenance of the concrete thread waiting queue (enqueuing on failure to acquire the resource, waking queued threads, and so on) has already been implemented by AQS at the top level.

5.2.3 AQS uses the template method pattern at the bottom

The synchronizer design is based on the template method pattern. To customize a synchronizer, the general approach is as follows (and it is a classic application of the template method pattern):

  1. The user inherits AbstractQueuedSynchronizer and overrides the specified methods. (These overrides are simple: they just acquire and release the shared resource's state.)
  2. Compose the AQS subclass inside the implementation of a custom synchronization component and call its template methods, which in turn call the user-overridden methods.

This is very different from the way we usually implement interfaces; it is a classic use of the template method pattern.

AQS uses the template method pattern. To customize the synchronizer, you need to rewrite the following template methods provided by AQS:

isHeldExclusively() // Whether the current thread is holding the resource exclusively. Only needs to be implemented if you use Condition.
tryAcquire(int) // Exclusive mode. Attempts to acquire the resource; returns true on success, false on failure.
tryRelease(int) // Exclusive mode. Attempts to release the resource; returns true on success, false on failure.
tryAcquireShared(int) // Shared mode. Attempts to acquire the resource. Negative means failure; 0 means success with no remaining resources; positive means success with resources remaining.
tryReleaseShared(int) // Shared mode. Attempts to release the resource; returns true on success, false on failure.

By default, each of these methods throws UnsupportedOperationException. Their implementations must be internally thread-safe and should generally be short and non-blocking. All other methods in the AQS class are final, so they cannot be overridden by other classes; only these methods can be.

Take ReentrantLock as an example: state is initialized to 0, meaning unlocked. When thread A calls lock(), tryAcquire() is invoked to take exclusive ownership of the lock and state becomes 1. From then on, other threads' tryAcquire() calls fail until thread A calls unlock() and state returns to 0. Of course, thread A itself can acquire the lock repeatedly before releasing it (state accumulates), which is the concept of reentrancy. Just be careful to release as many times as you acquire, so that state can return to zero.

In the CountDownLatch example, the task is divided among N child threads and state is initialized to N (note that N must match the number of threads). Each child thread calls countDown() once after it finishes, CAS-decrementing state by 1. After all child threads have finished (that is, state = 0), the waiting main thread is unpark()ed and returns from await() to continue its remaining work.
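The CountDownLatch flow described above, as a runnable sketch (the thread count and names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

class LatchDemo {
    // state starts at n; each worker counts down once; await() returns at 0.
    static int runWorkers(int n) {
        CountDownLatch latch = new CountDownLatch(n);
        AtomicInteger finished = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                finished.incrementAndGet(); // the worker's "task"
                latch.countDown();          // state goes N -> N-1 -> ... -> 0
            }).start();
        }
        try {
            latch.await();                  // blocks until state == 0
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finished.get();              // all workers are done here
    }
}
```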

In general, a custom synchronizer is either exclusive or shared, so it only needs to implement one of tryAcquire/tryRelease or tryAcquireShared/tryReleaseShared. However, AQS also supports custom synchronizers that implement both exclusive and shared modes, such as ReentrantReadWriteLock.
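The AQS class's own Javadoc illustrates this with a minimal non-reentrant mutex; here is a sketch in that spirit, where state 0 means unlocked and 1 means locked. Only tryAcquire/tryRelease are overridden; blocking and queuing come from AQS's template methods.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignored) {
            return compareAndSetState(0, 1); // succeed only if currently unlocked
        }

        @Override
        protected boolean tryRelease(int ignored) {
            setState(0);                     // mark unlocked
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock() { sync.acquire(1); }     // template method -> tryAcquire
    public void unlock() { sync.release(1); }   // template method -> tryRelease
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```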

Recommend two articles AQS principle and related source code analysis:

  • www.cnblogs.com/waterystone…
  • www.cnblogs.com/chengxiao/a…

5.3 AQS Component Summary

  • Semaphore — allows multiple threads to access a resource simultaneously: synchronized and ReentrantLock allow only one thread to access a resource at a time, while Semaphore allows multiple threads at once.
  • CountDownLatch (countdown timer): CountDownLatch is a synchronization utility class used to coordinate multiple threads. It is typically used to make one thread wait until a countdown finishes before it starts executing.
  • CyclicBarrier: a CyclicBarrier is very similar to CountDownLatch in that it also implements waiting between threads, but it is more complex and powerful. Its main application scenarios are similar to CountDownLatch's. CyclicBarrier literally means "cyclic barrier": it lets a group of threads block when they reach a barrier (also called a synchronization point); the barrier only opens when the last thread arrives, and then all threads blocked at the barrier continue working. The default constructor is CyclicBarrier(int parties), whose argument is the number of threads the barrier intercepts; each thread calls the await() method to tell the CyclicBarrier "I have reached the barrier", and the current thread then blocks.
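A small Semaphore sketch (the permit count of 2 is illustrative): with 2 permits, a third tryAcquire() fails until a permit is released.

```java
import java.util.concurrent.Semaphore;

class SemaphoreDemo {
    // At most 2 threads may hold a permit at once; a third attempt fails
    // until someone releases.
    static boolean demo() {
        Semaphore semaphore = new Semaphore(2);
        boolean first = semaphore.tryAcquire();        // permit 1 taken
        boolean second = semaphore.tryAcquire();       // permit 2 taken
        boolean third = semaphore.tryAcquire();        // no permits left -> false
        semaphore.release();                           // give one permit back
        boolean afterRelease = semaphore.tryAcquire(); // a permit is available again
        return first && second && !third && afterRelease;
    }
}
```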

For more information on this section of AQS, see my article: Concurrent Programming Interview Essentials: AQS Principles and AQS Synchronization components summary

Reference

  • In-depth Understanding of the Java Virtual Machine
  • Practical Java High Concurrency Programming
  • The Art of Concurrent Programming in Java
  • www.cnblogs.com/waterystone…
  • www.cnblogs.com/chengxiao/a…
