Preface – What you will get from reading this article

1. Understand the basics of concurrent programming: deadlock, synchronized, ReentrantLock, CAS, thread pools, and so on

2. Review some common multithreading interview questions to test whether you really have a grasp of the topic

Deadlock

A deadlock occurs when two or more threads, while executing, each wait for a resource held by the other because of resource contention; without outside intervention, none of them can make progress.

Necessary conditions for a deadlock: all four of the following conditions must hold for a deadlock to occur. If any one of them does not hold, deadlock cannot occur.

(1) Mutual exclusion: a resource can be used by only one process at a time. (In the single-log bridge analogy: only one person can be on the log at a time.)

(2) Hold and wait: a process that is blocked waiting for a resource keeps holding the resources it has already acquired. (B does not step off the bridge, and neither does A.)

(3) No preemption: resources already granted to a process cannot be forcibly taken away before it is finished with them. (A cannot force B off the bridge, nor can B force A off.)

(4) Circular wait: the processes form a circular chain in which each waits for a resource held by the next. (A cannot cross unless B steps off, and B cannot cross unless A steps off.)

In plain terms

1. Multiple threads request multiple resources

2. They acquire those resources in inconsistent orders

3. Each thread holds on to the resources it already has

How to avoid deadlocks

Deadlocks can be avoided in some cases. Here are three techniques to avoid deadlocks:

1. Lock ordering (all threads acquire the locks in the same fixed order)

2. Lock timeout (a thread puts a time limit on each attempt to acquire a lock; if the limit expires, it abandons the attempt and releases the locks it already holds. With the Lock API this can be done via tryLock with a timeout)

3. Deadlock detection
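Technique 1 (lock ordering) can be sketched in a few lines. This is a minimal illustration, not code from the article: the Account class and its numeric id are hypothetical, and the id defines a global order so that every thread locks the lower-id account first and a circular wait can never form.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical account type; the id field defines the global lock order.
class Account {
    final int id;
    final ReentrantLock lock = new ReentrantLock();
    long balance;
    Account(int id, long balance) { this.id = id; this.balance = balance; }
}

public class OrderedTransfer {
    static void transfer(Account from, Account to, long amount) {
        // Always lock the account with the smaller id first, regardless of
        // transfer direction, so two concurrent opposite transfers cannot
        // each hold one lock while waiting for the other.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }

    public static void main(String[] args) {
        Account a = new Account(1, 100), b = new Account(2, 100);
        transfer(a, b, 30);   // a: 70, b: 130
        transfer(b, a, 10);   // a: 80, b: 120
        System.out.println(a.balance + " " + b.balance); // 80 120
    }
}
```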

ThreadLocal

What is a ThreadLocal?

ThreadLocal must exist to solve some problem, so what is that problem?

The question is: How do you bind data to a thread so that the data can only be accessed in the bound thread and not in other threads?

ThreadLocal provides a convenient solution to this problem, often called data isolation between threads. The class provides thread-local variables: it makes it easy to set or retrieve data that is private to each thread. Thread-private data is also known as thread-local variables.

A detailed implementation of ThreadLocal can be found at juejin.cn/post/684490…
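A minimal sketch of the isolation just described (the CONTEXT name and the values are illustrative): each thread sees only the value it bound itself, and a thread that never called set() gets the initial value.

```java
// Sketch of per-thread data isolation with ThreadLocal.
public class ThreadLocalDemo {
    static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> "none");

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("main-data");                 // bind data to the main thread
        Thread other = new Thread(() ->
                // this thread never called set(), so it sees the initial value
                System.out.println("other: " + CONTEXT.get()));
        other.start();
        other.join();
        System.out.println("main: " + CONTEXT.get()); // main: main-data
    }
}
```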

ThreadLocal memory leak

Memory leaks from ThreadLocal are a common interview question

Cause of ThreadLocal memory leak

The root cause is the design of Entry in ThreadLocalMap, ThreadLocal's internal class.

The Entry key is declared as WeakReference<ThreadLocal<?>>, that is, a weak reference. The key can therefore be reclaimed during garbage collection, while the value it maps to is not. The result is entries whose key is null but whose value is still populated.

Once the key is null the value can no longer be reached through normal use, and as these stale values accumulate over time they produce a memory leak.

How to fix memory leaks

Call ThreadLocal's remove() method to clean up after each use, because remove() actively deletes the current key and value (the whole Entry).

Inside remove(), e.clear() clears the Entry's key; it calls the method in WeakReference that sets this.referent = null.

expungeStaleEntry(i) clears the value corresponding to the Entry; it is covered below.

How do JDK developers avoid memory leaks

The designers of ThreadLocal were aware of this (the memory leak) and buried value-erasing logic for key == null entries in several methods.

Take the get() method provided by ThreadLocal: it calls ThreadLocalMap#getEntry(), which validates the key. If the lookup misses, the getEntryAfterMiss() method is called, and inside it expungeStaleEntry(i) is invoked whenever k == null.

The expungeStaleEntry(i) method sets the value of the null-key entry to null, freeing the space and avoiding a leak.

It also scans onward for further entries with null keys and sets their values to null, so the next GC can free that space.

Doing so avoids memory leaks as much as possible, but it does not completely solve the problem. For example, in the extreme case we create a ThreadLocal but never call set, get, or remove. The best solution is to call remove() manually once the ThreadLocal is no longer needed.
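The recommended pattern can be sketched as follows (the BUFFER field and useBuffer method are illustrative names): calling remove() in a finally block guarantees the Entry is cleaned up even if the work throws, which matters especially in thread pools where threads are reused.

```java
// Sketch of the manual-cleanup pattern recommended above.
public class ThreadLocalCleanup {
    static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<>();

    static int useBuffer() {
        BUFFER.set(new byte[1024]);
        try {
            return BUFFER.get().length;   // work with the thread-local value
        } finally {
            BUFFER.remove();              // deletes the Entry (key and value)
        }
    }

    public static void main(String[] args) {
        System.out.println(useBuffer());  // 1024
        System.out.println(BUFFER.get()); // null, the entry was removed
    }
}
```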

Why does ThreadLocalMap use a weak reference for the key

As shown above, the weak reference is what makes the memory leak possible.

So why isn’t key set to strong reference?

If the key were a strong reference, then even after the application sets threadLocal = null, the Thread would still strongly reference its ThreadLocalMap, and ThreadLocalMap.Entry would strongly reference the ThreadLocal object, so the ThreadLocal instance could never be collected by GC.

So weak references can leak the values, but the set, get, and remove methods all contain logic that erases entries whose key is null.

CAS

CAS: Compare and Swap.

JDK 5 added the java.util.concurrent package, whose classes use the CAS algorithm to implement optimistic locking, unlike synchronized's exclusive locking. Before JDK 5, the Java language relied on the synchronized keyword, an exclusive (pessimistic) lock.

CAS is a lock-free algorithm. CAS has three operands: the memory value V, the old expected value A, and the new value B to write. The memory value V is changed to B if and only if the expected value A and the memory value V are the same; otherwise nothing is done.

CAS relies on the compare-and-swap instruction supported by all modern processors, looping on the instruction until it succeeds.
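The retry loop can be sketched with AtomicInteger, whose compareAndSet method exposes the CAS instruction (the CasCounter class is an illustrative wrapper, not from the article):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a CAS retry loop: read the current value, compute the new
// value, and retry compareAndSet until no other thread raced us.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current, next;
        do {
            current = value.get();       // old expected value A
            next = current + 1;          // new value B
        } while (!value.compareAndSet(current, next)); // memory value V
        return next;
    }

    public int get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 4000, no increments lost
    }
}
```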

Possible problems with CAS

1. The ABA problem

This can be solved with versioning.

AtomicMarkableReference solves it with a boolean mark that records whether the reference has been changed at all, regardless of how many times it changed in between.

AtomicStampedReference also solves it: the stamp (passed as initialStamp in the constructor and bumped on every update) uniquely identifies how many times the reference has been changed along the way.

2. Spin overhead (a CAS that keeps failing keeps looping, consuming CPU)

3. Atomicity is guaranteed only for a single shared variable

AtomicReference solves this by packaging multiple variables into a single object.
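A sketch of the AtomicStampedReference behavior described above: an A -> B -> A sequence would fool a plain value comparison, but the stale stamp makes the final CAS fail.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch of ABA protection: the stamp is bumped on every update, so a
// compareAndSet carrying a stale stamp fails even though the value is
// back to what it was.
public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0); // value, initialStamp
        int stamp = ref.getStamp();                   // observe stamp 0

        // Another thread performs A -> B -> A behind our back.
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // The value is "A" again, but the stamp is now 2, so this fails.
        boolean ok = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(ok);                  // false
        System.out.println(ref.getReference());  // A
    }
}
```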

Thread pools

Why use thread pools

Thread pools are the most widely used concurrency framework in Java; almost any program that needs to execute tasks asynchronously or concurrently can use them. Using thread pools properly during development brings three benefits.

First: reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.

Second: improved response time. When a task arrives, it can run immediately without waiting for a thread to be created. Suppose the time a server spends on a task breaks down as: T1 to create the thread, T2 to execute the task in the thread, and T3 to destroy the thread. If T1 + T3 is much larger than T2, a thread pool improves server performance. Thread pool technology focuses on shortening or shifting the T1 and T3 costs: it schedules them at server start-up and shutdown, or during idle periods, so that handling a client request incurs no T1 or T3 overhead.

Third: improve thread manageability. Threads are scarce resources. If created without limit, they consume system resources and degrade system stability. Thread pools can be used for uniform allocation, tuning, and monitoring.

Meaning of thread pool parameters

corePoolSize

The base size of the thread pool, i.e. the pool size when no tasks need to be executed; additional threads are created only once the work queue is full. Note: in a newly created ThreadPoolExecutor, threads do not start immediately but wait for tasks to be submitted, unless prestartCoreThread()/prestartAllCoreThreads() is called to start core threads in advance. Because of the keepAliveTime and allowCoreThreadTimeOut timeout parameters, the pool size when no tasks need to be executed is not necessarily corePoolSize.

maximumPoolSize

The maximum number of threads allowed in the thread pool; the current number of threads never exceeds this value. If the queue is full and the current number of threads is less than maximumPoolSize, a new thread is created to execute the task. The largestPoolSize variable is worth mentioning here: it records the largest number of threads the pool has ever had over its lifetime. Why "ever"? Because setMaximumPoolSize() can be called after the pool is created to change the maximum number of threads.

poolSize

The number of threads currently in the thread pool. When this value is 0, the pool has no threads and the thread pool can terminate. poolSize never exceeds maximumPoolSize at any time.

workQueue (the blocking queue)

The process for submitting a new task is straightforward:

1. If the current pool size has not reached the base size (poolSize < corePoolSize), create a new thread to handle the newly submitted task.

2. If the current size has reached the base size, submit the new task to the blocking queue to wait for processing: workQueue.offer(command).

3. If the queue is full and poolSize has not reached maximumPoolSize, create a new thread to process the task.

4. If the queue is full and the number of threads has reached the upper limit, the pool's capacity is exhausted and the new task must be rejected. How it is rejected depends on the pool's saturation policy, RejectedExecutionHandler.

Next, let's look at what the allowCoreThreadTimeOut and keepAliveTime properties mean. Under high load, the pool is busy: every thread is working on newly submitted or queued tasks. Under low load, many threads may sit idle, so to save system resources and reclaim those unused threads, the pool's sizing strategy must include some timeout mechanism. corePoolSize and maximumPoolSize control how new threads are created; allowCoreThreadTimeOut and keepAliveTime control how threads are destroyed.

allowCoreThreadTimeOut

This property controls whether core threads are allowed to time out. If false, core threads stay alive even when idle. If true, core threads use keepAliveTime as their timeout while waiting for work. Without it, once the pool reaches corePoolSize, the core threads are kept alive whether or not there are tasks to execute. Note that this property applies only to core threads.

keepAliveTime

If a thread stays idle longer than this value, it exits due to the timeout. For example, if the pool's core size is corePoolSize = 5 and the current poolSize = 8, the threads beyond the core size exit according to keepAliveTime. If corePoolSize = 5 and the current poolSize = 5, then all threads in the pool are core threads, and whether a thread can exit depends on allowCoreThreadTimeOut.
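The parameters above can be wired together in a short sketch (the sizes, queue capacity, and rejection policy are arbitrary illustrative choices, not recommendations):

```java
import java.util.concurrent.*;

// Sketch: corePoolSize=2, maximumPoolSize=4, keepAliveTime=60s for the
// non-core threads, a bounded work queue, and a saturation policy for
// when both the queue and the pool are full.
public class PoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime
                new ArrayBlockingQueue<>(8),           // bounded workQueue
                new ThreadPoolExecutor.AbortPolicy()); // RejectedExecutionHandler
        pool.allowCoreThreadTimeOut(false);            // core threads stay alive

        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get());                   // 42
        System.out.println(pool.getPoolSize());        // 1, one core thread started
        pool.shutdown();
    }
}
```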

Some common concurrent programming interview questions

What is the difference between synchronized on static and non-static methods? What is visibility?

An object lock is used on an object instance method, or on an object instance. A class lock is used on a class's static methods, or on the class's Class object. As we know, a class can have many object instances, but each class has only one Class object; so object locks on different object instances do not interfere with one another, while each class has only one class lock.

However, it is important to note that the class lock is only a concept, not a real thing: in fact, the class lock is the lock on each class's Class object. Class locks and object locks do not interfere with each other either.

Visibility means that when multiple threads access the same variable and one thread changes the value of the variable, other threads can immediately see the changed value.

Because threads must perform all operations on variables in their own working memory and cannot read or write main memory directly, a shared variable V is first modified in the thread's working memory and then synchronized back to main memory. The write-back is not immediate; there is a delay, during which thread A's updates to V are not visible to thread B.

To solve the problem of shared object visibility, we can use the volatile keyword or locking.

What kinds of locks are there?

Principle of CAS lockless programming

Most current processors support the CAS instruction, though each vendor's implementation differs. Each CAS operation involves three operands: a memory address V, an expected value A, and a new value B. If the value stored at address V equals the expected value A, the new value B is written to the address; otherwise, no operation is performed.

The basic idea of CAS is: if the value at the address matches the expected value, assign the new value to it; otherwise do nothing, but return what the original value was. Cyclic CAS means repeating the CAS operation in a loop until it succeeds.

There are also three major problems with CAS.

Implementation principle of ReentrantLock.

A thread can repeatedly enter any block of code synchronized on a lock it already owns. Both synchronized and ReentrantLock are reentrant locks. In terms of implementation, when a thread requests a lock, the lock checks whether the requesting thread is the one that already holds it; if so, it simply increments a counter. Each release decrements the counter, and only when the counter returns to zero is the lock fully released.

Under the hood, ReentrantLock is implemented on top of AQS (AbstractQueuedSynchronizer) from java.util.concurrent.
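The hold counter described above can be observed directly through ReentrantLock's API:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of reentrancy: the owning thread can lock again, and the hold
// count tracks how many unlocks are needed before the lock is free.
public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                               // re-enter: counter goes to 2
        System.out.println(lock.getHoldCount());   // 2
        lock.unlock();
        System.out.println(lock.getHoldCount());   // 1, still held
        lock.unlock();
        System.out.println(lock.isLocked());       // false, fully released
    }
}
```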

Principles of Synchronized and the difference between Synchronized and ReentrantLock

Principle of synchronized(this): it involves two bytecode instructions, monitorenter and monitorexit. Synchronized methods, however, are not compiled to monitorenter/monitorexit; instead, the method's entry in the class file carries the ACC_SYNCHRONIZED access flag.

The JVM implements method synchronization from this flag: when a method is invoked, the calling instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If so, the executing thread first acquires the monitor and executes the method body only after acquiring it, releasing the monitor when the method completes. While the method is executing, no other thread can obtain the same monitor object.

monitorenter and monitorexit can be understood as acting on a specific lock. This lock holds two important properties: a counter and a pointer.

The counter records how many times the current thread has acquired the lock;

The pointer points to the thread that holds the lock.


The lock counter defaults to 0. When monitorenter executes and the counter is 0, the lock is not held by any thread; the executing thread increments the counter to one and points the lock's pointer at itself. When the monitorexit instruction executes, the counter is decremented by one.

The difference between Lock and synchronized

1. Source:

Lock is an interface, while synchronized is a Java keyword, a built-in language construct.

2. Whether the lock is released when an exception occurs:

synchronized automatically releases the lock it holds when an exception occurs, so it cannot leave the lock stuck. When an exception occurs inside a Lock-guarded section, the lock is not released automatically; you must call unlock() manually, or other threads may block forever. (Best practice: put the critical section in a try block and call unlock() in a finally block.)

3. Whether the wait responds to interruption

While waiting for a Lock, a thread can have the wait interrupted via interrupt(); with synchronized, a thread can only keep waiting for the lock to be released and cannot respond to interruption.

4. Whether you can test if the lock was acquired

Lock can use tryLock() to determine whether the lock was acquired; synchronized cannot.
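Points 2 and 4 can be sketched together (doWorkIfFree is an illustrative name): tryLock() reports whether the lock was acquired, and unlock() in finally guarantees release even if the work throws.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the tryLock + try/finally pattern for the Lock interface.
public class TryLockDemo {
    static final ReentrantLock LOCK = new ReentrantLock();

    static boolean doWorkIfFree() {
        if (!LOCK.tryLock()) {        // did we get the lock? (synchronized
            return false;             // offers no way to ask this)
        }
        try {
            // ... critical section ...
            return true;
        } finally {
            LOCK.unlock();            // always released, even on exception
        }
    }

    public static void main(String[] args) {
        System.out.println(doWorkIfFree()); // true, the lock was free
    }
}
```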

5. Performance

Lock can improve the efficiency of concurrent reads by multiple threads. (Read/write separation can be achieved with ReadWriteLock.)

In terms of performance, when contention is mild the two perform similarly, but when contention is fierce (a large number of threads competing at once), Lock performs much better than synchronized. So choose whichever fits the actual situation.

6. Thread scheduling

synchronized uses Object's wait, notify, and notifyAll scheduling mechanism, while Lock uses Condition to schedule threads.

Differences and scope of synchronized on static and non-static methods

An object lock is used on an object instance method, or on an object instance. A class lock is used on a class's static methods, or on the class's Class object. As we know, a class can have many object instances, but each class has only one Class object; so object locks on different object instances do not interfere with one another, while each class has only one class lock.

However, it is important to note that the class lock is only a concept, not a real thing: in fact, the class lock is the lock on each class's Class object. Class locks and object locks do not interfere with each other either.

Does volatile keep threads safe? What’s the role on the DCL?

volatile guarantees the visibility and ordering of the variable it modifies; in the singleton pattern it guarantees the order of the steps in object creation:

1. Allocate the memory space

2. Initialize the instance object

3. Point the instance reference at the allocated memory; from then on instance has a memory address and is no longer null. Without volatile, steps 2 and 3 may be reordered, so another thread could see a non-null but not-yet-initialized instance. volatile forbids that reordering, ensuring instance is either null or fully initialized.
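This guarantee underpins the classic double-checked-locking (DCL) singleton, a standard sketch of which looks like:

```java
// Double-checked locking: the volatile field forbids reordering steps 2
// and 3 above, so a reader never sees a non-null but half-built instance.
public class Singleton {
    private static volatile Singleton instance; // volatile is essential here

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock taken
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // steps 1-3 happen here
                }
            }
        }
        return instance;
    }
}
```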

What’s the difference between volatile and synchronize?

Volatile is the lightest synchronization mechanism.

volatile ensures the visibility of a variable across threads: when one thread changes the variable's value, the new value is immediately visible to other threads. But volatile does not guarantee atomicity, so compound operations (such as i++) performed by multiple threads can still cause thread-safety problems.

The keyword synchronized can be used to modify methods or in the form of synchronized blocks. It mainly ensures that at most one thread at a time executes a given method or synchronized block. It ensures the visibility and exclusivity of thread access to variables, and is also known as the built-in lock mechanism.

What is a daemon thread? How do you exit a thread?

A daemon thread is a support thread, used primarily for background scheduling and support work within a program. The Java virtual machine exits when no non-daemon threads remain. A thread is marked as a daemon by calling Thread.setDaemon(true) before it starts. We rarely set this ourselves; garbage collection threads, for example, are daemon threads.
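A minimal sketch of the daemon property (the background loop is purely illustrative): because the thread is a daemon, the JVM exits when main returns, even though the loop never finishes.

```java
// Sketch: a daemon thread does not keep the JVM alive.
public class DaemonDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            while (true) {                     // endless background work
                try { Thread.sleep(100); }
                catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);                     // must be set before start()
        t.start();
        System.out.println(t.isDaemon());      // true
        // main returns here; the JVM exits despite t still running
    }
}
```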

Thread termination:

Either the run() method completed, or an uncaught exception was thrown, causing the thread to terminate prematurely.

The suspend, resume, and stop operations correspond to the Thread APIs suspend(), resume(), and stop(). But these APIs are deprecated, which means they are not recommended: using them can leave the program in an indeterminate state.

The safe alternative is interruption: thread B calls thread A's interrupt() method, and thread A checks whether it has been interrupted by calling isInterrupted(). You can also call the static Thread.interrupted() method to check whether the current thread has been interrupted, but note that Thread.interrupted() also resets the interrupt flag to false.
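The safe-interrupt protocol can be sketched in a few lines: interrupt() only sets a flag, and the target thread checks it and exits cooperatively.

```java
// Sketch of cooperative cancellation via the interrupt flag.
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // ... keep working until asked to stop ...
            }
            // flag observed: fall out of the loop and terminate normally
        });
        worker.start();
        worker.interrupt();                   // request cancellation
        worker.join(1000);
        System.out.println(worker.isAlive()); // false, exited cooperatively
    }
}
```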

How does the wait thread wake it up?

yield(): causes the current thread to give up the CPU, but for no specified time, and it does not release any lock resources. A thread that calls yield() returns to the ready state, where the operating system may select it and run it again immediately.

Either yield() or sleep() will not release the lock held by the current thread.

After wait() is called, the current thread releases the lock it holds; after being woken, it competes for the lock again, and the code after wait() executes only once the lock is reacquired.

wait is usually used for inter-thread interaction, sleep is usually used to pause execution, and yield() causes the current thread to relinquish CPU ownership.

Waiting threads are woken up with notify()/notifyAll().
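The wait/notify handshake above can be sketched as follows (the ready flag and monitor object are illustrative): the waiting thread releases the monitor inside wait() and re-acquires it after being notified.

```java
// Sketch of wait/notify: guard with a condition loop, signal under the
// same monitor.
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    void waitForSignal() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {          // loop guards against spurious wakeups
                monitor.wait();       // releases the monitor while waiting
            }
        }
    }

    void signal() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();      // wakes all waiters; they re-contend
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo d = new WaitNotifyDemo();
        Thread waiter = new Thread(() -> {
            try { d.waitForSignal(); } catch (InterruptedException ignored) { }
        });
        waiter.start();
        Thread.sleep(100);            // give the waiter time to block first
        d.signal();
        waiter.join(1000);
        System.out.println(waiter.isAlive()); // false, the waiter finished
    }
}
```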

Talk about thread pool fundamentals.

There are three benefits to using thread pools properly during development.

First: reduce resource consumption. Second: improve response speed. Third: improve thread manageability.

1) If there are fewer threads currently running than corePoolSize, a new thread is created to perform the task (note that this step requires a global lock).

2) Add the task to BlockingQueue if the number of running threads is equal to or greater than corePoolSize.

3) If the task cannot be added to the BlockingQueue (queue is full), create a new thread to process the task.

4) If creating a new thread would push the number of running threads past maximumPoolSize, the task is rejected and RejectedExecutionHandler.rejectedExecution() is called.

Conclusion
