This article collects more than a dozen frequently asked questions about Java multithreading and gives brief, quick-reference answers, suitable for a knowledge overview or last-minute interview preparation. Each question touches on points that could be explored further in an article of their own.
How many ways can Java create threads?
There are three main ways to create threads in Java:
1. Inherit the Thread class and override its run() method
2. Implement the Runnable interface
3. Implement the Callable interface and use it with Future/FutureTask
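For the first two approaches, a minimal sketch (class names are illustrative):

```java
// Way 1: inherit Thread and override run().
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + getName());
    }
}

// Way 2: implement Runnable and hand it to a Thread.
class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Usage:
//   new MyThread().start();
//   new Thread(new MyTask()).start();
```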
Example of creating a thread with Callable and Future:
```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallableThreadTest implements Callable<Integer> {

    public static void main(String[] args) {
        CallableThreadTest ctt = new CallableThreadTest();
        FutureTask<Integer> futureTask = new FutureTask<>(ctt);
        for (int i = 0; i < 100; i++) {
            System.out.println(Thread.currentThread().getName() + " loop variable i = " + i);
            if (i == 20) {
                new Thread(futureTask, "Thread with return value").start();
            }
        }
        try {
            System.out.println("Return value of child thread: " + futureTask.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }

    @Override
    public Integer call() throws Exception {
        int i = 0;
        for (; i < 100; i++) {
            System.out.println(Thread.currentThread().getName() + " " + i);
        }
        return i;
    }
}
```
A comparison of the three ways to create threads:
1. With Runnable and Callable, the thread class only implements an interface, so it can still extend another class; to access the current thread it must call Thread.currentThread(), whereas a Thread subclass can simply use this.
2. Callable's method to override is call(); Runnable's is run().
3. A Callable task returns a value after execution; a Runnable task does not.
4. The call method can throw a checked exception; the run method cannot.
5. Running a Callable task yields a Future object representing the result of the asynchronous computation. It provides methods to check whether the computation is complete, to wait for its completion, and to retrieve the result. Through the Future object you can query the task's execution status, cancel the task, and obtain its result.
How many states does a thread have (life cycle)
A Java thread passes through six states over its life cycle, as defined and documented in java.lang.Thread.State:
NEW: Indicates that the thread has just been created and has not been started.
RUNNABLE: the thread is eligible to run. It may actually be executing, or waiting for CPU time (e.g., across a time-slice switch) or for other system resources such as I/O, but it is not waiting for a lock or for a sleep to end.
BLOCKED: the thread is waiting to enter a critical section, i.e., to acquire a monitor lock: either waiting to enter a synchronized block or method held by another thread, or waiting to re-enter one after being woken from wait().
WAITING: a thread that held the lock has called its wait() method and is waiting for another thread (the lock owner) to call notify()/notifyAll() so it can proceed. The difference from BLOCKED: a BLOCKED thread waits outside the critical section to get in, while a WAITING thread called wait() inside it and waits to be notified. A thread that calls join() on another thread also enters WAITING until the joined thread completes.
TIMED_WAITING: a time-limited waiting state, entered by calling wait(long), join(long), or sleep(long).
TERMINATED: the thread's run method has finished and the thread is essentially dead (the Thread object itself may not yet have been garbage collected).
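A minimal sketch observing some of these states via Thread.getState() (the class name StateDemo is illustrative):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {
            }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep(500)
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}
```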
What is the difference between synchronized and Lock?
1. synchronized is a built-in Java keyword implemented at the JVM level; Lock is a Java interface (java.util.concurrent.locks.Lock) at the API level.
2. synchronized gives no way to tell whether the lock was acquired; with Lock you can test for it (e.g., via tryLock()).
3. synchronized releases the lock automatically (when the thread finishes the synchronized block, or when an exception is thrown inside it). Lock is an explicit lock: you must release it manually in a finally block by calling unlock(), or acquire it with a timeout; otherwise threads can easily end up deadlocked.
4. A synchronized lock is reentrant, uninterruptible, and non-fair; a Lock is reentrant, can be queried, can be interrupted, and can be fair or non-fair.
5. synchronized is suited to synchronizing small amounts of code; Lock is suited to synchronizing large amounts of code.
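A minimal sketch of the explicit-lock pattern from points 2 and 3, using ReentrantLock (class and field names are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        if (lock.tryLock()) {      // unlike synchronized, we can test for the lock
            try {
                count++;
            } finally {
                lock.unlock();     // must release manually, or other threads hang
            }
        }
    }
}
```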
What’s the difference between sleep and wait
1. sleep is a static method of Thread; wait is an instance method of Object.
2. The sleep method suspends the current thread for a specified time, yielding the CPU to other threads, but the thread's monitor state is retained and it resumes automatically when the time is up; a thread does not release any object locks while sleeping. When wait is called, the thread gives up the object's lock and enters that object's wait set; only after notify() or notifyAll() is invoked on the object does the thread enter the lock pool to compete for the lock again.
3. sleep can be called anywhere; wait can only be called inside synchronized methods or synchronized blocks.
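A minimal sketch contrasting the two (the class name WaitSleepDemo is illustrative):

```java
public class WaitSleepDemo {
    private final Object monitor = new Object();

    public void waitForSignal() throws InterruptedException {
        synchronized (monitor) {     // wait() requires holding the monitor
            monitor.wait();          // releases the monitor until notify()/notifyAll()
        }
    }

    public void signal() {
        synchronized (monitor) {
            monitor.notifyAll();     // wakes threads waiting on this monitor
        }
    }

    public void pause() throws InterruptedException {
        Thread.sleep(1000);          // no monitor required, no lock released
    }
}
```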
What is a deadlock?
1. What is a deadlock
A deadlock is a standstill that arises when multiple processes compete for resources during execution; once deadlocked, none of them can make progress without outside intervention.
2. Cause of deadlock
1) Deadlock due to resource competition: when the resources shared by the processes in the system are insufficient to satisfy all of them, competition arises and can lead to deadlock.
2) Deadlock occurs due to improper progress sequence.
3. Four necessary conditions for deadlock
1) Mutual exclusion: a resource can be occupied by only one process at a time.
2) Hold and wait: a process blocked while requesting other resources keeps holding the resources it has already acquired.
3) No preemption: a resource cannot be forcibly taken away from a process before it finishes using it; the process releases it only when done.
4) Circular wait: when deadlock occurs, there must exist a circular chain of processes and resources, each waiting for a resource held by the next (see the code sketch at the end of this section).
4. Basic methods for handling deadlocks
1) Deadlock prevention: Break the conditions necessary to cause deadlocks by setting restrictions. This operation may affect system performance.
2) Deadlock avoidance: during resource allocation, use an algorithm to keep the system out of unsafe states and thus avoid deadlock; the best-known is the banker's algorithm.
3) Deadlock detection: Deadlock is allowed to occur, but through the system detection, take some measures to clear the deadlock.
4) Deadlock recovery: used together with deadlock detection; once a deadlock is detected, the system takes measures (such as aborting or rolling back processes) to break it.
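The circular-wait condition described above can be reproduced in a few lines. This illustrative sketch acquires two locks in opposite order and hangs forever:

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK_A) {
                sleepQuietly();
                synchronized (LOCK_B) {    // waits forever: thread 2 holds B
                    System.out.println("never reached");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                sleepQuietly();
                synchronized (LOCK_A) {    // waits forever: thread 1 holds A
                    System.out.println("never reached");
                }
            }
        }).start();
    }

    private static void sleepQuietly() {
        try {
            Thread.sleep(100);             // give the other thread time to grab its lock
        } catch (InterruptedException ignored) {
        }
    }
}
```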
Talk about volatile
Related concepts: visibility, atomicity, and ordering in the Java memory model.
1. The principle of volatile
The Java language provides a weaker synchronization mechanism, known as volatile variables, to ensure that changes to variables are notified to other threads.
When a variable is declared volatile, both the compiler and the runtime note that it is shared and therefore do not reorder operations on it with other memory operations. Volatile variables are not cached in registers or hidden from other processors, so a read of a volatile variable always returns the most recently written value.
Access to volatile variables is never locked and therefore cannot block the executing thread, making volatile a lighter-weight synchronization mechanism than the synchronized keyword.
2. Two characteristics of volatile
1) Make this variable visible to all threads.
2) Forbid instruction reordering optimization.
3. Volatile performance
Reads of volatile variables cost about the same as reads of ordinary variables, but writes are slower because memory-barrier instructions must be inserted into the native code to keep the processor from reordering the write with other operations.
4. Does volatile guarantee atomicity?
A common misconception is that volatile guarantees atomicity. In fact, volatile guarantees visibility only: every read sees the most recently written value, but compound operations on the variable (such as i++) are not atomic.
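A minimal sketch of the resulting lost-update problem (the class name VolatileCounter is illustrative); it usually prints a value below 10000:

```java
public class VolatileCounter {
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int t = 0; t < 10; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    count++; // read-modify-write: not atomic despite volatile
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        // Typically prints less than 10000 because concurrent increments are lost.
        System.out.println(count);
    }
}
```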
5. Does volatile guarantee order?
As mentioned above, volatile prevents instruction reordering, so it guarantees a degree of ordering.
The volatile keyword disallows instruction reordering in two ways:
1) When the program reads or writes a volatile variable, all preceding operations must already have taken place and their results must be visible to subsequent operations, while the operations behind it have definitely not yet happened.
2) During instruction optimization, statements before an access to a volatile variable must not be reordered to execute after it, and statements after it must not be reordered to execute before it.
6. Principle and implementation mechanism of volatile
Here’s an excerpt from Understanding the Java Virtual Machine:
“Looking at the assembly code generated with and without volatile, we found that volatile had an extra lock prefix.”
The LOCK prefix directive actually acts as a memory barrier (also known as a memory fence) that provides three functions:
1) It ensures that instruction reordering does not place subsequent instructions in front of the memory barrier, nor does it place previous instructions behind the memory barrier, that is, by the time the instruction reaches the memory barrier, all operations in front of it have been completed;
2) It forces changes to the cache to be written to main memory immediately;
3) If it is a write operation, it invalidates the corresponding cache line in the other CPU.
7. Volatile scenarios
synchronized prevents multiple threads from executing the same piece of code at the same time, which can hurt execution efficiency; in some cases volatile performs better than synchronized. Note, however, that volatile cannot replace synchronized in general, because it does not guarantee atomicity. Two conditions must both hold for volatile to be used safely:
1) Writes to the variable do not depend on its current value.
2) The variable is not included in invariants together with other variables.
In effect, the relevant operations must already be atomic for a volatile-based program to run correctly under concurrency.
Here are two typical scenarios for volatile in Java.
1. Status flags

```java
volatile boolean inited = false;

// Thread 1:
context = loadContext();
inited = true;

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
```
2. Double-checked locking

```java
class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```
What is the CAS mechanism
CAS stands for Compare And Swap (compare and exchange).
The CAS mechanism uses three basic operands: the memory address V, the old expected value A, and the new value B to be modified.
To update a variable, the value at memory address V is replaced with the new value B only if the expected old value A equals the actual value currently stored at V; otherwise the update fails and is typically retried.
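A minimal sketch of this compare-and-set behavior using AtomicInteger from java.util.concurrent.atomic (the class name CasDemo is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(100); // memory value V = 100

        // Succeeds: expected value A (100) matches V, so V becomes B (50).
        boolean first = value.compareAndSet(100, 50);

        // Fails: expected value A (100) no longer matches V (now 50).
        boolean second = value.compareAndSet(100, 90);

        System.out.println(first + " " + second + " " + value.get()); // true false 50
    }
}
```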
Philosophically, synchronized is a pessimistic lock: it assumes contention in the program is severe and guards strictly.
CAS, by contrast, is an optimistic lock: it assumes actual contention is rare, so it simply keeps retrying the update until it succeeds.
CAS is used in the Atomic family of classes that Java provides and in the underlying implementation of the Lock classes; since Java 1.6, even synchronized relies on CAS-based lightweight locking before inflating to a heavyweight lock.
Drawbacks of CAS:
1) Excessive CPU overhead
Under high concurrency, if many threads repeatedly try and fail to update the same variable, the spin loop puts a lot of pressure on the CPU.
2) Atomicity of code blocks cannot be guaranteed
CAS guarantees atomicity only for a single variable, not for a whole code block. If, for example, three variables must be updated together atomically, synchronized has to be used instead.
3) ABA problems
This is the biggest problem with CAS.
A variable changes from A to B and back to A; the two A values look identical, but the surrounding state has changed, which can cause the program to produce incorrect results.
For example, in an ATM scenario: memory address V initially holds a balance of 100. The user withdraws 50, so the new value B is 50, and two threads 1 and 2 happen to submit the same CAS(100 -> 50) at the same time; ideally one succeeds and the other fails. Meanwhile the user's friend transfers 50 to the account, so thread 3 performs CAS(50 -> 100) and V becomes 100 again. If the threads run as thread 1 -> thread 3 -> thread 2, thread 2 still sees its expected value 100 and succeeds, leaving V at 50. That result is clearly wrong: only one withdrawal of 50 should have gone through.
Solution to the ABA problem: a rigorous CAS mechanism compares not only the expected value A against the actual value at memory address V, but also a version number of the variable.
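A sketch of that idea using AtomicStampedReference, which pairs the value with a version stamp (the class name AbaDemo is illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 100 with initial version stamp 0
        AtomicStampedReference<Integer> account = new AtomicStampedReference<>(100, 0);

        int stamp = account.getStamp();
        // An A -> B -> A change elsewhere would bump the stamp, so this CAS
        // fails if the value was changed and changed back in the meantime.
        boolean ok = account.compareAndSet(100, 50, stamp, stamp + 1);
        System.out.println(ok + " " + account.getReference() + " " + account.getStamp());
    }
}
```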
How to understand pessimistic locks and optimistic locks
Pessimistic locking
Always assume the worst: every time you read the data you assume someone else will modify it, so you lock it every time, and anyone else who wants the data blocks until they can obtain the lock (the shared resource is used by only one thread at a time; other threads block and are handed the resource afterwards). Traditional relational databases make heavy use of this mechanism, such as table locks, row locks, read locks, and write locks, all acquired before the operation. Exclusive locks in Java such as synchronized and ReentrantLock embody the pessimistic-lock idea.
Optimistic locking
Always assume the best: every time you read the data you assume no one else will modify it, so you do not lock; when updating, you check whether anyone else has updated the data in the meantime, which can be implemented with a version-number mechanism or the CAS algorithm. This resembles the write_condition mechanism in databases, and the atomic variable classes under java.util.concurrent.atomic are implemented using the CAS algorithm, a form of optimistic locking.
Two types of lock usage scenarios
From the descriptions above, each kind of lock has its strengths and weaknesses; neither is simply better than the other. Optimistic locking suits read-heavy, write-light workloads, i.e., where conflicts rarely occur, saving the cost of locking and raising overall system throughput. In write-heavy, read-light workloads conflicts are frequent, causing the upper layers to retry constantly and degrading performance, so pessimistic locking is more appropriate there.
What are biased locks, lightweight locks and heavyweight locks
Detailed definitions of these locks are easy to look up elsewhere; here is a quick comparison of their trade-offs.
Biased locking:
Advantages: locking and unlocking incur no extra cost; there is only a nanosecond-scale gap compared with executing an unsynchronized method.
Disadvantages: if threads contend for the lock, revoking the bias adds extra cost.
Application scenario: suitable when only one thread ever accesses the synchronized block.
Lightweight locks: such as CAS-based spin locks
Advantages: Competing threads do not block, improving the response time of the program.
Disadvantages: a thread that never wins the lock contention wastes CPU by spinning.
Application scenario: pursuing response time; the synchronized block executes quickly.
Heavyweight locks: such as synchronized
Advantages: threads do not spin, so no CPU is wasted.
Disadvantages: threads block, and response time is slow.
Application scenario: pursuing throughput; the synchronized block executes slowly or the lock is held for a long time.
Talk about ThreadLocal
1. What is ThreadLocal
ThreadLocal provides thread-local variables. These differ from normal variables in that each thread that accesses one (via its get and set methods) has its own, independently initialized copy of the variable.
ThreadLocal instances are typically private static fields in classes that want to associate state with a thread (such as a user ID or transaction ID).
2. Where ThreadLocal is used
ThreadLocal is used with multiple threads, if there is only one thread, ThreadLocal is not required.
In summary, there are two types of uses:
1) Save thread context information, available anywhere;
2) Thread-safe, to avoid the performance loss of thread-safe synchronization in some cases.
Saves thread context information, which can be retrieved anywhere.
Because of how ThreadLocal works, a value set by a thread in one place can then be retrieved anywhere else on that same thread, which makes it suitable for storing thread context information.
For example, to correlate all the calls belonging to one request, you can set a request ID into a ThreadLocal at the entry point and get it in any subsequent logging method, stringing the whole request together.
In addition, Spring transaction management uses ThreadLocal to store connections so that each DAO can obtain the same Connection and perform transaction rollback and commit operations.
Thread safety: avoiding the performance penalty of synchronization where thread safety must otherwise be considered.
ThreadLocal provides a new way to address the concurrency problems of multithreaded programs. It has its limitations, though; the Alibaba Java development specification notes:
Each thread reads and writes its own data in a ThreadLocal, in thread isolation, so ThreadLocal cannot solve the problem of updating shared objects.
3. What are the details of ThreadLocal
The Thread class has a member field threadLocals (of type ThreadLocal.ThreadLocalMap), which means every Thread carries its own ThreadLocalMap, from which its thread-local data is read and written.
The key of an Entry in that map points to the ThreadLocal object and is a WeakReference.
Java reference types: strong, soft, weak, and phantom references. Weak references describe non-essential objects: if the JVM performs garbage collection and an object is reachable only through weak references, it will be collected regardless of whether memory is sufficient.
When the only remaining reference to a ThreadLocal is the weak Entry key in a ThreadLocalMap, the ThreadLocal can be garbage collected.
After that happens, the corresponding Entry key in the ThreadLocalMap becomes null, yet the Entry still holds a strong reference to its stored value. ThreadLocalMap therefore performs some extra housekeeping to clean up such stale entries.
4. ThreadLocal best practices
Because a ThreadLocal can live a long time, storing large objects in it is risky: although set and get do some cleanup in certain cases, once the ThreadLocal is garbage collected the Entry key in the ThreadLocalMap becomes null, and if set, get, and the like are never called again, the stale value is never cleared.
That is a memory leak, so as a best practice we should proactively call remove() when the value is no longer needed.
If the ThreadLocal is a static field, the Entry key in the ThreadLocalMap remains strongly reachable, so remove() can always locate the entry and delete it successfully.
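A minimal sketch of this best practice: a private static ThreadLocal, set at the start of a request and removed in finally (all names are illustrative):

```java
public class RequestContext {
    private static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    public static void handle(String requestId) {
        REQUEST_ID.set(requestId);
        try {
            process();
        } finally {
            REQUEST_ID.remove(); // always clean up, especially with thread pools
        }
    }

    private static void process() {
        // Any method on the same thread can read the value without passing it around.
        System.out.println("handling request " + REQUEST_ID.get());
    }
}
```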
5. Application scenarios
Consider using ThreadLocal when some data is thread-scoped and different threads have different copies of the data.
Stateless, high-concurrency scenarios where independent copies do not affect business logic are a good fit;
if the business logic depends heavily on the copies (i.e., threads must see each other's updates), ThreadLocal is not appropriate.
Synchronized solves concurrency conflicts by sacrificing time through thread waiting;
ThreadLocal stores one copy of data per thread, sacrificing space to resolve concurrency conflicts.
Talk about ConcurrentHashMap
1. What technology does ConcurrentHashMap use to ensure thread-safety?
JDK 1.7: implemented with Segment + HashEntry (segment locking);
JDK 1.8: drops the bulky Segment design and uses Node + CAS + synchronized to ensure thread safety.
2. Should the Get method of ConcurrentHashMap be locked? Why?
No. The get method needs no lock: the shared fields it reads (such as a Node's value) are declared volatile and are read with volatile semantics (via Unsafe), which guarantees visibility without locking.
3. Is the ConcurrentHashMap iterator strongly or weakly consistent? A HashMap?
ConcurrentHashMap's iterator is weakly consistent; HashMap's is strongly consistent (fail-fast).
The former tolerates new elements being added to the map during iteration, while HashMap's iterator throws ConcurrentModificationException,
because HashMap keeps a modification counter (modCount) that the iterator checks every time its next() method is called.
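A minimal sketch contrasting the two iterators (class and variable names are illustrative; the HashMap loop ends with a ConcurrentModificationException):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IteratorDemo {
    public static void main(String[] args) {
        Map<String, Integer> chm = new ConcurrentHashMap<>();
        chm.put("a", 1);
        chm.put("c", 3);
        for (String key : chm.keySet()) {
            chm.put("b", 2);   // allowed: weakly consistent iterator, no exception
        }
        System.out.println("ConcurrentHashMap survived: " + chm);

        Map<String, Integer> hm = new HashMap<>();
        hm.put("a", 1);
        hm.put("c", 3);
        for (String key : hm.keySet()) {
            hm.put("b", 2);    // fail-fast iterator: the next call to next()
        }                      // throws ConcurrentModificationException
    }
}
```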
4. What are the differences between ConcurrentHashMap in JDK 1.7 and JDK 1.8?
Lock granularity: JDK 1.7 locks at the Segment level, each Segment containing multiple HashEntry buckets; JDK 1.8 reduces the granularity to a single Node (bucket head).
Data structure: JDK 1.7 uses Segment + HashEntry; JDK 1.8 uses array + linked list + red-black tree, protected by CAS + synchronized.
5. In JDK 1.8's ConcurrentHashMap, when is a linked list used and when a red-black tree?
Bins start out as linked lists. When a list grows beyond the default threshold of 8 (and the table has at least 64 buckets), that bin is converted to a red-black tree.
CountDownLatch
1. What is CountDownLatch?
CountDownLatch was introduced in Java 5 along with CyclicBarrier, Semaphore, ConcurrentHashMap, and BlockingQueue, all of which live under the java.util.concurrent package. The CountDownLatch class lets one or more threads wait for other threads to finish their work before proceeding.
For example, an application's main thread may want to wait until the thread responsible for starting framework services has started all of them.
2. The realization principle of CountDownLatch
CountDownLatch is implemented with a counter whose initial value is the number of tasks to wait for. Each time a thread completes its task, the counter decreases by one; when it reaches 0, all tasks are done and the threads waiting on the latch can resume.
CountDownLatch has a single constructor, CountDownLatch(int count). The count is the number of countDown() calls required before waiting threads are released; it can be set only once, and CountDownLatch provides no mechanism to reset it.
The typical interaction is the main thread waiting for the others: right after starting the worker threads, the main thread calls CountDownLatch.await(), which blocks until the other threads finish their work.
The N worker threads must hold a reference to the same latch object, because they need to notify it that they have completed their tasks. That notification is done through CountDownLatch.countDown(): each call decrements the count initialized in the constructor by 1, so once all N threads have called it and the count reaches 0, the main thread resumes past await().
3. The main methods in the CountDownLatch class
await(): the current thread waits until the counter reaches 0.
countDown(): decrements the counter by 1; called by the worker threads to signal that their tasks are complete.
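A minimal sketch of the interaction described above, with the main thread awaiting three workers (names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                latch.countDown(); // decrement the counter when the task is done
            }).start();
        }

        latch.await(); // blocks until the counter reaches 0
        System.out.println("all workers finished, main thread resumes");
    }
}
```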
4. The application scenario of CountDownLatch
Here are a few typical application scenarios; there are certainly others.
1) Achieve maximum concurrency:
Sometimes we want several threads to begin at the same moment to achieve maximum parallelism. For example, to test a singleton class, we can create a CountDownLatch with an initial count of 1 and have all test threads wait on it; a single countDown() call then releases all the waiting threads to run at once.
2) Wait for N threads to complete their tasks before starting execution: for example, the application startup class ensures that all N external systems are up and running before processing user requests.
3) Deadlock detection: a very convenient usage is to have N threads access a shared resource, varying the number of threads in each test phase, and try to trigger a deadlock.
5. CyclicBarrier and CountDownLatch
1) With CyclicBarrier, a thread stops when it reaches the barrier point, and all threads resume together once every thread has arrived; with CountDownLatch, a thread decrements the count when it reaches its point and keeps running, while the threads blocked in await() resume once the count reaches 0.
2) CyclicBarrier makes a group of threads wait for one another (and can run a single barrier action when they all arrive); CountDownLatch lets one or more waiting threads be released by many workers.
3) CyclicBarrier can be reused; CountDownLatch cannot. Once its count reaches 0, a CountDownLatch is no longer usable.