Because collecting interview questions and exercises takes time, I have gathered more than 50 top questions about Java multithreading and concurrency from many candidates, with the answers at the end.

1. What is a thread?

2. What do thread-safe and thread-unsafe mean?

3. What is a spin lock?

4. What is the Java Memory Model?

5. What is CAS?

6. What are optimistic and pessimistic locks?

7. What is AQS?

8. What is atomic operation? What atomic classes are available in the Java Concurrency API?

9. What is Executors Framework?

10. What is a blocking queue? How do you implement the producer-consumer model using blocking queues?

11. What are Callable and Future?

12. What is FutureTask?

13. How are synchronous containers and concurrent containers implemented?

14. What is multithreading? The advantages and disadvantages?

15. What is multithreaded context switching?

16. What is the design concept and function of ThreadLocal?

17. ThreadPool usage and advantages?

18. Other things in the concurrent package: ArrayBlockingQueue, CountDownLatch, etc.?

19. The difference between synchronized and ReentrantLock

20. What does Semaphore do?

21. What is the Lock interface in the Java Concurrency API? What are its advantages over synchronization?

22. Hashtable's size() method contains only the single statement "return count;". Why does it still need to be synchronized?

23. What is the concurrency of ConcurrentHashMap?

24. How is ReentrantReadWriteLock (a read/write lock) used?

25. What is the difference between CyclicBarrier and CountDownLatch?

26. LockSupport tool?

27. Condition interface and its implementation principle?

28. What is the interpretation of Fork/Join framework?

29. What is the difference between wait() and sleep()?

30. What are the five states of a thread (created, ready, running, blocked, and dead)?

31. What is the difference between start() and run()?

32. What is the difference between the Runnable interface and the Callable interface?

33. What does the volatile keyword do?

34. How do you obtain a thread dump file in Java?

35. What is the difference between threads and processes?

36. In how many ways (four) can threads be implemented?

37. How should thread pools be used for tasks with high concurrency and short execution times? How should they be used for tasks with low concurrency and long execution times? And how should they be used for tasks with both high concurrency and long execution times?

38. What happens if the thread pool queue is full when you submit a task?

39. Lock levels: method lock, object lock, class lock?

40. What happens if a thread in a synchronized block throws an exception?

41. What is the difference between concurrency and parallelism?

42. How do you ensure that i++ produces correct results across multiple threads?

43. What happens to a thread with a runtime exception?

44. How do I share data between two threads?

45. What is the role of the producer-consumer model?

46. How do you wake up a blocked thread?

47. What thread scheduling algorithm is used in Java?

48. Is the singleton pattern thread-safe?

49. Which thread calls the constructor and static block of a thread class?

50. Which is the better choice, a synchronized method or a synchronized block?

51. How do you detect deadlocks? How do you prevent deadlocks?


The following are the answers to the first five questions. If you need the answers to the remaining questions, follow me and send a private message with the reply "Java thread interview".


What is a thread?

A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of execution inside the process. Multithreading can be used to speed up operations.

For example, if it takes one thread 100 milliseconds to complete a task, ten threads working in parallel could, in the ideal case, complete it in roughly 10 milliseconds.
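To make this concrete, here is a minimal sketch (the class name and the slicing scheme are my own illustration, not from the original) of starting several threads so that pieces of a task run concurrently:

public class ThreadDemo {
    public static void main(String[] args) {
        // Start ten worker threads; each one processes its own slice of the task.
        for (int i = 0; i < 10; i++) {
            final int slice = i;
            Thread worker = new Thread(() -> {
                System.out.println("Processing slice " + slice
                        + " on " + Thread.currentThread().getName());
            });
            worker.start(); // start() schedules the thread; run() executes concurrently
        }
    }
}

In practice the speedup depends on how independent the slices are and on the number of available CPU cores.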


What do thread-safe and thread-unsafe mean?

Put simply: code that locks is thread-safe, and code that does not lock is thread-unsafe.

Thread safety

Thread safety: when multiple threads access shared data, a locking mechanism is used. While one thread is accessing a piece of the class's data, that data is protected and cannot be accessed by other threads; the other threads can use the data only after the first thread has finished with it. As a result, there is no data inconsistency or data contamination.

The same instance of a thread-safe counter class can be used by multiple threads without miscounting. Collection classes can clearly be split into two groups, thread-safe and non-thread-safe: Vector is thread-safe because it uses synchronized methods, whereas ArrayList, its counterpart, is not thread-safe.

Thread unsafety

Thread unsafety: no protection is provided for data access, so multiple threads may modify the data one after another, resulting in dirty data.

If your code runs in a process where multiple threads execute it at the same time, those threads may run the code simultaneously. If every run produces the same result as a single-threaded run, and the values of all other variables are as expected, the code is thread-safe.

Thread-safety issues are caused by global and static variables. If every thread only reads a global or static variable and never writes to it, the variable is generally thread-safe. If multiple threads perform write operations at the same time, you generally need to consider thread synchronization, otherwise thread safety may be compromised.
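To make the counter example above concrete, here is a hedged sketch: UnsafeCounter can lose updates under concurrent access, while SafeCounter uses synchronized so that every increment takes effect (both class names are mine, used only for illustration):

class UnsafeCounter {
    private int count = 0;

    // count++ is a read-modify-write sequence; two threads can read the same
    // value and both write back count + 1, losing one increment.
    public void increment() { count++; }

    public int get() { return count; }
}

class SafeCounter {
    private int count = 0;

    // synchronized provides mutual exclusion and visibility,
    // so every increment is applied exactly once.
    public synchronized void increment() { count++; }

    public synchronized int get() { return count; }
}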


What is a spin lock?

The basic concept

A spin lock is a low-level synchronization mechanism used on SMP (symmetric multiprocessing) architectures.

When thread A wants to acquire a spin lock that is held by another thread, thread A spins in a loop, repeatedly checking whether the lock has become available.

Note:

- Since the CPU is not released while spinning, a thread holding a spin lock should release it as soon as possible; otherwise threads waiting for the spin lock will keep spinning and waste CPU time.

- A thread holding a spin lock should release it before sleeping so that other threads can acquire it.

Implementing a spin lock

Reference: https://segmentfault.com/q/1010000000530936

A simple while loop will do the trick.

Current JVM implementations spin on the CPU: if doNotify is not called for a long time, the doWait method keeps spinning, which can consume a lot of CPU.

public class MyWaitNotify3 {

    MonitorObject myMonitorObject = new MonitorObject();
    boolean wasSignalled = false;

    public void doWait() {
        synchronized (myMonitorObject) {
            while (!wasSignalled) {
                try {
                    myMonitorObject.wait();
                } catch (InterruptedException e) { ... }
            }
            // Clear the signal and continue running.
            wasSignalled = false;
        }
    }

    public void doNotify() {
        synchronized (myMonitorObject) {
            wasSignalled = true;
            myMonitorObject.notify();
        }
    }
}
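Note that the code above is really a guarded wait built on wait/notify rather than a true spin lock. As a hedged sketch of an actual spin lock, one common approach is to spin on an AtomicBoolean with compareAndSet (the SpinLock class below is my own illustration, not taken from the referenced answer):

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait (spin) until we atomically flip the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            // keep spinning; on JDK 9+ Thread.onSpinWait() can be called here
        }
    }

    public void unlock() {
        locked.set(false);
    }
}

Because the waiting thread never blocks, such a lock should only protect very short critical sections.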


What is the Java Memory Model?

The Java Memory Model describes which behaviors are legal in multithreaded code and how threads may interact through memory. It describes the relationship between variables in a program and the low-level details of storing them to and retrieving them from memory or registers, and it does so in a way that still allows a variety of hardware and compiler optimizations to be applied correctly.

Java includes several language-level keywords, including volatile, final, and synchronized, to help programmers describe the concurrency requirements of a program to the compiler. The Java Memory Model defines the behavior of volatile and synchronized and, more importantly, ensures that a correctly synchronized Java program runs correctly on all processor architectures.

Visibility problems, where the writes of one thread are not seen by other threads, can be caused by the compiler reordering code. For example, the compiler may move a write operation later in the program when it believes doing so is more efficient, as long as the change does not alter the program's semantics. If the compiler delays an operation, other threads will not see its result until it actually completes; caching has a similar effect.

In addition, writes to memory can be moved to an earlier point in the program. In that case, other threads may see a write earlier than it appears in the source. All of this flexibility is by design: it gives the compiler, runtime, and hardware the freedom to perform operations in the best possible order, and within the limits of the memory model we can achieve higher performance.

Take a look at the following code for a simple example:

class Reordering {

    int x = 0, y = 0;

    public void writer() {
        x = 1;
        y = 2;
    }

    public void reader() {
        int r1 = y;
        int r2 = x;
    }
}

Suppose this code is executed by two concurrent threads and the read of y yields the value 2. Because the write to y comes after the write to x, the programmer might assume that the read of x must then yield 1. However, the writes may have been reordered. If reordering occurs, the write to y can happen first, then both reads, and then the write to x. The result can be that r1 holds the value 2 while r2 holds the value 0.
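One common way to rule out the r1 == 2, r2 == 0 outcome is to declare y volatile. Under the Java Memory Model, the write to x then happens-before the volatile write to y, which happens-before any volatile read that sees y == 2, so that reader is guaranteed to see x == 1. A minimal sketch (my own variation on the example above, not part of the original):

class OrderedReordering {
    int x = 0;
    volatile int y = 0;   // volatile write/read creates a happens-before edge

    public void writer() {
        x = 1;            // happens-before the volatile write below
        y = 2;            // volatile write
    }

    public void reader() {
        int r1 = y;       // volatile read
        int r2 = x;       // if r1 == 2, then r2 is guaranteed to be 1
    }
}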

However, interviewers sometimes mean something different by this question: the JVM memory structure.

The JVM memory structure has three main blocks: heap memory, method area, and stack.

Heap memory is the largest area of the JVM and consists of the young generation and the old generation. The young generation is divided into three parts: Eden space, From Survivor space, and To Survivor space, allocated by default in an 8:1:1 ratio. The method area stores class information, constants, static variables, and other data; it is shared by threads. To distinguish it from the Java heap, the method area is also nicknamed "non-heap". The stack is divided into the Java virtual machine stack and the native method stack, and is mainly used for method execution.

JVM memory can be divided into three main areas: heap, stack, and method area.

Java Heap

The heap size can be set with the -Xms and -Xmx parameters.

1. The Java heap is shared by all threads and is the largest chunk of memory managed by the JVM.

2. The only purpose of the Java heap is to store object instances; almost all object instances and arrays are allocated here.

3. The Java heap can be subdivided into the young generation and the old generation for better memory reclamation and allocation; in more detail, into the Eden space, From Survivor space, and To Survivor space.

- Young generation: includes the Eden area, From Survivor area, and To Survivor area. The default ratio of Eden to a Survivor space is 8:1.

- Old generation: objects that survive N garbage collections in the young generation are moved to the old generation, so the tenured generation can be thought of as holding objects with long life cycles.

The Java heap can occupy physically non-contiguous memory as long as it is logically contiguous (much like our disk space), and it can be implemented with either a fixed size or an expandable size.

According to the Java Virtual Machine specification, an OutOfMemoryError is thrown when the heap cannot satisfy a memory allocation request.

Java Virtual Machine Stack

The stack frame is the basic data structure of method execution at run time. The stack size can be set with -Xss.

1. The Java virtual machine stack is thread-private and has the same lifetime as the thread.

2. Each method invocation, from the moment it is called until it completes, corresponds to a stack frame being pushed onto and popped off the VM stack.

3. The VM stack is the memory model for executing Java methods (that is, bytecode): each method execution creates a stack frame, which stores the local variable table, operand stack, dynamic link, method exit, and other information.

- Local variable table: composed of 32-bit variable slots that store the various primitive data types, object references, and returnAddress types known at compile time.

- Operand stack: the virtual machine's execution engine is stack-based and uses the operand stack as its workspace; most instructions pop data off it, perform an operation, and push the result back onto it.

- Dynamic linking: each stack frame contains a reference, in the runtime constant pool (part of the method area), to the method the stack frame belongs to. This reference is held to support dynamic linking during method calls. The Class file constant pool contains a large number of symbolic references, and the method invocation instructions in the bytecode take symbolic references to methods in the constant pool as arguments. Some of these symbolic references are converted to direct references during class loading or on first use; this is called static resolution. The rest are converted to direct references on each run; this is called dynamic linking.

- Method exit: returns to the location where the method was called, restores the local variable table and operand stack of the calling method, and, if there is a return value, pushes it onto the caller's operand stack.

1. The memory space required by the local variable table is allocated at compile time; when a method is entered, the amount of local variable space the method needs in its frame is completely determined.

2. The size of the local variable table does not change while the method is running. It mainly stores the primitive data types, object references, and returnAddress types known at compile time.


The specification defines two exceptions for the Java virtual machine stack:

1. A StackOverflowError is raised if the requested stack depth of a thread is greater than the depth allowed by the virtual machine.

2. An OutOfMemoryError is raised if the VM stack is dynamically expanded and sufficient memory cannot be allocated during the expansion.

Local method stack

The stack size can be set with -Xss.

1. The VM stack serves the execution of Java methods (that is, bytecode).

2. The native method stack serves the Native methods used by the VM. Some virtual machines, such as the Sun HotSpot virtual machine, simply combine the native method stack with the virtual machine stack.

Method Area

The size can be set with -XX:MaxPermSize.

1. The method area is a memory region shared by threads; it stores the class information, constants, and static variables that have been loaded by the VM. This area is also called the Permanent Generation.


2. Although the Java Virtual Machine specification describes the method area as a logical part of the heap, it has the alias "non-heap", which is meant to distinguish it from the Java heap.


3. How the method area is implemented is an implementation detail of the VM and is not constrained by the VM specification.


4. The method area mainly stores Java class definition information and has relatively little to do with garbage collection. A VM may choose not to implement garbage collection in the method area, but that does not mean the area is never collected.


5. The targets of memory reclamation in the method area are the constant pool and the unloading of types.


6. The runtime constant pool is also part of the method area. After the VM loads a Class, the data in its constant pool is placed into the runtime constant pool.


Run-time constant pool

Up to JDK 1.6, the string constant pool was in the method area; in JDK 1.7 it was moved to the heap.

The size can be set with the -XX:PermSize and -XX:MaxPermSize parameters.

- Constant pool: constant pool data is determined at compile time and is part of the Class file. It stores the constants in classes, methods, and interfaces, as well as string constants.

- String pool / string constant pool: a part of the constant pool that stores the String data generated in classes at compile time.

- Runtime constant pool: part of the method area and shared by all threads. The virtual machine loads the Class and puts the data from its constant pool into the runtime constant pool. The constant pool can be thought of as a repository of resources in a Class file; it is the data type in the Class file structure most closely associated with the other parts of the file.

1. The constant pool stores two types of constants: literals and symbolic references.


2. Literals: text strings, constant values declared as final, and so on.

3. Symbolic references: fully qualified names of classes and interfaces, field names and descriptors, method names and descriptors.


Direct memory

Direct memory can be specified with -XX:MaxDirectMemorySize; if not set, it defaults to the maximum Java heap size (specified by -Xmx).

Direct memory is not part of the run-time data areas of the virtual machine, nor is it defined in the Java Virtual Machine specification, but it is used frequently and can also cause an OutOfMemoryError.
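For reference, a typical launch command combining the sizing flags mentioned above might look like the following; the sizes and the main class are placeholders, and -XX:MaxPermSize only applies to JDK 7 and earlier (the permanent generation was removed in JDK 8):

java -Xms512m -Xmx2g -Xss1m -XX:MaxPermSize=256m -XX:MaxDirectMemorySize=512m com.example.Main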

Let me summarize this a little more simply.

Java Heap

The heap size can be set with the -Xms and -Xmx arguments.

1. The Java heap is shared by all threads and is the largest chunk of memory managed by the JVM.

2. The only purpose of the Java heap is to store object instances; almost all object instances and arrays are allocated here.

3. The Java heap can be subdivided into the young generation and the old generation for better memory reclamation and allocation; in more detail, into the Eden space, From Survivor space, and To Survivor space.

- Young generation: includes the Eden area, From Survivor area, and To Survivor area. The default ratio of Eden to a Survivor space is 8:1.

- Old generation: objects that survive N garbage collections in the young generation are moved to the old generation, so the tenured generation can be thought of as holding objects with long life cycles.

Java Virtual Machine Stack

The stack frame is the basic data structure of method execution at run time. The stack size can be set with -Xss.

1. The Java virtual machine stack is thread-private and has the same lifetime as the thread.

2. Each method invocation, from the moment it is called until it completes, corresponds to a stack frame being pushed onto and popped off the VM stack.

3. The VM stack is the memory model for executing Java methods (that is, bytecode): each method execution creates a stack frame, which stores the local variable table, operand stack, dynamic link, method exit, and other information.

Method Area

The size can be set with -XX:MaxPermSize.

1. The method area is a memory region shared by threads; it stores the class information, constants, and static variables loaded by the VM, that is, the code compiled by the compiler. The method area is also called the Permanent Generation.

2. The method area mainly stores Java class definition information and has relatively little to do with garbage collection. A VM may choose not to implement garbage collection in the method area, but that does not mean the area is never collected.

3. The memory reclamation targets of the method area are mainly the collection of the constant pool and the unloading of types.

4. The runtime constant pool is also part of the method area. After the VM loads a Class, the data in its constant pool is placed into the runtime constant pool.


What is CAS?

CAS is short for compare-and-swap.

In Java, CAS is implemented through native code (via the Java Native Interface, JNI) that ultimately executes the CPU's CMPXCHG instruction, rather than through JVM-level locking.

Using the CPU's CAS instruction, together with JNI, Java implements non-blocking algorithms and atomic operations. The other atomic operations are implemented in a similar way.

The entire java.util.concurrent package is built on top of CAS, so J.U.C offers a significant performance boost over blocking algorithms based on synchronized.

CAS is an optimistic locking technique. When multiple threads attempt to update the same variable with CAS, only one thread succeeds in updating the value; all the other threads fail (and typically retry).

How CAS is applied

A CAS operation has three operands: the memory value V, the expected old value A, and the new value B to write. The memory value V is changed to B if and only if the expected value A equals the memory value V; otherwise nothing is done.
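As a hedged sketch of how CAS is used in practice, here is an atomic increment built on AtomicInteger.compareAndSet; the retry loop is the standard optimistic pattern, and the class name is my own:

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = value.get();   // read the memory value V
            int next = current + 1;      // compute the new value B
            // compareAndSet succeeds only if V still equals the expected value A
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // another thread won the race; re-read and retry
        }
    }
}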

Advantages of CAS

It ensures that a read-modify-write sequence on a memory location is performed as an atomic operation.

Shortcomings of CAS

Although CAS solves atomic updates very efficiently, it still has three problems: the ABA problem, long spin times with high CPU overhead, and the fact that it can only guarantee atomicity for a single shared variable.
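For the ABA problem specifically, the JDK provides AtomicStampedReference, which pairs the reference with a version stamp so that an A -> B -> A change is still detected. A minimal sketch (the variable names and values are mine):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0);   // initial value 100, stamp 0

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);         // read value and stamp together

        // The CAS succeeds only if both the value AND the stamp are unchanged.
        boolean updated = ref.compareAndSet(current, 101,
                stampHolder[0], stampHolder[0] + 1);
        System.out.println("updated = " + updated + ", value = " + ref.getReference());
    }
}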

Conclusion

1. Using CAS can greatly reduce program performance when thread contention is severe; CAS is only suitable when contention is low.

2. synchronized has been improved and optimized since JDK 1.6. Its underlying implementation mainly relies on a lock-free queue; the basic idea is to spin first and then block, and to keep competing for the lock after a context switch, sacrificing a little fairness in exchange for high throughput. With low thread contention it achieves performance similar to CAS; with severe contention, its performance is much higher than that of CAS.


Above are the answers to the first five questions. If you need the answers to the remaining questions, follow me and send a private message with the reply "Java thread interview".

You can also join the group 650385180; the answers are in the group's shared area and are free to download.

Below is my summary of the multithreading knowledge points you should master; learning materials are also available in the group.