Preface
A personal collection of 80 classic Java multithreading/concurrency interview questions; this installment gives the answer analysis for questions 11-20, and it is also uploaded to GitHub ~
Github.com/whx123/Java…
Personal collection of 80 multithreading and concurrency questions (1-10 answer analysis)
11. Why use thread pools? Java thread pool internals, the role of each parameter, the types of work blocking queues, and thread pool types with usage scenarios
Answer these points:
- Why thread pools?
- Java thread pool principle
- Thread pool core parameters
- Several work blocking queues
- Thread pool misuse
- Thread pool types and usage scenarios
Why thread pools?
Thread pool: a pool that manages threads.
- Managing threads avoids the resource cost of repeatedly creating and destroying threads.
- Improves response speed: tasks can run on already-created threads instead of waiting for thread creation.
- Enables thread reuse.
Java thread pool execution principle
To visualize how a thread pool executes, use an analogy:
- Core threads are like the company's regular employees
- Non-core threads are like outsourced employees
- The blocking queue is like a requirements pool
- Submitting a task is like raising a requirement
When a requirement (task) arrives: if regular employees (core threads) are still available, one of them handles it; if all regular employees are busy, the requirement waits in the requirements pool (blocking queue); if the requirements pool is full too, outsourced employees (non-core threads) are brought in, up to maximumPoolSize; and if even they are all busy, the rejection policy kicks in.
Thread pool core parameters
```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
```
- corePoolSize: the maximum number of core threads in the thread pool
- maximumPoolSize: the maximum total number of threads in the thread pool
- keepAliveTime: how long idle non-core threads are kept alive
- unit: the time unit of keepAliveTime
- workQueue: the blocking queue that holds waiting tasks
- threadFactory: the factory used to create threads; giving threads meaningful names makes troubleshooting easier
- handler: the thread pool's saturation policy. There are four rejection policies.
Four rejection strategies
- AbortPolicy (throws an exception; the default)
- DiscardPolicy (silently discards the task)
- DiscardOldestPolicy (discards the oldest task in the queue and resubmits the current task to the thread pool)
- CallerRunsPolicy (the task is executed by the calling thread that submitted it)
Several work blocking queues
- ArrayBlockingQueue (a bounded blocking queue backed by an array, FIFO ordered)
- LinkedBlockingQueue (a blocking queue backed by a linked list, FIFO ordered; its capacity can optionally be set, otherwise it is effectively unbounded)
- DelayQueue (an unbounded blocking queue whose elements can only be taken once their delay has expired; used for delayed and periodic task scheduling)
- PriorityBlockingQueue (an unbounded blocking queue with priority ordering)
- SynchronousQueue (a blocking queue that stores no elements; each insert must wait for another thread's remove, otherwise the insert stays blocked)
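For concreteness, here is a minimal sketch of constructing a ThreadPoolExecutor with these parameters; the pool sizes, queue capacity, and thread-name prefix are illustrative choices, not values prescribed by this article:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static void main(String[] args) {
        AtomicInteger threadNo = new AtomicInteger(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5,                                    // corePoolSize
                10,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit for non-core threads
                new ArrayBlockingQueue<>(100),        // bounded work queue
                r -> new Thread(r, "biz-pool-" + threadNo.getAndIncrement()), // meaningful names
                new ThreadPoolExecutor.CallerRunsPolicy()                     // saturation policy
        );
        pool.execute(() ->
                System.out.println("task running in " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```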
Thread pool misuse
Improper use of thread pools can cause memory problems. For example, Executors.newFixedThreadPool uses an unbounded LinkedBlockingQueue: if tasks arrive faster than they are consumed, the queue grows without limit and memory surges.
If you are interested, read this article: Source code analysis — the newFixedThreadPool memory surge problem
Thread pool types and usage scenarios
- newFixedThreadPool
Suitable for CPU-intensive tasks: with a fixed thread count, long-running work gets a stable, limited number of threads rather than the pool spawning ever more of them. In other words, it suits long-term tasks.
- newCachedThreadPool
Used to perform a large number of small, short-term tasks concurrently.
- newSingleThreadExecutor
This mode applies to the scenario where tasks are executed sequentially, one by one.
- newScheduledThreadPool
Scenarios where tasks are executed periodically and the number of threads needs to be limited
- newWorkStealingPool
Creates a pool with enough threads to maintain the given parallelism level; through work stealing it keeps multi-core CPUs from sitting idle. Under the hood it is essentially a ForkJoinPool.
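As a reference point, here is a sketch of what these factory methods wire up internally (the comments reflect the JDK implementation; the pool sizes passed in are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class FactoryDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(4);      // core = max = 4, unbounded LinkedBlockingQueue
        ExecutorService cached = Executors.newCachedThreadPool();     // core = 0, max = Integer.MAX_VALUE, SynchronousQueue, 60s keep-alive
        ExecutorService single = Executors.newSingleThreadExecutor(); // core = max = 1, unbounded LinkedBlockingQueue
        ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // DelayedWorkQueue for timed tasks
        ExecutorService stealing = Executors.newWorkStealingPool();   // a ForkJoinPool, parallelism = available cores

        fixed.shutdown(); cached.shutdown(); single.shutdown(); sched.shutdown(); stealing.shutdown();
    }
}
```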
For more detail, see: Java thread pool parsing
12. Talk about your understanding of the volatile keyword
Volatile is an interviewer favorite; you can answer along the following points:
- The function of volatile variables
- The memory model of modern computers (bus snooping, the MESI protocol, the bus)
- Java Memory Model (JMM)
- What is visibility?
- Instruction reordering
- Memory semantics for volatile
- as-if-serial
- Happens-before
- Does volatile solve atomicity? Why is that?
- Underlying principles of volatile, how to ensure visibility and forbid instruction rearrangements (memory barriers)
What does a volatile variable do?
- Ensures the variable is visible to all threads
- Forbids instruction reordering
The memory model of modern computers
- The caches include L1, L2 and L3 caches ~
- For the cache coherence protocol, you can look into the MESI protocol.
- The bus is the common communication trunk through which the CPU and the other functional components of the computer communicate.
- Each processor snoops the bus to keep its internal cache consistent with system memory and with the caches of other processors.
Java Memory Model (JMM)
The JMM defines an abstract relationship between threads and main memory: shared variables live in main memory, each thread has its own local working memory holding copies of the shared variables it uses, and all of a thread's reads and writes go through its local memory.
What is visibility?
Visibility means that when one thread modifies a shared variable, another thread can read the changed value.
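A minimal sketch of the visibility problem that volatile solves; the class and field names here are made up for illustration. Without volatile on the flag, the reader thread may cache the stale value and spin forever on some JVMs:

```java
public class VisibilityDemo {
    // remove `volatile` and this program may never terminate
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; with volatile, the write below becomes visible here
            }
            System.out.println("reader saw running = false, exiting");
        });
        reader.start();
        Thread.sleep(1000);
        running = false; // volatile write: flushed to main memory
    }
}
```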
Instruction reordering
Instruction reordering is when instructions may be reordered by the compiler and CPU during program execution to improve performance.
Memory semantics for volatile
- When a volatile variable is written, the JMM flushes the value of the shared variable from the thread's local memory to main memory.
- When a volatile variable is read, the JMM invalidates the thread's local memory, so the thread next reads the shared variable from main memory.
as-if-serial
Viewed from within a single thread, all operations appear to execute in order; that is, no matter how much reordering is done (by the compiler and processor for better parallelism), the result of a single-threaded program must not change.
```java
double pi = 3.14;         // A
double r = 1.0;           // B
double area = pi * r * r; // C
```
Step C depends on steps A and B. Because of instruction reordering, the execution order may be A->B->C or B->A->C, but C can never execute before A or B, which would violate as-if-serial semantics.
Happens-before
In the Java language, there is a happens-before principle:
- Program order rule: within a single thread, an action written earlier happens-before an action written later, following control-flow order.
- Monitor lock rule: an unlock of a lock happens-before a subsequent lock of that same lock.
- Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable.
- Thread start rule: a call to Thread.start() happens-before every action of the started thread.
- Thread termination rule: every action in a thread happens-before the detection of that thread's termination; termination can be detected by Thread.join() returning or Thread.isAlive() returning false.
- Thread interruption rule: a call to Thread.interrupt() happens-before the point where the interrupted thread's code detects the interrupt.
- Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method.
- Transitivity: if operation A happens-before B and B happens-before C, then A happens-before C.
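A small sketch of the volatile rule combined with the program order rule and transitivity; the class and variable names are illustrative:

```java
public class HappensBeforeDemo {
    static int data = 0;
    static volatile boolean ready = false;

    public static void main(String[] args) {
        new Thread(() -> {
            data = 42;      // A: plain write, happens-before B by program order
            ready = true;   // B: volatile write
        }).start();

        new Thread(() -> {
            while (!ready) { /* C: volatile read, spin until B is visible */ }
            // A hb B (program order), B hb C (volatile rule), so A hb C (transitivity):
            System.out.println(data); // guaranteed to print 42
        }).start();
    }
}
```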
Does volatile solve atomicity? Why is that?
No. i++ is the classic example: it is a compound read-modify-write operation, and volatile cannot make it atomic. Atomicity can be ensured with synchronized, Lock, or atomic classes such as AtomicInteger.
```java
public class Test {
    public volatile int race = 0;

    public void increase() {
        race++; // not atomic: read, add, write
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 100; j++)
                        test.increase();
                }
            }.start();
        }
        // wait until only the main thread is left
        while (Thread.activeCount() > 1)
            Thread.yield();
        // often prints less than 1000, showing race++ is not atomic
        System.out.println(test.race);
    }
}
```
Underlying principles of volatile, how to ensure visibility and forbid instruction rearrangements (memory barriers)
When compiled to assembly, a write to a volatile variable gets an extra lock-prefixed instruction. This lock prefix acts as a memory barrier that guarantees the following:
- 1. Subsequent instructions are not reordered to before the memory barrier.
- 2. The current CPU's cache is written back to memory.
- 3. For a write operation, the corresponding cache lines in other processors are invalidated.
Points 2 and 3 ensure visibility; point 1 forbids instruction reordering ~
If you are interested, you can read this article: Java programmer interview essential: Volatile all-round parsing
13. AQS components and implementation principles
AQS, i.e. AbstractQueuedSynchronizer, is the foundational framework for building locks and other synchronization components. It uses an int member variable to represent synchronization state and a built-in FIFO queue to manage which threads acquire the resource. Answer around these key points:
- Maintenance of the state variable
- The CLH queue
- ConditionObject notification
- The template method design pattern
- Exclusive and shared modes
- Custom synchronizers
- Extensions in the AQS family, such as ReentrantLock, etc.
State maintenance
- state is an int representing the synchronization (lock) state; being volatile, it stays visible across threads.
- getState() and setState() are final, which prevents AQS subclasses from overriding them.
- compareAndSetState() updates state with a CAS operation, embodying the optimistic-locking idea, guaranteeing thread safety and the atomicity of state updates.
CAS optimistic locking is a common practice for solving concurrency problems, as in the sketch below.
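A minimal sketch of the CAS idea using AtomicInteger, analogous to (though not the actual implementation of) AQS's compareAndSetState():

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger state = new AtomicInteger(0);
        // succeeds only if the current value is still 0 (no other thread changed it)
        boolean acquired = state.compareAndSet(0, 1);
        System.out.println("first CAS: " + acquired);                    // true
        // a second attempt fails because state is now 1
        System.out.println("second CAS: " + state.compareAndSet(0, 1));  // false
    }
}
```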
CLH queue
A CLH (Craig, Landin, and Hagersten) synchronization queue is a FIFO doubly linked queue; it tracks the first and last elements through head and tail nodes, and its elements are of type Node. AQS relies on this queue to manage synchronization state: when the current thread fails to acquire the synchronization state, AQS packages the thread and its wait-state information into a Node, appends it to the tail of the CLH queue, and blocks the thread. When the synchronization state is released, the thread in the first node is woken up (fair order) to retry acquiring the state.
ConditionObject notice
As we all know, synchronized can cooperate with Object's wait(), notify(), and notifyAll() methods to implement the wait/notify pattern. What about Lock? It provides the Condition interface, whose await(), signal(), signalAll() and other methods implement the same mechanism. ConditionObject implements the Condition interface and provides condition variables for AQS.
How ConditionObject cooperates with the CLH queue:
- A thread that calls await() is added to the ConditionObject's wait queue; releasing the lock wakes up the successor of the head node in the CLH queue.
- When a thread calls signal() on a ConditionObject, that condition's firstWaiter is moved into the AQS CLH queue.
- When a thread calls unlock() to release the lock, the successor of the head node in the CLH queue (here, the former firstWaiter) is woken up.
Template method design patterns
What is a template design pattern?
Define the skeleton of an algorithm in a method, deferring some steps to subclasses. The template method lets subclasses redefine certain steps of an algorithm without changing its overall structure.
AQS is a typical application of the template method pattern, and its derived implementations (ReentrantLock, Semaphore) embody it. AQS provides template hooks such as tryAcquire and tryAcquireShared for subclasses to implement custom synchronizers.
Exclusive and share mode
- Exclusive mode: only one thread can hold the synchronization state at a time, e.g. ReentrantLock, which further divides into fair and non-fair locks.
- Shared mode: multiple threads can hold the synchronization state simultaneously, e.g. Semaphore and CountDownLatch are products of shared mode.
Custom synchronizer
To implement a custom lock, first decide whether it should be exclusive or shared, define what the atomic state variable means, define an inner class that inherits AQS, and override the corresponding template methods, as in the sketch below.
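A minimal sketch of such a custom synchronizer: a non-reentrant exclusive lock. The Mutex name and structure follow the well-known pattern from the AQS javadoc, not code from this article:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A non-reentrant exclusive lock: state 0 = unlocked, state 1 = locked.
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            if (compareAndSetState(0, 1)) {              // CAS: atomic state transition
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;                                // failure: AQS enqueues the thread
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setExclusiveOwnerThread(null);
            setState(0);                                 // volatile write publishes the release
            return true;                                 // AQS then wakes the first queued node
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }       // template method: blocks on failure
    public void unlock()      { sync.release(1); }
    public boolean tryLock()  { return sync.tryAcquire(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```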
Some extensions in the AQS family
Semaphore, CountDownLatch, ReentrantLock
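A brief sketch of the two modes in action, assuming nothing beyond the standard JDK classes: Semaphore (shared mode) lets up to N threads in at once, while CountDownLatch (also shared mode) releases waiters once the count reaches zero:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class AqsFamilyDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(2);     // at most 2 permits held concurrently
        CountDownLatch latch = new CountDownLatch(3);

        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire();            // shared mode: blocks if 2 permits are out
                    System.out.println(Thread.currentThread().getName() + " got a permit");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();
                    latch.countDown();              // signal one unit of work done
                }
            }).start();
        }
        latch.await();                              // main thread waits for all 3 workers
        System.out.println("all workers done");
    }
}
```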
You can also read my earlier article: AQS analysis and practice
14. What is false sharing in a multi-threaded environment?
- What is pseudo-sharing
- How to solve the fake sharing problem
What is false sharing
Definition of false sharing:
The CPU caches data in units of cache lines. When multiple threads modify independent variables that happen to sit in the same cache line, they inadvertently hurt each other's performance. This is false sharing.
Remember the modern computer memory model, covered in the volatile discussion above?
- CPU execution speed is several orders of magnitude faster than memory speed. In order to improve execution efficiency, modern computer models evolve CPU, cache (L1, L2, L3), and memory model.
- When the CPU performs an operation, if it first queries data from L1 cache and fails to find it, it searches L2 cache, and so on until it obtains data from memory.
- To avoid fetching data from memory too frequently, cache lines were designed, typically 64 bytes in size.
It is precisely because of cache lines that the false sharing problem exists. Consider the following scenario:
Assume that data A and B are loaded into the same cache row.
- When thread 1 changes the value of A, CPU1 notifies the other CPU cores that their copies of this cache line are now invalid.
- If thread 2 then changes the value of B, its core must re-read the cache line from main memory because its copy was invalidated; after modifying B, CPU2 in turn notifies the other CPU cores that their copies of the cache line are invalid.
- If multiple threads keep reading and writing the contents of the same cache line like this, the cores constantly invalidate each other and write back to main memory, which can greatly degrade performance.
How to solve the false sharing problem
Since false sharing results from independent variables stored on the same cache line, and a cache line is 64 bytes, we can trade space for time, i.e. pad the data so that the independent variables land on different cache lines ~
False sharing demo:
```java
public class FalseShareTest {
    public static void main(String[] args) throws InterruptedException {
        Rectangle rectangle = new Rectangle();
        long beginTime = System.currentTimeMillis();
        Thread thread1 = new Thread(() -> {
            for (int i = 0; i < 100000000; i++) {
                rectangle.a = rectangle.a + 1;
            }
        });
        Thread thread2 = new Thread(() -> {
            for (int i = 0; i < 100000000; i++) {
                rectangle.b = rectangle.b + 1;
            }
        });
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println("Run time: " + (System.currentTimeMillis() - beginTime));
    }
}

class Rectangle {
    volatile long a;
    volatile long b;
}
```
Running results:
```
Run time: 2815
```
A long is 8 bytes. What if we pad 7 long variables between a and b so they land on different cache lines? Modified as follows:
```java
class Rectangle {
    volatile long a;
    long a1, a2, a3, a4, a5, a6, a7; // padding: pushes b onto a different cache line
    volatile long b;
}
```
Running results:
```
Run time: 1113
```
As you can see, padding the data so that the frequently written variables fall on different cache lines improves performance considerably ~
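For reference, since Java 8 the JVM can do the padding for you via the @Contended annotation. A sketch assuming Java 8, where the annotation lives in sun.misc (jdk.internal.vm.annotation in Java 9+); it only takes effect outside java.* packages with the -XX:-RestrictContended flag:

```java
// Run with: java -XX:-RestrictContended ...
class PaddedRectangle {
    @sun.misc.Contended
    volatile long a;

    @sun.misc.Contended
    volatile long b;
}
```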
What’s the difference between Runnable and Callable?
- The Callable interface method is call() and the Runnable method is run();
- The Call method of the Callable interface has a return value and supports generics. The Run method of the Runnable interface has no return value.
- The Call () method of the Callable interface allows exceptions to be thrown; The Runnable interface run() method cannot continue to throw exceptions;
```java
@FunctionalInterface
public interface Callable<V> {
    /**
     * Has a return value, supports the generic type V, and can throw exceptions
     */
    V call() throws Exception;
}

@FunctionalInterface
public interface Runnable {
    /**
     * No return value and cannot throw checked exceptions
     */
    public abstract void run();
}
```
Take a look at the demo code; it should make the difference easier to understand ~
```java
import java.util.concurrent.*;

public class CallableRunnableTest {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(5);

        Callable<String> callable = new Callable<String>() {
            @Override
            public String call() throws Exception {
                return "Hello, callable";
            }
        };
        // submit() returns a generic Future carrying call()'s result
        Future<String> futureCallable = executorService.submit(callable);
        try {
            System.out.println(futureCallable.get());
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }

        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println("Hello, runnable");
            }
        };
        Future<?> futureRunnable = executorService.submit(runnable);
        try {
            System.out.println(futureRunnable.get()); // null: run() returns nothing
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        executorService.shutdown();
    }
}
```
Running results:
```
Hello, callable
Hello, runnable
null
```
16. The difference between wait()/notify() and suspend()/resume()
- wait() puts the current thread into a blocked waiting state and releases the lock it holds.
- notify() wakes up one thread in the waiting state; it is usually used together with wait().
- suspend() puts a thread into a blocked state from which it does not recover automatically; resume() must be called before the thread becomes runnable again. suspend() can easily cause deadlock problems.
- resume() is used together with suspend().
suspend() is deprecated and not recommended: after it is called, the thread does not release the resources (such as locks) it holds; it sleeps while still holding them, which can easily lead to deadlock.
17. The Condition interface and its implementation principles
- Comparison of the Condition interface with Object's monitor methods
- Condition interface usage demo
- How Condition is implemented
Comparison of the Condition interface with Object's monitor methods
Java's Object provides the monitor methods wait(), notify(), and notifyAll(), which together with synchronized implement the wait/notify pattern. The Condition interface, used together with Lock, implements a similar wait/notify mechanism via await(), signal(), signalAll(), and so on.
| Comparison item | Object monitor methods | Condition |
| --- | --- | --- |
| Precondition | Hold the object's lock | Call Lock.lock() to acquire the lock, then Lock.newCondition() to obtain a Condition |
| Invocation | Call object.wait() directly | Call condition.await() directly |
| Number of wait queues | One | Multiple |
| Release the lock and enter the wait state | Supported | Supported |
| Wait without responding to interrupts | Not supported | Supported |
| Release the lock and enter a timed wait state | Supported | Supported |
| Release the lock and wait until a future point in time | Not supported | Supported |
| Wake one thread in the wait queue | Supported | Supported |
| Wake all threads in the wait queue | Supported | Supported |
Condition interface usage demo
```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionTest {
    Lock lock = new ReentrantLock();
    Condition condition = lock.newCondition();

    public void conditionWait() throws InterruptedException {
        lock.lock();
        try {
            condition.await();   // release the lock and wait
        } finally {
            lock.unlock();
        }
    }

    public void conditionSignal() throws InterruptedException {
        lock.lock();
        try {
            condition.signal();  // wake one waiting thread
        } finally {
            lock.unlock();
        }
    }
}
```
How Condition is implemented
In fact, the node type of both the synchronization queue and the wait queues is the same static inner class Node of AbstractQueuedSynchronizer. Let's walk through the implementation principle of Condition ~
Basic structure of the wait queue
A Condition holds a wait queue, tracking a first node (firstWaiter) and a last node (lastWaiter). When the current thread calls condition.await(), a node is constructed from the current thread and appended to the tail of the wait queue.
AQS structure
Condition is used together with Lock, and its underlying implementation relies on the synchronizer (AQS). A synchronizer has one synchronization queue but can have multiple wait queues ~
Waiting
When await() is called, the node corresponding to the head of the synchronization queue (the node that holds the lock) is moved into the Condition's wait queue.
Notification
Calling Condition's signal() method wakes the longest-waiting node in the wait queue (the first node); before being woken, that node is moved into the synchronization queue.
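To see why multiple wait queues on one lock matter, here is a sketch of the classic bounded buffer with separate notFull/notEmpty conditions, modeled on the example in the Condition javadoc (the class name and capacity are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here
    private final Condition notEmpty = lock.newCondition();  // consumers wait here
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();          // release lock, join notFull's wait queue
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();            // move one waiting consumer to the sync queue
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();
            T x = (T) items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```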
18. How do you tune a thread pool, and how do you determine the maximum number of threads?
In Java Concurrency in Practice, there is a formula for estimating thread pool size:
```
Nthreads = Ncpu × Ucpu × (1 + W/C)
```
- Ncpu = number of CPU cores
- Ucpu = target CPU utilization, between 0 and 1
- W/C = ratio of wait time to compute time
Assuming the CPU runs at 100% utilization (Ucpu = 1), the formula reduces to:
```
Nthreads = Ncpu × (1 + W/C)
```
To estimate, let’s say:
- For I/O intensive applications (such as database data interaction, file upload and download, and network data transfer, etc.), I/O operations generally take time. The ratio of waiting time to computation time (W/C) is greater than 1. Therefore, the optimal number of threads is estimated to be Nthreads=Ncpu* (1+1) = 2Ncpu.
- For CPU-intensive applications (such as programs with complex algorithms), the optimal situation is no wait, w=0, Nthreads=Ncpu. For computationally intensive tasks, on a system with N processors, optimal efficiency is usually achieved when the size of the thread pool is N+1. So Nthreads = Ncpu+1
Want a concrete example?
If the average CPU running time per task is 0.5s, the wait time (non-CPU time such as IO) is 1.5s, and there are 8 CPU cores, then the thread pool size = (1 + 1.5/0.5) × 8 = 32.
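A quick sketch that plugs the example's numbers into the formula (the wait and compute values are the illustrative ones from above):

```java
public class PoolSizeEstimate {
    public static void main(String[] args) {
        int nCpu = Runtime.getRuntime().availableProcessors(); // e.g. 8
        double uCpu = 1.0;      // target CPU utilization (0..1)
        double wait = 1.5;      // avg time a task waits (IO etc.), seconds
        double compute = 0.5;   // avg time a task spends on the CPU, seconds
        int nThreads = (int) (nCpu * uCpu * (1 + wait / compute));
        System.out.println("suggested pool size: " + nThreads); // 8 cores -> 32
    }
}
```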
This online article covers the topic well; interested readers can take a look:
- Determining the number of concurrent threads in a thread pool based on CPU cores
19. If there are three threads, T1, T2 and T3, how can you ensure that T2 is executed after T1 and T3 after T2?
You can solve this problem with the join() method. For example, when thread A calls thread B's join() method, thread A waits (yielding CPU execution) until thread B finishes before continuing.
The code is as follows:
```java
public class ThreadTest {
    public static void main(String[] args) {
        Thread spring = new Thread(new SeasonThreadTask("Spring"));
        Thread summer = new Thread(new SeasonThreadTask("Summer"));
        Thread autumn = new Thread(new SeasonThreadTask("Autumn"));
        try {
            spring.start();   // start the thread
            spring.join();    // the main thread waits for spring to finish
            summer.start();
            summer.join();    // wait for summer to finish
            autumn.start();
            autumn.join();    // wait for autumn to finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

class SeasonThreadTask implements Runnable {
    private String name;

    public SeasonThreadTask(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        for (int i = 1; i < 4; i++) {
            System.out.println(this.name + " has come: " + i);
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```
Running results:
```
Spring has come: 1
Spring has come: 2
Spring has come: 3
Summer has come: 1
Summer has come: 2
Summer has come: 3
Autumn has come: 1
Autumn has come: 2
Autumn has come: 3
```
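Besides join(), a single-threaded executor also guarantees the ordering, since it runs submitted tasks sequentially in submission order; a minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OrderedExecution {
    public static void main(String[] args) {
        ExecutorService single = Executors.newSingleThreadExecutor();
        single.submit(() -> System.out.println("T1"));
        single.submit(() -> System.out.println("T2")); // runs only after T1 completes
        single.submit(() -> System.out.println("T3")); // runs only after T2 completes
        single.shutdown();
    }
}
```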
20. What is LockSupport?
- The role of LockSupport
- How park/unpark differ from wait/notify
- What is the Object blocker for?
LockSupport is a utility class whose main role is to suspend and resume threads; it is a foundation for building locks and other synchronization classes.
```java
public static void park();                                 // suspend the current thread
public static void park(Object blocker);                   // suspend, recording the blocker object
public static void parkNanos(Object blocker, long nanos);  // suspend for at most nanos nanoseconds
public static void parkUntil(Object blocker, long deadline); // suspend until the deadline (epoch millis)
public static void unpark(Thread thread);                  // wake up the given thread
```
Here’s an example:
```java
import java.util.concurrent.locks.LockSupport;

public class LockSupportTest {
    public static void main(String[] args) {
        CarThread carThread = new CarThread();
        carThread.setName("Rolls-Royce");
        carThread.start();
        try {
            Thread.sleep(2000);
            carThread.park();
            Thread.sleep(2000);
            carThread.unPark();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    static class CarThread extends Thread {
        private boolean isStop = false;

        @Override
        public void run() {
            System.out.println(this.getName() + " is running");
            while (true) {
                if (isStop) {
                    System.out.println(this.getName() + " has stopped");
                    LockSupport.park();      // suspend the current thread
                }
                System.out.println(this.getName() + " is still running");
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        public void park() {
            isStop = true;
            System.out.println("Pull over, checking for drunk driving");
        }

        public void unPark() {
            isStop = false;
            LockSupport.unpark(this);        // wake up this thread
            System.out.println("You're not drunk, keep driving");
        }
    }
}
```
Running results:
```
Rolls-Royce is running
Rolls-Royce is still running
Rolls-Royce is still running
Pull over, checking for drunk driving
Rolls-Royce has stopped
You're not drunk, keep driving
Rolls-Royce is still running
Rolls-Royce is still running
...
```
LockSupport's park and unpark work somewhat like wait and notify, but:
- park does not need to acquire an object lock first.
- park does not throw InterruptedException when the thread is interrupted; the caller must check the interrupt status itself.
- With park and unpark you don't need to worry about ordering causing deadlocks: unpark issued before park simply grants a permit (see the sketch after this list).
- LockSupport does not need to be used inside a synchronized block.
- unpark can wake a specific thread, whereas notify wakes an arbitrary waiting thread.
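A minimal sketch of that third point: unpark issued before park grants a permit, so the later park returns immediately instead of blocking:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkOrderDemo {
    public static void main(String[] args) {
        Thread main = Thread.currentThread();
        LockSupport.unpark(main);   // permit granted before park is ever called
        LockSupport.park();         // consumes the permit, returns immediately
        System.out.println("did not block"); // with notify-before-wait this would hang
    }
}
```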
What is the Object blocker for?
It makes it easy to see, in a thread dump, exactly which object a thread is blocked on.
Reference and thanks
- The Art of Concurrent Programming in Java
- What is False sharing?
- Determines the number of concurrent threads in the thread pool based on CPU cores
- Usage and principle of LockSupport
- A discussion of cache lines and false sharing