Preface
Recently quite a few readers have messaged me privately asking whether I could put together a set of multithreading interview questions, saying they had been grilled on the topic in recent interviews and left flustered. I was too busy for a while to reply, but after reading the messages two days ago I spent two days pulling these questions together for everyone. Without further ado, here is the summary!
1. What is the use of multithreading?
A question that may seem ridiculous to many people: as long as I can use multithreading, why should I care what it is for? In my opinion, that attitude has it backwards. As the saying goes, know the how and know the why: being able to use something is only knowing how; understanding why you use it is knowing why, and only with both can you apply a topic with real fluency. OK, here is my take on it:
(1) Give play to the advantages of multi-core CPU
With the progress of the industry, laptops, desktops, and even commercial application servers are now at least dual-core, and 4-core, 8-core, or even 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core CPU. So-called multithreading on a single-core CPU is fake multithreading: the processor handles only one piece of logic at a time, but the threads switch so quickly that it looks like they run simultaneously. Multithreading on a multi-core CPU is real multithreading; it lets several sections of your logic work at the same time, which truly exploits the advantage of multiple cores and makes full use of the CPU.
(2) Prevent blocking
From the point of view of program efficiency, a single-core CPU gains nothing from multithreading; on the contrary, running multiple threads on a single core causes thread context switching and lowers overall efficiency. Yet even on a single-core CPU we still use multithreading, precisely to prevent blocking. Imagine a single-core CPU running a single thread: if that thread blocks, say on a remote read where the peer never returns and no timeout is set, your entire program stops until the data comes back. Multithreading prevents this problem: with multiple threads running, even if the code in one thread blocks on a read, the other tasks keep executing.
(3) Easy to model
This is another advantage that is not so obvious. Suppose you have a big task A. Programming it single-threaded means there is a lot to think about, and building a model for the whole program is troublesome. But if you break the big task A into several small tasks B, C, and D, build a program model for each one, and run them on separate threads, it becomes much simpler.
2. Ways to create threads
A more common problem, generally there are two kinds:
(1) Inherit the Thread class
(2) Implement Runnable interface
As for which is better, it goes without saying that the latter is, because implementing an interface is more flexible than inheriting a class and also reduces coupling between programs. Programming to interfaces is also at the heart of the six principles of design patterns.
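As a minimal sketch of the two ways (the class names MyThread, MyTask, and CreateThreadDemo are invented for illustration):

```java
// (1) Inherit the Thread class and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + getName());
    }
}

// (2) Implement the Runnable interface and hand the task to a Thread
class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

public class CreateThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new MyThread();
        Thread t2 = new Thread(new MyTask());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

A side benefit of the Runnable form: the same task object can also be handed to a thread pool, which a Thread subclass cannot do as cleanly.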
3. The difference between the start() and run() methods
The multithreaded nature is only apparent when the start() method is called, and the code in the run() method is executed alternately in different threads. If only the run() method is called, the code is executed synchronously, and one thread must wait for all the code in its run() method to complete before another thread can execute its run() method.
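A small sketch of this difference (the class name and the "worker" thread name are invented): calling run() directly executes the body in the caller's thread, while start() spawns a new thread to execute it.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class StartVsRun {
    static final List<String> NAMES = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(
                () -> NAMES.add(Thread.currentThread().getName()), "worker");

        t.run();   // executes run() in the current (main) thread; no new thread
        t.start(); // spawns the "worker" thread, which executes run() itself
        t.join();

        System.out.println(NAMES); // first entry is the caller, second is "worker"
    }
}
```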
4. Difference between Runnable interface and Callable interface
A slightly deeper question, and one that also shows the breadth of a Java programmer's knowledge.
The run() method in the Runnable interface returns void; all it does is execute the code in run(), nothing more. The call() method in the Callable interface returns a value, of a generic type, and can be used together with Future and FutureTask to retrieve the result of asynchronous execution.
And this is actually a very useful feature, because one of the big reasons that multithreading is more difficult and more complex than single threading is because multithreading is so unpredictable. Does a thread execute? How long did a thread execute? Is the expected data already assigned when a thread executes? We don’t know. All we can do is wait for the multithreaded task to finish. Callable+Future/FutureTask can retrieve the results of multiple threads. It can cancel the task if it waits too long to retrieve the required data, which is really useful.
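A minimal sketch of that combination (the 100 ms delay, the value 42, and the class name are arbitrary):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> {
            TimeUnit.MILLISECONDS.sleep(100); // simulate slow work
            return 42;
        };

        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();

        // get() blocks until the result is ready; the timed variant (and
        // cancel()) covers the "waited too long" case mentioned above.
        Integer result = future.get(1, TimeUnit.SECONDS);
        System.out.println(result);
    }
}
```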
5. Difference between CyclicBarrier and CountDownLatch
Two similar looking classes, both under java.util.concurrent, can be used to indicate that code is running at a point. The difference is that:
(1) When a CyclicBarrier thread reaches the barrier point it stops running, and no thread resumes until every thread has reached that point; CountDownLatch is different: when a thread reaches the point it simply decrements the count by 1 and keeps running
(2) A CyclicBarrier can trigger only one task (its barrier action), while a CountDownLatch can wake up multiple waiting tasks
(3) A CyclicBarrier is reusable, but a CountDownLatch is not: once its count reaches 0, the CountDownLatch can no longer be used
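A minimal CountDownLatch sketch of this behavior (three workers is an arbitrary choice): each worker decrements the count and keeps running, while the waiting thread blocks until the count hits 0.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... do some work ...
                done.countDown(); // decrement the count and keep running
            }).start();
        }

        done.await(); // main blocks here until the count reaches 0
        System.out.println("all workers finished");
    }
}
```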
6. The role of the volatile keyword
A very important question that every Java programmer who learns and applies multithreading must master. Understanding the use of volatile presupposes an understanding of the Java memory model, which is covered under question 31 below. Briefly, volatile serves two main purposes:
(1) Multithreading revolves mainly around the two properties of visibility and atomicity. A variable declared volatile is guaranteed to be visible across threads, that is, every read of a volatile variable sees the most recent write
(2) At the low level, code execution is not as simple as the high-level language suggests. A Java program runs roughly as: Java code -> bytecode -> C/C++ code executed by the JVM according to the bytecode -> that code compiled to assembly, interacting with the hardware circuits. In reality, the JVM may reorder instructions for better performance, and under multithreading this can cause unexpected problems. Declaring a variable volatile forbids such reordering around it, which of course gives up some execution efficiency
From a practical standpoint, one important role of volatile is in combination with CAS, together guaranteeing atomicity; for details see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.
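The visibility guarantee in (1) can be sketched with a stop flag (the class and field names here are invented; the 50 ms sleep is arbitrary):

```java
public class StopFlag {
    // Without volatile, the worker might loop forever on a stale cached
    // value of 'stopped' after the main thread sets it to true.
    private static volatile boolean stopped = false;
    static volatile boolean workerExited = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // spin until the flag flips
            }
            workerExited = true;
        });
        worker.start();

        Thread.sleep(50);
        stopped = true;   // this write is guaranteed visible to the worker
        worker.join();
        System.out.println("worker exited: " + workerExited);
    }
}
```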
7. What is thread safety
Again, a theoretical question, and there are many different answers, but I’ll give you the one that I think explains it best: if your code always gets the same results when it executes in multiple threads as it does when it executes in a single thread, then your code is thread-safe.
There are several levels of thread safety:
(1) Immutable
Instances of classes such as String, Integer, and Long are final in the sense that no thread can change them; a thread can only create a new one. Immutable objects can therefore be used directly in a multithreaded environment without any synchronization
(2) Absolute thread safety
Regardless of the runtime environment, callers need no additional synchronization measures. Achieving this usually carries a large extra cost, and most of the classes in Java that label themselves thread-safe are not thread-safe in this absolute sense. Some, however, are, such as CopyOnWriteArrayList and CopyOnWriteArraySet
(3) Relative thread safety
This is what we usually mean by thread safety: methods like Vector's add and remove are atomic operations that cannot be interrupted, but the guarantee goes no further than that. If one thread is iterating over a Vector while another thread is adding to it at the same time, a ConcurrentModificationException will appear in 99% of cases; that is the fail-fast mechanism.
(4) The thread is not safe
ArrayList, LinkedList, HashMap, etc are all thread-unsafe classes
8. How to obtain a thread dump file in Java
A thread dump is the best way to troubleshoot problems such as infinite loops, deadlocks, blocking, and slow page loads. A thread stack can be retrieved in two steps:
(1) Get the pid of the Java process using the jps command; in a Linux environment you can also use ps -ef | grep java
(2) Print the thread stacks with the jstack pid command; in a Linux environment you can also use kill -3 pid
As an additional note, the Thread class provides a getStackTrace() method that can also be used to retrieve a thread's stack. It is an instance method, so it is tied to a specific thread instance, and each call returns the stack that that particular thread is currently running.
9. What happens to a thread with a runtime exception
If the exception is not caught, the thread stops executing. Another important point is that if this thread holds a monitor for an object, the object monitor is immediately released
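A small sketch of this (names are invented): the thread dies on the uncaught exception, and an optional UncaughtExceptionHandler gets the last word before it does.

```java
import java.util.concurrent.atomic.AtomicReference;

public class UncaughtDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> lastError = new AtomicReference<>();

        Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
        t.setUncaughtExceptionHandler((thread, e) -> lastError.set(e.getMessage()));
        t.start();
        t.join();

        System.out.println("thread alive: " + t.isAlive());   // false: it stopped
        System.out.println("handler saw: " + lastError.get()); // boom
    }
}
```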
10. How do I share data between two threads
This is done by sharing objects between threads, then waking and waiting via wait()/notify()/notifyAll() or await()/signal()/signalAll(). The blocking queue BlockingQueue, for example, is designed for sharing data between threads
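A minimal BlockingQueue hand-off sketch (queue capacity, message, and class name are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                queue.put("hello"); // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        String msg = queue.take(); // blocks until an element is available
        System.out.println(msg);
    }
}
```

The waiting and waking are done inside the queue, so neither thread touches wait/notify directly.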
11. What's the difference between sleep and wait
Sleep and wait can both be used to give up the CPU for a certain amount of time. The difference is that if a thread holds the monitor for an object, sleep does not give up the monitor for that object, while wait does
12. What is the role of the producer-consumer model
It’s a theoretical question, but an important one:
(1) The most important function of the producer-consumer model is to improve the operating efficiency of the whole system by balancing the production capacity of producers and the consumption capacity of consumers
(2) Decoupling, which is an incidental function of the producer-consumer model. Decoupling means that there is less connection between producers and consumers, and the less connection there is, the more they can develop independently without mutual restriction
13. What does ThreadLocal do
Simply put, ThreadLocal trades space for time: each Thread maintains a ThreadLocalMap, implemented with open addressing, that stores the thread's own copy of the data. Because the data is isolated per thread rather than shared, there are naturally no thread-safety issues
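A minimal sketch of the per-thread isolation (class, field, and method names are invented):

```java
public class ThreadLocalDemo {
    // Each thread gets its own independent copy, initialized to 0.
    private static final ThreadLocal<Integer> COUNTER =
            ThreadLocal.withInitial(() -> 0);

    static int bump() {
        int next = COUNTER.get() + 1;
        COUNTER.set(next);   // stored in THIS thread's ThreadLocalMap only
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        bump();
        bump();                          // this thread's copy is now 2
        Thread other = new Thread(() ->
                System.out.println("other thread sees: " + bump())); // fresh copy: 1
        other.start();
        other.join();
        System.out.println("main thread sees: " + COUNTER.get());
    }
}
```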
14. Why wait() and notify()/notifyAll() are called in synchronous blocks
This is mandated by the JDK: both wait() and notify()/notifyAll() may only be called while holding the lock on the object
15. What is the difference between wait() and notify()/notifyAll() when giving up object monitors
The difference between wait() and notify()/notifyAll() is that wait() releases the object monitor immediately, while notify()/notifyAll() waits for the rest of the thread code to complete before abandoning the object monitor.
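Both points can be sketched together (class and field names invented; the 50 ms sleep is arbitrary): wait() is called while holding the monitor and releases it immediately, which is the only reason the notifying thread can enter its own synchronized block.

```java
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;
    static volatile boolean woken = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {          // wait() requires holding the monitor
                while (!ready) {
                    try {
                        LOCK.wait();       // releases the monitor immediately
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                woken = true;
            }
        });
        waiter.start();

        Thread.sleep(50);                  // give the waiter time to block
        synchronized (LOCK) {              // enterable only because wait() let go
            ready = true;
            LOCK.notify();                 // monitor released when this block exits
        }
        waiter.join();
        System.out.println("woken: " + woken);
    }
}
```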
16. Why use thread pools
Avoid frequent creation and destruction of threads to achieve reuse of thread objects. In addition, using thread pools gives you the flexibility to control the number of concurrency depending on your project.
17. How to check whether a thread holds an object monitor
I didn't know there was a way to tell whether a thread holds an object monitor until I saw this multithreaded interview question online: the Thread class provides a holdsLock(Object obj) method that returns true if and only if the monitor of obj is held by a thread. Note that the method is static, so the thread in question is always the current thread.
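A tiny sketch (the class and field names are invented):

```java
public class HoldsLockDemo {
    static final Object MONITOR = new Object();

    public static void main(String[] args) {
        System.out.println(Thread.holdsLock(MONITOR)); // false: not held yet
        synchronized (MONITOR) {
            System.out.println(Thread.holdsLock(MONITOR)); // true: we hold it now
        }
    }
}
```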
18. The difference between synchronized and ReentrantLock
synchronized is a keyword like if, else, for, and while, whereas ReentrantLock is a class; that is the essential difference between the two. Being a class, ReentrantLock provides more flexible features than synchronized: it can be inherited, can have methods, and can have all kinds of class variables. ReentrantLock extends beyond synchronized in several respects:
(1) ReentrantLock can set the waiting time for lock acquisition, so as to avoid deadlock
(2) ReentrantLock can obtain information about various locks
(3) ReentrantLock can flexibly implement multi-way notification
In addition, the locking mechanisms differ: ReentrantLock is built at the bottom on Unsafe's park method, while synchronized operates on the mark word in the object header, though I am not certain of the details.
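The waiting-time feature from (1) can be sketched with tryLock (the 500 ms timeout and class name are arbitrary):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // Wait at most 500 ms for the lock instead of blocking forever --
        // something synchronized cannot express.
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("held by current thread: "
                        + lock.isHeldByCurrentThread());
            } finally {
                lock.unlock(); // always unlock in finally
            }
        } else {
            System.out.println("gave up waiting -- no deadlock");
        }
    }
}
```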
19. What is the concurrency of ConcurrentHashMap
The concurrency of ConcurrentHashMap is the number of its segments, 16 by default, which means up to 16 threads can operate on a ConcurrentHashMap at the same time. This is also ConcurrentHashMap's biggest advantage over Hashtable: can Hashtable ever have two threads fetching data at the same time? It cannot.
20. What is ReadWriteLock
To be clear, it's not that ReentrantLock is bad; it is just sometimes too limited. A ReentrantLock prevents the data inconsistency that arises when thread A writes data while thread B reads it. But if thread C and thread D are both merely reading, the data never changes, and locking is unnecessary, yet the lock is taken anyway, which hurts the program's performance.
That is why the read-write lock ReadWriteLock was born. ReadWriteLock is a read/write lock interface, and ReentrantReadWriteLock is its concrete implementation. It separates reads from writes: reads do not exclude one another, while read-write, write-read, and write-write are mutually exclusive, which improves both read and write performance.
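A minimal read/write separation sketch (the class name and field are invented):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int get() {
        rw.readLock().lock();   // many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void set(int v) {
        rw.writeLock().lock();  // exclusive: blocks both readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```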
21. What is FutureTask
This was actually mentioned earlier, FutureTask represents a task for asynchronous computation. FutureTask can pass in a concrete implementation class of Callable, which can wait for the result of the asynchronous operation, determine whether the task has been completed, and cancel the task. Of course, since FutureTask is also an implementation class of the Runnable interface, it can also be put into a thread pool.
22. How to find which thread uses the most CPU time in Linux
This is a practical question, which I think is quite meaningful. Here’s what you can do:
(1) Get the pid of the Java process using jps or ps -ef | grep java, as described earlier
(2) Run top -H -p pid; the order of the options cannot be changed
This prints out, for the current project, the percentage of CPU time consumed by each thread. Note that the id shown here is the LWP, that is, the thread number of the operating system's native thread. I have not deployed a Java project in a Linux environment myself, so there is no screenshot to show; if your company deploys projects on Linux, you can try it out.
With top -H -p pid plus jstack pid, it is easy to find the thread stack of a thread with high CPU usage and so locate the cause, which is usually an infinite loop caused by careless code.
One last note: top -H -p pid outputs the LWP in decimal, while jstack pid outputs the native thread id (nid) in hexadecimal; after converting between the two, we can locate the current stack of the thread eating the most CPU.
23. Write a Java program that will cause a deadlock
The first time I saw this question I thought it was excellent. Many people know how a deadlock works: thread A and thread B each wait for a lock the other holds, so the program hangs forever. But knowledge that stops there is not enough; if you are asked to write a deadlocking program and cannot, you don't really understand what a deadlock is. Someone who knows only the theory will, in practice, basically fail to recognize a deadlock problem when they meet one.
To really understand what a deadlock is, there are a few steps:
(1) Two threads hold two Object objects: lock1 and lock2. These two locks act as locks for synchronized code blocks;
(2) The synchronized block in thread 1's run() method first acquires the object lock on lock1 and then calls Thread.sleep(xxx); the sleep mainly prevents thread 1 from acquiring the locks on lock1 and lock2 in one go
(3) The synchronized block in thread 2's run() method first acquires the object lock on lock2 and then tries to acquire the object lock on lock1; by then lock1 is already held by thread 1, so thread 2 must wait for thread 1 to release it
(4) When thread 1 finishes sleeping, thread 2 has already acquired the object lock on lock2, so when thread 1 tries to acquire it a deadlock is formed. I won't write out the code here, as it would take up too much space; the article "Java multithreading 7: deadlock" contains the code implementation of the steps above.
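For completeness, here is one possible sketch of the steps above (all names are invented; a CountDownLatch replaces the sleep so both threads are guaranteed to hold their first lock before reaching for the second, and the threads are daemons so the deadlocked pair does not keep the JVM alive):

```java
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {
    static final Object lock1 = new Object();
    static final Object lock2 = new Object();

    /** Starts a daemon thread that takes 'first', waits at the gate, then wants 'second'. */
    static Thread lockInOrder(Object first, Object second, CountDownLatch bothHoldFirst) {
        Thread t = new Thread(() -> {
            synchronized (first) {
                bothHoldFirst.countDown();
                try {
                    bothHoldFirst.await(); // proceed only once BOTH hold their first lock
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                synchronized (second) {    // can never succeed: the other thread holds it
                    System.out.println("never reached");
                }
            }
        });
        t.setDaemon(true); // deadlocked daemon threads won't keep the JVM alive
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch gate = new CountDownLatch(2);
        Thread t1 = lockInOrder(lock1, lock2, gate); // holds lock1, wants lock2
        Thread t2 = lockInOrder(lock2, lock1, gate); // holds lock2, wants lock1
        t1.join(500);
        t2.join(500);
        System.out.println("deadlocked: " + (t1.isAlive() && t2.isAlive()));
    }
}
```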
24. How to wake up a blocked thread
If a thread is blocked because it called wait(), sleep(), or join(), you can interrupt it, which wakes it by throwing InterruptedException. If the thread is blocked on IO, there is nothing to be done, because IO is implemented by the operating system and Java code has no direct access to it.
25. How do immutable objects help multithreading
As mentioned earlier, immutable objects guarantee the memory visibility of objects, and reading immutable objects does not require additional synchronization, which improves code execution efficiency.
26. What is multithreaded context switching
Context switching in multithreading is the process of switching CPU control from one thread that is already running to another thread that is ready and waiting for CPU execution.
27. What happens if the thread pool queue is full when you submit a task
Here’s the distinction:
If you are using LinkedBlockingQueue, that is, an unbounded queue, it doesn't matter: tasks simply keep being added to the blocking queue to await execution, because LinkedBlockingQueue can be thought of as a nearly infinite queue that can hold tasks indefinitely
If you are using a bounded queue such as ArrayBlockingQueue, tasks are first added to the queue. When the queue is full, the number of threads is increased up to maximumPoolSize. If the threads are at the maximum and the ArrayBlockingQueue is still full, the overflow tasks are handled by the RejectedExecutionHandler policy, which defaults to AbortPolicy
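A minimal sketch of the bounded-queue behavior just described (pool sizes, queue capacity, and the sleep duration are arbitrary): with core size 1, max size 2, and a queue of 1, the fourth long-running task is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        // core 1, max 2, bounded queue of 1 => at most 3 tasks accepted at once
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default rejection policy

        Runnable slow = () -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        };

        pool.execute(slow); // runs on the core thread
        pool.execute(slow); // goes into the queue
        pool.execute(slow); // queue full -> a second (non-core) thread is created
        try {
            pool.execute(slow); // pool saturated -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
        pool.shutdown();
    }
}
```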
28. What is the thread scheduling algorithm used in Java
Preemptive. After a thread runs out of CPU, the operating system calculates a total priority based on thread priority, thread hunger, etc., and allocates the next time slice to a particular thread.
29. What does Thread.sleep(0) do
This question follows from the previous one. Because Java uses a preemptive thread scheduling algorithm, one thread may end up obtaining CPU control again and again. So that lower-priority threads also get a chance at the CPU, Thread.sleep(0) can be used to manually trigger the operating system's time-slice allocation once; it is a way of balancing access to the CPU.
30. What is spin
A lot of synchronized code guards only very simple statements that execute very quickly. In that case, blocking a waiting thread may not be worthwhile, because blocking involves switching between user mode and kernel mode. Since the synchronized code executes so fast, it can be better for the waiting thread not to block but to busy-loop at the boundary of the synchronized block; this is called spinning. If several busy loops pass and the lock has still not been acquired, blocking then may be the better strategy.
31. What is the Java memory model
The Java memory model defines a specification for multithreaded access to Java memory. The Java Memory model is more than just a few words. Let me briefly summarize several parts of the Java memory model:
(1) The Java memory model divides memory into main memory and working memory. The state of a class, that is, the variables shared between threads, is stored in main memory. Whenever a Java thread uses such a variable, it reads it from main memory once and keeps a copy in its own working memory; while running its code, the thread operates on its own copy. After the code finishes, the latest value is written back to main memory
(2) Several atomic operations are defined to manipulate variables in main and working memory
(3) Rules for the use of volatile variables are defined
(4) The happens-before principle defines rules under which an operation A must occur before an operation B. For example, within the same thread, code that comes earlier in the control flow happens-before code that comes later, and an unlock action happens-before a later lock of the same lock. A piece of code that cannot be covered by any happens-before rule is necessarily thread-unsafe
32. What is CAS
CAS, short for Compare and Swap. Suppose there are three operands: the memory value V, the old expected value A, and the value to be modified B. Change the memory value to B and return true if and only if the expected value A and the memory value V are the same, otherwise do nothing and return false. Of course, CAS must be volatile to ensure that the most recent value in main memory is retrieved each time. Otherwise, the old expected value, A, will remain the same for A thread and will never succeed as long as the CAS operation fails.
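The semantics can be sketched with AtomicInteger's compareAndSet (initial value 5 and the targets 6 and 7 are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);

        // Succeeds: the memory value (5) matches the expected value (5).
        boolean first = v.compareAndSet(5, 6);
        // Fails: the memory value is now 6, so the expectation 5 is stale.
        boolean second = v.compareAndSet(5, 7);

        System.out.println(first + " " + second + " " + v.get()); // true false 6
    }
}
```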
33. What are optimistic locks and pessimistic locks
(1) Optimistic lock: true to its name, it is optimistic about thread-safety in concurrent operations, assuming that contention does not always occur, so it does not hold a lock. Instead it treats compare-and-replace as a single atomic operation to try to modify a variable in memory; if that fails, there was a conflict, and there should be corresponding retry logic.
(2) Pessimistic lock: also true to its name, it is pessimistic about thread-safety in concurrent operations, assuming that contention always occurs, so every time it operates on a resource it holds an exclusive lock, just like synchronized, locking the resource outright no matter what.
34. What is AQS
Simply put, AQS stands for AbstractQueuedSynchronizer, which translates roughly as abstract queued synchronizer.
If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole Java concurrency package, used by ReentrantLock, CountDownLatch, Semaphore, and so on. AQS links all waiting entries into a bidirectional queue. With ReentrantLock, for example, all waiting threads are placed in Entry nodes connected into a doubly linked queue, and the thread in the first Entry of the queue is the one that actually runs next.
AQS defines all the operations on this bidirectional queue and opens only the tryAcquire and tryRelease methods (and their shared variants) to developers, who can override them to implement their own concurrency control.
35. Singleton thread safety
A threadbare topic. Singleton thread safety means that instances of a singleton class can be created only once, even in a multithreaded environment. There are many ways to write the singleton pattern; let me summarize:
(1) The eager (hungry-style) singleton: thread-safe
(2) The lazy singleton: not thread-safe
(3) The double-checked locking singleton: thread-safe
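A sketch of the double-checked locking variant from (3) (the class name is illustrative; note the volatile, which ties back to question 6):

```java
public class Singleton {
    // volatile forbids the reordering that could publish a
    // half-constructed instance to another thread.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```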
36. What does Semaphore do
A Semaphore is a semaphore: it limits the number of threads running inside a block of code concurrently. The Semaphore constructor takes an integer n, meaning at most n threads may access that code at once. When the limit is exceeded, further threads must wait until a thread finishes the block before the next may enter. If the n passed to the Semaphore constructor is 1, it behaves like synchronized.
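A minimal sketch (the permit count of 2 is arbitrary): with both permits taken, a third caller would have to wait.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore permits = new Semaphore(2); // at most 2 threads inside at once

        permits.acquire();
        permits.acquire();
        System.out.println("available: " + permits.availablePermits());      // 0
        System.out.println("third caller would block: " + !permits.tryAcquire());

        permits.release();
        permits.release();
    }
}
```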
37. Why does the Hashtable size() method, which contains only the single statement "return count", still need synchronization?
This was a puzzle for me too, and I don't know whether you have thought about it. If a method contains several statements operating on the same class variable, then not locking it in a multithreaded environment causes thread-safety problems; that much is easy to understand. But the size() method has only one statement, so why does it still need a lock?
Through work and study, I have come to understand two main reasons for this:
(1) Only one thread at a time can execute the synchronized methods of a given class instance, but its unsynchronized methods can be accessed by any number of threads simultaneously. So a problem arises: thread A may be in the middle of Hashtable's put method adding data when thread B calls size() to read the current number of elements, and the value it reads may not be the latest. Thread A may have finished inserting the data, but since size++ has not yet executed, the size thread B reads is certainly inaccurate. Making size() synchronized means thread B can call size() only after thread A has finished its put call, which guarantees thread safety
(2) It is important to remember that the CPU executes machine code, not Java code. Java code is ultimately translated into machine code, which is what actually interacts with the hardware circuits. Even if you see only one line of Java code, and even if the bytecode compiled from it were a single instruction, that does not mean the statement is a single operation at the bottom. If "return count" is translated into three assembly statements, each with its corresponding machine code, it is entirely possible for the thread to be switched out after the first of them has executed.
38. Which thread runs a Thread object's constructor and static block
A very tricky and cunning question. Remember: a Thread object's constructor and static block are run by the thread that news the Thread object, while the code in the run() method is run by the thread itself.
For example, if Thread1 is new in Thread2 and Thread2 is new in main, then:
(1) Thread2's constructor and static block are called by the main thread, and Thread2's run() method is called by Thread2 itself
(2) The constructor, static block, of Thread1 is called by Thread2, and the run() method of Thread1 is called by Thread1
39. Synchronized method and synchronized block: which is the better choice
The synchronized block: code outside the block is executed without holding the lock, which is more efficient than synchronizing the entire method. As a rule of thumb, the smaller the scope of synchronization, the better.
With that in mind, I should add that although a smaller scope is generally better, the Java virtual machine has an optimization called lock coarsening that does the opposite: it enlarges the scope of a lock. This is useful for a class like StringBuffer, a thread-safe class whose most commonly used method, append(), is synchronized. When our code appends to the same StringBuffer repeatedly, that would mean repeated lock -> unlock cycles, which is bad for performance, because the thread keeps switching between kernel mode and user mode. So the JVM coarsens the lock over the run of consecutive append calls, extending it from before the first append to after the last one and turning them into a single large synchronized block. This reduces the number of lock -> unlock cycles and effectively improves execution efficiency.
40. How can thread pools be used for high concurrency and short task execution times? How can thread pools be used for businesses with low concurrency and long task execution times? How can a business with high concurrency and long business execution time use thread pools?
This is a question I came across on the Concurrent programming web site, and I put it last so that everyone can see it and think about it, because it’s a very good, very practical, very professional question. My personal view on this is as follows:
(1) For businesses with high concurrency and short task execution times, the thread pool's thread count can be set to the number of CPU cores + 1 to reduce thread context switching
(2) Businesses with low concurrency and long task execution time should be distinguished:
A) If the tasks run long because IO operations are concentrated, that is, IO-intensive tasks, then since IO does not occupy the CPU, don't let the CPU sit idle: increase the number of threads in the pool so the CPU handles more work
B) If the tasks run long because computation is concentrated, that is, compute-intensive tasks, there is no way around it: as in (1), keep the number of threads in the pool small to reduce thread context switching
(3) High concurrency and long service execution time. The key to solving this type of task lies not in the thread pool but in the design of the overall architecture. The first step is to see whether some data in these businesses can be cached, and the second step is to increase the number of servers. Finally, problems with long business execution times may also need to be analyzed to see if tasks can be split and decoupled using middleware.