
1. What is the difference between threads and processes?

A thread has many of the characteristics of a traditional process, so it is also called a lightweight process (or a process element). A traditional process is accordingly called a heavyweight process and is equivalent to a task with only one thread. In operating systems that support threading, a process usually contains several threads and always contains at least one.

Fundamental difference: processes are the basic unit of operating-system resource allocation, while threads are the basic unit of processor scheduling and execution.

Resource overhead: each process has its own code and data space (program context), so switching between processes is expensive. Threads can be regarded as lightweight processes: threads of the same process share its code and data space, while each thread has its own run-time stack and program counter (PC), so switching between threads has little overhead.

Containment: if a process has multiple threads, its execution does not follow a single line of control but several (one per thread). Threads are part of a process, which is why they are also called lightweight processes.

Memory allocation: threads of the same process share that process's address space and resources, while the address spaces and resources of different processes are independent of one another.

Failure impact: in protected mode, the crash of one process does not affect other processes, whereas the crash of one thread can bring down its entire process. Multi-process designs are therefore more robust than multi-threaded ones.

Execution: each independent process has its own program entry point, sequential execution path, and exit. Threads cannot execute independently; they must live inside an application, which controls the execution of its multiple threads. Both processes and threads can execute concurrently.

2. How do you compare the three ways to create threads?

1) Creating threads by implementing the Runnable or Callable interface.

Advantages are:

The task class only implements the Runnable or Callable interface, so it can still extend another class.

Multiple threads can share the same target object, which makes this approach well suited to having several threads of the same kind process the same resource. It separates the CPU, code, and data, giving a clear model that better reflects object-oriented design.

Disadvantages are:

Programming is slightly more involved, and to access the current thread you must call Thread.currentThread().

2) Creating threads by extending the Thread class.

Advantages are:

The code is simple; to access the current thread you can use this directly instead of Thread.currentThread().

Disadvantages are:

The thread class already extends Thread, so it cannot extend any other class.

3) Difference between Runnable and Callable

  • The method to override for Callable is call(), while for Runnable it is run().
  • A Callable task returns a value after execution, while a Runnable task does not.
  • call() can throw a checked exception, but run() cannot.
  • Running a Callable task yields a Future object that represents the result of the asynchronous computation. It provides ways to check whether the computation is complete, wait for it to complete, and retrieve its result. Through the Future object you can check the task's execution status, cancel the task, and obtain its result.
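A minimal sketch of the three approaches above (the class names and task bodies are illustrative, not from the original text):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CreateThreadDemo {

    // 1) Extend Thread and override run()
    static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("run in " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // Way 1: subclass of Thread
        new MyThread().start();

        // Way 2: implement Runnable and hand it to a Thread
        Runnable task = () -> System.out.println("runnable in " + Thread.currentThread().getName());
        new Thread(task).start();

        // Way 3: implement Callable, wrap it in a FutureTask, and read the result
        Callable<Integer> callable = () -> 1 + 1;
        FutureTask<Integer> futureTask = new FutureTask<>(callable);
        new Thread(futureTask).start();
        System.out.println("callable result: " + futureTask.get()); // blocks until done
    }
}
```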

3. Why multithreading?

  • From the computer's perspective: a thread can be seen as a lightweight process and is the smallest unit of program execution; the cost of switching and scheduling between threads is far lower than between processes. In addition, in the multi-core CPU era multiple threads can run truly simultaneously, which reduces the overhead of thread context switching.
  • From the perspective of current Internet development: systems now routinely face millions or even tens of millions of concurrent requests, and multi-threaded concurrent programming is the foundation for building high-concurrency systems. Making good use of multi-threading can greatly improve a system's overall concurrency and performance.

Looking more closely at the computer level:

  • Single-core era: on a single core, multi-threading mainly improves the combined utilization of the CPU and I/O devices. With only one thread, the I/O device is idle while the CPU computes, and the CPU is idle during I/O, so roughly speaking each is only about 50% utilized. With two threads, one can compute while the other performs I/O, and in the ideal case both reach 100% utilization.
  • Multi-core era: on multiple cores, multi-threading mainly improves CPU utilization. For example, a complex computation running on a single thread uses only one CPU core; creating multiple threads allows several cores to be used, improving CPU utilization.

4. Thread state flow?

The life cycle of a Java thread covers five basic states:

1) New: once the Thread object has been created, it is in the New state, e.g. Thread t = new MyThread();

2) Runnable (ready): when start() is called on the thread object (t.start();), the thread enters the ready state. A thread in the ready state is merely eligible to be executed by the CPU; calling t.start() does not mean it runs immediately.

3) Running: when the CPU schedules a thread that is in the ready state, the thread actually executes, i.e. it enters the Running state. Note that the ready state is the only entry into the running state: to run, a thread must first be in the ready state.

4) Blocked: a thread in the running state temporarily gives up the CPU for some reason, stops executing, and enters the Blocked state until it returns to the ready state and gets the chance to be scheduled by the CPU again. Depending on the cause, blocking can be divided into three kinds:

1. Wait blocking: a running thread calls wait(), putting it into the waiting state.

2. Synchronized blocking: if a thread fails to acquire a synchronized lock (because the lock is held by another thread), it enters the synchronized-blocked state.

3. Other blocking: the thread blocks by calling sleep() or join(), or by issuing an I/O request. When the sleep() timeout expires, when the joined thread terminates or the join() times out, or when the I/O completes, the thread returns to the ready state.

5) Dead: the thread finishes the run() method, or exits it because of an exception, and its life cycle ends.
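A small sketch, assuming only the standard Thread API, that observes some of these states via Thread.getState(); note that the JVM's Thread.State enum (NEW, RUNNABLE, TIMED_WAITING, TERMINATED, ...) maps roughly but not one-to-one onto the five textbook states above:

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200); // the thread blocks (TIMED_WAITING) while sleeping
            } catch (InterruptedException ignored) {
            }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        System.out.println(t.getState()); // usually RUNNABLE: ready / running
        Thread.sleep(50);
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep()
        t.join();                         // wait for it to finish
        System.out.println(t.getState()); // TERMINATED: run() has returned
    }
}
```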

5. What is a thread deadlock? How do I avoid deadlocks?

Deadlock

  • A deadlock occurs when multiple threads are blocked at the same time, each waiting for a resource held by another to be released. Because the threads are blocked indefinitely, the program cannot terminate normally.

A deadlock can occur only if the following four conditions all hold:

  • Mutual exclusion: the resource can be held by only one thread at any time.
  • Hold and wait: a thread that blocks while requesting a resource keeps hold of the resources it has already acquired.
  • No preemption: a resource a thread has acquired cannot be forcibly taken away before the thread is finished with it; it is released only once the thread is done using it.
  • Circular wait: a circular chain of threads exists in which each waits for a resource held by the next.

How do I avoid thread deadlocks?

It is enough to break any one of the four conditions above:

  • Breaking mutual exclusion: this condition cannot realistically be broken, because locks are used precisely to enforce mutual exclusion (critical resources require exclusive access).
  • Breaking hold and wait: request all required resources at once.
  • Breaking no preemption: if a thread holding some resources cannot acquire the rest, let it release the resources it already holds.
  • Breaking circular wait: acquire resources in a fixed order and release them in the reverse order, so no waiting cycle can form.
  • Lock ordering: specify the order in which locks must be acquired. For example, if a thread needs both lock A and lock B before operating on a resource, requiring every thread to acquire lock A before it is allowed to acquire lock B avoids the deadlock (see the sketch after this list). This is generally considered a good way to prevent deadlocks.
  • Use a timed attempt on an explicit lock, e.g. ReentrantLock.tryLock(long timeout, TimeUnit unit), so a thread gives up instead of waiting forever.
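A minimal sketch of the lock-ordering idea: both threads acquire the (illustrative) lockA before lockB, so no circular wait can form.

```java
public class LockOrderingDemo {
    // Illustrative lock objects; every thread must acquire lockA before lockB.
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    static void doWork(String name) {
        synchronized (lockA) {           // always take lockA first...
            synchronized (lockB) {       // ...then lockB, so no circular wait is possible
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("t1")).start();
        new Thread(() -> doWork("t2")).start();
    }
}
```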

6. Common comparisons

Runnable vs Callable

  • Callable was introduced only in Java 1.5, to handle use cases Runnable does not support. The Callable interface can return a result and throw checked exceptions.
  • The Runnable interface can neither return a result nor throw checked exceptions.
  • If a task does not need to return a result or throw an exception, the Runnable interface is recommended, as the code looks more concise.
  • The Executors utility class can convert a Runnable into a Callable: Executors.callable(Runnable task) or Executors.callable(Runnable task, T result).

execute() vs submit()

  • The execute() method is used to submit tasks that do not require a return value, so there is no way to determine whether the task was successfully executed by the thread pool;
  • The submit() method is used to submit tasks that need a return value. The thread pool returns a Future object, which can be used to determine whether the task executed successfully. (The return value is obtained from the Future's get() method, which blocks the current thread until the task completes. The overload get(long timeout, TimeUnit unit) blocks the current thread only for the given time and then returns, possibly before the task has finished.)
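A minimal sketch of the two submission styles; the pool size and task bodies are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecuteVsSubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, no way to observe the result
        pool.execute(() -> System.out.println("executed"));

        // submit(): returns a Future that carries the result (or the exception)
        Future<Integer> future = pool.submit(() -> 21 * 2);
        System.out.println("result = " + future.get()); // blocks until the task completes

        pool.shutdown(); // stop accepting new tasks, let queued tasks finish
    }
}
```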

shutdown() vs shutdownNow()

  • shutdown(): shuts down the thread pool and changes its state to SHUTDOWN. The pool no longer accepts new tasks, but tasks already in the queue are allowed to complete.
  • shutdownNow(): shuts down the thread pool and changes its state to STOP. The pool attempts to terminate the currently running tasks, stops processing queued tasks, and returns the list of tasks awaiting execution. shutdownNow works by iterating over the worker threads in the pool and interrupting them one by one via interrupt(), so tasks that do not respond to interruption may never terminate.

isTerminated() vs isShutdown()

  • isShutdown() returns true once shutdown() has been called.
  • isTerminated() returns true only after shutdown() has been called and all submitted tasks have completed.

7. What are the differences and similarities between sleep() and wait()?

Differences

  • sleep(): a static method of the Thread class. The current thread sleeps for n milliseconds and enters the blocked state. When the sleep time is up, the thread is unblocked and becomes ready to run again. sleep() does not release any lock it holds.
  • wait(): a method of Object that must be used inside a synchronized block. The thread blocks and is released from the block only when notify()/notifyAll() is called on the same object, and it does not become runnable again until it has reacquired the monitor. While waiting, the monitor (lock) is released.
  • The sleep method does not release the lock, while the wait method does.
  • sleep() is usually used to pause execution, while wait() is usually used for inter-thread interaction/communication.
  • After sleep() completes, the thread wakes up automatically. After wait() is called, the thread does not wake up by itself: another thread must call notify() or notifyAll() on the same object, unless wait(long timeout) was used, in which case it also wakes up when the timeout expires.

Similarities

  • Both can suspend the execution of a thread.
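A minimal sketch contrasting the two; the monitor object and timings here are illustrative:

```java
public class SleepVsWaitDemo {
    private static final Object lock = new Object(); // illustrative monitor object

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {                  // wait() must be called while holding the monitor
                try {
                    System.out.println("waiting...");
                    lock.wait();                   // releases the monitor and blocks
                    System.out.println("notified");
                } catch (InterruptedException ignored) {
                }
            }
        });
        waiter.start();

        Thread.sleep(500);                         // sleep(): pauses, keeps any locks it holds
        synchronized (lock) {
            lock.notify();                         // wakes the waiting thread
        }
    }
}
```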

8. Why does calling start() execute the run() method, and why can't we call run() directly?


  • A newly created Thread is in the New state. Calling start() performs the necessary preparation for the thread and then automatically executes the contents of run(). (Calling start() starts a thread and puts it into the ready state; it runs once it is allocated a time slice.) This is real multithreading.
  • Calling run() directly just treats run as an ordinary method executed in the current (e.g. main) thread; no new thread is involved, so it is not multithreading. In short, start() starts a thread and puts it into the ready state, while run() is just an ordinary method call executed in the calling thread.
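A tiny sketch that makes the difference visible by printing the executing thread's name (the lambda body is illustrative):

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("executed in: " + Thread.currentThread().getName()));

        t.run();   // plain method call: prints "main", no new thread is started
        t.start(); // real thread start: prints something like "Thread-0"
    }
}
```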

9. What is ThreadLocal? What’s the use?

ThreadLocal is a utility class for thread-local copies of variables. It maintains, for each thread, a mapping from the ThreadLocal object to that thread's private copy of the value, so the variables of different threads do not interfere with one another. In high-concurrency scenarios it enables stateless calls, and it is especially suitable where each thread relies on its own variable values to complete its work.

Simply put, ThreadLocal trades space for time: each Thread maintains its own ThreadLocalMap (implemented with open addressing), which isolates the data. Since the data is not shared, there are naturally no thread-safety issues.
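A minimal sketch, assuming only the standard ThreadLocal API (the counter variable is illustrative):

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, initialized to 0
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            counter.set(counter.get() + 1);        // modifies only this thread's copy
            System.out.println(Thread.currentThread().getName() + " -> " + counter.get());
            counter.remove();                      // avoid leaks when threads are pooled
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();            // both print 1: the copies are independent
    }
}
```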

10. What is the use of yield in Thread?

The yield() method pauses the currently executing thread so that other threads of the same priority get a chance to run. It is a static method and only guarantees that the current thread gives up the CPU; it does not guarantee that any other thread actually gets it. A thread that calls yield() may be scheduled again immediately after yielding.
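A tiny illustrative sketch; the interleaving of the output is up to the scheduler and is not guaranteed:

```java
public class YieldDemo {
    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " " + i);
                Thread.yield(); // hint to the scheduler: give same-priority threads a chance
            }
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start(); // may or may not interleave with t1
    }
}
```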

11. What is the Fork/Join framework in Java?

The Fork/Join framework is an efficient tool introduced in JDK 7 that lets Java developers take full advantage of the multiple processors on modern servers. It is designed for work that can be recursively divided into many subtasks, with the goal of using all available processing power to improve program performance. A big advantage of the Fork/Join framework is its work-stealing algorithm: worker threads that run out of work can steal tasks from other threads and execute them.
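A minimal sketch of the recursive divide-and-conquer style the framework is built for; the summation task and the 1,000-element threshold are illustrative choices:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSumDemo {
    // Recursively splits the range [from, to] until it is small enough to sum directly
    static class SumTask extends RecursiveTask<Long> {
        private final long from, to;
        SumTask(long from, long to) { this.from = from; this.to = to; }

        @Override
        protected Long compute() {
            if (to - from <= 1_000) {                  // small enough: compute directly
                long sum = 0;
                for (long i = from; i <= to; i++) sum += i;
                return sum;
            }
            long mid = (from + to) / 2;
            SumTask left = new SumTask(from, mid);
            SumTask right = new SumTask(mid + 1, to);
            left.fork();                               // run the left half asynchronously
            return right.compute() + left.join();      // compute the right half here, then join
        }
    }

    public static void main(String[] args) {
        long result = new ForkJoinPool().invoke(new SumTask(1, 1_000_000));
        System.out.println(result); // 500000500000
    }
}
```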

12. Differences between synchronized and ReentrantLock

1. Both are reentrant locks

Reentrant lock: a reentrant lock, also called a recursive lock, means the same thread can acquire the same lock multiple times. For example, if a thread already holds a lock inside one method and that method calls another method that needs the same lock, the thread can enter it directly without acquiring the lock again. Each time the same thread enters, the lock's hold count is incremented by 1, and the lock is released only when the count drops back to zero (see the sketch below).
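A minimal sketch of reentrancy with synchronized (the method names are illustrative); ReentrantLock behaves the same way with its hold count:

```java
public class ReentrantDemo {
    // Both synchronized methods lock on the same monitor (this);
    // outer() can call inner() without blocking because the lock is reentrant.
    public synchronized void outer() {
        System.out.println("outer acquired the lock");
        inner(); // same thread re-enters the same lock: hold count 1 -> 2
    }

    public synchronized void inner() {
        System.out.println("inner re-entered the lock");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer(); // the lock is released only when the count returns to 0
    }
}
```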

2. synchronized relies on the JVM, while ReentrantLock relies on the API

  • synchronized is implemented inside the JVM; as mentioned earlier, the virtual machine team made many optimizations to the synchronized keyword in JDK 1.6, but those optimizations live at the virtual-machine level.
  • ReentrantLock is implemented at the JDK level (that is, at the API level, with lock() and unlock() methods and try/finally blocks).

3. ReentrantLock adds some advanced features over synchronized

ReentrantLock adds some advanced features compared to synchronized. There are three main ones: ① waiting can be interrupted; ② fair locking is possible; ③ selective notification is possible (one lock can be bound to multiple conditions).

  • Waiting can be interrupted. This is achieved with lock.lockInterruptibly(): a thread that is waiting for the lock can choose to give up waiting and do something else instead.
  • ReentrantLock can be either fair or unfair, whereas synchronized can only be an unfair lock. A fair lock means the thread that has waited longest gets the lock first. ReentrantLock is unfair by default; fairness can be chosen through the ReentrantLock(boolean fair) constructor.
  • With ReentrantLock, threads can wait on a specific Condition object, allowing selective notification and more flexible thread scheduling. With notify()/notifyAll(), the thread to be notified is chosen by the JVM; with a ReentrantLock and its Condition instances, selective notification can be implemented (see the sketch after this list).
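A minimal sketch of these features, assuming only the standard java.util.concurrent.locks API; the dataReady flag and thread roles are illustrative:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockFeaturesDemo {
    private static final ReentrantLock lock = new ReentrantLock(true); // true = fair lock
    private static final Condition ready = lock.newCondition();        // one of possibly many conditions
    private static boolean dataReady = false;

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            lock.lock();
            try {
                while (!dataReady) {
                    ready.await();              // selective wait on this specific condition
                }
                System.out.println("consumer proceeds");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // the wait is interruptible
            } finally {
                lock.unlock();
            }
        });
        consumer.start();

        Thread.sleep(100);
        lock.lockInterruptibly();               // lock acquisition itself can also be interrupted
        try {
            dataReady = true;
            ready.signal();                     // notify only threads waiting on this condition
        } finally {
            lock.unlock();
        }
    }
}
```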

4. Which to choose

  • Unless you need to use the advanced features of ReentrantLock, use synchronized.
  • synchronized is a locking mechanism implemented and natively supported by the JVM, whereas ReentrantLock is an API-level construct that is not available in every JDK version. With synchronized you also don't have to worry about problems caused by a lock never being released, because the JVM guarantees the lock is released when the block exits.

13. Talk about the use of volatile and how it works?

Two levels of semantics for volatile:

1. Volatile guarantees visibility to all threads: when a volatile variable is modified, the new value is immediately visible to every thread. In a multithreaded environment, a read of a volatile variable always sees the latest write.

2. Since JDK 1.5, volatile also fully prohibits instruction-reordering optimizations around it, providing ordering guarantees.

How volatile works:

Looking at the assembly code emitted by the JIT (the just-in-time Java compiler that translates bytecode into machine code for the processor), writes to volatile variables are accompanied by a lock-prefixed instruction (e.g. lock addl). The lock prefix acts as a memory barrier, so instructions after it cannot be reordered to positions before the barrier. This is why, since JDK 1.5, the singleton pattern can be implemented safely with double-checked locking, as sketched after this explanation.

The lock prefix has another effect: it forces the value of the volatile variable in the thread's working memory to be written back to main memory immediately and invalidates the copies of that variable cached by other threads, so other threads must re-read the value from main memory.
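A hedged sketch of the double-checked-locking singleton mentioned above; without volatile on the instance field, the second check could observe a partially constructed object:

```java
public class Singleton {
    // volatile prevents reordering of the steps inside "new Singleton()"
    // (allocate -> initialize -> assign reference), making double-checked locking safe
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                       // first check without locking
            synchronized (Singleton.class) {
                if (instance == null) {               // second check while holding the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```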

For more detail on the underlying principle, see: www.javazhiyin.com/61019.html

14. The difference between synchronized and volatile?

  • volatile is a lightweight form of thread synchronization, so it performs better than synchronized.
  • The volatile keyword can only be applied to variables, while the synchronized keyword can modify methods and code blocks. synchronized is used far more often in real-world development.
  • Multithreaded access to a volatile variable does not block, while synchronized may block.
  • The volatile keyword guarantees visibility but not atomicity; the synchronized keyword guarantees both.
  • The volatile keyword addresses the visibility of a variable across multiple threads, while the synchronized keyword addresses synchronized access to a resource across multiple threads.
