4 How do I use JDK commands to analyze and troubleshoot deadlocks and loops?

The top command: displays the currently active processes. By default the list is sorted by CPU usage and refreshed every few seconds. You can also switch the sort key; for example, pressing M sorts by memory usage.

Run the top -Hp <pid> command

This shows, in real time, the threads of the given process ranked by CPU consumption. The stack of the offending thread can then be extracted with jstack.

The jps command:

jps lists all running JVM processes and their basic state, including the process ID, the path or main class of the program that was launched, and so on.

The jstack command:

  1. Examine the core file produced by a crashed Java program to obtain its Java stack and native stack information;
  2. Examine the Java stack and native stack of a running Java program, for example: a) to check why a running Java program appears hung; b) to trace the Java call stack for profiling.

Thread state analysis:

Runnable: the thread has everything it needs to run and is either in the run queue waiting for the operating system to schedule it, or already running.

Wait on condition: the thread is waiting for some condition to occur; the exact reason can be determined from the stack trace. The most common case is a thread waiting on a network read or write. Another common case is a thread in a sleep, which will be woken up when the sleep time elapses.

Waiting for monitor entry and in Object.wait(): the monitor is Java's primary means of achieving mutual exclusion and cooperation between threads. It can be viewed as a lock on an object or class, and each object has exactly one monitor. A thread reported as "waiting for monitor entry" is blocked trying to acquire that lock, while a thread reported as "in Object.wait()" has released the lock and is waiting to be notified.
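To make these two states concrete, here is a minimal sketch (class and thread names are invented for illustration): the "waiter" thread ends up reported as in Object.wait(), while the "blocked" thread is reported as waiting for monitor entry because the "holder" thread owns the lock.

```java
public class MonitorStatesDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Acquires the monitor and then waits on it, releasing the lock:
        // jstack reports this thread as "in Object.wait()".
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait();            // parked until notified (never happens here)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "waiter");

        // Holds the monitor for a long time.
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                sleepQuietly(60_000);       // hold the lock for a minute
            }
        }, "holder");

        // Blocks trying to enter the synchronized block while "holder" owns the lock:
        // jstack reports this thread as "waiting for monitor entry".
        Thread blocked = new Thread(() -> {
            synchronized (lock) {
                // not reached while "holder" keeps the lock
            }
        }, "blocked");

        waiter.start();
        Thread.sleep(200);                  // let the waiter park first
        holder.start();
        Thread.sleep(200);
        blocked.start();
        waiter.join();                      // keeps the process alive for inspection with jstack
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```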

The jinfo command:

jinfo lets you inspect the runtime configuration of a running Java program, including Java system properties and JVM command-line flags; it can also recover the configuration of a crashed Java application from its core file.

The jstat command:

jstat uses the JVM's built-in instrumentation to monitor a Java application's resources and performance from the command line in real time, including heap size and garbage-collection activity.

The jmap command:

jmap shows the memory footprint of a running JVM, including heap size, perm size (metaspace on newer JVMs), and so on.

4.1 Java Infinite Loop Analysis

  • View the process ID: top or jps

  • Display all threads of the target Java process sorted by CPU usage: top -Hp 3230

In practice, sorting by CPU usage alone is not always conclusive; looking at each thread's accumulated run time can be more illustrative.

  • Convert the ID of the longest-running (most CPU-consuming) native thread (3244) to hexadecimal: 0xcac.

  • Generate a thread stack dump file: jstack -l 3230 > jstack.log

  • Open the stack dump and search for "0xcac" (jstack prints the native thread ID in lowercase hex, e.g. nid=0xcac):

  • From there it is easy to find the call stack of the thread stuck in the infinite loop, as the sketch below illustrates.
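For reference, a minimal sketch of the kind of code this procedure uncovers (class and thread names are invented): the "spinner" thread burns a full core, rises to the top of top -Hp output, and its stack in jstack.log pinpoints the loop.

```java
public class BusyLoopDemo {
    public static void main(String[] args) {
        // A worker thread stuck in a tight loop: it consumes a full CPU core
        // and therefore appears at the top of "top -Hp <pid>" output.
        Thread spinner = new Thread(() -> {
            long counter = 0;
            while (true) {      // the loop has no exit condition
                counter++;      // pure computation, never blocks
            }
        }, "spinner");
        spinner.start();
    }
}
```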

4.2 Java Deadlock Analysis

When writing multithreaded programs, improper use of synchronization can cause the program to deadlock, which typically shows up as the program hanging or no longer responding to user requests. The following example shows a typical deadlock situation:
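The original example is not reproduced here, so the following is a hedged reconstruction of a typical two-lock deadlock (class and lock names are invented): each thread holds one lock and waits forever for the other, and a jstack dump of the hung process reports the cycle.

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1 acquires LOCK_A, then tries to acquire LOCK_B.
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {
                sleepQuietly(100);           // give the other thread time to grab LOCK_B
                synchronized (LOCK_B) {
                    System.out.println("t1 acquired both locks");
                }
            }
        }, "thread-1");

        // Thread 2 acquires the locks in the opposite order: LOCK_B, then LOCK_A.
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {
                sleepQuietly(100);
                synchronized (LOCK_A) {
                    System.out.println("t2 acquired both locks");
                }
            }
        }, "thread-2");

        t1.start();
        t2.start();
        // Each thread now holds one lock and waits forever for the other:
        // the program hangs, and a jstack dump of this process reports the deadlock.
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```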

5 How are the Java J.U.C package and the CAS algorithm implemented?

5.1 Java Multithreaded Synchronization Mechanisms

On modern multiprocessor systems, improving a program's ability to execute in parallel is the key to using CPU resources effectively. To coordinate concurrent access by multiple threads, an appropriate synchronization mechanism must be used to arbitrate the competition. The commonly used multithreaded synchronization mechanisms fall into the following three categories:

Volatile variables: a lightweight synchronization mechanism that does not cause context switches or thread scheduling. It guarantees only memory visibility, not atomicity.

CAS atomic instructions: a lightweight synchronization mechanism that does not cause context switches or thread scheduling. It guarantees both memory visibility and atomic updates.

Intrinsic and explicit locks: heavyweight synchronization mechanisms that can cause context switches and thread scheduling; they guarantee both memory visibility and atomicity. A short sketch of all three mechanisms follows.
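A minimal sketch of the three mechanisms side by side (the class and method names are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class SyncMechanisms {
    // 1. volatile: guarantees visibility of "running" across threads,
    //    but a compound action such as an increment would NOT be atomic.
    private volatile boolean running = true;

    // 2. CAS: AtomicInteger uses compare-and-swap under the hood,
    //    giving both visibility and atomic updates without blocking.
    private final AtomicInteger casCounter = new AtomicInteger();

    // 3. Locks: intrinsic (synchronized) and explicit (ReentrantLock)
    //    provide visibility and atomicity, but can block threads and
    //    cause context switches under contention.
    private int lockedCounter = 0;
    private final ReentrantLock lock = new ReentrantLock();

    void stop() {
        running = false;               // immediately visible to reader threads
    }

    void incrementWithCas() {
        casCounter.incrementAndGet();  // atomic CAS-based update, no blocking
    }

    synchronized void incrementWithIntrinsicLock() {
        lockedCounter++;               // guarded by the object's monitor
    }

    void incrementWithExplicitLock() {
        lock.lock();
        try {
            lockedCounter++;           // guarded by the ReentrantLock
        } finally {
            lock.unlock();
        }
    }
}
```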

In this context, CAS (compare-and-swap) refers to a special instruction, widely supported by modern CPUs, that operates on shared data in memory and performs an atomic read-modify-write on it. Briefly, it works as follows: the CPU compares the value in memory with an expected value; if the two are equal, it replaces the value in memory with the new value, otherwise it does nothing; in either case it returns the old value. This whole sequence is atomic. It may sound simple, but it is the fundamental reason why concurrency in Java 5 and later outperforms the old locking mechanism. In plain terms, CAS means: "here is what I think the old value should be; if that is indeed the case, update it to the new value, otherwise leave it unchanged, and in either case tell me what the old value actually was".
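As a small illustration of this behavior, here is a sketch of the typical CAS retry loop using AtomicInteger.compareAndSet, which exposes CAS directly (the class and method names are invented):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private final AtomicInteger value = new AtomicInteger();

    // A typical CAS retry loop: read the current value, compute the new value,
    // and only write it back if no other thread changed it in the meantime.
    int incrementAndGetManually() {
        for (;;) {
            int expected = value.get();          // "what I think the old value is"
            int updated = expected + 1;          // the new value to install
            if (value.compareAndSet(expected, updated)) {
                return updated;                  // CAS succeeded: no other thread interfered
            }
            // CAS failed: another thread updated the value first, so retry.
        }
    }
}
```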

In cases of mild to moderate contention, non-blocking algorithms outperform blocking algorithms, because most of the time the CAS succeeds on the first attempt, and when contention does occur the cost is only a few extra loop iterations rather than thread suspension and context switching. An uncontended CAS is much cheaper than uncontended lock acquisition (this statement must be true, since uncontended lock acquisition involves a CAS plus extra processing), and a contended CAS has shorter latency than contended lock acquisition.

5.2 Implementation of the Java J.U.C Package

Java's CAS uses the efficient machine-level atomic instructions provided by modern processors. These instructions perform read-modify-write operations on memory atomically, which is the key to achieving synchronization on multiprocessors (in essence, a computing machine that supports atomic read-modify-write instructions is the asynchronous equivalent of a sequential Turing machine, so any modern multiprocessor supports some atomic instruction that can perform an atomic read-modify-write on memory). At the same time, reads and writes of volatile variables, together with CAS, enable communication between threads. Taken together, these features form the building blocks on which the concurrent package is implemented. If we look closely at the source code of the concurrent package, we will find a common implementation pattern:

  1. First, declare the shared variable volatile;
  2. Then, use CAS-based atomic conditional updates to achieve synchronization between threads;
  3. At the same time, rely on the memory semantics of volatile reads/writes and of CAS to implement communication between threads (see the sketch after this list).
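A deliberately simplified sketch of this pattern (the SpinLock class below is invented for illustration and is far cruder than the real AQS, which queues and parks waiting threads instead of spinning):

```java
import java.util.concurrent.atomic.AtomicReference;

// A volatile-backed shared variable updated with CAS: the common pattern
// described above, reduced to a minimal (non-reentrant) spin lock.
public class SpinLock {
    // AtomicReference stores its value in a volatile field, so reads and
    // writes of "owner" have the volatile memory semantics described above.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // CAS-based conditional update: succeeds only if no thread owns the lock.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // busy-wait until the owner releases the lock
        }
    }

    public void unlock() {
        // Only the owning thread may release; the CAS back to null publishes the release.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```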

AQS, the non-blocking data structures, and the atomic variable classes (the classes in the java.util.concurrent.atomic package) are the basic building blocks of the concurrent package, and all of them are implemented with this pattern; the higher-level classes in the concurrent package are in turn built on top of these basic classes. Taken as a whole, the concurrent package is layered like this: volatile reads/writes and CAS at the bottom; AQS, the non-blocking data structures, and the atomic variable classes in the middle; and high-level utilities such as Lock, synchronizers, blocking queues, executors, and concurrent containers on top.

What is the essential difference between lock-based and lock-free approaches?

Both lock-based and lock-free approaches exist, at heart, to solve the thread-safety problem of contended resources under concurrency. The lock-free approach merely weakens "exclusivity" further, in order to increase concurrency and maximize CPU utilization.

With a lock-free approach a thread only waits for the CPU to switch control to it, whereas with a lock it waits both for the lock to be released and for the CPU to switch control to it.

Finally, system resources are finite: beyond a certain point, even the lock-free approach does not raise CPU utilization, and throughput drops. Increasing the number of threads only adds the cost of switching CPU control between them, and under the CPU's unified scheduling the extra threads simply end up waiting.

7 Is volatile thread-safe?

The volatile keyword guarantees only two things:

  1. Visibility of the volatile variable's value between threads;
  2. Prohibition of reordering instructions that operate on the volatile variable.

But the crucial thing it does not guarantee, and the reason it is not sufficient for thread safety, is the atomicity of operations on the volatile variable.

What is an atomic operation?

An operation is atomic if, as observed from any other thread, it has either completed entirely or not happened at all; no intermediate state of the operation is ever visible.
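To make the missing guarantee concrete, here is a minimal sketch (the class name is invented) in which two threads increment a volatile counter and updates are lost, because counter++ is a read-add-write sequence rather than a single atomic step:

```java
public class VolatileCounterRace {
    // volatile guarantees visibility, but counter++ is still three steps
    // (read, add, write), so concurrent increments can be lost.
    private static volatile int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;                     // NOT atomic despite volatile
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but the printed value is usually smaller, because
        // increments from the two threads interleave and overwrite each other.
        System.out.println("counter = " + counter);
    }
}
```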

Regarding instruction reordering: within a single thread, the JVM is free to reorder instructions as long as the reordering does not change the result of execution.

The purpose of instruction reordering is:

The JVM can reorder machine instructions appropriately according to the characteristics of the processor (multi-level CPU caches, multiple cores, and so on), so that the instruction stream better matches the CPU's execution characteristics and the hardware's performance is used to the fullest.

In a multithreaded context, however, reordering can produce results that differ from run to run. Note that not all such inconsistencies are caused by reordering: many are really thread-safety problems that exist even when no instruction is reordered, namely the atomicity problem, i.e. the absence of locking discussed above.
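A hedged sketch of the classic publication hazard (class and field names are invented): because ready is not volatile, the write to data may be reordered after the write to ready, or simply remain invisible to the reader, so the reader can print 0 or spin forever. Declaring ready as volatile forbids the problematic reordering and guarantees visibility.

```java
public class ReorderingDemo {
    private static int data = 0;
    private static boolean ready = false;   // deliberately NOT volatile

    public static void main(String[] args) {
        // Writer thread: publishes data, then sets the flag.
        Thread writer = new Thread(() -> {
            data = 42;        // (1)
            ready = true;     // (2) may be reordered before (1), or stay invisible to the reader
        });

        // Reader thread: spins on the flag, then reads the data.
        Thread reader = new Thread(() -> {
            while (!ready) {
                // Without volatile, this loop may also never observe ready == true.
            }
            // May legally print 0: the reader can see ready == true
            // while the write to data is not yet visible.
            System.out.println("data = " + data);
        });

        reader.start();
        writer.start();
    }
}
```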