Preface
In production, our applications sometimes hang or run slowly, and finding the root cause can be a hassle. A thread dump shows a snapshot of the current state of every thread in a running Java process. However, the generated file is long and dense, so we need to analyze it and mine the actual problem out of a large amount of unrelated information.
Today, we’ll look at how to filter out unwanted data so we can diagnose performance problems effectively. We’ll also learn how to detect bottlenecks and even simple bugs.
Threads in the JVM
The JVM uses threads to perform every internal and external operation. For example, the GC process has its own threads, and so do the tasks running inside a Java application.
Threads go through various states during their lifetime. Each thread has an execution stack that tracks its current operation: the stack records every method that has been called and has not yet returned. Therefore, we can analyze the entire stack to understand what the application was doing when an error occurred.
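To see what such an execution stack holds, we can print the current thread's stack trace directly from code. This is a minimal sketch; the class and method names are made up for illustration:

```java
public class StackDemo {
    static void inner() {
        // Each frame records a method that was called and has not yet returned,
        // from the top (most recent call) down to main
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println("  at " + frame);
        }
    }

    static void outer() {
        inner();
    }

    public static void main(String[] args) {
        // Prints frames for getStackTrace, inner, outer, and main
        outer();
    }
}
```

The output mirrors what a thread dump shows for each thread: the chain of calls that led to the current operation.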
Get Java thread dumps
Once the application is running, there are several ways to generate a Java thread dump for diagnostics. Here, we’ll use two utilities shipped with JDK 7+. First, we execute the JVM Process Status (jps) command to discover our application’s process ID (PID):
[root@centos1 service]# jps
3507 jar
3525 Jps
Now that we have the application’s PID, we use jstack to capture the thread dump, storing the result in a text file:
[root@centos1 service]# jstack -l 3507 > bam-thread-dump.txt
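When we cannot shell out to jstack, the JDK also exposes the same information programmatically through the java.lang.management API. This sketch dumps all threads of the current JVM, which is roughly what jstack -l reports:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ProgrammaticDump {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // Passing true, true requests locked monitors and ownable synchronizers,
        // mirroring the extra detail of `jstack -l`
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            // ThreadInfo.toString() includes name, state, and a stack excerpt
            System.out.print(info);
        }
    }
}
```

This is handy for building self-diagnosing applications that log their own thread state when a watchdog detects a stall.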
Java thread dump structure
2021-02-05 15:23:03
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.191-b12 mixed mode):

"Attach Listener" #33 daemon prio=9 os_prio=0 tid=0x00007fd6c4001000 nid=0xe5d waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
        - None

"DestroyJavaVM" #32 prio=5 os_prio=0 tid=0x00007fd708009800 nid=0xdb4 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
        - None

"http-nio-8000-AsyncTimeout" #30 daemon prio=5 os_prio=0 tid=0x00007fd709895000 nid=0xdf1 waiting on condition [0x00007fd6dde79000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at org.apache.coyote.AbstractProtocol$AsyncTimeout.run(AbstractProtocol.java:1133)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - None

"http-nio-8000-Acceptor-0" #29 daemon prio=5 os_prio=0 tid=0x00007fd70b7a0800 nid=0xdf0 runnable [0x00007fd6ddf7a000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
        - locked <0x000000078ed72eb8> (a java.lang.Object)
        at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:455)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - None

"http-nio-8000-ClientPoller-1" #28 daemon prio=5 os_prio=0 tid=0x00007fd70b79f000 nid=0xdef runnable [0x00007fd6de07b000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        - locked <0x000000078f1c7a00> (a sun.nio.ch.Util$3)
        - locked <0x000000078f1c79f0> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000078f1c78d8> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:796)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - None

"http-nio-8000-ClientPoller-0" #27 daemon prio=5 os_prio=0 tid=0x00007fd7085b0000 nid=0xdee runnable [0x00007fd6de17c000]
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        - locked <0x000000078f1c6048> (a sun.nio.ch.Util$3)
        - locked <0x000000078f1c6038> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000078f1c5f20> (a sun.nio.ch.EPollSelectorImpl)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:796)
        at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
        - None
Let’s look at the generated thread dump. The first line shows the timestamp, and the second line shows JVM information:
2021-02-05 15:23:03
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.191-b12 mixed mode):
The dump then displays the list of threads. Each thread entry contains the following information:
- Name: a meaningful thread name, if the developer set one, provides useful context
- Priority (prio): the thread's priority
- Java ID (tid): the unique ID assigned by the JVM
- Native ID (nid): the unique ID assigned by the operatingating system, useful for correlating the thread with CPU or memory usage at the OS level
- State: the thread's current state
- Stack trace: the most important source of information for deciphering what is happening in our application
Reading the snapshot from top to bottom, we can see what the different threads are doing. Let's extract a portion of the whole file and analyze it:
"http-nio-8000-exec-1" #17 daemon prio=5 os_prio=0 tid=0x00007fd70bc75800 nid=0xde4 waiting on condition [0x00007fd6f089d000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000078edd19b8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:103)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:31)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
The stack trace shows how the thread executes until it finally stops in a native (JNI) method, sun.misc.Unsafe.park, parked while waiting for a task from the pool's queue. Next, the dump lists the JVM's internal threads:
"VM Thread" os_prio=0 tid=0x00007fd70813e800 nid=0xdb9 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00007fd70801e800 nid=0xdb5 runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x00007fd708020800 nid=0xdb6 runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x00007fd708022000 nid=0xdb7 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x00007fd708024000 nid=0xdb8 runnable
"VM Periodic Task Thread" os_prio=0 tid=0x00007fd708194800 nid=0xdc3 waiting on condition
JNI global references: 909
At the end, we notice several additional threads performing background operations such as garbage collection (GC) or object finalization, followed by the count of JNI global references.
Thread dump use cases
To understand how our application behaves, we need to analyze the generated snapshot effectively. The dump gives us a lot of precise data about every thread, so we must sift through the log, filtering and grouping the stack traces to extract useful hints. Once the dump is ready, we can analyze the problem with different tools.
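As a small example of this filtering and grouping, the sketch below counts threads by state by scanning the dump text for the java.lang.Thread.State lines; the sample dump string is a made-up fragment for demonstration:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StateHistogram {
    // Matches lines such as:  java.lang.Thread.State: TIMED_WAITING (sleeping)
    private static final Pattern STATE =
            Pattern.compile("java\\.lang\\.Thread\\.State: (\\w+)");

    public static Map<String, Integer> histogram(String dump) {
        Map<String, Integer> counts = new TreeMap<>();
        Matcher m = STATE.matcher(dump);
        while (m.find()) {
            counts.merge(m.group(1), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical fragment of a dump file, reduced to the state lines
        String dump = String.join("\n",
                "   java.lang.Thread.State: RUNNABLE",
                "   java.lang.Thread.State: WAITING (parking)",
                "   java.lang.Thread.State: RUNNABLE");
        System.out.println(histogram(dump)); // {RUNNABLE=2, WAITING=1}
    }
}
```

A histogram like this gives a quick first impression: a dump dominated by BLOCKED threads tells a very different story from one dominated by RUNNABLE threads.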
Synchronization issues
We start the analysis by observing thread states. We should focus on threads that are RUNNABLE or BLOCKED, and eventually TIMED_WAITING. These states point us toward problems between multiple threads:
- In a deadlock, several threads hold synchronized blocks on shared objects while each waits for a lock held by another
- In thread contention, a thread is blocked waiting for another thread to finish; for example, waiting for a lock to be released
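The deadlock case can be reproduced with two threads that acquire the same pair of locks in opposite order. This sketch then runs the same deadlock check that jstack performs at the end of a dump; the worker names and lock objects are made up for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();
    private static final CountDownLatch BOTH_HELD = new CountDownLatch(2);

    private static void spawn(String name, Object first, Object second) {
        Thread t = new Thread(() -> {
            synchronized (first) {
                BOTH_HELD.countDown();
                try {
                    BOTH_HELD.await(); // proceed only once both threads hold their first lock
                    synchronized (second) { /* never reached: guaranteed deadlock */ }
                } catch (InterruptedException ignored) { }
            }
        }, name);
        t.setDaemon(true); // daemon threads let the JVM exit despite the deadlock
        t.start();
    }

    public static void main(String[] args) throws InterruptedException {
        spawn("worker-1", LOCK_A, LOCK_B); // locks A, then B
        spawn("worker-2", LOCK_B, LOCK_A); // locks B, then A: opposite order
        BOTH_HELD.await();
        Thread.sleep(500); // give both threads time to block on the second lock

        // The same monitor-deadlock detection jstack reports as "Found one Java-level deadlock"
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();
        System.out.println(ids == null ? "no deadlock" : "deadlocked threads: " + ids.length);
    }
}
```

Running jstack against this process would show both workers BLOCKED, each reporting "waiting to lock" on the monitor the other holds.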
Execution issues
As a rule of thumb, for abnormally high CPU usage we only need to look at RUNNABLE threads. Together with the thread dump, we can use other commands to obtain additional information; one of them is top -H -p <pid>, which shows which threads within a particular process are consuming OS resources. We also need to glance at the internal JVM threads, such as GC, just in case. On the other hand, when processing throughput is abnormally low, we examine BLOCKED threads.
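One wrinkle when correlating the two outputs: top -H shows thread IDs in decimal, while jstack prints the nid in hexadecimal. A one-line conversion links them; the sample value here is taken from the excerpt above (nid=0xde4):

```java
public class NidConverter {
    public static void main(String[] args) {
        // Thread ID as reported by `top -H -p <pid>` (decimal)
        int tidFromTop = 3556;
        // jstack prints the same ID as a hex nid
        System.out.println("nid=0x" + Integer.toHexString(tidFromTop)); // nid=0xde4
    }
}
```

So the hot thread top reports as 3556 is the "http-nio-8000-exec-1" thread with nid=0xde4 in the dump.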
In these cases, a single dump is rarely enough to understand what is going on. We need several dumps taken at regular intervals so we can compare the stacks of the same thread at different times. On the one hand, one snapshot is not always enough to find the root of the problem; on the other hand, we need to avoid noise (too much redundant information) between snapshots.
To understand how threads evolve over time, a recommended best practice is to take at least three dumps, one every 10 seconds. Another useful tip is to split the dump file into small chunks to avoid crashes while loading it.
Recommendations
To find the root of the problem efficiently, we need to organize the huge amount of information in the stack traces. Therefore, we'll keep in mind the following recommendations:
- For execution issues, capture several snapshots at 10-second intervals to focus on the real problem. It is also recommended to split the files if needed, to avoid crashes while loading
- Use meaningful names when creating new threads, to better identify your source code
- Depending on the issue, ignore internal JVM processing (such as GC)
- Watch out for long-running or blocked threads when there is abnormal CPU or memory usage
- Correlate thread stacks with CPU processing by using top -H -p <pid>
- Most importantly, use an analyzer tool
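The thread-naming recommendation is easy to apply to thread pools by supplying a custom ThreadFactory; the "order-worker" prefix here is a hypothetical example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreads {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        // Every pool thread gets a descriptive name instead of "pool-1-thread-1"
        ThreadFactory factory = r -> new Thread(r, "order-worker-" + counter.incrementAndGet());
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);

        pool.submit(() -> System.out.println(Thread.currentThread().getName()));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

In a later thread dump, a stuck "order-worker-1" immediately points to the order-processing code path, whereas a generic "pool-1-thread-1" tells us nothing.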
Manually analyzing Java thread dumps can be tedious. For simple applications, we can identify the problematic thread by hand; for more complex situations, we need tools to ease the task.
Online tools
Several online tools are available. When using this kind of software, we need to take security into account: keep in mind that by uploading a dump, we may end up sharing our stack traces, including class and method names, with a third party.
FastThread
FastThread is probably the best online tool for analyzing thread dumps from production environments. It provides a very nice graphical user interface and includes features such as per-thread CPU usage, stack length, and the most used and complex methods:
FastThread also provides a REST API to automate thread dump analysis; with a simple cURL command, we can send a dump and get results instantly. The main drawback is security, since it stores the stack traces in the cloud.
JStack Review
JStack Review is an online tool for analyzing dumps. From a security perspective, it does not store the data in the cloud, which is a major advantage. It provides a graphical overview of all threads, showing the running methods and grouping them by state. JStack Review separates the threads with stacks from the rest, for example internal processes. Finally, it also includes the synchronizers and the ignored lines:
Spotify Online Java Thread Dump Analyzer
The Spotify Online Java Thread Dump Analyzer is an online open-source tool written in JavaScript. It shows the results in plain text, separating the threads with and without stacks, and also shows the top methods among the running threads:
Offline applications
There are also several standalone applications we can use locally.
JProfiler
JProfiler is one of the most powerful tools on the market, well known in the Java developer community. We can test its functionality with a 10-day trial license. JProfiler allows us to create profiling sessions and attach them to running applications. It includes features for spotting problems on the spot, such as CPU and memory usage and database analysis, and it also supports integration with IDEs:
IBM Thread Monitor and Dump Analyzer for Java (TMDA)
IBM TMDA can be used to identify thread contention, deadlocks, and bottlenecks. It is freely distributed and maintained, but comes without any warranty or support from IBM:
Irockel Thread Dump Analyser (TDA)
Irockel TDA is a standalone open-source tool licensed under LGPL v2.1. Its latest release (v2.4) came out in August 2020, so it is well maintained. It displays the thread dump as a tree and provides some statistics to ease navigation:
Eclipse Memory Analyzer (MAT)
The Eclipse Memory Analyzer is a fast and feature-rich Java heap analyzer that helps find memory leaks and reduce memory consumption.
Conclusion
Knowing what our threads are actually doing helps us keep our application's execution under control. Armed with thread dumps and the right tools, we can diagnose hangs, bottlenecks, and bugs with confidence.