This article shares a typical memory leak case encountered in production, together with the troubleshooting approach, in the hope of helping others avoid the same pitfall.
All screenshots and details have been anonymized; the affected system and interface are referred to as service A and interface B.
I. Background
One evening in May, at about 20:40, an alert suddenly came in: the memory usage of service A had exceeded 75%. The engineer on duty checked the monitoring and found that the response time of interface B had spiked to a peak.
After the problem was reported, I checked the interface logs and found nothing obviously wrong. I then looked at the JVM monitoring and found that a Full GC had been triggered at 20:40 with the heap almost full, and that this Full GC took over ten seconds to complete. In other words, the Full GC did not go smoothly, and it was the cause of the spike in interface response time.
Since the problem was exposed and preliminarily identified as abnormal heap usage, the container was restarted to avoid further impact on interface performance and online traffic. With the heap reset, both interface latency and memory usage returned to normal.
II. Problem observation and follow-up
We needed to figure out what was causing the heap to fill up abnormally.
1. Background of service A
Service A handles little traffic and is not released frequently; the last release went out about two months earlier.
This was the first memory alert since the service went live, and it appeared nearly two months after that last release, which suggested it was probably related to the code shipped in it.
2. JVM configuration of service A
Querying the JVM configuration:
Step 1: run `jcmd` to find the process ID of the Java service.
Step 2: run `jmap -heap <pid>` to print the JVM heap configuration, as shown in the figure below.
As the figure shows, the heap is configured with 5 GB:
- Young generation: Eden (1879 MB) + S0 (85 MB) + S1 (84 MB) = 2048 MB
- Old generation: 3072 MB
The garbage collector is Java 8's default throughput-oriented parallel collector.
For a service like this, a heap of this size should normally be more than enough.
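As a cross-check, the same configuration can also be read from inside the running JVM via the standard management beans. The following is a minimal sketch, not part of the original troubleshooting, offered as a convenient alternative to `jmap -heap`:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class HeapConfigDump {
    public static void main(String[] args) {
        // Print each memory pool (Eden, Survivor, Old Gen, ...) and its maximum size
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax();
            System.out.printf("%-30s max=%s%n", pool.getName(),
                    max < 0 ? "undefined" : (max / (1024 * 1024)) + "M");
        }
        // List the active collectors; Java 8's default parallel collector appears
        // as "PS Scavenge" (young generation) and "PS MarkSweep" (old generation)
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName());
        }
    }
}
```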
3. Memory usage of service A
Thanks to Java's garbage collection mechanism (Young GC plus Full GC), a stable Java application should settle into a bounded heap usage range and stay there even after running for a long time.
However, as the figure shows, memory usage kept climbing over time with no downward trend.
In addition, the Young GC monitoring for the same period showed that the heap memory remaining after each Young GC kept growing, and usage continued to grow even after Full GCs.
In short, everything pointed to a heap memory leak in service A, most likely introduced by the last release.
III. Problem location
Having confirmed a Java heap memory leak, we needed to pin down exactly which objects were occupying the heap, which code was producing them, and why the GC was not reclaiming them.
Use the MAT tool to view heap memory object usage
Step 1: run `jcmd` again to find the Java process ID.
Step 2: use `jmap` to dump the heap to a binary file so that the objects in it can be analyzed:
`jmap -dump:format=b,file=/logs/dump.log <pid>`
Once the command finishes, download the generated dump file to a local machine (it is worth compressing it first, since it is large).
Step 3: use MAT to analyze heap memory usage.
Memory Analyzer Tool (MAT) is a Java heap analysis tool that is very effective for troubleshooting memory leaks; detailed usage instructions are easy to find online.
The dump file was about 4 GB, so the -Xmx value in MAT's MemoryAnalyzer.ini configuration file had to be raised (to -Xmx5120m in this case); otherwise MAT itself runs out of memory and cannot open a dump this large.
After importing the dump into MAT, it turned out that an Entry array of fastJson's IdentityHashMap occupied most of the heap, as shown in the figure below.
At this point there were two key clues: fastJson and IdentityHashMap.
Even if these clues mean nothing to you yet, the first thing to do is search for them: fastJson IdentityHashMap.
You will find plenty of posts about memory leaks, and the picture quickly becomes clear.
Following those posts and the fastJson source call chain leads to the code below.
```java
// Probably for performance reasons, fastJson's parseObject method generates an
// ObjectDeserializer for the class referenced by the generic type and caches it
// in this IdentityHashMap container.
// IdentityHashMap is a linear-probing hash table whose keys are hashed with
// System.identityHashCode(key): each object instance gets its own unique key, and
// unlike the object's hashCode() method, this cannot be overridden.
// Here the key is a Type object, so two distinct Type instances are always cached
// as two separate entries, even when they describe the same type.
private final IdentityHashMap<Type, ObjectDeserializer> deserializers = new IdentityHashMap<Type, ObjectDeserializer>();
```
Armed with the fastJson clue, and cross-referencing it against the changes from the last release, the relevant code was located, as shown in the figure below.
Explanation: this is a utility class for converting JSON strings into objects with generic type parameters.
For example, it converts a JSON string into a Response<List<UserInfo>> object.
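The original utility class is only shown as a screenshot, but based on the description above and the bullet points that follow, it presumably looked roughly like the sketch below. This is a hypothetical reconstruction: the class name `JsonUtil` and the method signature are invented for illustration.

```java
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.util.ParameterizedTypeImpl;
import java.lang.reflect.Type;

public class JsonUtil {

    /**
     * Parse a JSON string into an object with one generic type argument,
     * e.g. a Response<List<UserInfo>>.
     * The leak: a brand-new ParameterizedTypeImpl is created on EVERY call,
     * and fastJson caches a deserializer against that new Type instance,
     * so the internal cache only ever grows.
     */
    public static <T> T parseGeneric(String json, Class<?> rawClass, Type argType) {
        Type type = new ParameterizedTypeImpl(new Type[]{argType}, null, rawClass);
        return JSON.parseObject(json, type);
    }
}
```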
Combined with the IdentityHashMap caching in fastJson's parseObject method shown above, the cause of the memory leak can be summarized as follows:
- Every call to the utility class instantiates a new ParameterizedTypeImpl object.
- JSON.parseObject(json, type) puts that Type object (the ParameterizedTypeImpl instance) into the IdentityHashMap cache.
- IdentityHashMap is a linear-probing hash table keyed by System.identityHashCode(key); it is backed by a large array, and every newly instantiated Type object becomes a separate entry in it.
- So as the utility class keeps being called, the IdentityHashMap's backing array keeps growing and its entries are never reclaimed by GC, until the heap fills up (see the sketch below for why identity hashing defeats the cache).
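To make the identity-keying point concrete, here is a small self-contained sketch using the JDK's java.util.IdentityHashMap, which, like fastJson's internal one, hashes keys with System.identityHashCode. The keys and values are just placeholder strings standing in for Type objects and deserializers.

```java
import java.util.IdentityHashMap;

public class IdentityKeyDemo {
    public static void main(String[] args) {
        // Two distinct instances that are equal according to equals()
        String key1 = new String("Response<List<UserInfo>>");
        String key2 = new String("Response<List<UserInfo>>");

        IdentityHashMap<String, String> cache = new IdentityHashMap<>();
        cache.put(key1, "deserializer-1");
        cache.put(key2, "deserializer-2");

        // Identity hashing ignores equals(), so each instance gets its own entry
        System.out.println(cache.size()); // 2
        // The identity hash codes of the two keys are (almost certainly) different
        System.out.println(System.identityHashCode(key1) == System.identityHashCode(key2));
    }
}
```

A new Type instance per call therefore never hits the cache and always adds a new entry, which is exactly the growth pattern seen in the heap dump.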
IV. Problem solving
- Delete the utility class that triggered the memory leak.
- Re-implement JSON-to-generic-object conversion separately, using TypeReference as described in github.com/alibaba/fas… (a sketch follows this list).
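A minimal sketch of what the replacement might look like; Response and UserInfo are placeholders for the real business classes, which are not shown in the article.

```java
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;
import java.util.List;

public class JsonParser {

    // Placeholder business classes, assumed here for illustration only
    public static class UserInfo { public String name; }
    public static class Response<T> { public int code; public T data; }

    public static Response<List<UserInfo>> parseUserList(String json) {
        // TypeReference normalizes the generic Type through an internal cache,
        // so repeated calls reuse the same Type instance and fastJson's
        // deserializer cache no longer grows with every invocation
        return JSON.parseObject(json, new TypeReference<Response<List<UserInfo>>>() {});
    }
}
```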
After this change, memory usage returned to normal.
V. Summary
- This was a memory leak caused by improper use of fastJson.
- Many problems only surface once usage reaches a certain volume.
- Be careful when adopting third-party libraries, and look into other people's experience with them first.
VI. References
[MAT tool usage introduction] blog.csdn.net/bohu83/arti…
[fastjson deserialization misuse causing a memory leak] www.cnblogs.com/liqipeng/p/…
[“com.alibaba.fastjson” memory leak problem] www.jianshu.com/p/adfde1a31…
[GitHub issue: parseObject memory leak] github.com/alibaba/fas…
[GitHub wiki: how to use TypeReference] github.com/alibaba/fas…