Checking the JVM health of your online system with jstat
Powerful Jstat
jstat makes it easy to see, for a running system, how much memory the JVM is using in the Eden, Survivor, and old generation regions, as well as how many Young GCs and Full GCs have been executed and how long they took. From these indicators we can analyze how the system is currently behaving, judge how much memory pressure and GC pressure it is under, and decide whether the memory allocation is reasonable. Let's take a quick look at how to use the jstat tool.
jstat -gc PID
After running this command, you will see a line of output whose columns have the following meanings (a sample of the raw output follows the table):
Column | Meaning |
---|---|
S0C | Capacity of the From Survivor (S0) region |
S1C | Capacity of the To Survivor (S1) region |
S0U | Memory currently used in the From Survivor region |
S1U | Memory currently used in the To Survivor region |
EC | Capacity of the Eden region |
EU | Memory currently used in the Eden region |
OC | Capacity of the old generation |
OU | Memory currently used in the old generation |
MC | Capacity of the method area (PermGen or Metaspace, depending on JDK version) |
MU | Memory currently used in the method area (PermGen or Metaspace) |
YGC | Number of Young GCs the system has run so far |
YGCT | Total time spent in Young GC so far |
FGC | Number of Full GCs the system has run so far |
FGCT | Total time spent in Full GC so far |
GCT | Total time spent in all GCs |
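For reference, a raw jstat -gc sample looks roughly like this (the PID and all figures are invented; sizes are reported in KB and times in seconds; on JDK 8 two extra columns, CCSC and CCSU for the compressed class space, also appear even though they are not listed in the table above):

```
$ jstat -gc 12345
 S0C      S1C     S0U     S1U      EC        EU        OC        OU       MC       MU      CCSC    CCSU   YGC    YGCT   FGC   FGCT     GCT
51200.0  51200.0   0.0   7168.0  819200.0  215040.0  819200.0  358400.0  65536.0  62000.5  8192.0  7600.2   260  20.100   10  30.500  50.600
```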
Other jstat commands
In addition to jstat -gc, which is the most commonly used command, jstat has a few other options that show things in more detail, as shown below:
Command | Description |
---|---|
jstat -gccapacity PID | Heap memory capacity analysis |
jstat -gcnew PID | Young generation GC analysis; the TT and MTT columns show the current tenuring threshold (how many Young GCs an object may survive in the young generation before promotion) and the maximum tenuring threshold (see the sample output after this table) |
jstat -gcnewcapacity PID | Young generation memory capacity analysis |
jstat -gcold PID | Old generation GC analysis |
jstat -gcoldcapacity PID | Old generation memory capacity analysis |
jstat -gcmetacapacity PID | Metaspace (metadata area) memory analysis |
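As a quick illustration, jstat -gcnew output looks roughly like this on JDK 8 (all figures are invented); TT and MTT are the two threshold columns mentioned above, and DSS is the desired survivor size:

```
$ jstat -gcnew 12345
 S0C      S1C      S0U     S1U   TT MTT   DSS       EC        EU      YGC    YGCT
51200.0  51200.0    0.0  7168.0   6  15  25600.0  819200.0  215040.0   260  20.100
```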
How exactly should you use the jstat tool?
Here is the key question when using jstat: what exactly do we want to know about the JVM processes running online? Mainly the following: the growth rate of young generation objects; how often Young GC is triggered and how long each Young GC takes; how many objects survive each Young GC and how many of them enter the old generation; the growth rate of old generation objects; and how often Full GC is triggered and how long each Full GC takes. With this information in hand, we can apply the JVM GC optimization methods analyzed in the previous weeks to allocate memory so that objects stay in the young generation as much as possible and frequent Full GCs are avoided. That is the best performance optimization for the JVM!
Growth rate of young generation objects
Run the following command on your online Linux machine: jstat -gc PID 1000 10. This samples the JVM every 1000 milliseconds, 10 times in a row, so you can watch how the Eden usage changes over time. For example, suppose the output shows 200MB of Eden used in the first second, 205MB in the second second, 209MB in the third second, and so on. You can easily deduce that the system is creating roughly 5MB of new objects per second. You can adapt this to your own situation: if your system's load is very low and it does not even receive a request every second, change the 1-second interval to 1 minute or even 10 minutes and observe how many objects are created per minute or per 10 minutes. Also, most systems have two states, peak and ordinary. During peak hours, when many users are active, use the command to measure the object growth rate at peak; then measure it again during ordinary, off-peak hours. Following this approach, you will have a fairly clear picture of the object growth rate of your online system in both peak and ordinary periods.
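A minimal sketch of this measurement, assuming the JDK 8 column order of jstat -gc where EU is the 6th column (the PID is a placeholder; adjust the interval to match your load):

```
#!/bin/sh
# Sample Eden usage (EU, in KB) every second, 10 times, and print the
# per-second delta. A negative delta means a Young GC happened in between.
PID=12345            # hypothetical process id
jstat -gc "$PID" 1000 10 | awk '
  NR == 1 { next }                       # skip the header line
  prev != "" { printf "Eden grew by %.0f KB in this interval\n", $6 - prev }
  { prev = $6 }
'
```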
Young GC frequency and how long each Young GC takes
It is actually easy to predict how often Young GC is triggered: since you know the object growth rate at peak and ordinary times, you can estimate the Young GC frequency for both. For example, if Eden has 800MB of memory and roughly 5MB of objects are added per second at peak time, a Young GC will be triggered about once every 3 minutes at peak. If the ordinary phase adds about 0.5MB of objects per second, it takes roughly half an hour to trigger a Young GC. What about the average time per Young GC? Simple: as mentioned earlier, jstat tells you how many Young GCs have occurred on your system so far and the total time they took. For example, if 260 Young GCs occurred after 24 hours of running, with a total time of 20 seconds, then each Young GC takes tens of milliseconds on average. In other words, every Young GC pauses the system for tens of milliseconds.
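A quick way to compute that average from a single jstat -gc sample, as a sketch assuming the JDK 8 column order where YGC and YGCT are the 13th and 14th columns (the PID is invented):

```
# Print the average Young GC pause in milliseconds.
jstat -gc 12345 | awk 'NR == 2 && $13 > 0 {
  printf "Young GCs: %d, avg pause: %.1f ms\n", $13, ($14 / $13) * 1000
}'
```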
How many objects survive each Young GC and enter the old generation
There is no way to tell directly how many objects survive each Young GC, but you can make a rough estimate. We have already worked out how often a Young GC occurs at peak time, for example once every 3 minutes, so we can run the following jstat command: jstat -gc PID 180000 10. This takes a sample every three minutes, 10 times in a row. Since a Young GC occurs roughly every three minutes, you can observe how the Eden, Survivor, and old generation usage change across each Young GC. Normally, Eden goes from nearly full back down to almost empty; for example, an 800MB Eden drops to a few tens of MB used. The Survivor regions will hold some surviving objects, and the old generation usage may grow a little. The key thing to watch is the growth rate of old generation usage. Normally, the old generation should not keep growing rapidly, because an ordinary system does not have that many long-lived objects. If you find that the old generation grows by tens of MB after every Young GC, it probably means too many objects survive each Young GC. When too many objects survive, they may not fit into a Survivor region, or the dynamic age determination rule may be triggered after they are placed into Survivor; either way, most of the surviving objects end up in the old generation. This is the most common case. If the old generation only grows by a few hundred KB or a few MB after each Young GC, that is fine; if it grows rapidly, something is definitely wrong. With this observation strategy, you can see roughly how many objects survive each Young GC: the objects that remain in the Survivor regions plus the objects that entered the old generation. You also learn the growth rate of old generation objects: for example, if a Young GC runs every 3 minutes and 50MB of objects enter the old generation each time, the old generation grows by 50MB every 3 minutes.
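The same sampling trick can be used to watch old generation usage across Young GCs; a sketch assuming the JDK 8 column order, where OU is the 8th column of jstat -gc output (the PID and the 3-minute interval are examples):

```
# Sample old generation usage (OU, in KB) every 3 minutes, 10 times,
# and print how much it grew between samples.
jstat -gc 12345 180000 10 | awk '
  NR == 1 { next }                       # skip the header line
  prev != "" { printf "old gen grew by %.0f KB in 3 minutes\n", $8 - prev }
  { prev = $8 }
'
```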
Full GC trigger timing and elapsed time
Once you know the growth rate of old generation objects, it is fairly clear when a Full GC will be triggered. For example, if the old generation has 800MB of memory and gains 50MB of objects every 3 minutes, it will fill up and trigger a Full GC roughly every hour. Then look at the total number of Full GC runs and their total elapsed time as printed by jstat: for example, if 10 Full GC runs took 30 seconds in total, each Full GC takes about 3 seconds.
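The corresponding one-liner for the Full GC average is similar (again a sketch assuming the JDK 8 column order of jstat -gc, where FGC and FGCT are the 15th and 16th columns; the PID is invented):

```
# Print the average Full GC pause in seconds.
jstat -gc 12345 | awk 'NR == 2 && $15 > 0 {
  printf "Full GCs: %d, avg pause: %.1f s\n", $15, $16 / $15
}'
```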
Use jmap and jhat to see the object distribution of the online system
Use JMap to understand the memory area of the system at runtime
Sometimes we find that the JVM is creating objects very quickly and want to see exactly which objects are taking up so much memory. If you find objects in your code whose creation timing can be optimized, you may even be able to go back and optimize the code to avoid that memory footprint. Of course, unless you are in an extreme situation such as an OOM, there is usually no need to rush to optimize the code. In this article we will learn how to see the distribution of objects in an online system's JVM. For example, in last week's case we found that 500KB of unknown objects kept appearing in the young generation. Aren't you curious what they are? It would be nice to be able to see exactly what those 500KB of objects in the JVM are, and that is what this technique is for. Let's start with a command:
jmap -heap PID
This command prints a series of information. We won't paste the full output here, because it is long and not especially interesting; you can understand it just from reading it. Roughly speaking, it prints some parameters related to heap memory, followed by basic information about each region of the current heap: the total capacity, used capacity, and remaining capacity of the Eden region, of the two Survivor regions, and of the old generation. You may be thinking that jstat already shows this, and that's right: you don't usually use jmap for this information, because it is less complete than jstat's output and contains no GC statistics at all.
Use JMap to understand object distribution at system runtime
A useful way to use the jmap command is as follows:
jmap -histo PID
This command will print information similar to the following:
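An abridged, illustrative example of what the output looks like (the class names, instance counts, and sizes below are invented):

```
 num     #instances         #bytes  class name
----------------------------------------------
   1:        512000       61440000  [C
   2:        498000       11952000  java.lang.String
   3:         12000        9600000  [B
   4:         35000        1120000  java.util.HashMap$Node
   ...
```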
It sorts objects by the amount of memory their instances occupy, in descending order, so the classes whose objects take up the most memory appear at the top.
So if you just want a quick snapshot of how much memory the objects in the current JVM are using, the jmap -histo command is enough: you can immediately see which kinds of objects are currently occupying the most memory.
Generate a heap memory dump snapshot using Jmap
But if a snapshot of object memory usage like the one above is not detailed enough, you can use jmap to dump the entire heap memory into a file for deeper analysis, as follows:
jmap -dump:live,format=b,file=dump.hprof PID
This command creates a dump.hprof file in the current directory. The file is in binary format and cannot be opened directly.
Analyze the heap dump snapshot in the browser using jhat
Then you can use jhat to analyze the heap dump. jhat has a built-in web server that lets you analyze the heap dump graphically from your browser. Start the jhat server with the following command:
jhat dump.hprof
You can then visit port 7000 of the current machine in your browser to graphically analyze the distribution of objects in heap memory.
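jhat can need quite a lot of memory itself to parse a large dump, and you may want to serve on a different port. A minimal sketch (the heap size and port below are arbitrary choices; the dump file name matches the one generated above):

```
# Give jhat's own JVM 4GB of heap for a large dump, and serve on port 8000
jhat -J-Xmx4g -port 8000 dump.hprof
# Then open http://<machine-ip>:8000/ in the browser
```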
From test to production: how to analyze JVM health and optimize it properly
Predictive optimization after the system is developed
This means estimating in advance: how many requests the system will handle per second, how many objects each request will create and how much memory they will use, what machine configuration to choose, how much memory to give the young generation, how often Young GC will fire, how quickly objects will be promoted to the old generation, how much memory to give the old generation, and how often Full GC will fire. All of these can be reasonably predicted from your own code. Once the estimation is done, you can apply the optimization ideas described in previous cases to set some initial JVM parameters for your system, such as heap size, young generation size, Eden-to-Survivor ratio, old generation size, the large-object threshold, and the age threshold for objects entering the old generation. The optimization idea can be summed up in one sentence: try to keep the objects surviving each Young GC below 50% of a Survivor region so they stay in the young generation, try not to let objects reach the old generation, and minimize the frequency of Full GC to avoid the impact of frequent Full GC on JVM performance.
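As a rough illustration of what those initial parameters can look like on the command line (the sizes below are arbitrary placeholders rather than recommendations, the flags assume a generational collector such as ParNew + CMS where they all take effect, and the jar name is hypothetical):

```
# Heap 4g, young gen 2g, Eden : each Survivor = 8 : 1, objects larger than
# 1MB allocated directly in the old generation, promotion age threshold 5.
java -Xms4g -Xmx4g \
     -Xmn2g \
     -XX:SurvivorRatio=8 \
     -XX:PretenureSizeThreshold=1m \
     -XX:MaxTenuringThreshold=5 \
     -jar your-app.jar
```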
JVM optimization during system pressure testing
A system goes from local unit tests, to integration tests, to functional tests in the test environment, to pressure tests in the pre-release environment, making sure all functions work and that performance, stability, and concurrency hold up under a certain load, and only then is it deployed to production. A key part of this process is the pressure test in the pre-release environment. Here you use stress-testing tools to simulate, say, 1000 users accessing the system at the same time, generating 500 requests per second, and then see whether the system can handle 500 requests per second. At the same time, check whether the response latency of each interface stays within 200ms, that is, whether interface performance is acceptable, or simulate millions of rows in a single database table and see whether the system still runs stably. Then look at how the JVM behaves in the pressure-test environment. If you find that objects enter the old generation too quickly, it may be that the young generation is too small, which causes frequent Young GCs; and if many objects are still alive at Young GC time but Survivor is too small, many objects will frequently be promoted to the old generation. Of course, it could be something else as well.
Keep in mind that true optimization must be based on what you observe in your own system, adjusting the memory distribution accordingly. There is no fixed template for JVM optimization.
JVM monitoring of online systems
Once your system is online, you need to monitor its JVM, and this monitoring usually happens in one of two ways. The first is the "low-tech" way: use jstat, jmap, jhat, and similar tools during peak and off-peak hours to check whether the JVM of the online system is running normally and whether Full GC is happening frequently. If there is a problem, optimize; if not, check it regularly, for example every day or every week.
The second approach is more common in mid-sized to large companies, which usually deploy dedicated monitoring systems such as Zabbix, OpenFalcon, or Ganglia. Any system you deploy can then report its JVM statistics to these monitoring systems. In the monitoring system's visual interface you can see all the indicators you need, including curves for each memory region: you can directly see the growth of objects in the Eden region, the frequency and duration of Young GC, the growth of the old generation, and the frequency and duration of Full GC. These tools also let you set up alerting: you can define a rule such as "if a JVM of the online system has more than five Full GCs within 10 minutes, send an alert" by email or text message, so you don't have to look at it every day. The use of these monitoring tools is beyond the scope of this column, because every company's setup is different. If you are interested, you can learn about them yourself; for example, search for "OpenFalcon monitoring the JVM" and you will find plenty of information.
To sum up in one sentence: for a system running online, either monitor it manually with command-line tools and optimize when problems are found, or rely on the company's monitoring system to monitor it automatically and check its day-to-day running status visually.
From: www.cnblogs.com/klvchen/art…