In real-world enterprise Java development and maintenance, we sometimes run into problems such as the following:

OutOfMemoryError (insufficient memory)

Memory leaks

Thread deadlock

Lock Contention

A Java process consuming too much CPU

Many people ignore these problems in daily development and operations (for example, by simply restarting the server or adding memory without digging into the root cause), but understanding and solving them is a necessary step for any Java programmer who wants to advance. This article introduces some of the most commonly used JVM performance monitoring and tuning tools, in the hope of providing some inspiration.

Whether you work in operations, development, or testing, these monitoring and tuning tools are worth mastering.

A, JPS (Java Virtual Machine Process Status Tool)

jps outputs status information about the Java processes (JVM instances) running on a system. The syntax is as follows:

jps [options] [hostid]

If you do not specify hostid, the default value is the current host or server.

The command line parameters are described as follows:

-q Suppresses the class name, jar name, and arguments passed to the main method, printing only process IDs

-m Prints the arguments passed to the main method

-l Prints the full package name of the main class, or the full path of the jar

-v Prints the arguments passed to the JVM

For example:

root@ubuntu:/# jps -m -l

2458 org.artifactory.standalone.main.Main /usr/local/artifactory-2.2.5/etc/jetty.xml
29920 com.sun.tools.hat.Main -port 9998 /tmp/dump.dat
3149 org.apache.catalina.startup.Bootstrap start
30972 sun.tools.jps.Jps -m -l
8247 org.apache.catalina.startup.Bootstrap start
25687 com.sun.tools.hat.Main -port 9999 dump.dat
21711 mrf-center.jar
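For illustration only, a rough equivalent of jps can be written against the JDK's Attach API (on JDK 8 and earlier this requires tools.jar on the classpath; the class name JpsLike below is my own placeholder):

import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

// Lists running JVMs much like "jps -l" does: pid plus the main class or jar with its arguments.
public class JpsLike {
    public static void main(String[] args) {
        for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
            System.out.println(vmd.id() + " " + vmd.displayName());
        }
    }
}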

B, jstack

Jstack is used to view stack information about threads in a Java process. The syntax is as follows:

jstack [option] pid

jstack [option] executable core

jstack [option] [server-id@]remote-hostname-or-ip

The command line parameters are described as follows:

-l Long listing; displays additional information about locks. Use jstack -l pid to examine lock ownership when you suspect a deadlock.

-m Mixed mode; displays C/C++ stack frames (for example, native methods) as well as Java frames.
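For illustration, here is a minimal, intentionally broken sketch (the class name DeadlockDemo is mine) that produces a classic lock-ordering deadlock; running jstack -l against it should report a Java-level deadlock along with the monitors each thread owns and is waiting for:

// DeadlockDemo: two threads acquire the same two locks in opposite order,
// so each ends up waiting forever for the lock the other holds.
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> lockInOrder(lockA, lockB), "thread-1").start();
        new Thread(() -> lockInOrder(lockB, lockA), "thread-2").start();
    }

    private static void lockInOrder(Object first, Object second) {
        synchronized (first) {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            synchronized (second) {
                System.out.println(Thread.currentThread().getName() + " got both locks");
            }
        }
    }
}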

jstack is used a lot in JVM performance tuning because it lets you map a thread's stack back to specific code. Here is an example that finds the most CPU-consuming Java thread in a Java process and locates its stack, using the commands ps, top, printf, jstack, and grep.

The first step is to find the Java process ID. The name of the Java application I have deployed on the server is MRF-Center:

root@ubuntu:/# ps -ef | grep mrf-center | grep -v grep

root     21711     1  1 14:47 pts/3    00:02:10 java -jar mrf-center.jar

The process ID is 21711. The second step is to find the most CPU-consuming thread inside that process; you can use ps -Lfp pid, ps -mp pid -o THREAD,tid,time, or top -Hp pid.

In the resulting output, the TIME column shows the CPU time consumed by each thread; the thread that has used the most CPU time here has ID 21742. Convert it to hexadecimal:

printf "%x\n" 21742

This gives 54ee, the hexadecimal value of 21742, which we will use in the next step.

The third step is to use jstack to print the stacks of process 21711 and grep for the thread's hexadecimal ID:

root@ubuntu:/# jstack 21711 | grep 54ee

"PollIntervalRetrySchedulerThread" prio=10 tid=0x00007f950043e000 nid=0x54ee in Object.wait() [0x00007f94c6eda000]

So the CPU is being consumed in Object.wait() of the thread named PollIntervalRetrySchedulerThread. Searching my code for that thread, I located the following snippet:

// Idle wait
getLog().info("Thread [" + getName() + "] is idle waiting...");
schedulerThreadState = PollTaskSchedulerThreadState.IdleWaiting;
long now = System.currentTimeMillis();
long waitTime = now + getIdleWaitTime();
long timeUntilContinue = waitTime - now;
synchronized (sigLock) {
    try {
        if (!halted.get()) {
            sigLock.wait(timeUntilContinue);
        }
    } catch (InterruptedException ignore) {
    }
}

This is the idle-wait code of the polling task, and the sigLock.wait(timeUntilContinue) call is the Object.wait() we saw in the thread stack.
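jstack works from outside the process; if you want the same kind of per-thread CPU accounting from inside the JVM, the standard ThreadMXBean can provide it. A minimal sketch (the class name CpuTopThreads is mine; thread CPU time measurement must be supported and enabled on the platform):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints each live thread's Java thread id, name, and accumulated CPU time in milliseconds.
// Note: the Java thread id is not the native nid that jstack prints.
public class CpuTopThreads {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            long cpuNanos = threads.getThreadCpuTime(id);
            if (info == null || cpuNanos < 0) {
                continue; // thread already died, or CPU time not supported/enabled
            }
            System.out.printf("tid=%d cpu=%dms name=%s%n",
                    id, cpuNanos / 1_000_000, info.getThreadName());
        }
    }
}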

C, jmap (Memory Map) and jhat (Java Heap Analysis Tool)

Jmap is used to check heap memory usage, usually in combination with JHAT.

Jmap syntax is as follows:

jmap [option] pid

jmap [option] executable core

jmap [option] [server-id@]remote-hostname-or-ip

If the target process runs on a 64-bit JVM, you may need to add the -J-d64 option, for example: jmap -J-d64 -heap pid.

jmap -permstat pid

This prints the class loaders of the process and the permanent-generation objects they have loaded. For each class loader it shows the name, whether the object is alive (not reliable), the object address, the parent class loader, and the number and size of the classes it has loaded.

Use jmap -heap pid to view the heap usage of a process, including the GC algorithm in use, the heap configuration parameters, and the usage of each generation. For example:

root@ubuntu:/# jmap -heap 21711

Attaching to process ID 21711, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 20.10-b01

using thread-local object allocation.
Parallel GC with 4 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 2067791872 (1972.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   MaxPermSize      = 85983232 (82.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 6422528 (6.125MB)
   used     = 5445552 (5.1932830810546875MB)
   free     = 976976 (0.9317169189453125MB)
   84.78829520089286% used
From Space:
   capacity = 131072 (0.125MB)
   used     = 98304 (0.09375MB)
   free     = 32768 (0.03125MB)
   75.0% used
To Space:
   capacity = 131072 (0.125MB)
   used     = 0 (0.0MB)
   free     = 131072 (0.125MB)
   0.0% used
PS Old Generation
   capacity = 35258368 (33.625MB)
   used     = 4119544 (3.9287033081054688MB)
   free     = 31138824 (29.69629669189453MB)
   11.683876009235595% used
PS Perm Generation
   capacity = 52428800 (50.0MB)
   used     = 26075168 (24.867218017578125MB)
   free     = 26353632 (25.132781982421875MB)
   49.73443603515625% used
....
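If you just need a rough in-process view of overall heap usage (without the per-generation breakdown jmap -heap gives), the standard Runtime API can be queried directly; a minimal sketch (HeapSummary is a hypothetical class name):

// Rough in-process counterpart to "jmap -heap": overall heap totals only.
public class HeapSummary {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                rt.maxMemory() / (1024 * 1024),
                rt.totalMemory() / (1024 * 1024),
                usedBytes / (1024 * 1024));
    }
}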

Use jmap -histo[:live] pid to print a histogram of the number of instances and total size of each class in the heap; with the :live suffix, only live objects are counted. For example:

root@ubuntu:/# jmap -histo:live 21711 | more

num     #instances         #bytes  class name
----------------------------------------------
   1:         38445        5597736  <constMethodKlass>
   2:         38445        5237288  <methodKlass>
   3:          3500        3749504  <constantPoolKlass>
   4:         60858        3242600  <symbolKlass>
   5:          3500        2715264  <instanceKlassKlass>
   6:          2796        2131424  <constantPoolCacheKlass>
   7:          5543        1317400  [I
   8:         13714        1010768  [C
   9:          4752        1003344  [B
  10:          1225         639656  <methodDataKlass>
  11:         14194         454208  java.lang.String
  12:          3809         396136  java.lang.Class
  13:          4979         311952  [S
  14:          5598         287064  [[I
  15:          3028         266464  java.lang.reflect.Method
  16:           280         163520  <objArrayKlassKlass>
  17:          4355         139360  java.util.HashMap$Entry
  18:          1869         138568  [Ljava.util.HashMap$Entry;
  19:          2443          97720  java.util.LinkedHashMap$Entry
  20:          2072          82880  java.lang.ref.SoftReference
  21:          1807          71528  [Ljava.lang.Object;
  22:          2206          70592  java.lang.ref.WeakReference
  23:           934          52304  java.util.LinkedHashMap
  24:           871          48776  java.beans.MethodDescriptor
  25:          1442          46144  java.util.concurrent.ConcurrentHashMap$HashEntry
  26:           804          38592  java.util.HashMap
  27:           948          37920  java.util.concurrent.ConcurrentHashMap$Segment
  28:          1621          35696  [Ljava.lang.Class;
  29:          1313          34880  [Ljava.lang.String;
  30:          1396          33504  java.util.LinkedList$Entry
  31:           462          33264  java.lang.reflect.Field
  32:          1024          32768  java.util.Hashtable$Entry
  33:           948          31440  [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;

The class name column encodes the object type as follows:

B byte

C char

D double

F float

I int

J long

Z boolean

[ array, for example [I means int[]

[L<class name>; an array of objects of that class, for example [Ljava.lang.String; means String[]
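These codes are simply the JVM's internal type descriptors for arrays, and you can verify them from any Java program; a small sketch (TypeNames is a hypothetical class name):

// Prints the JVM-internal names that show up in jmap -histo output.
public class TypeNames {
    public static void main(String[] args) {
        System.out.println(int[].class.getName());       // [I
        System.out.println(char[].class.getName());      // [C
        System.out.println(byte[].class.getName());      // [B
        System.out.println(int[][].class.getName());     // [[I
        System.out.println(String[].class.getName());    // [Ljava.lang.String;
    }
}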

Another common use is to dump the heap of a process to a file with jmap and then analyze it with jhat. The jmap dump command has the following format:

jmap -dump:format=b,file=dumpFileName pid

For example, to dump the heap of process 21711:

root@ubuntu:/# jmap -dump:format=b,file=/tmp/dump.dat 21711

Dumping heap to /tmp/dump.dat ...
Heap dump file created
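Besides running jmap from outside, a HotSpot-based application can trigger the same kind of dump on itself through the HotSpotDiagnostic MXBean; a minimal sketch (HotSpot-specific; HeapDumper is a hypothetical class name, and the output path is reused from the example above):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Programmatic counterpart of "jmap -dump:format=b,file=..." for the current JVM.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean mxBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // second argument true = dump only live objects; fails if the file already exists
        mxBean.dumpHeap("/tmp/dump.dat", true);
    }
}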

Dump files can be inspected with MAT, VisualVM, and other tools; here we use jhat:

root@ubuntu:/# jhat -port 9998 /tmp/dump.dat

Reading from /tmp/dump.dat...
Dump file created Tue Jan 28 17:46:14 CST 2014
Snapshot read, resolving...
Resolving 132207 objects...
Chasing references, expect 26 dots..........................
Eliminating duplicate references..........................
Snapshot resolved.
Started HTTP server on port 9998
Server is ready.

If the dump file is very large, you may need to add -J-Xmx512m (or more) so that jhat itself has enough heap, for example: jhat -J-Xmx512m -port 9998 /tmp/dump.dat. Then open the host address on port 9998 in a browser to view the result:

The last item on the page supports OQL (Object Query Language) queries.

D, Jstat (JVM Statistical Monitoring Tool)

The syntax is as follows:

jstat [generalOption | outputOptions vmid [interval[s|ms] [count]]]

vmid is the Java virtual machine ID; on Linux/Unix it is generally the process ID. interval is the sampling interval and count is the number of samples. For example, the following samples GC information 4 times at 250 ms intervals:

root@ubuntu:/# jstat -gc 21711 250 4

S0C    S1C    S0U   S1U   EC      EU      OC       OU      PC       PU       YGC   YGCT   FGC  FGCT   GCT
192.0  192.0  64.0  0.0   6144.0  1854.9  32000.0  4111.6  55296.0  25472.7  702   0.431  3    0.218  0.649
192.0  192.0  64.0  0.0   6144.0  1972.2  32000.0  4111.6  55296.0  25472.7  702   0.431  3    0.218  0.649
192.0  192.0  64.0  0.0   6144.0  1972.2  32000.0  4111.6  55296.0  25472.7  702   0.431  3    0.218  0.649
192.0  192.0  64.0  0.0   6144.0  2109.7  32000.0  4111.6  55296.0  25472.7  702   0.431  3    0.218  0.649

To understand what these columns mean, recall the layout of the JVM heap:

Heap memory = young + old + permanent

Young generation = Eden zone + two Survivor zones (From and To)

Now explain the meaning of each column:

S0C, S1C, S0U, S1U: capacity and usage of the Survivor 0 and Survivor 1 zones

EC, EU: capacity and usage of the Eden zone

OC, OU: capacity and usage of the old generation

PC, PU: capacity and usage of the permanent generation

YGC, YGCT: number of young-generation GCs and the total time spent in them

FGC, FGCT: number of full GCs and the total time spent in them

GCT: total time spent in all GCs
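If you would rather read the same per-generation figures from inside the JVM, the standard memory and GC MXBeans expose them; a minimal sketch (GenerationUsage is a hypothetical class name, and pool names such as "PS Eden Space" depend on the collector in use):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints used/committed bytes per memory pool (Eden, Survivor, Old, Perm/Metaspace...)
// and GC counts/times, roughly mirroring the jstat -gc columns.
public class GenerationUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-25s used=%dKB committed=%dKB%n",
                    pool.getName(), usage.getUsed() / 1024, usage.getCommitted() / 1024);
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-25s count=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}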

E, Hprof (Heap/CPU Profiling Tool)

hprof can report CPU usage statistics and heap memory allocation statistics.

The syntax is as follows:

java -agentlib:hprof[=options] ToBeProfiledClass

java -Xrunhprof[:options] ToBeProfiledClass

javac -J-agentlib:hprof[=options] ToBeProfiledClass

The full command options are as follows:

Option Name and Value Description Default

——————— ———– ——-

heap=dump|sites|all heap profiling all

cpu=samples|times|old CPU usage off

monitor=y|n monitor contention n

format=a|b text(txt) or binary output a

file=<file> write data to file java.hprof[.txt]

net=<host>:<port> send data over a socket off

depth=<size> stack trace depth 4

interval=<ms> sample interval in ms 10

cutoff=<value> output cutoff point 0.0001

lineno=y|n line number in traces? y

thread=y|n thread in traces? n

doe=y|n dump on exit? y

msa=y|n Solaris micro state accounting n

force=y|n force output to <file> y

verbose=y|n print messages about dumps y

Here are some examples from the official guide.

CPU Usage Sampling Profiling (cpu=samples) example:

java -agentlib:hprof=cpu=samples,interval=20,depth=3 Hello

This samples CPU consumption every 20 milliseconds with a stack depth of 3 and generates a profile file named java.hprof.txt in the current directory.
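The Hello class in these examples is simply whatever program you want to profile; any class with a main method will do. A trivial placeholder might look like this (my own sketch, not the one from the official guide):

// A throwaway workload so hprof has something to sample: builds a long string in a loop.
public class Hello {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            sb.append(i).append(',');
        }
        System.out.println("length=" + sb.length());
    }
}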

CPU Usage Times Profiling (cpu=times) example: this gives more fine-grained CPU consumption information than CPU usage sampling, down to the start and end of each method call, and is implemented with bytecode injection (BCI):

javac -J-agentlib:hprof=cpu=times Hello.java

Heap Allocation Profiling (heap=sites) example:

javac -J-agentlib:hprof=heap=sites Hello.java

Heap Dump (heap=dump) example, which generates more detailed heap dump information than the heap allocation profiling above:

javac -J-agentlib:hprof=heap=dump Hello.java

Although adding -Xrunhprof:heap=sites (or the equivalent -agentlib:hprof option) to the JVM startup parameters can produce a CPU/heap profile, hprof has a significant impact on JVM performance and is not recommended for online production servers.
