Interviewer: How about we talk about JVM tuning today?
Interviewer: Have you had any experience tuning a JVM in a production environment?
Candidate: No
Interviewer:…
Candidate: Hmm… here's the thing. The general way we think about optimizing a system is this:
Candidate: 1. Generally speaking, the relational database is the first bottleneck, so first check whether the problem is in the database
Candidate: (In this step, we evaluate whether the indexes we built are reasonable, whether we need to introduce a distributed cache, whether we need to shard databases and tables, etc.)
Candidate: 2. Then, we consider whether we need to scale (both horizontally and vertically)
Candidate: (In this step, we suspect that the system is under too much load, or that its hardware capacity is insufficient, causing it to misbehave frequently)
Candidate: 3. Next, review and optimize at the application code level
Candidate: (Scaling out is not endless; machines cost money. In this step, we examine whether our code does wasteful work or whether its logic can be optimized, for example by processing certain requests in parallel; see the sketch after this list.)
Candidate: 4. Next, check and optimize at the JVM level
Candidate: (Once the code has been reviewed, this step looks at whether the JVM has frequent GC problems, etc.)
Candidate: 5. Finally, check at the network and operating system level
Candidate: (This step checks whether memory/CPU/network/disk I/O metrics are normal, etc.)
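Candidate: (As a rough illustration of the code-level optimization in step 3 — the class and method names here are made up — two independent downstream calls could be issued in parallel with CompletableFuture instead of one after another:)

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelCallsSketch {

    // Hypothetical downstream calls; in the serial version each one would
    // block the request thread in turn.
    static String queryUser(long id)  { return "user-" + id; }
    static String queryOrder(long id) { return "order-" + id; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Fire the two independent queries concurrently.
        CompletableFuture<String> user  = CompletableFuture.supplyAsync(() -> queryUser(1L), pool);
        CompletableFuture<String> order = CompletableFuture.supplyAsync(() -> queryOrder(1L), pool);

        // Combine the results once both have completed.
        String result = user.thenCombine(order, (u, o) -> u + " / " + o).get();
        System.out.println(result);

        pool.shutdown();
    }
}
```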
Candidate: In most cases, by the time we reach step 3, the JVM parameters and machine settings that the operations team configured for us already meet most requirements.
Candidate: Other teams have previously run into interface timeouts during big promotion pushes, and the various monitoring dashboards pointed to full GC as the suspected cause
Candidate: Their first reaction was not to tune various JVM parameters, but simply to add machines
Candidate: (The crudest way to solve a problem is often the easiest; scaling out is the GOAT)
Interviewer: Exactly
Candidate: That said, I have studied JVM-related tuning commands and approaches.
Candidate: As I understand it, tuning the JVM means “adjusting parameters” for your specific business, on the basis of “understanding” the JVM memory structure and the various garbage collectors, so that your application runs smoothly.
Candidate: When tuning the JVM, there are generally a few metrics to look at: throughput, pause time, and garbage collection frequency
Candidate: Based on these metrics, we may need to adjust:
Candidate: 1. Memory area sizes and related strategies (how much memory for the whole heap, the young generation, the old generation, and the Survivor spaces, the conditions for promotion to the old generation, etc.)
Candidate: (For example: -Xmx sets the maximum heap size, -Xms sets the initial heap size, -Xmn sets the young generation size, -XX:SurvivorRatio sets the Eden/Survivor ratio, etc.)
Candidate: (Rule of thumb: IO-intensive applications can make the “young generation” slightly larger, since most of their objects die young; memory-intensive applications can make the “old generation” slightly larger, since their objects live longer)
Candidate: 2. The garbage collector (select an appropriate garbage collector, plus the tuning parameters specific to each collector)
Candidate: (For example: -XX:+UseG1GC selects G1 as the garbage collector, -XX:MaxGCPauseMillis sets the target pause time, -XX:InitiatingHeapOccupancyPercent sets the heap occupancy at which the global concurrent marking cycle starts, etc.; a combined sketch of these flags appears below)
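Candidate: (Purely as an illustration — the flag values below are made up, not recommendations — a launch command combining the sizing and G1 flags above might look like the comment in this sketch; the class just prints what the runtime actually received:)

```java
// Hypothetical launch command (illustrative values only):
// java -Xms4g -Xmx4g -XX:SurvivorRatio=8 \
//      -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45 \
//      HeapSettingsCheck
// (Note: with G1, explicitly fixing -Xmn pins the young generation size and can work
//  against -XX:MaxGCPauseMillis, which is why it is left out of the G1 example above.)
public class HeapSettingsCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() roughly corresponds to -Xmx; totalMemory() is the currently committed heap.
        System.out.println("max heap   = " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap = " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap  = " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```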
Candidate: In short, it's all case by case (if you understand the fundamentals of the JVM you can tune it; if you don't, you can't)
Candidate: In most scenarios the JVM already works well “out of the box”
Interviewer: Exactly
Candidate: Generally we only tune after we have “run into problems”, and when problems come up we need a variety of “tools” to troubleshoot them
Candidate: 1. Run the jps command to view basic Java process information (process ID and main class). This command is commonly used to see how many Java processes are running on the current server, their process IDs, and which main class each one loaded
Candidate: 2. Run the jstat command to view Java process statistics (class loading, compilation, a GC summary, and statistics for each memory area). This command is most often used to look at GC behavior
Candidate: 3. Run the jinfo command to view and adjust the runtime parameters of a Java process
Candidate: 4. Run the jmap command to view the memory information of a Java process. This command is often used to dump the JVM heap into a file and then analyze that file with MAT (Memory Analyzer Tool); a programmatic sketch of taking such a dump appears after this list
Candidate: 5. Run the jstack command to view JVM thread information. This command is commonly used to check for deadlock-related problems
Candidate: 6. The currently popular Arthas covers much of what the commands above do and comes with a graphical interface. It's also my go-to troubleshooting and analysis tool
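Candidate: (As a side note — the output path below is just an example — a heap dump of the kind jmap produces can also be triggered from inside the process via the HotSpot-specific HotSpotDiagnosticMXBean, and the resulting .hprof file can be opened in MAT the same way:)

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumpSketch {
    public static void main(String[] args) throws Exception {
        // HotSpot-specific MXBean exposing the same heap-dump mechanism jmap relies on.
        HotSpotDiagnosticMXBean diagnostic =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // live = true keeps only reachable objects (like jmap -dump:live).
        // The target file must not already exist; the path is only an example.
        diagnostic.dumpHeap("/tmp/app-heap.hprof", true);
        System.out.println("heap dump written to /tmp/app-heap.hprof");
    }
}
```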
Interviewer: Hmm… all right. Earlier, when you talked about the JVM, you mentioned that in the “execution” phase there are two ways to turn bytecode into machine instructions: the bytecode interpreter and the just-in-time (JIT) compiler.
Interviewer: Do you know anything about JIT optimization for the JVM?
Candidate: The two best-known JIT optimization techniques are method inlining and escape analysis
Candidate: Method inlining means copying the code of the target method into the calling method, avoiding an actual method call
Candidate: Because every method call creates a stack frame (pushing it onto and popping it off the stack, recording the call site, and so on), the “method inlining” optimization can improve performance
Candidate: There are also related JVM parameters we can set (-XX:FreqInlineSize, -XX:MaxInlineSize)
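Candidate: (A minimal sketch of the kind of call inlining targets — a small, hot accessor the JIT can copy into its caller; whether it actually gets inlined depends on the JVM version and thresholds like the flags above:)

```java
public class InlineSketch {

    private int value = 42;

    // A tiny, frequently called method: well under typical bytecode-size
    // thresholds, so the JIT is likely to inline it into hot callers.
    private int getValue() {
        return value;
    }

    public static void main(String[] args) {
        InlineSketch s = new InlineSketch();
        long sum = 0;
        // A hot loop; after enough invocations the JIT can compile the body
        // as if the field access were written directly at the call site.
        for (int i = 0; i < 50_000_000; i++) {
            sum += s.getValue();
        }
        System.out.println(sum);
        // To observe the decisions (HotSpot diagnostic flags):
        // java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineSketch
    }
}
```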
Candidate: Escape analysis is a technique that determines whether an object is referenced outside the method or accessed by other threads; if it is not, several optimizations become possible, for example (a sketch follows this list):
Candidate: 1. Lock elision (synchronization elimination): the object is only accessed inside the method and is not referenced anywhere else, so it must be thread-safe and the lock-related code can be removed
Candidate: 2. Stack allocation: the object is only accessed within the method, so it can be allocated directly on the stack instead of the heap
Candidate: 3. Scalar replacement (object splitting): at execution time, the JVM creates the object's member variables instead of the object itself. Once the object is split up, its members can live on the stack or in registers, so no memory needs to be allocated for the original object
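Candidate: (A minimal sketch of an object that never escapes its method, so, depending on the JVM version and flags such as -XX:+DoEscapeAnalysis, the JIT may elide the lock, avoid the heap allocation, or replace the object with its fields entirely:)

```java
public class EscapeSketch {

    static final class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never leaves this method, so escape analysis can apply
    // scalar replacement (no real allocation) and lock elision here.
    static long distanceSquared(int x, int y) {
        Point p = new Point(x, y);   // candidate for stack allocation / scalar replacement
        synchronized (p) {           // lock on a non-escaping object: candidate for lock elision
            return (long) p.x * p.x + (long) p.y * p.y;
        }
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += distanceSquared(i % 100, i % 50);
        }
        System.out.println(sum);
    }
}
```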
Candidate: That said, JIT optimizations vary from JVM version to JVM version (again, take this only as a rough guide)
Interviewer: I see.