Learning about the JVM is bound to involve the just-in-time compiler, because it is so important. Understanding its basic principles and optimization techniques can make many things suddenly click during everyday programming. For example, many of you will be asked the following question in an interview: is Java a compiled language or an interpreted one? Once you understand Java's just-in-time compiler, you will be able to answer such questions easily, explain the optimization techniques the JVM applies during just-in-time compilation, and gain a deeper understanding of the principles behind the code you write. This article gives you a comprehensive overview of the Java just-in-time compiler.
Just-in-time compiler
In some commercial virtual machines, such as HotSpot, Java applications are first executed by an Interpreter. This is why Java is often described as interpretation-based. But when the virtual machine detects that a particular piece of code or a particular method runs especially frequently, it marks it as "Hot Spot Code."
Because hot code executes frequently, the virtual machine takes various measures to improve its execution efficiency; if efficiency can be improved there, the payoff is high. To do this, at run time the virtual machine compiles such code into machine code for the local platform and applies optimizations at various levels. These optimizations are performed by a compiler known as the Just-In-Time (JIT) compiler.
So, to be precise, for virtual machines like HotSpot, Java execution is based on both interpretation and compilation.
The coexistence of interpreters and compilers
First, it is important to note that not all Java virtual machines adopt an architecture with both an interpreter and a compiler, but many mainstream commercial virtual machines, such as HotSpot, contain both.
Why does Java keep an interpreter that "drags down" performance when the just-in-time compiler can optimize at every level? Because the interpreter and the compiler each have their own advantages: when a program needs to start and execute quickly, the interpreter goes to work first, saving compilation time; when a program runs in a memory-constrained environment (such as some embedded systems), interpreted execution saves memory, while compiled execution improves efficiency. In addition, if an "uncommon trap" occurs after compilation, the virtual machine can fall back to interpreted execution through deoptimization.
As the Java virtual machine runs, the interpreter and the just-in-time compiler work together and complement each other. Either way, the bytecode must ultimately be turned into native machine instructions for the target platform: the interpreter translates and executes it instruction by instruction, while the compiler translates whole methods ahead of execution. Some services care little about startup time while others care a great deal, and the interpreter and just-in-time compiler let you balance the two.
The compiler and the interpreter can be compared in terms of compile-time overhead and code-space overhead. First, look at the compile-time overhead.
Saying that a JIT is faster than an interpreter really means that executing the compiled "hot code" is faster than interpreting it. JIT compilation adds an extra compile step before execution, so for code that is executed only once it is actually slower than interpretation. Code that executes once typically includes code that is called only once (such as constructors) and code without loops, for which JIT compilation is clearly not worth the cost.
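To make this concrete, here is a minimal sketch (class and member names are invented for illustration): the constructor runs exactly once, so compiling it could never pay for itself, while the loop in main is exactly the kind of code worth compiling.

```java
public class RunOnceDemo {

    private final int[] table = new int[16];

    // Called exactly once: interpretation is cheaper than compiling this.
    RunOnceDemo() {
        for (int i = 0; i < table.length; i++) {
            table[i] = i * i;
        }
    }

    public static void main(String[] args) {
        RunOnceDemo demo = new RunOnceDemo();
        long sum = 0;
        // Hot loop: executed millions of times, so compilation pays off.
        for (int i = 0; i < 10_000_000; i++) {
            sum += demo.table[i & 15];
        }
        System.out.println(sum);
    }
}
```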
Second, look at the code-space overhead. For a typical Java method, a roughly 10x expansion of compiled code relative to bytecode is normal. Only frequently executed code is worth compiling; compiling everything would dramatically increase the space it occupies, leading to "code explosion." This is why some JVMs do not rely on JIT compilation alone, opting instead for a hybrid execution engine combining an interpreter and a JIT compiler.
Two just-in-time compilers for HotSpot
The HotSpot virtual machine has two just-in-time compilers for different application scenarios: the Client Compiler and the Server Compiler (C1 and C2 for short), used in client and server scenarios respectively. The Client Compiler compiles faster, while the Server Compiler produces better-optimized code.
The main differences between the JVM's Server mode and Client mode are as follows: Server mode starts up more slowly, but performs better once it is warmed up. The reason is that in -client mode the VM uses the lightweight compiler code-named C1, whereas in -server mode it uses the relatively heavyweight compiler code-named C2. The C2 compiler compiles more thoroughly than C1 and therefore performs better for long-running services.
By default, whether the C1 or C2 compiler is used depends on the mode in which the virtual machine runs. The HotSpot VM automatically selects a mode based on its own version and the hardware of the host machine, and you can also use the -client or -server parameters to force the VM to run in Client or Server mode.
Current mainstream HotSpot virtual machines default to having the interpreter work together with one of the compilers, which is called Mixed Mode. The -Xint parameter forces the VM into "Interpreted Mode," where the compiler does not intervene at all. The -Xcomp parameter forces the VM into "Compiled Mode," where compilation takes precedence, although the interpreter still steps in when compilation cannot be performed. You can run the java -version command to view the default operating mode.
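For reference, typical java -version output looks like the sample below; the exact version and build strings depend on your JDK and platform, so treat this as illustrative only:

```
java version "1.8.0_281"
Java(TM) SE Runtime Environment (build 1.8.0_281-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)
```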
From such output we can see not only that mixed mode is in use, but also that the virtual machine is running in Server mode.
Hot spot detection
The basic function of the JIT compiler has been explained above, so how does it identify hot code? Determining whether a piece of code is hot code is known as Hot Spot Detection, and there are two common approaches: sample-based hot spot detection and counter-based hot spot detection (the latter is what HotSpot uses).
Sample-based hot spot detection: the virtual machine periodically checks the top of each thread's stack; if a method frequently appears at the top of the stack, it is deemed a "hot spot method." This approach is simple and efficient to implement and makes method call relationships easy to obtain, but it cannot measure a method's hotness precisely and is easily disturbed by thread blocking and other external factors.
Counter-based hot spot detection: a counter is maintained for each method (or even each block of code), and when the execution count exceeds a threshold the method is deemed a "hot spot method." The statistics are precise and rigorous, but the implementation is more involved, and method call relationships cannot be obtained directly.
The HotSpot virtual machine uses counter-based hot spot detection by default, with two kinds of counters: method invocation counters and back edge counters. A method or code block is compiled into native code when its counter value exceeds the default or specified threshold.
The method invocation counter records the number of times a method is called. The default threshold, CompileThreshold, is 1500 in Client mode and 10000 in Server mode, and can be set with -XX:CompileThreshold. If nothing else is done, the method invocation counter measures not the absolute number of calls but a relative execution frequency: the number of calls within a period of time. If that period elapses without the call count reaching the threshold, the method's counter is halved; this is called Counter Decay, and the period is called the counter's Half Life Time. The half-life period can be set, in seconds, with the VM parameter -XX:CounterHalfLifeTime.
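As an illustration of counter-based detection, here is a minimal sketch (names invented, not a rigorous benchmark). Run it with -XX:+PrintCompilation and you should see hotMethod appear in the compilation log once its invocation count crosses the threshold:

```java
public class CompileThresholdDemo {

    // Simple arithmetic so the method body does real work.
    private static int hotMethod(int x) {
        return x * 31 + 7;
    }

    public static void main(String[] args) {
        int sum = 0;
        // Far more calls than the Server-mode default CompileThreshold
        // of 10000, so the method invocation counter is sure to trip
        // JIT compilation.
        for (int i = 0; i < 50_000; i++) {
            sum += hotMethod(i);
        }
        System.out.println(sum);
    }
}
```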
The back edge counter counts the number of times the loop body code in a method is executed. In bytecode, an instruction that jumps backwards in the control flow is called a "Back Edge." The purpose of the back edge counter is to trigger OSR (On-Stack Replacement) compilation. HotSpot provides -XX:BackEdgeThreshold to set the counter's threshold, but current virtual machines actually adjust the threshold indirectly through -XX:OnStackReplacePercentage, using the following formulas:
- In Client mode: threshold = CompileThreshold × OnStackReplacePercentage / 100. With OnStackReplacePercentage defaulting to 933, the default back edge counter threshold is 1500 × 933 / 100 = 13995.
- In Server mode: threshold = CompileThreshold × (OnStackReplacePercentage − InterpreterProfilePercentage) / 100. With OnStackReplacePercentage defaulting to 140 and InterpreterProfilePercentage defaulting to 33, the default Server-mode back edge counter threshold is 10000 × (140 − 33) / 100 = 10700.
Unlike the method invocation counter, the back edge counter has no counter decay, so it counts the absolute number of times a method's loops execute. When the back edge counter overflows, it also pushes the method invocation counter into an overflowed state, so that the standard compilation process is triggered the next time the method is entered.
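A minimal sketch of code that only the back edge counter can get compiled (names invented): main is entered exactly once, so the method invocation counter never reaches its threshold, but the loop's back edges overflow the back edge counter and trigger OSR compilation while the method is still running. In -XX:+PrintCompilation output, HotSpot marks OSR compilations with a "%".

```java
public class OsrDemo {

    public static void main(String[] args) {
        long sum = 0;
        // Hot loop inside a method that is called only once:
        // a classic OSR compilation candidate.
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}
```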
Performance comparison of different modes
To get a feel for the different JVM compilation modes, let's write a simple test to compare their performance. Note that the test program and scenario below are not rigorous; they only give a general sense of the differences between modes. For accurate results, measure under strict benchmark conditions.
```java
import java.util.Random;

public class JitTest {

    private static final Random random = new Random();
    private static final int NUMS = 99999999;

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int count = 0;
        for (int i = 0; i < NUMS; i++) {
            count += random.nextInt(10);
        }
        System.out.println("count: " + count + ",time cost : " + (System.currentTimeMillis() - start));
    }
}
```
During the test, compilation information is printed by adding the virtual machine parameter "-XX:+PrintCompilation".
First, set the JVM parameters to "-Xint -XX:+PrintCompilation" and run the main method; the following is printed:
```
count: 449945612,time cost : 33989
```
It took about 34 seconds, and the console printed no compilation information, confirming that the just-in-time compiler was not involved.
Next, modify the VM parameters to "-Xcomp -XX:+PrintCompilation" and run the main method again.
The elapsed time printed by the program is:
```
count: 450031537,time cost : 10593
```
It takes about 10 seconds and generates a large amount of compilation information.
Finally, test mixed mode: change the VM parameters to just "-XX:+PrintCompilation" and run the main method once more.
Compilation information is printed, and the same code executes in less than a second.
This cursory test shows that, in this example, execution time from shortest to longest is: mixed mode < pure compiled mode < pure interpreted mode. Of course, more precise and accurate conclusions would require strict benchmark conditions.
Compiler optimization techniques
The just-in-time compiler produces fast code for another reason: when generating native code, the virtual machine design team applied almost every classic compiler optimization. As a result, the native code produced by the just-in-time compiler far outperforms direct execution of the bytecode produced by javac. Let's look at some of the optimization techniques just-in-time compilers use when generating native code.
First, a classic language-independent optimization: common subexpression elimination. If an expression E has already been evaluated, and the values of all variables in E have not changed since that evaluation, this occurrence of E is called a common subexpression. There is no need to spend time evaluating it again; E can simply be replaced with the previously computed result. For example: int d = (c * b) * 12 + a + (a + b * c) becomes int d = E * 12 + a + (a + E).
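A hand-written illustration of the transformation (the JIT performs it automatically on compiled code; the class and method names here are invented):

```java
public class CseDemo {

    // b * c is evaluated twice.
    static int before(int a, int b, int c) {
        return (c * b) * 12 + a + (a + b * c);
    }

    // The common subexpression E = c * b is computed once and reused.
    static int after(int a, int b, int c) {
        int e = c * b;
        return e * 12 + a + (a + e);
    }

    public static void main(String[] args) {
        // Both versions produce the same result.
        System.out.println(before(1, 2, 3) == after(1, 2, 3)); // true
    }
}
```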
Second, a classic language-dependent optimization: array bounds check elimination. When an array element is accessed in Java, the system automatically checks the index against the array's upper and lower bounds and throws an exception if it is out of range. For the virtual machine's execution subsystem, every read or write of an array element carries an implicit conditional check, which is a performance burden for code with heavy array access. Through data flow analysis at compile time, the JIT can often prove an index is in range and eliminate the check, saving many conditional operations.
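A minimal sketch of a loop whose bounds checks are provably redundant (names invented): the loop condition already guarantees 0 <= i < data.length, so the compiled code can drop the implicit per-access check.

```java
public class BoundsCheckDemo {

    static long sum(int[] data) {
        long total = 0;
        // i is provably within [0, data.length), so the JIT can
        // eliminate the implicit range check on data[i].
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3}));
    }
}
```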
Third, one of the most important optimizations: method inlining. The simple idea is to "copy" the target method's code into the calling method, eliminating call overhead and exposing further optimizations. The inlining process in a real JVM is complex, however, and will not be analyzed here.
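A rough intuition, with invented names: inlinedByHand shows approximately what the JIT produces after copying the body of getValue into the call site, removing the call overhead.

```java
public class InlineDemo {

    private final int value;

    InlineDemo(int value) {
        this.value = value;
    }

    // A tiny, frequently called method: a prime inlining candidate.
    int getValue() {
        return value;
    }

    static int viaCall(InlineDemo d) {
        return d.getValue() * 2;
    }

    // Conceptually what the JIT generates after inlining getValue().
    static int inlinedByHand(InlineDemo d) {
        return d.value * 2;
    }

    public static void main(String[] args) {
        InlineDemo d = new InlineDemo(21);
        System.out.println(viaCall(d) + " " + inlinedByHand(d));
    }
}
```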
Fourth, one of the most cutting-edge optimization techniques: escape analysis. The basic idea of escape analysis is to analyze an object's dynamic scope. When an object is defined in a method, it may be referenced by external methods, for example by being passed as a call parameter to another method; this is called method escape. It may even be accessed by other threads, for example by being assigned to a class variable or to an instance variable reachable from another thread; this is called thread escape. If it can be proved that an object cannot escape the method or thread, that is, no other method or thread can access it in any way, some efficient optimizations become possible (a sketch follows the list below):
- Stack allocation: allocate non-escaping local objects on the stack; such an object is destroyed automatically when the method ends, reducing pressure on the garbage collector.
- Synchronization elimination: if a variable cannot escape the thread, i.e., it cannot be accessed by other threads, there can be no contention over reads and writes of it, and synchronization on it can be eliminated.
- Scalar replacement: a scalar is a value that cannot be decomposed further, such as a primitive type or a reference; an aggregate, such as a Java object, can be decomposed further. If an object cannot be accessed externally and can be split up, the program may not create the object at all when it actually executes, instead creating only those of its member variables that the method uses. This allows the object's members to be allocated and accessed on the stack and creates conditions for further optimization.
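Here is the sketch promised above, with invented names. The HotSpot flags -XX:+DoEscapeAnalysis and -XX:+EliminateAllocations control escape analysis and scalar replacement, and both are enabled by default in recent HotSpot releases:

```java
public class EscapeDemo {

    static class Point {
        final int x;
        final int y;

        Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    static int distanceSquared(int x, int y) {
        // p never leaves this method, so it does not escape; with
        // escape analysis on, the JIT may apply scalar replacement
        // and never allocate the object on the heap at all.
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        long sum = 0;
        // Hot path: a million allocations that can vanish entirely.
        // (Inputs are masked to small values to avoid int overflow.)
        for (int i = 0; i < 1_000_000; i++) {
            sum += distanceSquared(i & 1023, (i + 1) & 1023);
        }
        System.out.println(sum);
    }
}
```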
For more optimizations, see the OpenJDK wiki: wiki.openjdk.java.net/display/hot…
Summary
Through the above, you should have a deeper understanding of how just-in-time compilation works: its usage scenarios, its process, how hot code is identified, and its optimization techniques. Understanding these underlying principles helps with writing code, troubleshooting problems, and tuning performance. As for the content covered in this article, practicing it yourself is the best way to deepen the impression.