Compiled languages such as C and C++ have traditionally performed far better than Java, but their generated code can only run on the platforms it was compiled for. That limitation is exactly where Java found its footing: the JVM lets the same bytecode run across platforms.

Early Java runtimes offered performance levels far below those of compiled languages such as C and C++.

Interpreter: execution speed is as slow as you can imagine.

Later, Java was optimized with a JIT (just-in-time) compiler and held a top position in Web development for decades. Similarly, JavaScript, a distant relative of Java, relies on JIT compilation in the V8 engine, which made the full-stack Web story (the Node.js full stack) possible. If you are interested in that history, see "ECMAScript evolution history (1): The Coronation History of JavaScript as the King of Web Scripting Languages".

At run time, an early JIT compiled every class it encountered into fairly compact native code, spending a little compilation time in exchange for faster execution later. This is a significant efficiency gain, but it is not optimal: some Java code is rarely executed, and the time spent compiling it can far exceed the time an interpreter would need to simply interpret it, so the overall running time is not reduced.

Building on this experience, the dynamic compiler emerged: it predicts at run time which code should be compiled and which should simply be interpreted. A dynamic compiler therefore combines an interpreter and a compiler.
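
As a rough way to see this mix in action on a HotSpot JVM, the sketch below (the class and method names are my own, not from the article) runs one method millions of times; -XX:+PrintCompilation and -Xint are standard HotSpot flags for observing or disabling the dynamic compiler.

```java
// HotLoop.java -- a minimal sketch (class and method names are illustrative).
// Run with:  java -XX:+PrintCompilation HotLoop   to watch HotSpot's dynamic
// compiler pick up the hot method, or with  java -Xint HotLoop   to force pure
// interpretation for comparison.
public class HotLoop {

    // Called millions of times, so the dynamic compiler treats it as hot.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 10_000_000L; i++) {
            sum += square(i);    // repeated execution is what triggers compilation
        }
        System.out.println(sum); // use the result so the loop is not optimized away
    }
}
```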

JIT dynamic compilation

While the famous "write once, run anywhere" mantra of Java programming may not hold strictly in every case, it does for a large number of applications. Native compilation, on the other hand, is inherently platform-specific.

So how does the Java platform achieve the performance of native compilation without sacrificing platform independence? The answer is dynamic compilation using a JIT compiler, a technique that has been in use for a decade.

Although platform independence is maintained through JIT compilation, it comes at a cost. Because it is compiled while the program is executing, the time it takes to compile the code counts toward the execution time of the program. As anyone who has written large C or C++ programs knows, the compilation process tends to be slow.

Overcoming the slow compilation process

  1. Compile all the code, but skip time-consuming analyses and transformations so that code can be generated quickly. Because generation is fast, the compilation overhead, although noticeable, is easily masked by the performance gained from repeatedly executing native code.

  2. Devote compilation resources only to the small number of frequently executed methods (often called hot methods). The lower compilation overhead is even more easily masked by the performance benefits of repeatedly executing hot code. Many applications execute only a small number of hot methods, so this approach effectively minimizes the performance cost of compilation (see the sketch after this list).
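
In HotSpot these two strategies roughly correspond to the tiers of its tiered compiler: the fast, lightly optimizing C1 tier and the slower, heavily optimizing C2 tier that is reserved for hot methods. A minimal sketch for experimenting with this, assuming a reasonably recent HotSpot JVM (the class is illustrative; exact thresholds and defaults vary by version):

```java
// TieredDemo.java -- sketch only; the flags are real HotSpot options, but their
// defaults and exact thresholds vary between JVM versions.
//
//   java -XX:TieredStopAtLevel=1 TieredDemo   // C1 only: compile quickly,
//                                             // few optimizations (strategy 1)
//   java TieredDemo                           // default tiered mode: heavyweight C2
//                                             // is reserved for hot methods (strategy 2)
public class TieredDemo {

    static long hot(long x)  { return x * 31 + 7; }  // executed millions of times -> compiled
    static long cold(long x) { return x - 1; }       // executed once -> typically stays interpreted

    public static void main(String[] args) {
        long acc = cold(42);
        for (long i = 0; i < 20_000_000L; i++) {
            acc += hot(i);
        }
        System.out.println(acc);
    }
}
```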

One of the central complexities of a dynamic compiler is the trade-off between knowing how much a method's execution contributes to the program's overall performance and the benefit that can still be gained from compiling it. At one extreme, after the program has finished you know exactly which methods mattered most in that particular run, but compiling them is now pointless because the run is over. At the other extreme, before the program executes you cannot know which methods matter, yet the potential benefit of compiling each one is at its maximum. Most dynamic compilers operate somewhere between these two extremes, trading certainty about a method's importance against the benefit of compiling it early.

The fact that the Java language requires classes to be loaded dynamically has important implications for the design of the Java compiler. What if the other classes referenced by the code to be compiled haven’t been loaded?

For example, suppose a method reads a static field of a class that has not yet been loaded. The Java language requires that a class be loaded and resolved in the current JVM the first time it is referenced. Until that first execution the reference remains unresolved, which means there is no address from which to load the static field.
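
A minimal sketch of that situation, assuming a typical HotSpot JVM (the Config class and LIMIT field are made-up names):

```java
// Main.java -- illustrative sketch of the scenario above (names are made up).
// Run with:  java -verbose:class Main   to see that Config is loaded and resolved
// only when readLimit() executes for the first time, not at JVM startup.
class Config {
    static int LIMIT = 100;   // static field whose address is unknown until Config is loaded
}

public class Main {

    // When this method is JIT-compiled, Config may not be loaded yet, so the
    // generated code cannot simply embed the field's address.
    static int readLimit() {
        return Config.LIMIT;  // first execution triggers loading and resolution of Config
    }

    public static void main(String[] args) {
        System.out.println("before first reference");
        System.out.println(readLimit());   // Config is loaded here
    }
}
```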

How does the compiler handle this possibility?

The compiler generates code that loads and resolves the class if it has not been loaded yet. Once the class is resolved, the original code location is patched in a thread-safe way so that it accesses the static field's address directly, since that address is now known.

A great deal of effort has gone into the IBM JIT compiler's safe and efficient code-patching techniques, so that once the class is resolved, the native code simply loads the field's value as if the field had already been resolved at compile time. The alternative is to generate code that, on every access, first checks whether the field has been resolved, then looks up its location and loads the value. This simpler approach can cause serious performance problems for fields that are accessed frequently after being resolved.
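
As a rough source-level analogy only (this is not how a JIT is actually implemented, and it ignores the thread-safety that real code patching must handle), the "always check" approach corresponds to guarding every access:

```java
// PatchAnalogy.java -- rough analogy; a real JIT does this in generated machine code.
public class PatchAnalogy {

    private static Integer cachedValue;      // null stands for "not yet resolved"

    // "Always check" strategy: every read pays for the resolution test.
    static int read() {
        if (cachedValue == null) {           // guard executed on EVERY access
            cachedValue = resolve();         // slow path, taken only once
        }
        return cachedValue;
    }

    private static int resolve() {
        return 42;                           // stands in for class loading + resolution
    }

    public static void main(String[] args) {
        System.out.println(read());          // first call resolves the value
        System.out.println(read());          // later calls still pay for the check
    }
}
```

Code patching instead rewrites the compiled code once, so later reads go straight to the field with no guard at all.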

Pros/cons of dynamic compilation

Compiling Java programs dynamically has important advantages and can even produce code better than that of a statically compiled language. Modern JIT compilers often insert hooks into the generated code to gather information about program behavior, so that if a method is later selected for recompilation, its dynamic behavior can be optimized more aggressively.

However, dynamic compilation does have some disadvantages that make it less than an ideal solution in some situations.

Because it takes time to identify frequently executed methods and compile them, applications typically go through a warm-up phase during which performance has not yet reached its peak, as the sketch below illustrates.
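
A rough way to observe this warm-up effect is to time the same work repeatedly within one process; the sketch below is illustrative only, and the measured numbers depend heavily on the JVM, its compilation settings, and the hardware.

```java
// Warmup.java -- rough illustration of JIT warm-up (results vary by JVM and hardware).
public class Warmup {

    static double work(int n) {
        double acc = 0;
        for (int i = 1; i <= n; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            work(2_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            // Early rounds typically run interpreted or lightly compiled and are
            // slower; later rounds use fully optimized native code.
            System.out.println("round " + round + ": " + micros + " us");
        }
    }
}
```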

There are several reasons for performance problems during this warm-up phase:

  • First, a large number of initial compilations can directly affect an application's startup time. Not only do these compilations delay the application's steady-state performance (imagine a Web server that must go through an initial phase before it can do any real useful work), but methods executed frequently during the warm-up phase may matter little for steady-state performance. JIT compilation is wasteful if it delays startup without significantly improving the application's long-term performance. While all modern JVMs perform tuning to reduce startup delays, the problem cannot always be fully resolved.

  • Some applications simply cannot tolerate the delays associated with dynamic compilation. Interactive applications such as GUIs are one example. In such cases, compilation activity may degrade the user experience without significantly improving the application's performance.

  • Finally, applications used in real-time environments with strict task deadlines may not be able to tolerate either the nondeterministic performance impact of compilation or the memory overhead of the dynamic compiler itself.

Therefore, while JIT compilation technology has been able to provide performance levels comparable to (or better than) static language performance, dynamic compilation is not suitable for some applications. In these cases, ahead-of-time (AOT) compilation of Java code may be the appropriate solution.

AOT precompilation

Dynamic class loading is a challenge for dynamic JIT compilers, and an even bigger one for AOT compilation. A class is loaded only when executing code references it. Because AOT compilation happens before the program runs, the compiler cannot predict which classes will be loaded. That means it cannot know the address of any static field, the offset of any instance field in any object, or the actual target of any call, even a direct (non-virtual) one. If a guess about any of this information turns out to be wrong when the code runs, the code is wrong and Java conformance has been sacrificed.

Because the code can be executed in any environment, the class files it sees may differ from those present when the code was compiled. For example, one JVM instance might load a class from a specific location on disk, while a later instance loads it from a different location or even over the network. Imagine a development environment where bug fixes are in progress: the contents of class files may change from one execution of the application to the next. Moreover, some Java code may not even exist before the program runs: Java's reflection facilities, for example, often generate new classes at run time to support the program's behavior.
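
For instance, java.lang.reflect.Proxy synthesizes a brand-new class at run time, so no ahead-of-time compiler could have seen or compiled it beforehand; a minimal sketch (the Greeter interface is a made-up example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// RuntimeClassDemo.java -- minimal sketch: the proxy's class is generated while
// the program runs, so it cannot exist in any ahead-of-time compiled image.
public class RuntimeClassDemo {

    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        InvocationHandler handler = (proxy, method, methodArgs) ->
                "Hello, " + methodArgs[0] + " (via " + method.getName() + ")";

        Greeter greeter = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        // The proxy's class name (e.g. something like jdk.proxy1.$Proxy0,
        // depending on the JDK) belongs to a class that did not exist
        // before this program started running.
        System.out.println(greeter.getClass().getName());
        System.out.println(greeter.greet("world"));
    }
}
```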

This lack of information about statics, fields, classes, and methods severely limits most of the optimization framework in a Java AOT compiler. Inlining is probably the most important optimization applied by static or dynamic compilers, but it can no longer be used freely because the compiler does not know the target of the method calls being made.
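
A sketch of why inlining suffers under AOT: in the code below (all names are made up), the concrete implementation behind the interface is chosen by name at run time, so an AOT compiler cannot know which area() body to inline, whereas a JIT can profile the call site and speculatively inline the target it actually observes.

```java
// InlineDemo.java -- illustrative sketch (made-up names). The concrete target of
// shape.area() is only known once the class has been chosen and loaded at run time.
interface Shape {
    double area();
}

class Circle implements Shape {
    public double area() { return Math.PI; }   // unit circle, for simplicity
}

class Square implements Shape {
    public double area() { return 1.0; }       // unit square
}

public class InlineDemo {
    public static void main(String[] args) throws Exception {
        // The implementation class is picked from program input, so it is
        // undecidable at ahead-of-time compile time.
        String className = args.length > 0 ? args[0] : "Circle";
        Shape shape = (Shape) Class.forName(className)
                                   .getDeclaredConstructor()
                                   .newInstance();

        double total = 0;
        for (int i = 0; i < 5_000_000; i++) {
            total += shape.area();   // virtual call: target unknown before run time
        }
        System.out.println(total);
    }
}
```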

JIT vs AOT

  • JIT: a tiered mechanism with high throughput and runtime performance bonuses; it can run faster over time and generate code dynamically, but startup is relatively slow, and it takes time and sufficient call frequency to trigger JIT compilation.

  • AOT: low memory footprint and fast startup; it can run without a separate runtime by statically linking the runtime into the final program, but there is no runtime performance bonus, and the code cannot be further optimized based on how the program actually runs.

Consider this answer from "Brother Wheel" on Zhihu:

AOT means machine code is generated ahead of time, much like C++. Choosing it usually means giving up certain features, for example:

  1. Virtual functions that are also template (generic) functions, as in C#

  2. Using reflection to create new generic (template) classes (e.g., List<List<int>>)

  3. class Fuck { public Fuck<Fuck> Shit { get; set; } }

All of these features require generating machine code at run time.

Another advantage of JIT is that it makes profile-guided optimization convenient. Of course, this slows things down a little, but the overhead is imperceptible to humans.

Drawbacks of AOT

  • Optimizing applications after installation or a system upgrade takes time (the program code must be recompiled into machine language)

  • The optimized files take up additional storage space (the compiled results are cached)

In summary:

  • Using JIT compilation during development shortens a product's development cycle. One of Flutter's most popular features, hot reload, is based on this.

  • Using AOT at release time eliminates the need for the inefficient method-call mapping between cross-platform JavaScript code and native Android and iOS code that React Native relies on.

JIT and AOT coexist

JIT and AOT each have their own strengths. A combination of the two, such as Flutter + Dart, works not only on the client side but also on the server side.

Dart

Dart is one of the few languages that supports both JIT (Just In Time) and AOT (Ahead of Time).

Dart draws on the design of other high-level language runtimes, such as Smalltalk's image technology and the JVM's HotSpot, and implements a language container that does well on both startup speed and peak performance. Dart provides both AOT and JIT compilation, and its JIT has Kernel and AppJIT operating modes.

Dart's advantages

  • Dart uses a JIT during development, so each change does not need to be compiled into bytecode. It saves a lot of time.

  • At deployment time, AOT generates efficient ARM code to ensure good runtime performance.

JIT compiles just in time at run time. It is used during the development cycle because it can dynamically deliver and execute code, making development and testing efficient, but running speed and execution performance suffer because compilation happens at run time.

AOT compiles the code ahead of time for release, so Dart starts quickly and executes well.

Android

Android 7.0 reintroduced a JIT compiler and adopted a hybrid AOT/JIT compilation strategy with the following features:

  1. Dex files are no longer compiled when the application is installed

  2. When the app runs, the dex file is first executed through the interpreter; hot functions are identified, compiled by the JIT, and stored in the JIT code cache, and a profile file is generated to record information about these hot functions.

  3. When the phone is idle or charging, the system scans the profile file in the app's directory and runs the AOT procedure to compile the recorded hot code.

Dalvik and ART are Android's two runtime environments (Android virtual machines), while JIT and AOT are two different compilation strategies those virtual machines adopt.

References:

Introduction to JIT & AOT: www.jianshu.com/p/ac079e7fc…

JIT (dynamic compilation) and AOT (static compilation) techniques: www.cnblogs.com/tinytiny/p/…

On Android's Dalvik, ART, JIT, and AOT: zhuanlan.zhihu.com/p/53723652

What are the advantages and disadvantages of JIT and AOT? (answer by hez2010, Zhihu): www.zhihu.com/question/23…

What are the advantages and disadvantages of JIT and AOT? (answer by "Chubby swollen", Zhihu): www.zhihu.com/question/23…

When reprinting this site's article "JIT dynamic compilation and AOT static compilation: Java/JavaScript/Dart and friends", please indicate the source: www.zhoulujun.cn/html/theory…