Introduction

In the last article, we looked at how the JIT compiler improves interpretation performance. Today, we will take a look at the JVM's optimizations as of JDK14 as a whole, and see what we can learn from them.


String compression

Brother F, the JIT you told me about last time really benefited me a lot; there are so many unknown stories in the JVM. I wonder, besides JIT, are there other ways the JVM can improve its performance?

There are of course many, but let's start with string compression, which was introduced in JDK9 and mentioned earlier.

Prior to JDK9, the underlying storage structure of a String was a char[]. A char takes up two bytes of storage.

Since most strings are expressed in Latin-1 encoding, only one byte of storage is required; two bytes are a waste.

So after JDK9, the underlying storage of strings becomes byte[].

Currently, String supports two encoding formats: LATIN1 and UTF16.

LATIN1 stores each character in one byte; UTF16 requires two or four bytes per character.

String compression is enabled by default in JDK9. You can disable it with:

-XX:-CompactStrings
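We cannot observe the internal byte[] directly, but the storage saving is easy to see by encoding the same string in both charsets (a minimal sketch for illustration; the class and method names are made up):

```java
import java.nio.charset.StandardCharsets;

public class StringBytes {
    // Bytes needed to encode s in Latin-1: one byte per character.
    static int latin1Bytes(String s) {
        return s.getBytes(StandardCharsets.ISO_8859_1).length;
    }

    // Bytes needed to encode s in UTF-16 (big-endian, no BOM):
    // two bytes per character.
    static int utf16Bytes(String s) {
        return s.getBytes(StandardCharsets.UTF_16BE).length;
    }

    public static void main(String[] args) {
        String s = "hello";
        System.out.println(latin1Bytes(s)); // 5 bytes
        System.out.println(utf16Bytes(s));  // 10 bytes
    }
}
```

For a Latin-1 string, half of the UTF-16 bytes would simply be zeros, which is exactly the waste that compact strings eliminate.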

Tiered Compilation

To improve JIT compilation efficiency and meet different levels of compilation requirements, the concept of tiered compilation was introduced.

Roughly speaking, tiered compilation can be divided into three layers:

  1. The first layer disables both the C1 and C2 compilers, so no JIT compilation is performed.
  2. The second layer enables only the C1 compiler; C1 performs only simple JIT optimizations, which works for the common case.
  3. The third layer enables both the C1 and C2 compilers.

In JDK7, you can enable tiered compilation with the following option:

-XX:+TieredCompilation

Since JDK8, tiered compilation is the default, so you don't need to enable it manually.
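An easy way to watch tiered compilation at work is to make a method hot and run with -XX:+PrintCompilation (a minimal sketch; the class name and iteration count are made up for illustration):

```java
public class HotLoop {
    // A small, pure method that becomes "hot" when called many times,
    // making it a candidate first for C1 and later for C2 compilation.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```

Running `java -XX:+PrintCompilation HotLoop` prints compilation events; with tiered compilation you will typically see `square` compiled first at a lower (C1) tier and, if it stays hot, again at the highest (C2) tier.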

Code Cache hierarchy

The Code Cache is the memory area used to store compiled machine code; the machine code generated by JIT compilation is stored there.

The Code Cache is a contiguous memory space organized as a single heap.

Using only one code heap can cause performance problems. To improve code cache utilization, the JVM introduces code cache layering.

What does layering mean here? Machine code is sorted into different categories, which makes it easier for the JVM to scan and locate code, reduces cache fragmentation, and improves efficiency.

Here are the three layers of the Code Cache (the segmented code cache introduced in JDK9):

  1. The non-method segment, which contains JVM-internal code such as the bytecode interpreter (size controlled by -XX:NonNMethodCodeHeapSize).
  2. The profiled segment, which contains lightly optimized C1 code with a typically short lifetime (size controlled by -XX:ProfiledCodeHeapSize).
  3. The non-profiled segment, which contains fully optimized C2 code with a potentially long lifetime (size controlled by -XX:NonProfiledCodeHeapSize).
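You can peek at the code cache segments from within Java through the memory-pool MXBeans (a sketch: on recent HotSpot JVMs with the segmented code cache the pool names contain "CodeHeap", e.g. CodeHeap 'profiled nmethods', while older JVMs expose a single "Code Cache" pool):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.util.List;
import java.util.stream.Collectors;

public class CodeCachePools {
    // Names of the memory pools that back the code cache. On a JVM with
    // the segmented code cache there is one pool per segment.
    static List<String> codeHeapPools() {
        return ManagementFactory.getMemoryPoolMXBeans().stream()
                .map(MemoryPoolMXBean::getName)
                .filter(n -> n.contains("CodeHeap") || n.contains("Code Cache"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        codeHeapPools().forEach(System.out::println);
    }
}
```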

New JIT compiler Graal

In previous articles, we introduced JIT compilers, which are written in C/C++.

The new Graal JIT compiler is written in Java. Yes, you read that right, a JIT compiler written in Java.

Does that give you a chicken-and-egg feeling? It doesn't matter: Graal really can improve JIT compilation performance.

Graal is released with the JDK as an internal module: jdk.internal.vm.compiler.

Graal and JVM communicate via JVMCI (JVM Compiler Interface). JVMCI is also an internal module: jdk.internal.vm.ci.

Note that Graal is only supported in the Linux-x64 version of the JVM. You need to use -XX:+UnlockExperimentalVMOptions -XX:+UseJVMCICompiler to enable the Graal feature.

Ahead-of-time compilation

We know that with JIT, the JVM usually waits for the code to run for a while, in order to find the hot code, before compiling it to native code. The downside is that this takes a long time.

Likewise, code that is executed repeatedly but never compiled into machine code will hurt performance.

AOT (ahead-of-time) compilation, however, does not need to wait: it compiles before the JVM even starts.

AOT comes with a Java tool called jaotc. The jaotc command format is:

jaotc <options> <list of classes or jar files>
jaotc <options> <--module name>

For example, we can precompile an AOT library for later use by the JVM like this:

jaotc --output libHelloWorld.so HelloWorld.class
jaotc --output libjava.base.so --module java.base

This precompiles the HelloWorld class and its dependency, the java.base module.
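For completeness, the HelloWorld class assumed in the jaotc example above could be as simple as this (an assumption; any class works, jaotc simply compiles its methods ahead of time):

```java
public class HelloWorld {
    // Kept as a separate method so the greeting is easy to verify.
    static String message() {
        return "Hello World";
    }

    public static void main(String[] args) {
        System.out.println(message());
    }
}
```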

We can then specify the corresponding lib when starting HelloWorld:

java -XX:AOTLibrary=./libHelloWorld.so,./libjava.base.so HelloWorld

This way, when the JVM starts, it loads the corresponding AOT library.

Note that AOT is an experimental feature on Linux-x64.

Compressed object pointer

Object pointers point to objects and represent references to them. Typically, on a 64-bit machine a pointer takes up 64 bits, or 8 bytes; on a 32-bit machine, 32 bits, or 4 bytes.

In practice, the number of object pointers in an application is very, very large, and the memory footprint of the same program on a 64-bit machine differs greatly from that on a 32-bit machine: the 64-bit version may use about 1.5 times as much memory.

Compressing an object pointer means compressing a 64-bit pointer into 32 bits.

How can that work? Object addresses on a 64-bit machine are still 64 bits long; the compressed 32 bits store only an offset relative to the heap base address.

We add the 32-bit offset to the 64-bit heap base address to get the actual 64-bit heap address.

Object pointer compression has been enabled by default since Java SE 6u23. Before that, you could turn it on with -XX:+UseCompressedOops.

Zero-based compression pointer

I just mentioned that the compressed 32-bit address is an offset from the 64-bit heap base address. With a zero-based compressed pointer, the heap is mapped at virtual address 0, which eliminates the need to store the 64-bit heap base address at all.
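The decoding step can be sketched as plain arithmetic (an illustration, not HotSpot's actual code; it assumes the default 8-byte object alignment, which is what lets 32 bits address up to 32 GB of heap):

```java
public class CompressedOops {
    // log2 of the default 8-byte object alignment.
    static final int ALIGNMENT_SHIFT = 3;

    // Decode a 32-bit compressed pointer into a full 64-bit address:
    // heap base + (unsigned offset scaled by the object alignment).
    static long decode(long heapBase, int compressed) {
        return heapBase + ((compressed & 0xFFFFFFFFL) << ALIGNMENT_SHIFT);
    }

    public static void main(String[] args) {
        long base = 0x0000_0008_0000_0000L; // example heap base address
        System.out.println(Long.toHexString(decode(base, 0x10)));
        // With a zero-based compressed pointer, the base is simply 0:
        System.out.println(Long.toHexString(decode(0L, 0x10)));
    }
}
```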

Escape analysis

Finally, escape analysis. What is escape analysis? It determines whether an object can be accessed outside the method that creates it. If it can, the object escapes and must be allocated on the heap, where other objects can see it.

If no other object can access it, then it is perfectly fine to allocate the object on the stack, which is definitely faster than heap allocation, because there is no need to worry about synchronization.

Let’s take an example:

public class EscapeExample {
  public static void main(String[] args) {
    example();
  }

  public static void example() {
    Foo foo = new Foo(); // alloc
    Bar bar = new Bar(); // alloc
    bar.setFoo(foo);
  }
}

class Foo {}

class Bar {
  private Foo foo;

  public void setFoo(Foo foo) {
    this.foo = foo;
  }
}

In the example above, setFoo stores a reference to foo. If the bar object is allocated on the heap, the foo object it references escapes and must also be allocated in heap space.

But because bar and foo are only used inside the example method, the JVM can determine that no other object needs to reference them, and can simply allocate them on the stack of the example method.

Another function of escape analysis is lock elision: removing locks on objects that never escape a single thread.

The JVM introduces the concept of locking to ensure that resources are accessed in an orderly manner in a multi-threaded environment. Locking ensures that multiple threads are executed in an orderly manner, but what about in a single-threaded environment? Is it necessary to use locks all the time?

Take the following example:

public String getNames() {
    Vector<String> v = new Vector<>();
    v.add("Me");
    v.add("You");
    v.add("Her");
    return v.toString();
}

Vector is a synchronized collection, but in a single-threaded environment its lock is meaningless, so since JDK6 the JVM takes the lock only when it is actually needed.

This will improve the efficiency of the program.

Author: Flydean program stuff

Link to this article: www.flydean.com/jvm-perform…

Source: Flydean’s blog

Welcome to follow my public account, "Program Stuff", where more great content awaits you!