As Java continues to mature, it is important to push its cloud optimization features forward to provide better performance and lower costs.

Across the industry, companies are trying to control runaway cloud costs by squeezing more carrying capacity out of their cloud instances. In the Java world in particular, developers are trying to fit workloads into smaller and smaller instances and use server resources as efficiently as possible. Relying on elastic horizontal scaling to handle traffic spikes means that Java workloads must start up quickly and stay fast. But some long-standing features of the traditional JVM design make it difficult to leverage the resources of cloud instances effectively.

It’s time to reimagine how Java works in a cloud-centric world. We started by exploring how to optimize compilation by offloading JIT workloads to cloud resources: can we get optimized code with better performance and a shorter warm-up time?

In this blog post, we will focus on:

  • The origins of the current JIT compilation model
  • Disadvantages of JVM-based JIT compilation
  • How cloud native compilation affects performance, warm-up time, and compute resources

Review of JIT compilation

Today’s Java JIT compilation model dates back to the 1990s. A lot has changed since then! But some things, such as the Java JIT compilation model, have not. We can do better.

First, a quick review of JIT compilation. When the JVM starts, it runs the portable bytecode that the Java program was compiled into in a slower interpreter until it can identify the “hot” methods and how they are being used. The JIT compiler then compiles those methods into machine code that is highly optimized for the current usage pattern, and that code runs until one of its optimizing assumptions turns out to be wrong. In that case we get a deoptimization: the optimized code is thrown away and the method runs in the interpreter again until a newly optimized version is produced. When the JVM shuts down, all of the profile information and optimized methods are discarded, and the whole process starts from scratch the next time the program runs.
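
You can watch this lifecycle on any HotSpot-based JVM. The sketch below is purely illustrative (the class, method, and iteration count are ours); launching it with the standard -XX:+PrintCompilation flag prints a line each time the JIT compiles a method, and deoptimized methods show up as “made not entrant”:

    // Run with: java -XX:+PrintCompilation HotLoop
    // Watch the output for `mix` being compiled once the loop gets hot.
    public class HotLoop {
        // A small method that becomes "hot" after enough invocations.
        static long mix(long x) {
            return (x * 31) ^ (x >>> 7);
        }

        public static void main(String[] args) {
            long acc = 0;
            // Enough iterations to cross HotSpot's compilation thresholds.
            for (long i = 0; i < 10_000_000L; i++) {
                acc = mix(acc + i);
            }
            System.out.println(acc); // keep the result live
        }
    }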

When Java was created in the ’90s, there was no such thing as a “magic cloud,” a collection of interconnected, elastic resources that we could spin up and down at will. So making the JVM (including the JIT compiler) fully packaged and self-contained was a logical choice.

So what are the downsides of this approach? Well…

  • The JIT compiler must share resources with the threads executing the application logic. This limits how many resources are available for optimization, which in turn limits the speed and effectiveness of those optimizations. For example, at its maximum optimization level, Azul Platform Prime’s Falcon JIT compiler can produce code that runs 50%-200% faster, depending on the method and workload. On resource-constrained machines, however, running at such a high optimization level may not be practical.
  • JIT compilation resources are only needed for a small part of the program’s life, but with on-JVM JIT compilation you must reserve that capacity at all times.
  • A burst of JIT-related CPU and memory usage at the beginning of the JVM life cycle can wreak havoc on load balancers, Kubernetes CPU throttling, and other parts of the deployment topology.
  • The JVM has no memory of past runs. Even if a JVM is running a workload that it has run a hundred times before, it must start from scratch in the interpreter as if it were running it for the first time (a quick way to observe this is sketched after this list).
  • The JVM does not know about other nodes running the same program. Each JVM builds a profile based only on its own traffic, but a higher-quality profile could be built by aggregating the experience of hundreds of JVMs running the same code.
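
The “no memory” point is easy to observe for yourself. The minimal sketch below (all names are illustrative) times the same unit of work repeatedly within one JVM process: the early rounds run in the interpreter and are noticeably slower, and because the JVM remembers nothing between runs, restarting the process repeats the slow phase from scratch:

    public class ColdStart {
        // A fixed unit of work to time.
        static double work() {
            double sum = 0;
            for (int i = 1; i < 100_000; i++) {
                sum += Math.sqrt(i) / i;
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int round = 1; round <= 5; round++) {
                long t0 = System.nanoTime();
                double r = work();
                long micros = (System.nanoTime() - t0) / 1_000;
                // Early rounds are interpreted and slow; later rounds run
                // JIT-compiled code. A fresh JVM repeats this every time.
                System.out.printf("round %d: %d us (r=%.3f)%n", round, micros, r);
            }
        }
    }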

Offload JIT compilation to the cloud

Today, we do have a “magic cloud of resources” that can be used to offload JVM tasks that can be done more efficiently elsewhere. That’s why we built Cloud Native Compiler, a dedicated service that provides scalable JIT compilation resources you can use to warm up your JVMs.

Cloud Native Compiler runs as a Kubernetes cluster on your own servers, either on-premises or in the cloud. Because it is scalable, you can scale the service up when needed, providing nearly unlimited JIT compilation resources for a short burst, and then scale it back down to near zero when it is not needed.
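
Attaching a JVM to the service is a matter of startup flags on Azul Platform Prime. The exact flag names vary by release, so treat the flag, host, and port below as placeholders (this is an assumed sketch, not verbatim syntax from Azul’s documentation) and consult the docs for your version:

    # Hypothetical invocation; the flag name, host, and port are
    # placeholders, not confirmed syntax from Azul's documentation.
    java -XX:OptHubHost=compiler.example.internal:50051 -jar my-service.jar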

When using cloud native compilation on a Java workload:

  • CPU consumption on the client remains low and stable. You no longer need to reserve capacity for JIT compilation, and you can resize instances to match the resource requirements of your application logic.
  • Regardless of the capacity of the JVM client instance, you can run the most aggressive optimizations for the highest performance.
  • With more threads available, more JIT compilation requests run in parallel, and the wall-clock time needed to warm up the JVM drops significantly.

So, is there really a difference?

Notice what happens when we run the Finagle-HTTP workload from the Renaissance suite on a severely resource-constrained 2 vCore machine. Heavier optimizations cost more resources, as shown by Azul Platform Prime’s long warm-up curve with on-JVM JIT. With cloud native compilation, that lengthy warm-up time shrinks to match OpenJDK’s, while the optimized Falcon code continues to run at higher throughput.
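
For reference, the Renaissance suite ships as a single runnable jar, so the client side of a test like this is a one-line invocation (the jar file name below is a placeholder; use whichever release you have downloaded):

    # Runs the finagle-http benchmark from the Renaissance suite.
    # Adjust the jar name to match your downloaded release.
    java -jar renaissance-gpl-0.14.2.jar finagle-http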

At the same time, CPU utilization on the client remains low and stable, and you can allocate more power to run the application logic even during the warm-up period.

But it’s not just resource-constrained machines that benefit from cloud native compilation. Let’s look at a more realistic workload: running a three-node Cassandra cluster on 8 vCore r5.2xlarge AWS instances. With optimizations set to the highest level for high, consistent throughput, warm-up time dropped from 20 minutes with on-JVM JIT to less than two minutes with cloud native compilation.

Conclusion

As Java continues to mature, it is important to push cloud optimization capabilities forward to provide better performance and lower costs for enterprise applications, whether they are built on-premises or in the cloud. Cloud native compilation is a solid start toward advancing how the Java runtime performs across on-premises and cloud environments.
