Micro Benchmarking with JMH: Measure, Don't Guess! (The original article was translated by Antonio Hollis.)

Many Java developers know that setting a variable to null can help garbage collection. Some also know that declaring a method final can let it be inlined for better performance. However, JVMs keep evolving, and many of the assumptions that held yesterday may not hold today. (Thanks to the JVM's constant optimization, the garbage collector can usually determine quickly whether an object should be collected even if it is not set to null, so nulling a variable may not bring any measurable gain.) So how do we know whether the code we write is efficient? We should not guess; we should measure.

Measure, don't guess!

As my friend Kirk Pepperdine once said, "Measure, don't guess." When our code has a performance problem, we tend to make small (often arbitrary) changes in the hope of improving things. Instead, we should set up a stable performance test environment (operating system, JVM, application server, database, etc.), define performance goals, and measure against them until we meet those goals. Just as we practice continuous testing and continuous delivery, we should practice continuous performance testing.

In any case, performance is a dark art, and that is not the subject of this article. I just want to focus on microbenchmarks and show you how to use JMH on a real use case (this article uses logging as its example).

Microbenchmarking log output

Many of you, like me, have seen debug statements like the following across various logging frameworks:

logger.debug("Concatenating strings " + x + y + z);

logger.debug("Using variable arguments {} {} {}", x, y, z);

if (logger.isDebugEnabled())
  logger.debug("Using the if debug enabled {} {} {}", x, y, z);

In most applications, the production log level is INFO or WARNING. But even at WARNING level, the statements above are still executed: the debug message (or its arguments) is still evaluated, only to be thrown away, and that can hurt application performance. To demonstrate this, we will microbenchmark the three variants with the Java Microbenchmark Harness (JMH). The three variants are: string concatenation, variable arguments, and an if guard on isDebugEnabled().
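To see why the three styles differ in cost even when debug output is disabled, here is a self-contained sketch in plain Java (no SLF4J). The StubLogger class and its counter are invented for illustration; they only mimic the level check a real logging framework performs:

```java
// Plain-Java sketch of why the three logging styles differ when debug is disabled.
// StubLogger is a hypothetical stand-in for a real framework's level check.
public class LoggingCostDemo {

    static class StubLogger {
        final boolean debugEnabled = false; // simulates a production INFO/WARNING level
        int formatCalls = 0;                // how many times a message was actually formatted

        // Concatenation style: the caller has already built the String before this runs.
        void debug(String msg) {
            if (debugEnabled) System.out.println(msg);
        }

        // Varargs style: formatting is deferred until after the level check.
        void debug(String fmt, Object... args) {
            if (debugEnabled) {
                formatCalls++;
                System.out.println(String.format(fmt, args));
            }
        }

        boolean isDebugEnabled() { return debugEnabled; }
    }

    // Returns how many messages were actually formatted.
    static int run() {
        StubLogger logger = new StubLogger();
        String x = "x", y = "y", z = "z";

        // Style 1: "..." + x + y + z is evaluated on every call, debug enabled or not.
        logger.debug("Concatenating strings " + x + y + z);

        // Style 2: only the argument array is created; formatting never happens here.
        logger.debug("Using variable arguments %s %s %s", x, y, z);

        // Style 3: the guard skips the call entirely.
        if (logger.isDebugEnabled())
            logger.debug("Using the if debug enabled %s %s %s", x, y, z);

        return logger.formatCalls;
    }

    public static void main(String[] args) {
        System.out.println("formatted=" + run());
    }
}
```

The sketch shows the intuition the benchmarks below confirm: with debug disabled, only the concatenation style pays for building the full message on every call.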

Setting up JMH

JMH is a harness for building, running, and analysing benchmarks (nano, micro, milli, and macro) written in Java and other JVM languages. With a Maven archetype we can quickly create a JMH project.

mvn archetype:generate -DinteractiveMode=false -DarchetypeGroupId=org.openjdk.jmh \
  -DarchetypeArtifactId=jmh-java-benchmark-archetype -DarchetypeVersion=1.4.1 \
  -DgroupId=org.agoncal.sample.jmh -DartifactId=logging -Dversion=1.0

The project created by the Maven archetype contains:

A pom.xml containing the JMH dependencies and the maven-shade-plugin

An empty MyBenchmark class with a method annotated with @Benchmark

At this point, even though we haven't written anything yet, the microbenchmark project is ready to run. Package it with Maven and execute the generated benchmarks:

mvn clean install
java -jar target/benchmarks.jar

When we run the JAR with the command above, we see some interesting console output: JMH enters a loop, warms up the JVM, executes the empty method annotated with @Benchmark, and reports the number of operations per second.

# Run progress: 30,00% complete, ETA 00:04:41
# Fork: 4 of 10
# Warmup Iteration   1: 2207650172,188 ops/s
# Warmup Iteration   2: 2171077515,143 ops/s
# Warmup Iteration   3: 2147266359,269 ops/s
# Warmup Iteration   4: 2193541731,837 ops/s
# Warmup Iteration   5: 2195724915,070 ops/s
# Warmup Iteration   6: 2191867717,675 ops/s
# Warmup Iteration   7: 2143952349,129 ops/s
# Warmup Iteration   8: 2187759638,895 ops/s
# Warmup Iteration   9: 2171283214,772 ops/s
# Warmup Iteration  10: 2194607294,634 ops/s
# Warmup Iteration  11: 2195047447,488 ops/s
# Warmup Iteration  12: 2191714465,557 ops/s
# Warmup Iteration  13: 2229074852,390 ops/s
# Warmup Iteration  14: 2221881356,361 ops/s
# Warmup Iteration  15: 2240789717,480 ops/s
# Warmup Iteration  16: 2236822727,970 ops/s
# Warmup Iteration  17: 2228958137,977 ops/s
# Warmup Iteration  18: 2242267603,165 ops/s
# Warmup Iteration  19: 2216594798,060 ops/s
# Warmup Iteration  20: 2243117972,224 ops/s
Iteration   1: 2201097704,736 ops/s
Iteration   2: 2224068972,437 ops/s
Iteration   3: 2243832903,895 ops/s
Iteration   4: 2246595941,792 ops/s
Iteration   5: 2241703372,299 ops/s
Iteration   6: 2243852186,017 ops/s
Iteration   7: 2221541382,551 ops/s
Iteration   8: 2196835756,509 ops/s
Iteration   9: 2205740069,844 ops/s
Iteration  10: 2207837588,402 ops/s
Iteration  11: 2192906907,559 ops/s
Iteration  12: 2239234959,368 ops/s
Iteration  13: 2198998566,646 ops/s
Iteration  14: 2201966804,597 ops/s
Iteration  15: 2215531292,317 ops/s
Iteration  16: 2155095714,297 ops/s
Iteration  17: 2146037784,423 ops/s
Iteration  18: 2139622262,798 ops/s
Iteration  19: 2213499245,208 ops/s
Iteration  20: 2191108429,343 ops/s
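The warmup/measurement shape of that output can be pictured with a deliberately naive sketch. This toy harness is invented for illustration only: it forks no JVMs, does nothing to defeat dead-code elimination or other JIT effects, and should never replace JMH for real measurements:

```java
// Toy illustration of JMH's loop shape: warmup iterations first, then measured ones.
// NOT a real benchmarking technique; it only explains the console output above.
public class MiniHarness {

    // Runs the body repeatedly for roughly durationMs and returns the operation count.
    static long iteration(Runnable body, long durationMs) {
        long ops = 0;
        long end = System.nanoTime() + durationMs * 1_000_000L;
        while (System.nanoTime() < end) {
            body.run();
            ops++;
        }
        return ops;
    }

    public static void main(String[] args) {
        Runnable emptyBenchmark = () -> { };

        // Warmup iterations: give the JIT time to compile; results are printed but discarded.
        for (int i = 1; i <= 5; i++) {
            System.out.println("# Warmup Iteration " + i + ": "
                    + iteration(emptyBenchmark, 50) + " ops");
        }
        // Measurement iterations: these are the counts a harness would actually report.
        for (int i = 1; i <= 5; i++) {
            System.out.println("Iteration " + i + ": "
                    + iteration(emptyBenchmark, 50) + " ops");
        }
    }
}
```

Real JMH additionally repeats this across multiple JVM forks (the "Fork: 4 of 10" line above) precisely because a single JVM run is not representative.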

Adding SLF4J to the benchmark

As mentioned earlier, the use case we are going to test is logging, so in this project I will use SLF4J and Logback. We add the dependencies to the pom.xml:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.7</version>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.0.11</version>
</dependency>

We then add a logback.xml configuration file and set the log output level to INFO.

<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%highlight(%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n)</pattern>
    </encoder>
  </appender>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder><pattern>%msg%n</pattern></encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>

The advantage of using the maven-shade-plugin is that when we package the application with Maven, all dependencies and configuration files are bundled into the executable JAR in the target directory.

Use string concatenation in logs

Let's start with the first microbenchmark: string concatenation in the log message. We write the code under test in a method annotated with @Benchmark, and JMH takes care of the rest.

In this code, we create the string variables x, y, and z, and then, inside a loop, emit a debug log built by string concatenation. The code is as follows:

import org.openjdk.jmh.annotations.Benchmark;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyBenchmark {

    private static final Logger logger = LoggerFactory.getLogger(MyBenchmark.class);

    @Benchmark
    public void testConcatenatingStrings() {
        String x = "", y = "", z = "";
        for (int i = 0; i < 100; i++) {
            x += i; y += i; z += i;
            logger.debug("Concatenating strings " + x + y + z);
        }
    }
}

Then, just as before, run the microbenchmark and view the iteration output.

Translator's note: the results of the three benchmarks are presented together at the end for comparison.

Use variable arguments in the log

In this microbenchmark, we replace string concatenation with variable arguments. Change the code as follows, then repackage:

@Benchmark
public void testVariableArguments() {
    String x = "", y = "", z = "";
    for (int i = 0; i < 100; i++) {
        x += i; y += i; z += i;
        logger.debug("Variable arguments {} {} {}", x, y, z);
    }
}

Use an if statement in the log

Last but not least, guard the log statement with isDebugEnabled():

@Benchmark
public void testIfDebugEnabled() {
    String x = "", y = "", z = "";
    for (int i = 0; i < 100; i++) {
        x += i; y += i; z += i;
        if (logger.isDebugEnabled())
            logger.debug("If debug enabled {} {} {}", x, y, z);
    }
}

Microbenchmark results

After running the three microbenchmarks we get the results below (remember: don't guess, measure). The more operations per second, the better the performance. Looking at the last row of the table, isDebugEnabled() performs best and string concatenation worst. Notice, though, that variable arguments come very close to isDebugEnabled() while keeping the code more readable (no boilerplate guard). So I'll go with variable arguments!

              String concatenation   Variable arguments   if isDebugEnabled
Iteration  1      57108635 ops/s       97921939 ops/s      104993368 ops/s
Iteration  2      58441293 ops/s       98036051 ops/s      104839216 ops/s
Iteration  3      58231243 ops/s       97457222 ops/s      106601803 ops/s
Iteration  4      58538842 ops/s      100861562 ops/s      104643717 ops/s
Iteration  5      57297787 ops/s      100405656 ops/s      104706503 ops/s
Iteration  6      57838298 ops/s       98912545 ops/s      105439939 ops/s
Iteration  7      56645371 ops/s      100543188 ops/s      102893089 ops/s
Iteration  8      56569236 ops/s      102239005 ops/s      104730682 ops/s
Iteration  9      57349754 ops/s       94482508 ops/s      103492227 ops/s
Iteration 10      56894075 ops/s      101405938 ops/s      106790525 ops/s
Average         57491453,4 ops/s     99226561,4 ops/s    104913106,9 ops/s
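To put the table in perspective, a quick calculation on the iteration-1 figures shows roughly how much throughput each style gains over concatenation (the helper class below is just for illustration):

```java
// Computes throughput ratios from the iteration-1 figures in the results table.
public class ResultRatios {

    static double ratio(double a, double b) {
        return a / b;
    }

    public static void main(String[] args) {
        // Iteration-1 figures from the table above, in ops/s.
        double concat  = 57_108_635;
        double varargs = 97_921_939;
        double guarded = 104_993_368;

        // Locale.ROOT keeps the decimal point consistent across machines.
        System.out.printf(java.util.Locale.ROOT,
                "varargs vs concat: %.2fx%n", ratio(varargs, concat));
        System.out.printf(java.util.Locale.ROOT,
                "guard   vs concat: %.2fx%n", ratio(guarded, concat));
    }
}
```

Both alternatives deliver roughly 1.7x to 1.8x the throughput of concatenation on this run, which is why the small readability cost of the if guard buys so little over variable arguments.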

JVMs have evolved significantly over the past few decades. Optimizing our code with tricks from ten years ago is not an option. The only way to compare two pieces of code is to measure them, and JMH is the perfect tool for simple and efficient microbenchmarking. Of course, microbenchmarking a small piece of code is only a first step, because we usually need to analyse the performance of the entire application. Thanks to JMH, that first step is easy.

For more, the JMH samples below are full of interesting ideas.

Resources

JMH Samples

Java Performance Tuning Guide

Using JMH for Java Microbenchmarking

Writing Java Micro Benchmarks with JMH: Juicy
