Preface
Having worked on CRUD applications for a long time, I suddenly became curious about the QPS of my own project. I tested the most-visited endpoint with JMeter and found a pathetic 17 req/s... I was ashamed, seeing claims of millions of QPS everywhere on the Internet.
So I was ready for performance tuning, but before tuning you need performance testing to see whether anything actually improved. For example, my first step was to add a Redis cache, and testing showed it was slower than before?! So performance testing is very important, and JMH is a very suitable tool for it.
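One reason naive timing is misleading, and why JMH insists on warm-up iterations, is the JIT: the first calls of a method run interpreted and are much slower than later, compiled calls. A minimal stdlib-only sketch of the effect (this is not JMH; the workload is an arbitrary placeholder):

```java
public class WarmupDemo {
    // Arbitrary CPU-bound workload; stands in for the method under test
    static long work() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += Integer.bitCount(i);
        }
        return sum;
    }

    // Time a single call of the workload in nanoseconds
    static long timeOnce() {
        long start = System.nanoTime();
        work();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeOnce();                   // first call: interpreted, no JIT yet
        for (int i = 0; i < 10_000; i++) work();  // warm up so the JIT compiles work()
        long warm = timeOnce();                   // usually far faster than the cold call
        System.out.println("cold=" + cold + "ns warm=" + warm + "ns");
    }
}
```

On a typical JVM the warm measurement comes out much lower than the cold one, which is exactly the noise JMH's warm-up phase is designed to exclude.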
Preparation
The preparation is very simple: just add the JMH Maven dependencies.
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.22</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.22</version>
    <scope>provided</scope>
</dependency>
The first example
package jmh;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import java.util.concurrent.TimeUnit;
/**
* Benchmark
*
* @author wangpenglei
* @since 2019/11/27 13:54
*/
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
public class Benchmark {
public static void main(String[] args) throws Exception {
// Run in one forked JVM, with 5 warm-up iterations followed by 5 measurement iterations
Options opt = new OptionsBuilder().include(Benchmark.class.getSimpleName()).forks(1).warmupIterations(5)
.measurementIterations(5).build();
new Runner(opt).run();
}
    @Setup
    public void init() {
    }

    @TearDown
    public void down() {
    }

    // Fully qualified because this class is itself named Benchmark
    @org.openjdk.jmh.annotations.Benchmark
    public void test() {
    }
}
@BenchmarkMode
This annotation selects the test mode. There is plenty of detail about the options on the web; here I use Mode.AverageTime, which reports the average run time per operation.
@OutputTimeUnit(TimeUnit.MILLISECONDS)
This annotation sets the unit of the final output. Because I am testing web endpoints, I use milliseconds. If you are testing local Redis or plain in-process methods, change it to a smaller unit such as microseconds or nanoseconds.
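If in doubt about which unit keeps the scores readable, the JDK's own TimeUnit (the same enum the annotation takes) converts between them. A quick sanity check of what a millisecond-scale score looks like in finer units:

```java
import java.util.concurrent.TimeUnit;

public class UnitDemo {
    // Convert a millisecond-scale score into finer-grained units
    static long millisToMicros(long ms) { return TimeUnit.MILLISECONDS.toMicros(ms); }
    static long millisToNanos(long ms)  { return TimeUnit.MILLISECONDS.toNanos(ms); }

    public static void main(String[] args) {
        System.out.println(millisToMicros(65)); // 65000
        System.out.println(millisToNanos(65));  // 65000000
    }
}
```

A ~65 ms/op endpoint score would print as 65,000 us/op, so milliseconds are the readable choice here; a sub-millisecond in-process method would be the opposite.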
@State(Scope.Benchmark)
This annotation defines how an instance of the class is shared. Since beans in Spring are singletons by default, I use Scope.Benchmark, meaning all threads running the same test share one instance. It can be used to test the multithreaded behavior of a stateful object (or simply to mark the class as benchmark state).
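The sharing semantics are the same as a Spring singleton: with Scope.Benchmark, every worker thread hits the one shared instance, so any mutable fields on it must be thread-safe. A plain-Java sketch of that situation (an analogy, not JMH itself):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SharedStateDemo {
    // Simulate `threads` workers all mutating one shared instance,
    // like worker threads hitting a @State(Scope.Benchmark) object
    static long run(int threads, int callsPerThread) {
        AtomicLong counter = new AtomicLong(); // thread-safe shared state
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < callsPerThread; i++) counter.incrementAndGet();
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        return counter.get();
    }

    public static void main(String[] args) {
        // 4 threads x 1000 calls against one instance: no updates lost
        System.out.println(run(4, 1_000)); // 4000
    }
}
```

With a plain `long` instead of AtomicLong the count would usually come up short, which is exactly the kind of shared-state hazard to keep in mind when benchmarking a singleton bean.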
@Setup @TearDown @Benchmark
Three simple annotations found in most test frameworks: @Setup initializes before the test, @TearDown cleans up resources afterwards, and @Benchmark marks the method under test.
How do I use it with Spring
Because testing beans in the container requires a Spring environment, we create one manually in the setup method. I did some research and could not find a better way, so I just create it by hand.
private ConfigurableApplicationContext context;
private AppGoodsController controller;
@Setup
public void init() {
// WebApplication.class is the project's Spring Boot class
context = SpringApplication.run(WebApplication.class);
// Get the bean to test
this.controller = context.getBean(AppGoodsController.class);
}
@TearDown
public void down() {
context.close();
}
To begin testing
Write the test method and launch the main method to start the run. It may print some strange-looking warnings along the way; you can ignore them. When the run completes, the results are printed, and you can use them to compare the effect of each optimization.
Result "jmh.Benchmark.testGetGoodsList":
  65.969 ±(99.9%) 10.683 ms/op [Average]
  (min, avg, max) = (63.087, 65.969, 69.996), stdev = 2.774
  CI (99.9%): [55.286, 76.652] (assumes normal distribution)

# Run complete. Total time: 00:02:48

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                    Mode  Cnt   Score    Error  Units
Benchmark.testGetGoodsList   avgt    5  65.969 ± 10.683  ms/op

Process finished with exit code 0
The negative optimization from the beginning
At the beginning of this article I mentioned a case of negative optimization: after I added the Redis cache, requests were slower than querying the database directly. The reason turned out to be simple. I was testing from my local machine, but the Redis instance was deployed on a remote server, so every round trip crossed the public network with high latency. Yet the database connection also went over the public network, and it was still faster than Redis. The root cause was that the server hosting Redis had only 1 Mbps of bandwidth, roughly 100 KB/s, which is easily saturated. The final fix was to keep the Redis cache but connect to Redis over the intranet.
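The bandwidth arithmetic behind that slowdown is easy to check: a 1 Mbps link moves at most 125 KB/s, so even a modest response saturates it. A quick back-of-the-envelope helper (the 200 KB payload size is a made-up illustration, not a figure measured from the project):

```java
public class BandwidthMath {
    // Milliseconds needed to move `payloadBytes` over a link of `mbps` megabits/second
    static double transferMillis(long payloadBytes, double mbps) {
        double bytesPerSecond = mbps * 1_000_000 / 8; // 1 Mbps = 125,000 bytes/s
        return payloadBytes / bytesPerSecond * 1000;
    }

    public static void main(String[] args) {
        // A hypothetical 200 KB cached response over the 1 Mbps public link:
        System.out.printf("1 Mbps:   %.0f ms%n", transferMillis(200_000, 1));   // 1600 ms
        // The same payload over a 100 Mbps intranet link:
        System.out.printf("100 Mbps: %.0f ms%n", transferMillis(200_000, 100)); // 16 ms
    }
}
```

Transfer time alone dwarfs any latency Redis saves, which is why the "cache" made things slower until it was moved onto the intranet.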
The optimization results
Speed before optimization:
Result "jmh.Benchmark.testGetGoodsList":
102.419 ±(99.9%) 153.083 ms/op [Average]
(min, avg, max) = (65.047, 102.419, 162.409), stdev = 39.755
CI (99.9%): [≈ 0, 255.502] (assumes normal distribution)
# Run complete. Total time: 00:03:03
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                    Mode  Cnt    Score     Error  Units
Benchmark.testGetGoodsList   avgt    5  102.419 ± 153.083  ms/op

Process finished with exit code 0
Speed after optimization (a local Redis is used to simulate the intranet Redis):
Result "jmh.Benchmark.testGetGoodsList":
29.210 ±(99.9%) 2.947 ms/op [Average]
(min, avg, max) = (28.479, 29.210, 30.380), stdev = 0.765
CI (99.9%): [26.263, 32.157] (assumes normal distribution)
# Run complete. Total time: 00:02:49
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                    Mode  Cnt   Score   Error  Units
Benchmark.testGetGoodsList   avgt    5  29.210 ± 2.947  ms/op

Process finished with exit code 0
As you can see, it is about 3.5 times faster, and there is still room for optimization: if all the database reads were served from the Redis cache, a request would complete in about 1 ms.