The business scenario
Scenario
Provide a RESTful API that reads cached data directly from Redis; the data is written into Redis asynchronously via MQ.
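The read path can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the author's actual code: a `BlockingQueue` stands in for the real MQ and a `ConcurrentHashMap` stands in for Redis, so that the shape of the design (async writes feed the cache, the API only ever reads it) is visible without external services.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the scenario: an MQ consumer fills the cache asynchronously,
// and the query API is a pure read against that cache.
// Stand-ins: BlockingQueue for the MQ, ConcurrentHashMap for Redis.
public class CacheReadService {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<String[]> mq = new ArrayBlockingQueue<>(1024);

    public CacheReadService() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String[] kv = mq.take();   // blocking MQ consume
                    cache.put(kv[0], kv[1]);   // asynchronous write into the cache
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // What the producer side would publish onto the MQ.
    public void publish(String key, String value) throws InterruptedException {
        mq.put(new String[] {key, value});
    }

    // The API handler: a read-only cache lookup, never touching the write path.
    public String lookup(String key) {
        return cache.getOrDefault(key, "");
    }
}
```

Because the API never writes, any number of identical instances can sit behind a load balancer, which is what makes the horizontal-scaling target below straightforward.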
The target
Provide a reliable, fast query API service that supports horizontal scaling.
Indicators
Compare throughput and latency at 500 concurrent connections; server cost is also a factor. The impact of the public network, bandwidth, and gateways is not considered for now.
The first technique
Startup Settings
/usr/bin/java -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=256m -Xmx768m -Xms768m -XX:NewSize=1 -Xss256k -jar ps1.jar
Pressure test results
[root@tech-0001 wrk]# wrk -t4 -c500 -d60s 'http://127.0.0.1:9021/hotdog/redis?key=aaa' --latency
Running 1m test @ http://127.0.0.1:9021/hotdog/redis?key=aaa
  4 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   114.57ms   12.10ms 313.86ms   98.16%
    Req/Sec     1.10k     88.43     1.26k    89.51%
  Latency Distribution
     50%  113.75ms
     75%  115.66ms
     90%  117.58ms
     99%  134.86ms
  262303 requests in 1.00m, 82.55MB read
Requests/sec:   4366.86
Transfer/sec:      1.37MB
The second technique
A launch configuration
java -XX:MetaspaceSize=32m -XX:MaxMetaspaceSize=128m -Xmx256m -Xms256m -XX:NewSize=1 -Xss256k -jar ps2-fat.jar run com.tech.luckin.MainVerticle
Pressure test results
[root@jiaomatech-0001 wrk]# wrk -t4 -c500 -d60s 'http://127.0.0.1:8081/app/redis?key=aaa' --latency
Running 1m test @ http://127.0.0.1:8081/app/redis?key=aaa
  4 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    37.45ms   54.67ms 228.47ms   83.90%
    Req/Sec     9.17k     4.36k    14.08k    72.72%
  Latency Distribution
     50%   10.12ms
     75%   22.99ms
     90%  140.38ms
     99%  206.51ms
  1959728 requests in 1.00m, 80.36MB read
Requests/sec:  32635.44
Transfer/sec:      1.34MB
Comparison
Stability
The first technique has a low latency standard deviation (12.10ms vs. 54.67ms) and is the more stable service; the second technique has no advantage here.
QPS
The second technique averaged 32,635.44 requests/sec, about 7.5 times the first technique's 4,366.86, though short of a 10x gap.
Average response time
The second technique averaged 37.45ms, while the first averaged 114.57ms, roughly three times longer.
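Both ratios are easy to check against the raw wrk figures above; a quick arithmetic sanity check:

```java
// Verifies the two comparison ratios using the numbers reported by wrk.
public class BenchRatios {
    // Second technique's throughput over the first's (requests/sec).
    public static double qpsRatio() {
        return 32635.44 / 4366.86;
    }

    // First technique's average latency over the second's (milliseconds).
    public static double latencyRatio() {
        return 114.57 / 37.45;
    }

    public static void main(String[] args) {
        System.out.printf("QPS ratio: %.2f%n", qpsRatio());         // ~7.47
        System.out.printf("Latency ratio: %.2f%n", latencyRatio()); // ~3.06
    }
}
```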
Server cost
The second technique performs well on a small heap (256MB here), while the first technique has trouble even starting with that little memory.
Conclusion
Each technology framework has different strengths and weaknesses depending on the scenario, and a heterogeneous stack that picks the right tool for each job is a good solution. Which technology would you choose for the current scenario?
I want to choose the first option!!