Introduction
Purpose
This report, written at the conclusion of the performance test, analyzes the test results and draws conclusions.
Definitions of terms
1. Average Transaction Response Time
Average Transaction Response Time shows the average execution time of transactions during each second of the scenario run. It can be used to analyze the performance trend of the application over the course of the test.
2. Transactions per Second (TPS)
Transactions per Second (TPS) shows the number of transactions that passed, failed, and stopped during each second of the scenario run, and is therefore an important measure of system performance. It indicates the transaction load on the system at any given moment. TPS is analyzed mainly by looking at the trend of the curve and comparing it with the average transaction response time, to see how the number of transactions affects execution time (a restatement as a formula is given after this list).
3. Number of concurrent users (Vusers)
Number of concurrent users: the number of online users interacting with the server at the same time. Their defining characteristic is that they are actively interacting with the server, not merely online.
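As referenced in the TPS definition above, over any sampling interval the metric reduces to a simple ratio. This restatement is added here for clarity and is not quoted from the LoadRunner documentation:

```latex
\mathrm{TPS} = \frac{\text{transactions passed during the interval}}{\text{interval length in seconds}}
% e.g. 2000 passed transactions over a 10-second interval give TPS = 200.
```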
Intended readers
This document is intended for project managers, test managers, R&D managers, and related technical staff.
Test objectives
The purpose of this report is to show how the system performs under concurrent multi-user access in several middleware scenarios.
The test analyzes the performance of the current system in terms of transaction response time, number of concurrent users, and system resource usage, using professional performance-testing tools, and compares the measured data with the expected performance requirements to check whether the system meets its performance goals.
Environment configuration
Server:

| Description | OS | Count | CPU | Memory | IP |
| --- | --- | --- | --- | --- | --- |
| Application server | Linux | 2 | 4 cores | 4 GB | 10.126.3.59, 10.126.3.61 |
| Nginx | Linux | 1 | 12 cores | 6 GB | 10.126.3.63 |
| Node server | Linux | 1 | 4 cores | 4 GB | 10.126.3.59:1337 |
| Load generator (LoadRunner) | Windows | 2 | 8 cores | 4 GB | 10.126.3.58, 10.126.3.62 |
Application configuration:

| Configuration object | Parameter | Value | Description |
| --- | --- | --- | --- |
| Tomcat context.xml | sticky | true | Sticky session mechanism |
| | lockingMode | auto | Locking strategy for non-sticky sessions |
| | sessionBackupAsync | false | Whether sessions are backed up to Memcached asynchronously |
| | operationTimeout | 5000 ms | |
| | sessionBackupTimeout | 5000 ms | Session backup timeout |
| Tomcat server.xml | maxThreads | 500 | Maximum number of threads in the thread pool |
| | minSpareThreads | 5 | |
| | acceptCount | 1000 | Length of the request queue when all threads are in use |
| | keepAliveTimeout | 20000 | |
| web.xml | session-timeout | 30 minutes | |
| Log level | level | error | Controls whether logs are recorded |
| JVM | Minimum heap size | 2000M | |
| | Maximum heap size | 2000M | |
| | MaxPermSize (maximum permanent generation) | 256m | |
| | GC policy | default | |
| | Other | -XX:-HeapDumpOnOutOfMemoryError | |
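For reference, the context.xml parameters above match the attribute names of the memcached-session-manager, and the server.xml parameters match Tomcat's HTTP connector attributes. The sketch below shows how such a configuration would typically be written; the Manager class name, the memcachedNodes address, and the connector port and protocol are assumptions for illustration, not values recorded for the test environment.

```xml
<!-- conf/context.xml (sketch): session parameters from the table above, expressed as
     memcached-session-manager attributes. memcachedNodes is a hypothetical placeholder. -->
<Context>
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:127.0.0.1:11211"
           sticky="true"
           lockingMode="auto"
           sessionBackupAsync="false"
           operationTimeout="5000"
           sessionBackupTimeout="5000"/>
</Context>

<!-- conf/server.xml (sketch): thread-pool and queue settings from the table above,
     applied to the HTTP connector. Port and protocol are defaults, not from the report. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="500"
           minSpareThreads="5"
           acceptCount="1000"
           keepAliveTimeout="20000"/>
```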
LoadRunner (LR) run-time settings (true = selected, false = not selected):

| Parameter | Value | Description |
| --- | --- | --- |
| Ignore think time | true | Think time is ignored |
| Download non-HTML resources | true | Images and other non-HTML resources are downloaded |
| Continue on error | true | Execution continues when an error occurs |
| Run Vuser as a thread | true | Vusers run as threads rather than processes |
| Simulate a new user on each iteration | true | The HTTP context is reset on each iteration |
Tomcat
Result data

Single node

| HTML | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 19305 | 32584 | 44195 | 40566 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |

| JSP | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 19745 | 27982 | 33233 | 32576 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |

| Servlet | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 19305 | 28591 | 32460 | 34249 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |
Problem and result analysis
As the number of concurrent users increases, TPS generally increases.
The application does not use a database, and memory and resource usage on the application servers and load generators remained normal, so they do not constitute a bottleneck for the system.
Nginx+Tomcat
Result data

Single node

| HTML | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 13252 | 21481 | 31044 | 34275 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| JSP | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 12267 | 19230 | 25603 | 30903 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| Servlet | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 12007 | 19610 | 25135 | 29242 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

Two nodes

| HTML | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 22160 | 33492 | 45619 | 41595 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| JSP | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 20071 | 31514 | 39274 | 36363 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| Servlet | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 20050 | 32584 | 39195 | 40566 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |
Problem and result analysis
Looking at the single-node results, HTML, JSP, and Servlet show similar trends. Adding Nginx introduces a certain performance loss. With a second Tomcat node added for load balancing, TPS increases compared with the single node by a factor of about 1.7 for Servlet, about 1.6 for HTML, and about 1.6 for JSP. In the same scenario, the TPS of Nginx + two Tomcat nodes does not reach twice that of a single Tomcat node. The system bottleneck is excessive disk I/O on the load generators; performance can be improved by adding load generators that do not share the same disk, reducing the disk I/O written by any single load generator. The application does not use a database, and application-server resource usage remained normal, so the application servers are not a bottleneck.
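For reference, the load-balancing setup described above is usually expressed as an upstream block in nginx.conf. The sketch below is an illustration based on the two application-server IPs in the environment table; the Tomcat port and the remaining directives are assumptions, not the exact configuration used in the test.

```nginx
# nginx.conf (sketch): round-robin load balancing across the two Tomcat nodes
# from the environment table. Port 8080 and the proxy headers are assumptions.
upstream tomcat_cluster {
    server 10.126.3.59:8080;
    server 10.126.3.61:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://tomcat_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```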
Node
Result data

Single node

| Node started directly (node server1.js) + text page | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 13010 | 15317 | 14123 | 15373 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |

| PM2 starting one Node instance | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 13660 | 15097 | 15133 | 15540 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |

| PM2 starting four Node instances | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 17597 | 28405 | 38392 | 41549 |
| Average response time (s) | 0 | 0.001 | 0.001 | 0.002 |
Problem and result analysis
There is little difference between starting Node directly and starting it with PM2. With four Node instances, TPS is about 2.7 times that of a single instance.
The application does not use a database, and memory and resource usage on the application servers and load generators remained normal, so they do not constitute a bottleneck for the system.
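For reference, the three startup modes compared above correspond roughly to the commands below; server1.js is the script named in the table, while the PM2 flags are standard options rather than values recorded in the report.

```bash
# Start the script directly with Node (single process)
node server1.js

# Start a single instance under PM2
pm2 start server1.js

# Start four instances under PM2 in cluster mode, matching the 4-core Node host
pm2 start server1.js -i 4
```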
Node+Tomcat
Result data

Single node

| HTML | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 8480 | 11856 | 11437 | 11339 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| JSP | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 8214 | 11264 | 11514 | 11269 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| Servlet | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 5422 | 7878 | 8926 | 9013 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |
Problem and result analysis
Looking at the single-node results, HTML, JSP, and Servlet show similar trends.
The application does not use a database, and memory and resource usage on the application servers and load generators remained normal, so they do not constitute a bottleneck for the system.
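The report does not spell out how the Node and Tomcat layers are wired in this scenario; a plausible reading is that the Node server forwards incoming requests to a Tomcat instance. The sketch below illustrates that assumed topology using only Node's built-in http module; the backend host and port are hypothetical placeholders.

```javascript
// proxy.js (sketch): a minimal pass-through from Node to a Tomcat backend.
// TOMCAT_HOST and TOMCAT_PORT are hypothetical; port 1337 matches the Node
// server entry in the environment table.
const http = require('http');

const TOMCAT_HOST = '10.126.3.61';
const TOMCAT_PORT = 8080;

http.createServer((clientReq, clientRes) => {
  // Forward the incoming request to Tomcat unchanged and stream the response back.
  const proxyReq = http.request(
    {
      host: TOMCAT_HOST,
      port: TOMCAT_PORT,
      path: clientReq.url,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  proxyReq.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end();
  });
  clientReq.pipe(proxyReq);
}).listen(1337);
```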
Node+Nginx+Tomcat
Result data

Single node

| HTML | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 7235 | 9563 | 9969 | 10254 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| JSP | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 5736 | 8030 | 8384 | 9050 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

| Servlet | 10 Vusers | 20 Vusers | 50 Vusers | 100 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 6028 | 8570 | 9141 | 8763 |
| Average response time (s) | 0.001 | 0.002 | 0.004 | 0.008 |

Two Tomcat nodes

| HTML | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 9494 | 10988 | 11679 | 11280 |
| Average response time (s) | 0.002 | 0.003 | 0.008 | 0.015 |

| JSP | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 8995 | 9251 | 10134 | 10551 |
| Average response time (s) | 0.002 | 0.003 | 0.008 | 0.015 |

| Servlet | 10 × 2 Vusers | 20 × 2 Vusers | 50 × 2 Vusers | 100 × 2 Vusers |
| --- | --- | --- | --- | --- |
| Transactions passed per second (TPS) | 9080 | 9741 | 10186 | 10208 |
| Average response time (s) | 0.002 | 0.003 | 0.007 | 0.015 |
Problem and result analysis
Looking at the single-node results, HTML, JSP, and Servlet show similar trends. Adding Nginx and Node introduces a certain performance loss. When a second Tomcat node is added for load balancing, TPS does not double. The system bottleneck is excessive disk I/O on the load generators; performance can be improved by adding load generators that do not share the same disk, reducing the disk I/O written by any single load generator.
Cross-architecture comparison
Test results
As the system architecture changes, TPS falls step by step: each added layer of Nginx or Node introduces a certain performance loss. The system bottleneck is excessive disk I/O on the load generators; performance can be improved by adding load generators that do not share the same disk, reducing the disk I/O written by any single load generator.
This middleware architecture test provides reference data for comparison in future project tests.
TIPS
This article is published as open source on my blog: address (please credit the source when reproducing).
My latest full-stack architecture project: Creek-DAM-Nova (stars are welcome).