Stress testing
To optimize performance, you first have to measure it. For an HTTP service, that means stress testing: finding out how the service behaves under high concurrency. Before running a stress test, it helps to understand the metrics involved:
- Requests per second (throughput rate)
Concept: a quantitative measure of a server's concurrent processing capacity, expressed in reqs/s. It is the number of requests handled per unit of time under a given number of concurrent users; the largest value achievable at that concurrency is called the maximum throughput rate. Formula: total number of requests / time taken to process those requests.
- Number of concurrent connections
Concept: the number of requests the server is holding open at a given moment; put simply, each connection corresponds to a session.
- Concurrency level (number of concurrent users)
Concept: be careful to distinguish this from the number of concurrent connections: a single user may hold several connections open at the same time, so the number of concurrent users is not the same as the number of connections.
- Average Time per request
Calculation formula: Time taken to complete all requests/(Total requests/Number of concurrent users)
- Time per request: across all concurrent requests
Formula: time taken to process all requests / total number of requests. As you can see, this is the reciprocal of the throughput rate. It also equals the average time per request / number of concurrent users. (A short sketch after this list applies these formulas to concrete numbers.)
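To make these formulas concrete, here is a minimal JavaScript sketch; the function and field names are invented for illustration and are not part of ab or any other tool.

```js
// Derive the metrics above from a test's raw numbers.
// All names here are illustrative; they are not part of any benchmarking tool.
function summarize({ totalRequests, concurrency, totalTimeSec }) {
  const requestsPerSecond = totalRequests / totalTimeSec;
  // Average time per request, as seen by one concurrent user (ms)
  const timePerRequest = (totalTimeSec / (totalRequests / concurrency)) * 1000;
  // Time per request across all concurrent requests: the reciprocal of throughput (ms)
  const timePerRequestAcrossAll = (totalTimeSec / totalRequests) * 1000;
  return { requestsPerSecond, timePerRequest, timePerRequestAcrossAll };
}

// Example: 1600 requests at a concurrency of 200, finished in 2 seconds
console.log(summarize({ totalRequests: 1600, concurrency: 200, totalTimeSec: 2 }));
// -> { requestsPerSecond: 800, timePerRequest: 250, timePerRequestAcrossAll: 1.25 }
```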
ab is the stress-testing tool that ships with Apache. It is very handy: it can stress test not only an Apache web server but any other kind of HTTP server as well.
```
# run the stress test against the local Node service
ab -c 200 -n 1600 http://127.0.0.1:3000/download
```
After starting the Node service, run the command above: -c sets the number of concurrent requests, -n sets the total number of requests, and the URL at the end is the page we want to stress test. The report ab prints contains the following fields:
- Document Path and Document Length: the path being tested and the size of a single response body.
- Concurrency Level: the concurrency used for the test.
- Time taken for tests: how long the whole test took.
- Complete requests: the number of requests that completed.
- Total transferred and HTML transferred: the total amount of data transferred, and the HTML portion of it.
- Requests per second: the number of requests the server handled per second.
- Time per request: the average waiting time per request.
- Time per request (across all concurrent requests): the average time spent per request across all concurrent requests.
- Transfer rate: the server's data throughput, i.e. the maximum amount of data it can send or receive per second.
Determining where the server's performance bottleneck lies: if the measured throughput is already the maximum traffic the network card can carry, then the bottleneck is the network card rather than some other part of the machine. At the same time, you can use Linux commands to watch the machine's CPU and memory usage during the test, which tells you whether the bottleneck is in the CPU or in memory, and iostat shows the bandwidth of the I/O devices, which tells you whether the bottleneck is in I/O. Another possibility is a bottleneck in the backend: for example, if our service can send 600 requests per second to the backend but the backend can only handle 300 requests per second, then the performance bottleneck of the service is in the backend.
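Besides the Linux command-line tools, you can also sample CPU load and memory usage from inside the Node.js process itself. The following is a minimal sketch using Node's built-in os module; it is only an illustration and not part of the original setup.

```js
const os = require('os');

// Print a rough CPU/memory snapshot once per second while the stress test runs
const timer = setInterval(() => {
  const [load1m] = os.loadavg();   // 1-minute load average (always 0 on Windows)
  const usedMemMb = (os.totalmem() - os.freemem()) / 1024 / 1024;
  console.log(
    `load(1m)=${load1m.toFixed(2)}  cores=${os.cpus().length}  mem used=${usedMemMb.toFixed(0)} MB`
  );
}, 1000);
timer.unref(); // don't keep the process alive just for this sampling loop
```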
Once the bottleneck is found, we can optimize for it: if the network card is the limit, switch to one with more bandwidth; if memory is the limit, add more memory; and if the backend's processing capacity is the limit, work with the backend team to optimize the backend.
Node.js performance analysis tools
- Node.js's built-in profiler
Run node --prof entry.js and apply the stress test; while it runs, V8 writes a profiling log (a file named like isolate-0x…-v8.log). Afterwards, run node --prof-process <that log file> > profile.txt to turn the log into a readable profile.txt, which summarizes how much time and CPU the different operations consumed.
- Chrome DevTools. Since Node.js is built on the V8 engine, Chrome DevTools can also be used to inspect the service's performance.
First run node --inspect-brk entry.js, which starts the service but pauses it until a debugger attaches. Then open chrome://inspect in Chrome; the Target list on that page shows the Node.js process discovered via the debugger protocol. Click inspect to enter Chrome's debugging mode: the Console panel shows console output, Sources shows our Node.js code, and the Profiler panel records CPU usage.
Code optimization
We record CPU usage with Chrome DevTools while running the stress test, and then optimize the code based on what the resulting profile shows.
- JavaScript code performance optimization
In the profile you can see that readFileSync consumes a large share of the CPU time, so the next step is to find where readFileSync is called in the code. The cause turns out to be that readFileSync is called inside a middleware, and the middleware logic runs again on every request, so readFileSync runs once per request; that is why it drains so much CPU.
```js
// Assumes the usual setup, which the original snippet omits:
// fs from 'fs', mount from 'koa-mount', and app as a Koa instance.
app.use(
  mount('/', function (ctx) {
    // readFileSync hits the disk again on every incoming request
    ctx.body = fs.readFileSync(__dirname + '/index.html', 'utf-8');
  })
);
```
In this case, the readFileSync call can be moved out of the middleware and its result reused, which removes readFileSync's per-request CPU cost.
```js
// Read the file once at startup and reuse the cached string for every request
const str = fs.readFileSync(__dirname + '/index.html', 'utf-8');

app.use(
  mount('/', function (ctx) {
    ctx.body = str;
  })
);
```
After modifying the code, we used Chrome DevTools to record CPU usage during another stress test, and readFileSync's share of CPU time drops once the file read is moved out of the middleware.
The essence of JavaScript performance optimization comes down to two ideas:
1. Reduce unnecessary computation. For example, merging many small images into one large image reduces the total number of HTTP requests, and with them the cost of establishing and tearing down TCP connections and of HTTP encoding and decoding.
2. Trade space for time. For example, cache the result of a repeated calculation so the next call can use the cached result directly, saving a large amount of computation (see the sketch below).
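To illustrate point 2, here is a minimal memoization sketch; the cache and function names are invented for the example, not taken from the article's code.

```js
// Cache the result of an expensive pure computation so repeated calls reuse it
const cache = new Map();

function expensiveReport(key) {
  if (cache.has(key)) {
    return cache.get(key);           // later calls skip the computation entirely
  }
  let result = 0;
  for (let i = 0; i < 1e7; i += 1) { // stand-in for a genuinely costly calculation
    result += i % (key.length + 1);
  }
  cache.set(key, result);
  return result;
}

expensiveReport('daily'); // computed once
expensiveReport('daily'); // served from the cache
```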
In general, the way to think about optimization is: whenever you look at a line of code, ask whether that computation really needs to happen within the time the user perceives. If it can be done at some other time, you can trade space for time and optimize it away.
Taken further, the formula for HTTP performance optimization is: compute ahead of time. Whenever possible, move calculations out of the request-handling phase of the HTTP service and into the Node.js startup phase; this alone achieves a good deal of performance optimization.