Why am I writing this article
Performance testing is a step almost every software product must go through before release, whether at the POC stage or before UAT. Companies run thousands of different business systems, so this article focuses on where performance testing is easily overlooked and on the problems I encountered in actual performance testing work. I hope it offers a little inspiration or help.
Performance testing tools
The performance testing tools I often use are:
- Apache Benchmark (AB)
- Apache JMeter
- HP LoadRunner (LR) (commercial, paid)
Personally, I find AB best suited to simple API pass-rate tests or quick TPS checks: simple and efficient. LR performs best, has good chart displays, and has few real shortcomings; its disadvantage is the cost (expensive is not LR's weakness, but mine). I chose JMeter because it is open source, has plenty of plug-ins, and its concurrency performance was a good fit for my systems.
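To make the AB use case concrete, here is a minimal Java sketch of what such a pure throughput test does: fire a fixed number of requests at one URL with fixed concurrency, then report the pass rate and a rough TPS, similar in spirit to `ab -n 1000 -c 50`. The endpoint URL and the request/concurrency counts are made-up placeholders, not values from any real system.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleTpsTest {
    static final String URL = "http://localhost:8080/api/health"; // hypothetical endpoint
    static final int REQUESTS = 1000;
    static final int CONCURRENCY = 50;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(URL)).GET().build();
        ExecutorService pool = Executors.newFixedThreadPool(CONCURRENCY);
        AtomicInteger ok = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(REQUESTS);
        long start = System.nanoTime();
        for (int i = 0; i < REQUESTS; i++) {
            pool.execute(() -> {
                try {
                    HttpResponse<Void> resp =
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                    if (resp.statusCode() == 200) ok.incrementAndGet();
                } catch (Exception ignored) {
                    // a failed request simply does not count toward the pass rate
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("pass rate: %.1f%%, TPS: %.1f%n",
                100.0 * ok.get() / REQUESTS, ok.get() / seconds);
    }
}
```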
How to establish a stress test plan
The most difficult part of performance testing is developing an effective test plan. Unlike functional testing, which leans toward business understanding, performance testing leans toward simulating what problems the system will exhibit under the most realistic production conditions.
My suggestions are as follows:
1. Communicate well with the system architect to obtain the overall system architecture and establish the starting point for stress testing (usually the gateway, though different systems may have multiple service gateways or third-party system gateways).
2. Obtain the focus of the performance test from the business architects. In the era of big data and microservices, business systems are often split into many sub-services, and sometimes, to produce report data, it makes sense to run targeted performance tests on specific sub-services (collecting data wherever performance falls short) rather than wasting energy testing everything.
3. Analyze the data promptly and turn it into reports (a small sketch of this step follows the list).
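For step 3, here is a minimal sketch of the kind of number-crunching that typically goes into a report, assuming the raw response times (in milliseconds) have already been exported from the load tool; the sample values in `main` are stand-in data, not real measurements.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Turns raw response-time samples (milliseconds) into the numbers that
// usually go into a performance report: average, p50, p95, p99, max.
public class LatencyReport {
    public static void print(List<Long> samplesMs) {
        Collections.sort(samplesMs);
        double avg = samplesMs.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.printf("samples=%d avg=%.1fms p50=%dms p95=%dms p99=%dms max=%dms%n",
                samplesMs.size(), avg,
                percentile(samplesMs, 50), percentile(samplesMs, 95),
                percentile(samplesMs, 99), samplesMs.get(samplesMs.size() - 1));
    }

    // nearest-rank percentile on an already sorted list
    static long percentile(List<Long> sorted, int p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        // stand-in data; in practice this would come from the tool's result file
        print(new ArrayList<>(List.of(12L, 15L, 14L, 90L, 13L, 11L, 250L, 16L)));
    }
}
```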
Performance testing pitfalls (hard-earned lessons)
I have fallen into the following pits (or made the following mistakes):
- Not benchmarking before the stress test (later, under the guidance of a senior technical lead, I learned that a stress test must start from a baseline, and the baseline should be established by simulating the actual conditions of the business system).
- Technically speaking, the goal of a stress test is to push hardware resources to their limit; every server core sitting at around 90% utilization shows the CPU's capacity is actually being used. (This is the opposite of a developer's instinct: on our own machines, the lower the CPU usage, the happier we are, whereas on a server under test, near-full utilization is what you want.)
- Distinguish between compute-intensive and I/O-intensive service scenarios (see the thread-pool sizing sketch after this list).
- Log printing is genuinely time-consuming; asynchronous writing is recommended (a minimal async-writer sketch also follows this list).
- Personally, I think Nmon’s chart is really good.
- Do not run the load generator with too many threads, to avoid extra performance loss from frequent CPU context switching.
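On the compute-intensive vs. I/O-intensive point, a common rule of thumb (popularized by *Java Concurrency in Practice*) sizes worker pools at roughly cores + 1 for CPU-bound work and cores × (1 + wait/compute) for I/O-bound work. A minimal sketch follows; the 50ms/5ms wait-to-compute ratio is an assumed example, not a measured value.

```java
// Rule-of-thumb pool sizing:
//   CPU-bound:  roughly one thread per core (cores + 1 is common)
//   I/O-bound:  cores * (1 + waitTime / computeTime)
public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        int cpuBound = cores + 1;

        // Assumed ratio: each request spends ~50ms waiting on I/O per ~5ms of CPU work.
        double waitMs = 50, computeMs = 5;
        int ioBound = (int) (cores * (1 + waitMs / computeMs));

        System.out.printf("cores=%d cpuBoundPool=%d ioBoundPool=%d%n",
                cores, cpuBound, ioBound);
    }
}
```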
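And on the async-logging point: the sketch below hand-rolls a queue plus a single writer thread purely to show why the hot path gets cheaper; in a real project you would reach for Log4j2's async loggers or Logback's AsyncAppender instead of writing this yourself.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// The request thread only enqueues the message; a single background
// thread does the slow write, so logging no longer blocks the hot path.
public class AsyncLogger {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    public AsyncLogger() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String line = queue.take();  // blocks until a message arrives
                    System.out.println(line);    // stand-in for the slow disk write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "log-writer");
        writer.setDaemon(true);
        writer.start();
    }

    // called on the request thread: cheap, never touches the disk
    public void log(String msg) {
        queue.offer(msg); // drops the message if the queue is full, protecting latency
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLogger log = new AsyncLogger();
        log.log("request handled in 12ms");
        Thread.sleep(100); // give the daemon writer time to flush before exit
    }
}
```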
Welcome to follow my official account, Big Blue Cat, where I keep sharing practical content on big data and Spring Cloud~