Talking About Performance Testing in Software Testing

This is the 15th day of my participation in Gwen Challenge

Most of us know that software has black-box and white-box testing, but performance testing is often overlooked.

In this article, I'll walk you through performance testing. Let's learn ~🧐

1. 🤪 Performance test concept

1. Why do we need performance testing?

(1) In October 2007, the Beijing Organizing Committee for the Olympic Games (BOCOG) began pre-selling tickets for the 2008 Olympic Games. The ticketing website was overwhelmed by the flood of visitors, crashed shortly after sales opened, and pre-sales had to be suspended.

(2) On November 22, 2009, with Christmas approaching, the volume of goods traded on eBay rose 33% over the same period the previous year. It was this extra 33% of load that brought the eBay website down, causing sellers to lose about 80% of that day's sales, a heavy loss indeed.

(3) Since its launch in 2010, the booking website 12306.com has been criticized for crashing during the Spring Festival travel rush every year, leaving users unable to log in to buy tickets. In 2014, 12306 even had a security problem that let users easily view strangers' ID numbers, mobile phone numbers, and other personal information.

These examples make it clear that both the Olympic ticket pre-sale failure and the collapse of 12306's booking system were caused by missing or insufficient performance testing. Therefore, as testers, besides basic functional testing of the software, we also need to test its performance; performance testing is a very important and very necessary kind of test.

2. What is performance testing?

Performance testing uses tools to simulate normal, peak, and abnormal load conditions and measures the system's performance indicators under each. It verifies whether the software meets the performance requirements users expect, and at the same time it uncovers possible performance bottlenecks and defects in the system so that its performance can be optimized.

3. Purpose of performance test

The purpose of performance testing is as follows:

  • Verify that system performance meets the expected requirements, including the system's efficiency, stability, reliability, and security.
  • Analyze the running state of the system under various load levels to make performance tuning more efficient.
  • Identify defects: look for possible performance problems, locate system bottlenecks, and solve them.
  • Tune the system: find the optimal balance between system design and resources to improve and optimize performance.

2. 🤐 Performance test indicators

The performance test has the following six indicators:

  • Response time
  • Throughput
  • Number of concurrent users
  • TPS (Transactions per Second)
  • Hits per second (click rate)
  • Resource utilization

The following will focus on these six indicators.

1. Response time

Response time is the time the system takes to respond to a user request.

This covers the whole journey from the moment the user sends a request from the client until the client receives the returned data, including the processing time of all the middleware along the way (servers, databases, and so on).

From the moment the client sends a request to the moment the client receives the returned data, the system's response time is t1 + t2 + t3 + t4 + t5 + t6, where each ti is the time spent at one stage along the request path (client, network, servers, database, and the return trip).

In general, the shorter the response time, the faster the response and the better the software performs. However, what counts as acceptable depends on the user's needs. For example, a train-ticket query might be expected to complete within 2 s, while a movie download that finishes within a few minutes already means the website is fast; the target must be set according to the actual situation.
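As a minimal sketch of how response time can be measured, the following Python snippet times a hypothetical `handle_request()` function (a stand-in for a real call to the system under test, not part of any real tool) with `time.perf_counter`:

```python
import time

def handle_request():
    """Hypothetical stand-in for a real request to the system under test."""
    time.sleep(0.05)  # simulate 50 ms of server-side work
    return "ok"

def measure_response_time():
    """Time one request from send to receive."""
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start

print(f"response time: {measure_response_time():.3f}s")
```

In a real test, `handle_request` would issue an actual HTTP call, and the measurement would be repeated many times to report averages and percentiles rather than a single sample.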

2. Throughput

Throughput refers to the amount of work that a system can accomplish per unit time. It measures the processing capacity of a software system server.

Throughput can be measured in requests per second, pages per second, visitors per day, traffic processed per hour, etc.

Throughput is a very important indicator of a system's load capacity: the larger the throughput, the more data the system processes per unit of time and the stronger its load capacity.
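Throughput follows directly from the definition above: work completed divided by elapsed time. Here is a toy illustration; the simulated 10 ms handler is an assumption standing in for a real server:

```python
import time

def handle_request():
    """Hypothetical handler; assume roughly 10 ms of work per request."""
    time.sleep(0.01)

def measure_throughput(n_requests=20):
    """Issue n_requests sequentially and return requests per second."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request()
    elapsed = time.perf_counter() - start
    return n_requests / elapsed

print(f"throughput: {measure_throughput():.1f} req/s")
```

Real load tools issue requests from many concurrent workers, so measured throughput also reflects how well the server handles parallelism, not just per-request speed.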

3. Number of concurrent users

The number of concurrent users is the number of users accessing the system at the same time.

The more concurrent users there are, the greater the impact on system performance: a large number of concurrent users can make the system respond slowly and become unstable. Concurrent access must be considered in the design of a software system, and test engineers must also cover concurrent access in performance testing.

4. TPS(Transaction Per Second)

TPS is the number of transactions a system can process per second; it is an important measure of the system's processing capacity.

5. Hits per second (click rate)

The click rate, better described as hits per second, is the number of HTTP requests users submit to the web server per second. It is a performance indicator specific to web applications. It can be used to evaluate the load users generate and to judge whether the system is stable, but it is only a reference metric for gauging web server performance.

6. Resource utilization

Resource utilization is the degree to which the software uses system resources, including CPU usage, memory usage, and disk usage. It is an important parameter for analyzing performance bottlenecks.

3. 😶 Types of performance tests

There are six types of performance tests:

  • Load testing
  • Stress testing
  • Concurrency testing
  • Configuration testing
  • Reliability testing
  • Capacity testing

The following sections will focus on the six types of performance testing mentioned above.

1. Load test

(1) Definition

Load testing gradually increases the load on the system and measures how its performance changes, in order to determine the maximum load the system can bear while still meeting its performance indicators.

(2) Take an example

A load test is similar to weightlifting: weight is added to the bar again and again to determine the maximum the athlete can lift while remaining in normal physical condition.

The premise of a load test is that the performance indicators are still met. For example, suppose a system's required response time is no more than 2 s. Under that premise, keep increasing the number of visiting users; if the response time starts to exceed 2 s once visits pass 10,000, then the maximum load is 10,000 users under the constraint that the response time stays within 2 s.
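The ramp-up described above can be sketched in a few lines. The response-time model below is purely illustrative (a real test measures these numbers instead of computing them), but the search loop mirrors how a load test determines the maximum load:

```python
def simulated_response_ms(users):
    """Toy model: response time in ms grows as concurrent users increase.
    Illustrative only; real values come from actual measurements."""
    return 1000 + 100 * (users // 1000)

def find_max_load(target_ms=2000, step=1000, limit=20000):
    """Increase the load step by step; return the largest load whose
    response time still meets the target."""
    max_ok = 0
    for users in range(step, limit + 1, step):
        if simulated_response_ms(users) <= target_ms:
            max_ok = users
        else:
            break  # target exceeded: stop ramping
    return max_ok

print(find_max_load())  # 10000 under this toy model
```

With this toy model the loop stops at 10,000 users, matching the example in the text: the last load level whose response time stays within 2 s.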

2. Stress test

(1) Definition

Stress testing, also called strength testing, gradually increases the pressure on the system and measures how its performance changes, pushing some resource to the edge of saturation or the system to the point of collapse, in order to determine the maximum pressure the system can bear.

(2) The difference between stress testing and load testing

A load test finds the maximum load the system can bear while the performance indicators are still met, whereas a stress test pushes the system's performance to its limit.

Stress testing can expose bugs that only appear under high load conditions, such as synchronization problems, memory leaks, and so on.

(3) Peak test

In performance testing there is another kind of stress test called a peak (spike) test, in which the maximum pressure is loaded onto the system all at once (rather than step by step), so that the software is tested running under maximum pressure from the first moment.

3. Concurrent testing

(1) Definition

Concurrent testing tests whether there are deadlocks or other performance problems when multiple users concurrently access the same application, module, or data record by simulating concurrent user access.

(2) Take an example

There is generally no fixed standard for concurrency testing; it simply checks whether accidents happen under concurrent access, and almost all performance tests involve some of it. For example, when several users query the same data while other users are updating that data at the same time, anomalies such as database access errors or write errors may occur.
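A minimal concurrency sketch in Python: several threads update a shared counter at the same time. The lock models the synchronization a correct system needs; a missing lock causing lost updates is exactly the kind of defect a concurrency test is meant to expose:

```python
import threading

counter = 0
lock = threading.Lock()

def update(n):
    """Each simulated user increments the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:  # without this lock, concurrent updates can be lost
            counter += 1

# Eight simulated concurrent users
threads = [threading.Thread(target=update, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 80000: no updates are lost when access is synchronized
```

In a real concurrency test the "counter" would be a database row or shared record, and the test would check for deadlocks and corrupted data rather than a simple total.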

4. Configuration testing

(1) Definition

Configuration testing refers to adjusting the software and hardware environments of a software system, testing the impact of various environments on system performance, and finding the optimal allocation principle of system resources.

(2) Take an example

Configuration testing does not change the code; it only changes the software and hardware configuration, for example installing a higher version of the database or fitting a faster CPU and more memory, improving software performance by changing the external configuration.

5. Reliability test

(1) Definition

Reliability testing applies a certain business load to the system and lets it run continuously for a period of time (for example, 7*24h) to check whether the system stays stable under those conditions.
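A reliability (soak) test can be sketched as a loop that keeps a steady load on the system for a fixed duration and counts failures. The `handle_request` function is a hypothetical stand-in, and the duration here is shortened to a fraction of a second, where a real run would last hours or days:

```python
import time

def handle_request():
    """Hypothetical stand-in for one business operation on the system under test."""
    return "ok"

def soak_test(duration_s):
    """Run requests continuously for duration_s seconds; count successes and errors."""
    ok = errors = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            handle_request()
            ok += 1
        except Exception:
            errors += 1
    return ok, errors

ok, errors = soak_test(0.2)  # fractions of a second here; days in a real test
print(ok, errors)
```

A real soak test would also sample resource usage over time, since slow memory leaks only show up after hours of sustained load.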

6. Capacity test

(1) Definition

Capacity test refers to testing the maximum number of users and storage capacity supported by the system under certain hardware, software and network environments.

(2) Take an example

Capacity tests usually involve the database and system resources (such as CPU, memory, and disk). They are used to plan and optimize database and system resources for future growth in requirements, such as increases in users and business volume.

4. 😲 Performance test process

1. Performance test process

The overall flow of performance testing runs through seven stages: analyze the performance test requirements, develop a test plan, design test cases, write test scripts, execute and monitor the tests, analyze the results, and write the performance test report.

2. Performance test process analysis

(1) Analyze performance test requirements

In the requirement-analysis stage, testers collect all kinds of information about the project, communicate with the developers, gain a solid understanding of the whole project, analyze which parts need performance testing, and determine the test targets.

For example, if the customer requires the response time of a product's query function to be under 2 s, it must be determined for how many users the response time stays under 2 s. A newly launched product may not have many users yet, but the user base may grow dramatically within a few years, so the team must decide whether to test high-concurrency access, and the response time under that load, during performance testing.

(2) Develop a performance test plan

  • Determine the test environment: physical environment, production environment, tools and resources available to the test team, etc.
  • Determine performance acceptance criteria: Determine overall goals and limits for response time, throughput, and utilization of system resources (CPU, memory, and so on).
  • Design test scenarios: analyze product services and user usage scenarios, design scenarios that meet user usage habits, and sort out a business scenario table to provide a basis for writing test scripts.
  • Preparing test data: A performance test simulates real-world scenarios. For example, to simulate high concurrency, you need to prepare data such as the number of users, working hours, and test duration.

(3) Design test cases

A performance test case is used to prepare data for the test according to the test scenario. For example, to simulate high user concurrency, the number of concurrent users can be 100 or 1000, respectively. In addition, various situations such as user active time, access frequency, and scenario interaction should also be considered. Testers can design enough test cases from the business scenario table in the test plan to achieve maximum test coverage.

(4) Write performance test scripts

  • Choose the right protocol.
  • Select a scripting language based on tool support and tester familiarity.
  • When writing test scripts, code writing specifications should be followed to ensure the quality of the code.
  • Maintain and manage scripts well.

(5) Test execution and monitoring

1) Know a few indicators

Performance indicators: watch how the performance indicators under test change.

Resource usage and release: CPU, memory, disk, and network usage. Check whether all resources can be released for subsequent services after the performance test is stopped.

Warning messages: a software system generally issues warning messages when something goes wrong; when one appears, testers should check it promptly.

Log check: Frequently analyzes system logs, including operating system and database logs.

2) Impact of results

Performance test monitoring plays a very important role in the analysis of performance test results and software defects.

During the testing process, if the results do not conform to the expected situation, the tester should adjust the system configuration or modify the program code to locate the problem.

Due to the complexity and variability of the data to be monitored during the execution of performance testing, it requires testers to have a very clear understanding of the monitored data indicators and to be very familiar with the performance testing tools. As a performance tester, you should continue to work hard, learn deeply, and accumulate knowledge and experience to do better.

(6) Analysis of operation results

After the performance test is completed, the tester needs to collect and analyze the test data, and compare the test data with the performance indexes required by the customer. If the performance does not meet the customer’s requirements, the tester needs to conduct performance tuning and re-test until the product performance meets the customer’s requirements.
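The comparison step can be as simple as checking each measured metric against the customer's target. The target and measured values below are hypothetical examples:

```python
# Hypothetical customer targets and measured results
targets = {"response_time_s": 2.0, "throughput_rps": 500}
measured = {"response_time_s": 1.4, "throughput_rps": 620}

def failed_metrics(measured, targets):
    """Return the list of metrics that do not meet their targets."""
    failures = []
    if measured["response_time_s"] > targets["response_time_s"]:
        failures.append("response time")
    if measured["throughput_rps"] < targets["throughput_rps"]:
        failures.append("throughput")
    return failures

print(failed_metrics(measured, targets))  # [] means the requirements are met
```

Any metric listed in the result is a candidate for tuning and retesting, as described above.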

(7) Performance test report

After the performance test is completed, you need to write a performance test report, which describes the performance test objectives, performance test environment, performance test cases and scripts, performance test results, problems encountered during the performance test, and solutions. A software product cannot be tested only once. Therefore, a performance test report must be recorded and saved for reference in the next performance test.

5. 🤪 Conclusion

For testers, black-box and white-box testing are not enough; performance testing matters too. Performance testing is a crucial part of software quality, and software that skips it is often plagued with problems. I believe that, after reading the above, you now have a basic understanding of performance testing.

So much for performance testing! If you need to know other content related to software testing, you can go to the “Software Testing” section to view and learn ~

At the same time, if anything is unclear or mistaken, please leave a message in the comment section or send me a private message

  • Follow the official account Monday Lab, where I share useful learning material from time to time; more interesting columns are waiting for you to unlock ~
  • If this article is useful to you, be sure to like it and follow it
  • See you next time! 🥂 🥂 🥂