This article is an internal sharing document that walks through single-interface testing and system-level interface testing, from the basics to more advanced topics.
It draws on and quotes content from a number of peer articles; the sources are marked wherever content is quoted, and you can follow the links to read the originals.
The primary goal of testing is quality assurance, so let's look at interface testing from the quality-assurance dimension.
1. Understand interfaces
1.1 What does the Interface Do?
Let's start from a functional perspective, using the business process of a user purchasing an item as an example:
- New user registration: the registration interface adds a user record (Create).
- Login after successful registration: the login interface queries the password by user name and verifies it; after verification succeeds, it creates a token signature according to its rules and encryption scheme and returns the token to the client.
- Product search: the product search interface searches for the target product, which is essentially a conditional query against the product database (Select).
- View product details: product ID + product details interface, querying the product details (Select).
- Add a product to the shopping cart: the add-to-cart interface updates the shopping cart data and subtracts from the inventory data (Update).
- Create and select an address: the address interface adds a new address record (Create).
- Order & settlement: the order interface adds a new order record (Create).
- Payment: the payment interface. If payment succeeds, a payment record is added, the order status is updated to paid, an electronic invoice record is added, and a logistics record is added.
The function of an interface is mainly data interaction between the client and the server. That is, through interfaces the client can create, read, update, and delete (CRUD) back-end data, which is what realizes the interaction between the user and the product.
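To make this concrete, here is a sketch of how the purchase flow above might map onto REST endpoints. All methods and paths are hypothetical, for illustration only.

```python
# Hypothetical mapping of the purchase flow to REST endpoints (illustrative only).
PURCHASE_FLOW = [
    ("POST", "/api/users",          "Create: register a new user"),
    ("POST", "/api/login",          "Select + token issuance: verify password, return token"),
    ("GET",  "/api/products?q=...", "Select: conditional product search"),
    ("GET",  "/api/products/{id}",  "Select: product details"),
    ("PUT",  "/api/cart",           "Update: add to cart, decrement inventory"),
    ("POST", "/api/addresses",      "Create: new shipping address"),
    ("POST", "/api/orders",         "Create: new order"),
    ("POST", "/api/payments",       "Create payment record + Update order status"),
]
```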
1.2 How to Ensure Interface Quality
Taking the JD.com registration interface as an example, from what dimensions do we need to ensure quality?
Analyzing the registration interface
Registration page: (screenshot omitted)
HTTP registration interface request header:
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7
Connection: keep-alive
Content-Length: 6028
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Cookie: shshshfpb=sJZnAsUTcZJjuNedVMhztBA%3D%3D;
Host: reg.jd.com
Origin: https://reg.jd.com
Referer: https://reg.jd.com/reg/person?ReturnUrl=https%3A//www.jd.com/
sec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"
sec-ch-ua-mobile: ?0
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36
X-Requested-With: XMLHttpRequest
Request body (some unreadable parameters are omitted):
uuid: 68455e864c894bda845de413849204d0
authCodeStrategy: 1 (verification code policy)
phone: +00861553605XXXX (mobile phone number)
mobileCode: 116547 (mobile verification code)
regName: Demo83520 (registered user name)
email: [email protected] (registered email address)
mailCode: 661591 (email verification code)
pwd: MvaEqtzkZ4/R4P3wMoRIuZpA4egWYBmz7bikspIWRYwozJgOHJQlQW8POp8elFhi7OXchoz1OPRoFwxqjWpwcWQCUABx5oovhFxLZ0p8CqB3s0lNDz9QlF8ZYMBanwk+Cne4mXMOTop9OGD8XF8YPqb4qkox8A= (encrypted password string)
Next, let's analyze the registration interface from the perspective of interface development and design. From a pure black-box point of view, its design logic can be inferred from the UI interaction and from the interface parameters.
Analysis from the UI interaction perspective
- At most 3 accounts can be registered with the same mobile phone number (each with a different email address).
- User name: 4 to 20 characters, which may include Chinese characters, English letters, digits, hyphens (-), and underscores (_). The user name must be unique.
- Password: 8 to 20 characters; a combination of letters, digits, and symbols is recommended. Frequent registration attempts are blocked.
Interface parameter analysis (taking several key parameters as examples; see the test sketch after the list)
- uuid: is this a unique user ID? What is its generation rule?
- phone: is the format of the registered mobile phone number verified?
- mobileCode: are the length and character type of the mobile verification code verified?
- regName: are the user name rules verified?
- email: is the registration email format verified?
- mailCode: are the length and character type of the email verification code verified?
- pwd: the encrypted password string.
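Questions like these translate naturally into parameterized test cases. Below is a minimal pytest sketch for the phone-format check; the endpoint, the chosen payload fields, and the `success` response flag are assumptions based on the captured request above.

```python
import pytest
import requests

REG_URL = "https://reg.jd.com/api/register"  # hypothetical endpoint, not the real path

# Illegal and boundary values for the `phone` parameter (illustrative only).
BAD_PHONES = ["", "123", "abcdefghijk", "+0086155", "155360512345678901"]

@pytest.mark.parametrize("phone", BAD_PHONES)
def test_register_rejects_bad_phone(phone):
    payload = {"uuid": "68455e864c894bda845de413849204d0",
               "phone": phone,
               "regName": "Demo83520"}
    resp = requests.post(REG_URL, data=payload, timeout=5)
    # Assumption: the server answers 4xx, or 200 with a business-level failure flag.
    assert resp.status_code >= 400 or resp.json().get("success") is False
```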
2. How to test the interface
Interface testing mainly consists of three steps (sketched in code after the list):
- Prepare test data (not always necessary);
- Initiate the request with an API testing tool;
- Verify the returned Response.
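Here is a minimal sketch of those three steps using Python and `requests`; the URL, payload, and response fields are placeholders, not a real service.

```python
import requests

BASE = "https://example.com/api"  # placeholder base URL

def test_create_user():
    # Step 1: prepare test data.
    payload = {"name": "demo_user", "email": "demo_user@example.com"}

    # Step 2: initiate the request.
    resp = requests.post(f"{BASE}/users", json=payload, timeout=5)

    # Step 3: verify the response.
    assert resp.status_code == 201
    body = resp.json()
    assert body["name"] == payload["name"]
    assert "id" in body
```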
(1) Test data
Ways to generate test data:
- Generate data through the API;
- Construct data directly in the database;
- Generate data through UI operations.
When to generate:
- Create in real time: generated during test case execution (makes test execution take longer);
- Create in advance: generate all test data in batches before test execution (data created in advance may have been modified in the meantime and become unusable);
- Note that an unstable test environment may prevent test data from being created smoothly.
Dirty data:
- Concept: Dirty data refers to data that has been modified unexpectedly before it is actually used.
Classification of test data:
- “Dead water data”: relatively stable data whose state does not change during use and which can be used more than once. This kind of data is suitable for creation in advance.
- Note: how stable “dead water data” really is depends on the test purpose. For example, user data is basically stable in tests that are not user-related, but test cases dedicated to testing user accounts often involve functions such as account cancellation, so the same data is unstable there.
- “Live water data”: data that can only be used once or is modified frequently, such as coupons and orders (see the fixture sketch below).
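In pytest terms, “dead water” data maps naturally onto a broad fixture scope and “live water” data onto function scope. A minimal sketch, where `create_user`, `delete_user`, `issue_coupon`, and `revoke_coupon` are hypothetical helpers:

```python
import pytest

@pytest.fixture(scope="session")
def stable_user():
    # "Dead water" data: created once before the run, reused by many cases.
    user = create_user(name="shared_demo_user")  # hypothetical helper
    yield user
    delete_user(user)                            # hypothetical cleanup helper

@pytest.fixture  # default function scope
def fresh_coupon(stable_user):
    # "Live water" data: usable only once, so build a new one for every case.
    coupon = issue_coupon(user=stable_user, amount=10)  # hypothetical helper
    yield coupon
    revoke_coupon(coupon)                               # hypothetical cleanup helper
```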
(2) Test case design
Use cases are designed along two dimensions: functional requirements and non-functional requirements.
Functional requirements
Use-case Design Method Reference: Software Testing basics — Process and Use-case Design method — Piecesof
Non-functional requirements – security dimension
- Encryption of sensitive information: are passwords encrypted in transit between the front end and the back end?
- SQL injection? (When SQL statements are dynamically constructed from user input, unexpected risks arise if the user input injects malicious SQL and the parameters used in the dynamically constructed statement are not validated by the web application.)
- Logic vulnerabilities:
- Batch registration / duplicate submission? (For example, under high concurrency with identical parameters, is only one registration request accepted? See the sketch after this list.)
- Can more than 3 users be registered with the same mobile phone number and different email addresses?
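A minimal sketch of the duplicate-submission check: fire the same registration request concurrently and assert that at most one succeeds. The endpoint and the `success` response field are assumptions.

```python
import concurrent.futures
import requests

REG_URL = "https://reg.jd.com/api/register"  # hypothetical endpoint
PAYLOAD = {"phone": "+00861553605XXXX", "regName": "Demo83520"}  # identical every time

def register() -> bool:
    resp = requests.post(REG_URL, data=PAYLOAD, timeout=5)
    # Assumption: the response body carries a boolean `success` field.
    return resp.status_code == 200 and resp.json().get("success") is True

def test_concurrent_duplicate_registration():
    # 20 identical requests in parallel; at most one registration should succeed.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(lambda _: register(), range(20)))
    assert sum(results) <= 1
```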
Non-functional requirements – performance dimension
- Does baseline performance meet the standard? For example, must a single request complete in under 500 ms?
- High-concurrency performance evaluation: use performance tests to evaluate the registration interface. See: Server Performance Testing – Getting Started Guide and Server Performance Testing – Tools.
Functional requirements dimension: at least 29 use cases using the orthogonal method. Non-functional requirements – security dimension: 4 use cases. Non-functional requirements – performance dimension: 2 use cases.
(3) How to write interface assertions? (a combined sketch follows the list)
- HTTP Response assertions:
  - HTTP status code
  - Response Body field and structure verification
  - Response Header
- Data assertions:
  - Assertions on data in the database
- Does the response time meet the requirements?
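A minimal sketch combining all three kinds of assertions; the endpoint, the response schema, and the `orders` table are assumptions, and sqlite3 stands in for the real database client.

```python
import sqlite3
import requests

def test_create_order_assertions():
    resp = requests.post("https://example.com/api/orders",  # placeholder URL
                         json={"product_id": 42, "count": 1}, timeout=5)

    # 1. HTTP response assertions: status code, body fields/structure, header.
    assert resp.status_code == 200
    body = resp.json()
    assert {"order_id", "status"} <= body.keys()
    assert resp.headers["Content-Type"].startswith("application/json")

    # 2. Data assertion: the order really landed in the database.
    conn = sqlite3.connect("shop.db")  # stand-in for the real DB client
    row = conn.execute("SELECT status FROM orders WHERE id = ?",
                       (body["order_id"],)).fetchone()
    assert row is not None and row[0] == "created"

    # 3. Response time requirement (e.g. under 500 ms).
    assert resp.elapsed.total_seconds() < 0.5
```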
3. How to test a system composed of interfaces?
(1) Complex system test case structure
Reference: HttpRunner’s step/case/suite
Test Steps (testStep) -> Test Cases (testCase) -> Test Scenarios / Test Suites (testSuite)
Test Steps (testStep)
For interface testing, each test step should correspond to an API’s request description.
Test Cases (TestCases)
A test case should be designed to test a specific piece of functional logic and should contain at least the following:
- Clear test purpose
- Explicit input
- Clear operating environment
- Explicit description of test steps
- Clear expected results
Test case design principles:
- Test cases should be complete and independent, and each test case should run independently;
- Test cases are composed of test scripts and test data.
- Test scripts: Test scripts focus only on the logic of the tested business functions, including preconditions, test procedures, and expected results.
- Test data: the business data corresponding to the test.
- Separation of test data and test scripts: this makes data-driven testing easy to implement. By passing different sets of data into the same test script, the same business function can be verified under different data logic. For example, for a purchase interface, product prices differ between members and non-members, and so does coupon logic; with different user data, one script can test the shopping logic of both members and non-members (see the sketch below).
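A minimal data-driven sketch of that member/non-member example; the prices, tokens, and endpoint are hypothetical.

```python
import pytest
import requests

BASE = "https://example.com/api"  # placeholder

# Test data kept separate from the test script (hypothetical figures).
PRICE_CASES = [
    {"user_token": "member-token",     "product_id": 42, "expected_price": 90},
    {"user_token": "non-member-token", "product_id": 42, "expected_price": 100},
]

@pytest.mark.parametrize("case", PRICE_CASES)
def test_purchase_price(case):
    # Same script, different data: member vs. non-member pricing logic.
    resp = requests.get(f"{BASE}/products/{case['product_id']}/price",
                        headers={"Authorization": case["user_token"]}, timeout=5)
    assert resp.status_code == 200
    assert resp.json()["price"] == case["expected_price"]
```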
Test Case Set (testSuite)
A test suite is an unordered set of test cases; the cases in the set should be independent of each other, with no ordering dependencies.
If an ordering dependency exists, for example between the login and order functions, the correct approach is to make login a precondition (setup step) of the order test case, as sketched below.
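A minimal sketch of turning login into a precondition with a pytest fixture, rather than a separate, order-dependent case; the endpoints and credentials are placeholders.

```python
import pytest
import requests

BASE = "https://example.com/api"  # placeholder

@pytest.fixture
def auth_token():
    # Login is a precondition of the order case, not a separate dependent case.
    resp = requests.post(f"{BASE}/login",
                         json={"user": "demo", "pwd": "***"}, timeout=5)
    assert resp.status_code == 200
    return resp.json()["token"]

def test_create_order(auth_token):
    resp = requests.post(f"{BASE}/orders", json={"product_id": 42},
                         headers={"Authorization": auth_token}, timeout=5)
    assert resp.status_code == 200
```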
(2) Test data management
Source: Sun Gaofei – Data management strategies in testing frameworks
Two properties of test data:
- Scope: shared data (testSuite level) vs. isolated data (testCase level)
- Creation method: calling development interfaces, using SQL directly, or a purpose-built data template
The scope of test data
Shared data: test data shared by all or some cases.
- Advantages: fast; the data only needs to be created once and can be used by many cases.
- Disadvantages:
  - The data is prepared for many cases at once, and it is hard to tell which data serves which case, so cases lose readability.
  - Cases affect each other, because the functionality under test itself writes to the database; the failure or success of one case can easily cause a whole batch of cases to fail.
  - The data itself does not scale well: a small change can have a wide impact, several or even dozens of case failures are common, and script maintenance costs are high.
Isolated data: each case has its own test data, and data does not interfere across cases. That is, each case performs its own setup and teardown: it creates its data before execution and destroys it afterwards.
- Advantages: cases do not affect each other and neither does their data; stability, maintainability, and readability all improve greatly.
- Disadvantages: slow. Very slow, because every case performs a lot of database I/O, and it is not unusual for maintaining data to take longer than calling the functionality under test. Even so, this is the most common approach we use in testing: it is slow and not trivial to implement, but the maintainability is just too tempting, and it frees us from maintaining unstable scripts all day. Since the continuous-integration strategy for interface and UI automation tests is usually to run on a schedule rather than on every code change, an extra ten minutes of run time hardly matters.
Sensitive data: sensitive information such as accounts, passwords, and keys can be stored in environment variables with restricted permissions.
The main reasons sensitive information must not be disclosed:
- Stronger access control: a project may involve many developers, all of whom can access the code repository, but not everyone should have access to extremely sensitive information such as keys;
- Lower risk from code leaks: if the code leaks, the sensitive data should not leak with it.
Recommended solution:
- Only operations personnel (or core developers) have permission to log in to the servers;
- Operations personnel (or core developers) set the sensitive data as system environment variables on the machines that run the code;
- Ordinary developers only need to know the variable names and read the environment variables in code to obtain the sensitive data, as sketched below.
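A minimal sketch of reading sensitive data from environment variables in Python; the variable names and connection string are hypothetical.

```python
import os

# Hypothetical variable names; set by ops on the test machine, never committed.
DB_PASSWORD = os.environ["TEST_DB_PASSWORD"]
API_KEY = os.environ.get("TEST_API_KEY", "")  # optional, with a safe default

def db_dsn() -> str:
    # Only the variable name lives in the code; the secret stays on the machine.
    return f"mysql://tester:{DB_PASSWORD}@db.internal:3306/shop"
```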
How to construct data
Calling the development interface
- Advantages: relatively simple to implement in scripts; no deep understanding of the back-end database is required.
- Disadvantages:
  - The coupling is too high. If cases depend on the product's other interfaces to create data, they are bound to be non-isolated, and isolation is an important indicator of case quality. Once the data-creating interface has a bug, how many cases fail with it? In practice you may have to call N interfaces to create the data you need, and then you cannot tell which interface has the bug; this has become end-to-end testing, whereas being able to locate bugs quickly is another important indicator of case quality.
  - Even when you want isolated data, the product's interfaces often cannot destroy data for you. The most common example is logical (soft) deletion: the product interface does not actually delete the record from the database, it only sets a flag so that the record is no longer shown to users. So the data cannot really be isolated.
- Usage suggestion: not recommended. Although the script maintenance cost of this method is low, use-case coupling is high, isolation is poor, and locating problems is expensive: a bug in a called development interface can cause a large number of use cases to fail.
Using SQL directly: write SQL to create and destroy data.
- Advantages: good isolation and easy bug localization.
- Disadvantages: if testers write SQL directly in scripts, the difficulty and readability are not promising; it depends too much on each tester's individual ability, and the error rate is high. Fortunately, this can be mitigated by building support into the testing framework.
- Usage suggestion: apart from query SQL, use insert, update, and delete SQL with care, because the implementation cost and operational risk are high. It requires a good understanding of the database table structure and the business logic, since deleting data may affect the actual business or other colleagues' tests. A sketch follows.
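A minimal sketch of SQL-based data setup and teardown, using sqlite3 as a stand-in for the real database client and a hypothetical `users` table:

```python
import sqlite3
import uuid

def make_user(conn: sqlite3.Connection) -> str:
    # Setup: create exactly the row this case needs, with a unique name.
    name = f"tmp_user_{uuid.uuid4().hex[:8]}"
    conn.execute("INSERT INTO users (name, status) VALUES (?, ?)", (name, "active"))
    conn.commit()
    return name

def drop_user(conn: sqlite3.Connection, name: str) -> None:
    # Teardown: destroy only what this case created.
    conn.execute("DELETE FROM users WHERE name = ?", (name,))
    conn.commit()

conn = sqlite3.connect("shop.db")  # stand-in for the real DB client
name = make_user(conn)
try:
    ...  # run the case against this user
finally:
    drop_user(conn, name)
```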
Data template: create independent data templates for core business test data, which can be maintained by dedicated personnel.
Reference: The birth of test energy efficiency platform – international mall intelligent material platform · TesterHome
- Implementation idea: for data with complex structure, many associated services, and high risk when the data is abnormal (such as the product/material data of an e-commerce site), it is recommended to encapsulate general-purpose functions on top of the development interfaces, with corresponding exception handling and diagnosis.
- Advantages:
  - Dedicated development and maintenance greatly reduce the cost and risk of building complex data;
  - It saves functional testers the effort of constructing data by hand and breaks through the related testing-manpower bottleneck.
- Disadvantages: high development cost; it only pays off for heavyweight business systems. A sketch of the idea follows.
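A minimal sketch of what a data-template helper might look like; the template names, fields, and the idea of calling a creation interface are assumptions, not the platform from the referenced article.

```python
# Hypothetical data templates maintained by a dedicated owner.
PRODUCT_TEMPLATES = {
    "default":  {"title": "demo product", "price": 100, "stock": 10, "on_sale": True},
    "sold_out": {"title": "demo product", "price": 100, "stock": 0,  "on_sale": True},
}

def build_product(template: str = "default", **overrides) -> dict:
    """Return product data from a named template, with per-case overrides."""
    data = {**PRODUCT_TEMPLATES[template], **overrides}
    # In a real platform this would call the creation interface and
    # handle/diagnose failures centrally; here we just return the payload.
    return data

member_item = build_product("default", price=90)  # per-case tweak
```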
4. Evolution of interface testing
Reviewing the interface tests above, a few problems stand out:
- A complex system involves thousands of interfaces, so the regression-testing workload is large;
- Writing use cases is expensive, especially for parameter-heavy interfaces.
Here are a few examples of how interface testing can improve productivity.
(1) Heavy regression-testing workload? Record and replay production traffic.
Reference: Recording line traffic to do regression testing the correct way to open · TesterHome
(2) High cost of writing use cases? A universal automated interface-testing scheme.
Reference: Universal interface robustness scanning scheme – thumbs up
(3) Quickly verifying changes to the interface data structure? Automatic full-field verification.
Source: Interface automation full field verification · TesterHome
Implementation: define the expected format of the interface's returned data ("contract definition") and verify the actual response data format against it ("contract verification").
Verification principles (sketched in code below):
- The actual returned field names must be strictly equal to, or a superset of, the field names in the contract definition (depending on the matching mode);
- Field values may be matched by value equality or by type.
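A minimal sketch of such a contract check: recursively compare the actual response against a contract whose leaves are expected types (or literal values). The contract format here is an assumption, not necessarily the scheme from the referenced article.

```python
def verify_contract(actual, contract, strict=False):
    """Recursively check field names and value types/values against a contract."""
    if isinstance(contract, dict):
        assert isinstance(actual, dict), f"expected object, got {type(actual).__name__}"
        missing = contract.keys() - actual.keys()
        assert not missing, f"missing fields: {missing}"
        if strict:  # strict mode: field names must match exactly, no extras
            extra = actual.keys() - contract.keys()
            assert not extra, f"unexpected fields: {extra}"
        for key, sub in contract.items():
            verify_contract(actual[key], sub, strict)
    elif isinstance(contract, type):  # leaf defined by expected type
        assert isinstance(actual, contract), f"{actual!r} is not {contract.__name__}"
    else:                             # leaf defined by literal expected value
        assert actual == contract, f"{actual!r} != {contract!r}"

# Usage: contract fields must be present; values checked by type or literal value.
verify_contract({"id": 7, "name": "demo", "extra": 1},
                {"id": int, "name": str})  # passes: superset mode tolerates "extra"
```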