Overview

This specification guides the software testing of the project. It defines the testing theory, test types, test methods, test standards, and test process involved, so as to effectively ensure the quality of the software products. Project testing is a means of controlling software design and of checking and auditing product quality. Project testers should check and test software according to the requirements of this specification.

Test objectives

A. Testing is the process of executing a program with the intent of finding errors in it.

B. A good test case is one that is likely to find a hitherto undiscovered error.

C. A successful test is one that finds an error not discovered before.

== Software testing process ==

Doing a good project means doing the right thing, and doing things in the right way improves efficiency. After receiving a new project, the testing department needs to carry out the testing work step by step according to the following processes, which can be supplemented and improved according to the actual situation encountered in practice.

A. Test reference documents. Before testers begin their work, they need the documents provided by product personnel and developers; these formal documents provide the basis for the testing work. Specific reference documents include: the product requirements specification, the product design prototype, the database design scheme, the development department's coding standard, the developers' (front-end and back-end) task assignment table, etc.

B. Test flow chart

The basic flow chart of the test work is as follows:



== Software testing methods ==

According to the needs of the project, the main test types are functional testing and performance testing. At present, functional testing is the primary focus; its most commonly used techniques are equivalence class partitioning, boundary value analysis, and error guessing. All three are among the most common, typical, and important black-box testing methods. Other techniques that may be involved include the cause-effect graph method, decision tables, orthogonal analysis, and the scenario method.

A. Equivalence class partitioning divides all possible input data into several subsets. If any input in a subset has the same effect in exposing potential errors in the program, the subset constitutes an equivalence class. Selecting one representative value from each equivalence class yields good test coverage with a small number of test inputs.

B. Boundary value analysis selects input and output boundary values for testing. Because most software errors occur at the boundaries of the input or output ranges, the boundary values deserve focused testing; the usual choices are values exactly equal to, just greater than, or just less than a boundary. Methodologically, boundary value analysis supplements equivalence class partitioning, so the two techniques are often used in combination.

C. Error guessing is largely based on experience: it draws on the analysis of past test results and on intuition about the regularities of previously revealed defects to discover new ones.
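The combination of equivalence classes and boundary values can be sketched in code. The example below assumes a hypothetical password-length rule (valid length 6 to 16 characters) purely for illustration; the limits are not taken from this specification.

```python
# Boundary value analysis sketch for an assumed password-length
# rule: valid lengths lie in [MIN_LEN, MAX_LEN].
MIN_LEN, MAX_LEN = 6, 16

def is_valid_length(password: str) -> bool:
    """Accept passwords whose length is within [MIN_LEN, MAX_LEN]."""
    return MIN_LEN <= len(password) <= MAX_LEN

def boundary_values(lo: int, hi: int) -> list[int]:
    """Values just below, exactly at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# A handful of boundary lengths stands in for the full input space:
# each value represents its equivalence class (too short / valid / too long).
for length in boundary_values(MIN_LEN, MAX_LEN):
    candidate = "x" * length
    print(length, is_valid_length(candidate))
```

Six boundary lengths (5, 6, 7, 15, 16, 17) cover both invalid classes and the valid class, illustrating how a few representative inputs replace exhaustive testing.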

Example: For the “user login” function, test cases designed with the equivalence class partitioning and boundary value analysis methods include:

  1. Enter a registered user name and the correct password, and verify that the login succeeds.

  2. Enter a registered user name and an incorrect password, and verify that the login fails and the message is correct.

  3. Enter an unregistered user name and any password, and verify that the login fails and the message is correct.

  4. Leave both the user name and password empty, and verify that the login fails and the message is correct.

  5. Leave either the user name or the password empty, and verify that the login fails and the message is correct.

  6. With the verification code function enabled, enter the correct user name, password, and verification code, and verify that the login succeeds.

  7. With the verification code function enabled, enter the correct user name and password but an incorrect verification code, and verify that the login fails and the message is correct.
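The cases above lend themselves to a table-driven check. The sketch below runs them against a stub `authenticate()` function; the registered account ("alice"/"secret123") and the messages are illustrative assumptions, not part of any real system.

```python
# Table-driven sketch of the login test cases, using a stub in place
# of the real system under test. Credentials and messages are assumed.
REGISTERED = {"alice": "secret123"}

def authenticate(username: str, password: str) -> tuple[bool, str]:
    """Stub login: returns (success flag, user-visible message)."""
    if not username or not password:
        return False, "username and password are required"
    if username not in REGISTERED:
        return False, "user is not registered"
    if REGISTERED[username] != password:
        return False, "incorrect password"
    return True, "login succeeded"

CASES = [
    ("alice", "secret123", True),    # case 1: valid credentials
    ("alice", "wrongpass", False),   # case 2: wrong password
    ("mallory", "anything", False),  # case 3: unregistered user
    ("", "", False),                 # case 4: both fields empty
    ("alice", "", False),            # case 5: one field empty
]

for user, pwd, expected in CASES:
    ok, message = authenticate(user, pwd)
    assert ok == expected, (user, pwd, message)
```

Keeping the cases in a data table makes it cheap to add new equivalence classes later without touching the checking loop.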

Adding error guessing (which varies from person to person), we can supplement the following test cases:

  1. Whether the user name and password are case-sensitive;

  2. Whether the password box on the page is displayed encrypted;

  3. Whether the system prompts you to change the password upon the first successful login of a user created in the background system.

  4. Whether the functions of forgetting user name and password are available;

  5. Whether the length of user name and password is limited according to the design requirements;

  6. If the login function requires a verification code, click the verification code picture to check whether the verification code can be replaced and whether the new verification code is available.

  7. Whether the verification code is refreshed when the page is refreshed;

  8. If the captcha is time-sensitive, the validity of the in-time and out-of-time captcha needs to be verified respectively.

  9. If the session times out after a successful login, verify that the user is redirected to the login page when continuing to operate.

  10. Whether users of different levels, such as administrators and common users, have the correct permissions after logging in to the system.

  11. Whether the default focus of the page is located in the user name input box;

  12. Check whether shortcut keys such as Tab and Enter can be used normally.

Nonfunctional testing

In addition to explicit functional requirements, non-functional (implicit) requirements are also critical to a high-quality software system. Explicit functional requirements are easy to understand literally: they describe the specific functionality the software itself must implement. For example, “normal users can log in using the correct user name and password” and “unregistered users cannot log in” are typical explicit functional requirements. From the perspective of software testing, non-functional requirements mainly involve security, performance, and compatibility. None of the test case designs above considered non-functional requirements, yet these are often the key determinants of software quality.

Example: Let’s continue refining the “user login” test case.

Additional test cases for security include:

  1. Whether the user password is encrypted in background storage;

  2. Whether the user password is encrypted during network transmission;

  3. Whether the password has a validity period and whether a message is displayed asking you to change the password after the validity period expires.

  4. If no login is required, enter the URL in the address box of the browser to check whether the login page is redirected.

  5. Whether the password input box does not support copy and paste;

  6. Whether passwords entered in the password input box can be viewed in page source mode;

  7. Enter typical SQL injection attack strings in the user name and password input boxes respectively, and verify the page the system returns;

  8. Enter typical XSS cross-site scripting attack strings in the user name and password input boxes respectively, and verify whether the system behavior is hijacked.

  9. In the case of multiple login failures, whether the system will block subsequent attempts to deal with brute force cracking;

  10. The same user logs in to multiple browsers on the same terminal to check whether the mutual exclusion of the login function meets the design expectation.

  11. The same user logs in to browsers on multiple terminals to check whether the login is mutually exclusive.
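The SQL injection check (item 7 above) can be demonstrated with an in-memory SQLite table: when the query is parameterized, the classic `' OR '1'='1` payload stays plain data and cannot log in. The schema and account are illustrative assumptions.

```python
# SQL injection sketch: a parameterized query treats the payload as
# data, so the injection attempt must fail. Table and data are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret123')")

def login(name: str, password: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password),  # placeholders keep the payload as plain data
    ).fetchone()
    return row is not None

assert login("alice", "secret123")        # legitimate login succeeds
assert not login("alice", "' OR '1'='1")  # injection attempt rejected
```

Were the query built by string concatenation instead, the same payload would make the `WHERE` clause always true, which is exactly what this test case is designed to catch.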

Test cases from a performance stress testing perspective include:

  1. Check whether the response time of a single user login is less than 3 seconds.

  2. Whether the number of background requests triggered by a single user login is reasonable rather than excessive;

  3. Check whether the response time of a user login is less than 5 seconds in a high-concurrency scenario.

  4. Whether the monitoring indicators of the server meet expectations in high concurrency scenarios;

  5. In high-concurrency scenarios with rendezvous points, whether resource deadlocks or unreasonable resource waits occur;

  6. Check whether memory leaks occur on the server when a large number of users log in and out continuously for a long time.
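The response-time checks (items 1 and 3 above) can be sketched as timed calls, alone and under small-scale concurrency. `simulated_login()` and the thread count below are illustrative stand-ins for a real client driving the system under test.

```python
# Response-time sketch: time a simulated login call on its own and
# under concurrency. The sleep stands in for the real round trip.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_login() -> float:
    """Return the elapsed time of one (simulated) login request."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for the real request round trip
    return time.perf_counter() - start

# Single-user case: response time must stay under 3 seconds.
assert simulated_login() < 3.0

# Concurrent case: every response must stay under 5 seconds.
with ThreadPoolExecutor(max_workers=10) as pool:
    durations = list(pool.map(lambda _: simulated_login(), range(50)))
assert max(durations) < 5.0
```

A real performance test would replace the stub with HTTP calls and drive far higher concurrency with a dedicated load tool, but the pass criteria take the same shape.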

From the perspective of compatibility testing, test case supplements include:

  1. Verify that the login page is displayed and functions are correct in different browsers.

  2. Verify that the login page is displayed and functions are correct in different versions of the same browser.

  3. Verify that the login page is displayed and functions are correct in different browsers on different mobile devices.

  4. Verify that the login page is displayed and functions correctly in different resolutions.
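The four compatibility cases above amount to running the same checks over a configuration matrix. The sketch below builds that matrix as a cross product; the specific browser names, versions, and resolutions are assumptions for illustration.

```python
# Compatibility matrix sketch: the cross product of browsers,
# versions, and resolutions gives the configurations to test.
# The concrete values below are illustrative assumptions.
from itertools import product

browsers = ["Chrome", "Firefox", "Edge"]
versions = ["latest", "latest-1"]
resolutions = ["1920x1080", "1366x768", "375x667"]

matrix = list(product(browsers, versions, resolutions))
print(len(matrix))  # 3 browsers x 2 versions x 3 resolutions = 18
for browser, version, resolution in matrix:
    # In practice, each tuple would drive one run of the login checks.
    pass
```

Enumerating the matrix up front also makes the cost of compatibility testing explicit, which feeds directly into the risk/cost trade-off discussed below.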

High-quality software testing requires use case design to consider not only explicit functional requirements but also a series of non-functional requirements such as compatibility, security, and performance, which play an important role in the quality of the software system. However, test case design is inexhaustible, and engineering practice is inevitably constrained by time and economic cost, so the testing department must balance defect risk against R&D cost.

Defect classification

According to the definition of defects, defects are classified into the following four categories:

A. Document defects are defects found during static inspection of documents; inspection activities include peer review, product audit, etc. The objects reviewed are determined by their type and include both final outputs and intermediate process outputs, such as product requirements documents, prototyping documents, test plans, and test cases.

B. Code defects refer to defects found during code peer review, audit or code walk through.

C. Test defects are defects in the test object discovered through test activities. The test object generally refers to runnable code and systems, excluding problems found by static testing.

D. Process defects, also called nonconformance issues, are problems in the process found through process audit, process analysis, management review, quality assessment, quality audit, and other activities. Process defects are usually discovered by testers, project managers, etc.

== Severity definition of a Bug ==

This specification defines the following five levels, depending on the severity of the bug submitted.

Category A: Serious errors, including the following:

1. Illegal exit caused by a program crash

2. Infinite loop

3. Database deadlock

4. Program interruption caused by misoperation

5. Functional errors

6. Incorrect database connection

7. Data communication errors

Category B: More serious errors, including the following:

1. Program errors

2. Incorrect program interface

3. Incomplete database tables, business rules, or default values

Category C: General errors, including the following:

1. Errors in the operation interface (including whether column names in the data window are consistent in definition and meaning)

2. Incorrect content or format

3. Simple input limits not enforced on the front end

4. No confirmation prompt for delete operations

5. Too many empty fields in database tables

Category D: Minor errors, including the following:

1. The interface is not standard

2. The auxiliary description is not clear

3. The input and output are not standardized

4. Long operations do not prompt users

5. The text of the prompt window does not use industry terms

6. There is no obvious distinction between the input area and the read-only area

Category E: Test suggestions

Priority definition of bugs

This specification defines the following five levels based on the priority of submitted bugs.

  1. Highest: the problem must be resolved immediately; otherwise the system cannot meet the requirements at all.

  2. High: the fix is urgent, because the normal operation of the system's main functional modules depends on it.

  3. Medium: the problem should be resolved as soon as time permits; otherwise the system deviates from the requirements or planned functions cannot be realized normally.

  4. Low: the fix can be scheduled. The problem does not affect the implementation of requirements, but it affects other aspects of use, such as incorrect page invocation.

  5. Lowest: before system release, the problem must be confirmed as resolved or confirmed as accepted without resolution.
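The severity and priority scales above can be modeled as enumerations so that every bug record carries both ratings independently (a minor-severity bug can still be high priority, and vice versa). The `Bug` record and field names below are illustrative assumptions.

```python
# Sketch of the spec's severity (A-E) and priority (1-5) scales as
# enums attached to a bug record. Field names are illustrative.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    A = "serious"
    B = "more serious"
    C = "general"
    D = "minor"
    E = "test suggestion"

class Priority(Enum):
    HIGHEST = 1  # must be resolved immediately
    HIGH = 2     # urgent; main functional modules depend on it
    MEDIUM = 3   # resolve as soon as time permits
    LOW = 4      # fix can be scheduled
    LOWEST = 5   # must be confirmed before release

@dataclass
class Bug:
    title: str
    severity: Severity
    priority: Priority

# A crash is Category A severity and, here, also the highest priority.
crash = Bug("program crash on save", Severity.A, Priority.HIGHEST)
print(crash.severity.value, crash.priority.name)
```

Keeping the two scales as separate fields lets triage sort by priority while reports still aggregate by severity.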

Test standard

The general criteria for passing functional tests are:

1. Unit functions are consistent with the design requirements;

2. The specified path coverage and coverage criteria meet the requirements and have been executed correctly;

3. The specified black-box testing techniques have been used and executed correctly;

4. Residual errors have a legitimate explanation or are approved for temporary retention;

5. Even where full path coverage cannot be achieved, the error detection rate of the remaining tests tends to zero or stabilizes (depending on the length of the observation period).

All kinds of software tested must meet the following standards.



The ratios above are percentages of errors relative to the total number of tested modules.

Software products that fail to pass the test are not allowed to be released online.