Preface

I first heard about precision testing a few years ago, and I was immediately curious to explore it. In recent years it has gradually gained wide attention from testers across many fields and industries. This article covers three things:

  • What precision testing is;
  • Why precision testing matters and what value it brings;
  • How to land an end-to-end precision testing solution.

The pain points of traditional testing

Test inefficiency

Conventional testing types include functional testing, regression testing, automated testing, interface testing, and so on, all of which rely heavily on the tester's experience: black-box testing driven by manual, subjective analysis, backed by conventional use-case design methods, is what ensures product quality.

Under the law of diminishing returns, the rate of missed defects stays high despite massive human investment and continuous test execution, and the invalid and duplicated tests along the way waste a great deal of testing cost.

The test scope cannot be evaluated

  • When multiple branches are merged into the main branch, which files and lines were modified is invisible to testers, so the test scope is uncontrollable;
  • Nobody knows which functions are affected by a code update;
  • Most testing is still based on an understanding of the business, which leaves a gap from the real business data; accuracy is hard to guarantee, and testing blindly is risky.

Quality standards during testing are not measurable

How do you know testing is done? How do you know it went well? Quality control should run through the entire quality assurance process.

  • Use-case execution completed;
  • Exploratory testing completed;
  • Developer defect fixes completed;
  • Regression testing completed;
  • Automated runs passing;

Does the completion of the above steps mean that our product quality is acceptable?

After launch, the cost of any gap grows steadily. Because the testing process yields no quantitative data, it cannot be measured; evaluation can only fall back on relatively volatile indicators such as the online defect rate, the number of offline defects, or defects per thousand lines of code, which makes test management difficult.

Challenges from agile development and distributed microservice architectures

  • Iteration cycles are short, especially in the Internet industry, where an iteration every two weeks is the norm, so time cost must be controlled very precisely.
  • Requirements change frequently, and every change requires regressing all use cases, a great deal of repetitive effort;
  • Software systems keep growing more complex, and the call relationships between services are tangled, so the affected scope cannot be estimated accurately and defects are hard to locate.

Given these pain points, we want a solution that addresses the following:

  1. Scientifically evaluate the function points affected by a code change and run precise, targeted tests against exactly that code, so that testing becomes more accurate, regression testing takes less time, and the regression scope is tighter, freeing human cost to invest in deeper, lower-level testing work;
  2. Understand the code's logic in depth: analyze in detail which branches are covered and which are not, find where tests were missed, make testing precise, reduce repetitive work, and move from experience-based subjective judgment to precise, data-driven visualization.

The concept of precision testing

**Precision testing is a computer-aided test analysis system.** It takes the two key factors of testing, use cases and code, considers their quality together, and analyzes them with methods that innovate on traditional testing theory. Its core components include a software-testing "oscilloscope", bidirectional traceability between use cases and code, intelligent regression test case selection, coverage analysis, defect localization, test case clustering analysis, and automatic test case generation. Together these functions form a precision testing technology system that greatly extends the depth and breadth of testing, breaks the growth ceiling of the testing department, and provides the necessary and sufficient conditions for mining value from the testing process itself and growing test data as an asset.

One of the core features is two-way traceability

By capturing the program's code execution logic, the system establishes the logical relationship between test cases and program code, forming a forward and reverse two-way traceability mechanism and achieving precise data visualization.

  • Forward traceability: after a tester executes a test case, the precision testing system automatically records and displays the internal code execution details of that test case. Every test case can then be analyzed quantitatively and statistically; the data can be used both to evaluate testers' work and to support communication between developers and testers.
  • Reverse traceability: starting from the code modified by developers, the tester analyzes the call relationships associated with that code to scope the test range quickly and accurately, greatly reducing invalid and repeated testing while maximizing test coverage.
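Conceptually, the data behind both directions is a bidirectional index between test cases and the code they execute. A minimal sketch, with hypothetical names (a real system would collect the right-hand side at runtime, e.g. via an agent):

```java
import java.util.*;

public class TraceIndex {
    private final Map<String, Set<String>> caseToMethods = new HashMap<>();
    private final Map<String, Set<String>> methodToCases = new HashMap<>();

    /** Record that executing a test case touched a method. */
    public void record(String testCase, String method) {
        caseToMethods.computeIfAbsent(testCase, k -> new TreeSet<>()).add(method);
        methodToCases.computeIfAbsent(method, k -> new TreeSet<>()).add(testCase);
    }

    /** Forward traceability: which code did this test case execute? */
    public Set<String> methodsOf(String testCase) {
        return caseToMethods.getOrDefault(testCase, Set.of());
    }

    /** Reverse traceability: which test cases exercise this (changed) method? */
    public Set<String> casesCovering(String method) {
        return methodToCases.getOrDefault(method, Set.of());
    }

    public static void main(String[] args) {
        TraceIndex idx = new TraceIndex();
        idx.record("TC-001 place order", "OrderService.place");
        idx.record("TC-001 place order", "PriceUtil.calc");
        idx.record("TC-002 refund", "RefundService.apply");
        System.out.println(idx.casesCovering("PriceUtil.calc")); // [TC-001 place order]
    }
}
```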

Judgments rest on precise data: everything is collected automatically by the system from original inputs and cannot be tampered with, so the generated test data can be used directly for test process management and effectiveness analysis. The system supports precise measurement of test data and comprehensive, multi-dimensional test analysis algorithms, extending the white-box perspective from coverage measurement to intelligent test analysis.

Intelligent filtering of use cases

The traceability between use cases and code is computed automatically, and this precise traceability mechanism makes the data usable by a large family of intelligent testing algorithms.

Design ideas

I. Diff

One of the core features of precision testing is two-way traceability, and its premise is traceability based on changed code, so the changed code is the most important input to the entire system.

  • Capture code differences via JGit;
  • Parse the differences down to method level as input for call-link inference;
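JGit itself is not shown here. As a minimal sketch of the step that follows it (assuming a unified diff as input), the hunk headers can be parsed to recover which lines changed in each file; intersecting those line numbers with each method's line range then takes the diff down to method level:

```java
import java.util.*;
import java.util.regex.*;

public class DiffParser {
    // Matches unified-diff hunk headers such as "@@ -10,3 +12,4 @@"
    private static final Pattern HUNK =
            Pattern.compile("^@@ -\\d+(?:,\\d+)? \\+(\\d+)(?:,(\\d+))? @@");

    /** Returns a map from file path to the changed line numbers on the new side. */
    public static Map<String, List<Integer>> changedLines(String unifiedDiff) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        String currentFile = null;
        for (String line : unifiedDiff.split("\n")) {
            if (line.startsWith("+++ b/")) {
                currentFile = line.substring(6);
                result.putIfAbsent(currentFile, new ArrayList<>());
            } else {
                Matcher m = HUNK.matcher(line);
                if (m.find() && currentFile != null) {
                    int start = Integer.parseInt(m.group(1));
                    int count = m.group(2) == null ? 1 : Integer.parseInt(m.group(2));
                    for (int i = 0; i < count; i++) result.get(currentFile).add(start + i);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String diff = "+++ b/src/Foo.java\n@@ -10,2 +10,3 @@\n context";
        System.out.println(changedLines(diff)); // {src/Foo.java=[10, 11, 12]}
    }
}
```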

II. Static structure analysis of source code

  • **Function call analysis methods:** static bytecode parsing (ASM, BCEL, Javassist, etc.) or dynamic call-link analysis (weaving internal methods with a Java agent, etc.). Here JavaParser + JavaSymbolSolver is used to obtain a complete call link from the static code.
  • Combine the call link with the code increment to get the call chain of the changed code:
    • From the `Mapping` annotations on the Controller layer, obtain the upper-layer business HTTP interfaces affected by the changed code;
    • From the Dubbo XML configuration files, obtain the internal Dubbo-protocol interfaces affected by the changed code;

Key code:

  1. Get the symbol resolver for a Java source file;

  2. Symbol resolution for JAR files (oriented to cross-service procedure calls under a microservice architecture).
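The original code listings are not reproduced here. As an illustrative stand-in for the step that consumes their output (all names are hypothetical), once a static call graph has been extracted it can be walked in reverse from the changed methods up to the entry-point interfaces that can exercise them:

```java
import java.util.*;

public class ImpactAnalyzer {
    /** callGraph maps caller -> callees; we invert it and walk upward from changed methods. */
    public static Set<String> affectedEntryPoints(
            Map<String, Set<String>> callGraph,
            Set<String> changedMethods,
            Set<String> entryPoints) {
        // Build reverse edges: callee -> callers
        Map<String, Set<String>> reverse = new HashMap<>();
        callGraph.forEach((caller, callees) ->
                callees.forEach(callee ->
                        reverse.computeIfAbsent(callee, k -> new HashSet<>()).add(caller)));

        // BFS upward from every changed method, collecting reachable entry points
        Set<String> affected = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(changedMethods);
        Set<String> seen = new HashSet<>(changedMethods);
        while (!queue.isEmpty()) {
            String m = queue.poll();
            if (entryPoints.contains(m)) affected.add(m);
            for (String caller : reverse.getOrDefault(m, Set.of())) {
                if (seen.add(caller)) queue.add(caller);
            }
        }
        return affected;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> g = Map.of(
                "OrderController.create", Set.of("OrderService.place"),
                "OrderService.place", Set.of("PriceUtil.calc"),
                "ReportController.daily", Set.of("ReportService.build"));
        System.out.println(affectedEntryPoints(g,
                Set.of("PriceUtil.calc"),
                Set.of("OrderController.create", "ReportController.daily")));
        // [OrderController.create]
    }
}
```

In practice the graph would come from the JavaParser + JavaSymbolSolver analysis above; the traversal itself is unchanged.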

At this point the key step is complete: the affected interfaces have been obtained. With this capability we know which interfaces can be used to test the changed code, but faced with a long list of interfaces, most testers would still be at a loss.

III. Incremental code coverage

Value:

  1. By collecting incremental code coverage statistics, analyzing the system's internal execution logic in depth, and screening for uncovered or low-coverage methods, the call links above can be used to find the interfaces whose use cases need to be supplemented and tested;
  2. Through continuous integration on the R&D efficiency platform, coverage gating is applied: when coverage falls below the expected value, the check fails.
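As a minimal sketch of the gating idea (the threshold and names here are illustrative), incremental coverage is simply the fraction of changed lines that were executed during the test run:

```java
import java.util.*;

public class IncrementalCoverage {
    /** Fraction of changed lines that were covered during the test run. */
    public static double rate(Set<Integer> changedLines, Set<Integer> coveredLines) {
        if (changedLines.isEmpty()) return 1.0; // nothing changed -> trivially covered
        long covered = changedLines.stream().filter(coveredLines::contains).count();
        return (double) covered / changedLines.size();
    }

    /** CI gate: fail the pipeline when incremental coverage is below the threshold. */
    public static boolean passesGate(Set<Integer> changed, Set<Integer> covered, double threshold) {
        return rate(changed, covered) >= threshold;
    }

    public static void main(String[] args) {
        Set<Integer> changed = Set.of(10, 11, 12, 13);
        Set<Integer> covered = Set.of(10, 11, 12, 40, 41);
        System.out.println(rate(changed, covered));            // 0.75
        System.out.println(passesGate(changed, covered, 0.8)); // false
    }
}
```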

Solution:

The detailed design of incremental coverage is not presented here; please consult the relevant materials yourself. In summary:

  • Adapt JaCoCo to additionally collect statistics for incremental code;
  • Configure the agent in the JVM startup parameters and run it in TCP mode;
  • Dump the coverage file and class files, and parse them to produce the report;
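Concretely, the agent and dump steps above look roughly like this with stock JaCoCo (paths and ports are illustrative; the incremental-statistics customization is not shown):

```shell
# Start the service with the JaCoCo agent in TCP server mode
java -javaagent:jacocoagent.jar=output=tcpserver,address=*,port=6300 -jar app.jar

# After (or during) testing, dump the execution data over TCP
java -jar jacococli.jar dump --address localhost --port 6300 --destfile jacoco.exec

# Parse the dump together with the class and source files into a report
java -jar jacococli.jar report jacoco.exec \
  --classfiles target/classes --sourcefiles src/main/java --html coverage-report
```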

During a round of testing, services may be restarted and redeployed several times; there are several ways to avoid losing coverage data as a result:

  • Periodically Dump coverage data for summary.
  • The shutdown event triggers the Dump of coverage data.

Overall framework:

Coverage curve oscilloscope:

Coverage details:

IV. Intelligent recommendation of use cases

**Intelligent recommendation algorithm:** what is an algorithm? We can reduce it to a function: it takes several parameters and outputs a return value.

The input parameters are various attributes and features drawn from online monitoring, coverage, and automation coverage, including call frequency, time segment, business domain, coverage rate, whether automation exists, release time, and so on. After processing by the recommendation algorithm, a list of recommended use cases sorted by importance and urgency is returned.
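As a minimal sketch (the features and weights here are illustrative, not the production algorithm), such a recommender reduces to a scoring function over interface attributes, with the result sorted by score:

```java
import java.util.*;

public class CaseRecommender {
    public record InterfaceStats(String name, double callFrequency, double coverage, boolean automated) {}

    /** Higher score = more important/urgent to test. Weights are illustrative. */
    public static double score(InterfaceStats s) {
        double score = 0.5 * s.callFrequency()      // heavily used interfaces first
                     + 0.3 * (1.0 - s.coverage());  // poorly covered interfaces first
        if (!s.automated()) score += 0.2;           // no automation -> needs manual attention
        return score;
    }

    public static List<String> recommend(List<InterfaceStats> stats) {
        return stats.stream()
                .sorted(Comparator.comparingDouble(CaseRecommender::score).reversed())
                .map(InterfaceStats::name)
                .toList();
    }

    public static void main(String[] args) {
        List<InterfaceStats> stats = List.of(
                new InterfaceStats("GET /orders", 0.9, 0.8, true),
                new InterfaceStats("POST /pay", 0.8, 0.2, false));
        System.out.println(recommend(stats)); // [POST /pay, GET /orders]
    }
}
```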

Several recommendation algorithms commonly used in the industry are as follows:

In addition, by screening the automated use cases we can find which interfaces have no automation and feed that back into the automation platform's use-case backlog, improving automated interface coverage and closing the loop.

Interface change compatibility verification:

We can also verify the compatibility of interface parameter changes. During testing, continuous integration on the R&D efficiency platform triggers the check, picks out the interfaces whose compatibility may be broken, and notifies the testers by email.
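A minimal sketch of such a check (the parameter representation is hypothetical): compare the old and new parameter lists of an interface and flag removals and type changes, both of which break existing callers:

```java
import java.util.*;

public class CompatChecker {
    /** Flags parameters that were removed or changed type -- both break existing callers. */
    public static List<String> breakingChanges(Map<String, String> oldParams, Map<String, String> newParams) {
        List<String> issues = new ArrayList<>();
        for (Map.Entry<String, String> e : oldParams.entrySet()) {
            String newType = newParams.get(e.getKey());
            if (newType == null) {
                issues.add("removed parameter: " + e.getKey());
            } else if (!newType.equals(e.getValue())) {
                issues.add("type changed: " + e.getKey() + " " + e.getValue() + " -> " + newType);
            }
        }
        return issues;
    }

    public static void main(String[] args) {
        Map<String, String> before = Map.of("orderId", "long", "userId", "String");
        Map<String, String> after = Map.of("orderId", "String");
        System.out.println(breakingChanges(before, after));
    }
}
```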

Vision of the future

I. Intelligent defect localization

In traditional software testing, defects are recorded in the defect management platform by testers, who are responsible only for discovering them; developers then locate, troubleshoot, and remotely debug. If the defect a tester submits is only a vague functional description, the developer will spend a lot of time troubleshooting the problem.

Through the precision testing platform, for use cases that fail, the detailed path-tracing information from their execution can be used to automatically analyze the code blocks involved in the defect and rank the suspicious code.
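A common family of techniques for this is spectrum-based fault localization. The sketch below uses the Ochiai formula (one well-known choice; the platform's actual method is not specified here) to rank code units by how strongly their execution correlates with failing tests:

```java
import java.util.*;

public class FaultLocalizer {
    /** Ochiai suspiciousness: ef / sqrt(totalFailed * (ef + ep)),
     *  where ef/ep = number of failing/passing tests that executed the unit. */
    public static double ochiai(int ef, int ep, int totalFailed) {
        if (ef == 0) return 0.0;
        return ef / Math.sqrt((double) totalFailed * (ef + ep));
    }

    /** Ranks code units by suspiciousness; spectrum maps unit -> {ef, ep}. */
    public static List<String> rank(Map<String, int[]> spectrum, int totalFailed) {
        return spectrum.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, int[]> e) ->
                                ochiai(e.getValue()[0], e.getValue()[1], totalFailed))
                        .reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        Map<String, int[]> spectrum = new LinkedHashMap<>();
        spectrum.put("OrderService.place:42", new int[]{2, 0}); // executed only by failing tests
        spectrum.put("OrderService.place:10", new int[]{2, 5}); // executed by everything
        spectrum.put("PriceUtil.calc:7", new int[]{0, 5});      // never executed by a failing test
        System.out.println(rank(spectrum, 2));
        // [OrderService.place:42, OrderService.place:10, PriceUtil.calc:7]
    }
}
```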

By drawing on practical defect localization methods, this abstract localization idea is turned into a visual graphical interface.

II. Behavior analysis of the testing process

Cluster analysis is performed on the path-tracing information linking testers' test executions to the code:

  • Whether the testing scope of a feature is sufficient;
  • Which features a tester habitually focuses on;
  • Which branches are weakly covered;
  • Which use cases have highly repetitive designs;

Each tester's testing mindset and ability curve can be presented objectively: for example, test sufficiency is shown as a circle whose diameter grows the fuller the testing is, and habitual focus is shown as distance, with the function points a tester concentrates on drawn closer.

Conclusion

Precision testing is a complete quality system architecture that combines traditional black-box testing with white-box testing. Through continuous analysis of tester behavior, gaps and weaknesses are surfaced to guide targeted correction by development and testing, gradually improving the whole quality assurance system.

For more technical content, follow the NetEase Zhiqi Tech+ WeChat official account.