As content operation schemes and interactive features on the Youku app grow richer, the number of operable components on the content configuration platform keeps increasing across distribution and consumption business scenarios, and the regression-testing workload on mobile is surging. How can we keep pace with business development while still guaranteeing efficient, high-quality component testing? This article shares Youku's thinking and exploration in this area (the articles in this series will be released in succession; if you are interested, stay tuned!)
Analysis of the situation
- The number of components is large and the regression cost is high
The distribution and consumption business is used frequently and involves hundreds of components, so component regression cases make up a high proportion of the suite; each release regression needs to quickly complete coverage testing and locate the problematic components. The diversity of metadata and the different operation strategies mean that a component can render in countless variations on the client. In visual-upgrade or technical-refactoring projects, adaptation testing of components is mandatory, and covering the top device models is a relatively heavy workload.
- Conventional UI automation validation is coarse in granularity
The existing automated test cases for core scenarios do cover components, but the verification at the UI level is very coarse: it can only check that a component's View exists, not that specific controls exist (such as the main title and subtitle inside a component's slots). In addition, the native Dump tool locates elements differently on the two platforms, so two sets of cases have to be maintained.
- Automation based on native element locators is easily broken by technical refactoring and is costly to maintain
The Youku homepage team has made some changes to improve View performance, which increased the maintenance cost of the automation; as such technical refactoring keeps accumulating, conventional automation becomes very hard to sustain. After analyzing the state of the components, we combined our existing Mock and image-recognition capabilities to build a componentized intelligent test solution, designed to let mobile components be tested as if they were server-side interfaces.
Project plan
We tackle the problem from the following five aspects:
Project value
Build a Mock-based, data-driven componentized testing solution to improve component testing efficiency and experience; use image recognition plus Mock to reduce dependence on peer test systems and improve test stability; apply server-side traffic-capture data on the client, build a client-side data factory, and improve test data coverage.
The overall architecture
The data factory
For the componentized architectures on mobile and on the server, captured online traffic data and data from client-side effect tools are abstracted, analyzed, and stored. The scheme supports single-slot and multi-slot template storage, generation of both normal and abnormal data, and fast construction of component test data rich across different businesses. The first phase of the data factory has been completed; it mainly turns the above capabilities into a platform that can quickly support the automated data-construction needs of components, such as Mock data editing, composition, and generation, and configuration file generation.
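To make the data-factory idea concrete, below is a minimal sketch of composing Mock data for a component and writing out a test configuration file. The schema (componentId, template, items) and the helper names are assumptions for illustration only, not the platform's real format.

```python
import json
from pathlib import Path


def build_mock_config(component_id: str, template: str, slots: list, abnormal: bool = False) -> dict:
    """Compose a hypothetical mock-data entry for one component.

    `template` mirrors the single-slot vs multi-slot templates mentioned above;
    `abnormal` toggles generation of abnormal-data variants (e.g. a missing subtitle).
    """
    slot_items = []
    for i, slot in enumerate(slots):
        item = {
            "slotIndex": i,
            "title": slot.get("title", f"mock-title-{i}"),
            "subtitle": slot.get("subtitle", f"mock-subtitle-{i}"),
            "img": slot.get("img", "https://example.com/cover.png"),
        }
        if abnormal:
            # Drop an optional field to simulate abnormal server data.
            item.pop("subtitle")
        slot_items.append(item)
    return {"componentId": component_id, "template": template, "items": slot_items}


def write_config(path: str, configs: list) -> None:
    """Persist the composed mocks as the configuration file consumed by a test run."""
    Path(path).write_text(json.dumps({"mocks": configs}, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    normal = build_mock_config("PHONE_BASE_A", "single-slot", [{"title": "Drama A"}])
    broken = build_mock_config("PHONE_BASE_A", "single-slot", [{"title": "Drama A"}], abnormal=True)
    write_config("component_mock_config.json", [normal, broken])
```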
Image recognition capability
The image-recognition capability uses Youku's image-recognition service. With it, component verification becomes more powerful and less difficult to implement. Establishing a component-specific image-recognition scheme was rather bumpy and went through many rounds of evaluation, experiment, and rejection; it can be summarized into two phases. Phase I: the algorithm performs deep learning for the business, derives component-partitioning rules, and delivers the page layout as a data structure modeled on the server-side page-layout interface.
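For illustration only, the Phase I output could be imagined as a structure like the following; all field names are assumptions modeled loosely on a server-style page-layout response.

```python
# Hypothetical shape of a Phase I recognition result: the screenshot is split
# into components, each with a bounding box, mirroring the style of a
# server-side page-layout interface (all field names are assumptions).
phase_one_layout = {
    "pageId": "youku_home",
    "components": [
        {"index": 0, "bounds": {"x": 0, "y": 120, "width": 1080, "height": 620}},
        {"index": 1, "bounds": {"x": 0, "y": 740, "width": 1080, "height": 480}},
    ],
}
```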
Phase II: a component-annotation interface was designed, opening the capability of dividing up a component's layout to the business side. The business side uploads screenshots to the platform, annotates each layer of the component's layout, and saves it. For automated use, only a screenshot and the annotation Id are required; the image-recognition service retrieves the annotated layout by that Id and returns the final recognition data, including text and coordinates.
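A hedged sketch of how the automation might call such a service is shown below; the endpoint, payload fields, and the recognize_with_annotation helper are placeholders for illustration, not the actual Youku image-recognition API.

```python
import base64
import json
from urllib import request

RECOGNITION_URL = "https://example.internal/image-recognition/annotated-layout"  # placeholder endpoint


def recognize_with_annotation(screenshot_path: str, annotation_id: str) -> list:
    """Send a screenshot plus a layout-annotation id; the service looks up the
    annotated layers for that id and returns, per annotated region, the
    recognized text and its coordinates (field names are assumptions)."""
    with open(screenshot_path, "rb") as f:
        payload = {
            "annotationId": annotation_id,
            "image": base64.b64encode(f.read()).decode("ascii"),
        }
    req = request.Request(
        RECOGNITION_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.loads(resp.read())
    # Assumed response shape: [{"layer": "title", "text": "...", "rect": [x, y, w, h]}, ...]
    return result.get("regions", [])


# Example usage: verify that the main title of the annotated slot was rendered.
# regions = recognize_with_annotation("home_component.png", "anno-12345")
# assert any(r["layer"] == "title" and r["text"] for r in regions)
```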
Automated testing
With the data factory's data-construction capability and the configured-layout recognition capability, we can build data-driven component intelligence testing. The automation uses the Youku client automation framework (a wrapper that encapsulates the general flow on top of the underlying mobile automation framework). Combined with the component test strategies accumulated from the business, we designed data processing, data request and application, and data mapping and comparison. The business side only needs to construct a test-data configuration file and write its business logic; the comparison part directly inherits from the public Case, which handles the core UI comparison and click validation.
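The division of labor described above might look roughly like the following; BaseComponentCase and its hooks are hypothetical names standing in for the public Case in the client automation framework, not its real API.

```python
class BaseComponentCase:
    """Hypothetical public base case: owns the generic flow of applying mock
    data, mapping it onto the rendered component, and running the core UI
    comparison and click validation."""

    config_file = None  # path to the mock/config file built by the data factory

    def run(self):
        data = self.load_config(self.config_file)
        self.apply_mock(data)        # intercept the server response with mock data
        self.prepare_business(data)  # business hook: navigate to the target page/component
        self.compare_ui(data)        # similarity / configured-layout UI comparison
        self.verify_click(data)      # configured-coordinate or traditional click validation

    # The methods below would be provided by the framework; signatures are assumptions.
    def load_config(self, path): ...
    def apply_mock(self, data): ...
    def compare_ui(self, data): ...
    def verify_click(self, data): ...

    def prepare_business(self, data):
        raise NotImplementedError


class PhoneBaseAComponentCase(BaseComponentCase):
    """What a business-side case reduces to: point at a config file and
    implement only the business navigation logic."""

    config_file = "component_mock_config.json"

    def prepare_business(self, data):
        # e.g. open the home page and scroll until the mocked component is visible
        pass
```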
Basic process and logic: the automation combines similarity recognition with the configured-layout recognition scheme to complete UI and click verification. A test case has two steps, UI verification and click verification. UI verification has two schemes, similarity comparison and configured-layout comparison; click verification has two schemes, clicking configured coordinates and traditional clicking. Depending on whether a comparison baseline image exists, whether an annotation ID is configured, and whether the similarity UI comparison passes, a test follows one of five logic branches (see the sketch after this list):
- No similarity-comparison baseline image: after mocking, only a screenshot is taken and uploaded, with no UI or click verification;
- Baseline image present and annotation ID configured: perform the similarity UI verification; if it passes, perform click verification on the configured coordinates;
- Baseline image present and annotation ID configured: perform the similarity UI verification; if it fails, perform the configured-layout UI verification and then the configured-coordinate click verification;
- Baseline image present but no annotation ID configured: perform the similarity UI verification; if it passes, perform click verification in the traditional way;
- Baseline image present but no annotation ID configured: perform the similarity UI verification; if it fails, the configured-layout UI verification is skipped and click verification is performed in the traditional way.
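The five branches can be summarized as the decision flow sketched below; this is a paraphrase of the list above, and the function names are placeholders rather than the framework's real API.

```python
from typing import Callable, Optional


def run_component_verification(has_baseline: bool,
                               annotation_id: Optional[str],
                               similarity_passes: Callable[[], bool]) -> None:
    """Dispatch the five verification branches listed above.

    `similarity_passes` runs the similarity UI comparison against the baseline
    image and returns True/False; the step functions below are placeholders.
    """
    if not has_baseline:
        upload_screenshot_only()                          # branch 1: no baseline, screenshot only
        return

    passed = similarity_passes()
    if annotation_id:
        if not passed:
            configured_layout_ui_check(annotation_id)     # branch 3: fall back to layout comparison
        configured_coordinate_click_check(annotation_id)  # branches 2 and 3: click configured coordinates
    else:
        # branches 4 and 5: without an annotation ID the layout comparison is
        # unavailable, so both outcomes end with a traditional click check
        traditional_click_check()


# Placeholder hooks standing in for the real verification steps.
def upload_screenshot_only(): ...
def configured_layout_ui_check(annotation_id): ...
def configured_coordinate_click_check(annotation_id): ...
def traditional_click_check(): ...
```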
Project results and application
At present, the home page and channel pages have been connected to the automation and applied in the August release. The cleaned data covers dozens of general and double-column components and hundreds of mock files, covering more than 90% of the components commonly used online, and more than 60% of component test cases can be automated. Below is the execution result of similarity comparison plus configured-coordinate clicking; a detailed screenshot can be viewed for each verification point.
The following figure shows the execution result of the configured-layout UI comparison. The verification results of the 6 slots are shown one by one; click Details to see the verification results of the core controls (main title, subtitle, etc.) in each slot.
Future planning
Three core areas will be further refined in the future. On the automated-testing side: optimize the framework's general capabilities, extend to other test types such as adaptation testing, and extend to the Pad side; at present, test triggering still happens at the script level, and in the future we will consider integrating the tests into the data platform so that they can be triggered with one click.
On the image-recognition side, the Phase II annotation interface needs to be strengthened to support visual editing of images, and to support annotating one standard slot of a multi-slot component and applying it to the remaining slots.
On the data-factory side, the rules for constructing abnormal data need to be refined to cover business policies more comprehensively and to build a smaller yet complete set of business data cases. At the same time, online data can be used to generate and replay use cases for hot online paths, which goes beyond component automation.