I. Project significance
As mobile application development cycles shorten, improving the efficiency of mobile application quality assurance has become a pressing problem. UI testing is an essential part of Android testing, and improving its efficiency saves substantial cost. On the one hand, some companies use crowdsourcing to cut costs, dispatching testing tasks to volunteers for manual testing; however, volunteers usually submit only bug-related text reports, and the full “user flow” itself, which is equally valuable, is often discarded. On the other hand, some researchers have developed automated testing tools to reduce the consumption of human and time resources; however, because these tools lack the testing knowledge that humans possess and are subject to numerous limitations, they are still not as effective as human testing.
Combining human knowledge with automated tools has become a new way to improve tool effectiveness, but this direction has received little attention. At present, testers still need to write configuration files to introduce the relevant testing knowledge, which, much like manual testing, adds significant cost. If application-specific knowledge could be collected automatically and fed into automated tools, their practicality would increase greatly while resources are saved.
II. Project content
This paper presents the design and implementation of an iterative automated testing system for Android applications, which automatically extracts and fuses the operation flows of test users and feeds them into the testing tool. By reusing users' knowledge of the application, the system improves the testing tool's effectiveness.
The system consists of two iterative loops, an inner one and an outer one, together with four modules. The first loop is process iteration: the operation flows generated by test users allow the tool to understand the structure of the application under test, thereby increasing the tool's coverage. During testing, the tool records detailed information about its own operations and presents it to the test users, so that they know the application's current coverage status and can make their next round of manual operation more purposeful. The newly generated user operation flows are then used in the next round of testing, so that application coverage rises in a spiral until the testing requirements are met. The second loop is run iteration: unlike conventional automated testing tools that run the test process only once, this system resets the automated testing process multiple times and traverses the application iteratively, ensuring that controls that have already been discovered are not missed during testing because of the testing tool's randomness. The four modules are used in turn within these two iterative loops.
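To make the relationship between the two loops concrete, the following minimal Python sketch stubs out the actual device interaction; the function names and the representation of coverage as a set of control ids are illustrative assumptions, not the system's real implementation.

```python
# Minimal, hypothetical sketch of the two nested loops; real device/Appium
# interaction is stubbed out so only the control flow remains visible.
from typing import List, Set

def collect_user_flows() -> List[Set[str]]:
    """Stub: operation flows silently recorded from test users,
    reduced here to sets of covered control ids."""
    return [{"login_btn", "search_box"}, {"search_box", "settings_item"}]

def run_automated_round(known: Set[str]) -> Set[str]:
    """Stub: one reset-and-traverse pass of the automated tool,
    pretending it reaches one control beyond what is already known."""
    return known | {"about_page"}

def iterative_test(max_rounds: int = 3) -> Set[str]:
    covered: Set[str] = set()
    for _ in range(max_rounds):              # process iteration: fold in fresh user flows
        for flow in collect_user_flows():
            covered |= flow
        while True:                          # run iteration: reset and traverse repeatedly
            after_pass = run_automated_round(covered)
            if after_pass == covered:        # stop once a full pass adds nothing new
                break
            covered = after_pass
    return covered

if __name__ == "__main__":
    print(sorted(iterative_test()))
```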
The process capture module silently collects the testers' operation flows in the background. Its core function is to record user operations unobtrusively and to derive a model of the application under test from them. While a user operates an application supported by the system, the system automatically records, for every step, the state of the page, the operation itself, and the information of the control being operated; it integrates this information into the flow files required by the automated testing module and uploads the files for storage. These files can be converted directly into part of the model of the application under test, capturing the parts of the application that have already been tested. When the system performs automated testing, this information lets the testing tool understand the structure of the application under test and thus cover more pages.
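The exact format of the flow files is not given here, so the following is a hypothetical sketch of what one recorded step might contain and how a flow could be persisted; all field names are assumptions rather than the system's actual schema.

```python
# Hypothetical shape of one step in a recorded operation-flow file; the field
# names and values are illustrative assumptions, not the system's actual schema.
import json
from dataclasses import asdict, dataclass

@dataclass
class FlowStep:
    page: str               # page (Activity) the user was on
    control_id: str         # resource id of the control that was operated
    control_text: str       # visible text of the control, if any
    action: str             # e.g. "click", "long_click", "input"
    input_value: str = ""   # text entered, for input actions

steps = [
    FlowStep("MainActivity", "com.example:id/search_box", "Search", "input", "weather"),
    FlowStep("MainActivity", "com.example:id/search_btn", "Go", "click"),
]

# The captured flow is persisted (e.g. as JSON) so it can later be rebuilt
# into part of the coverage model of the application under test.
print(json.dumps([asdict(s) for s in steps], indent=2))
```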
The Appium management module performs the unified initial configuration of the mobile devices connected to the system. The automated testing tool used in this system is built entirely on Appium, and the typical test scenario involves multiple devices. In a multi-device environment, the system needs to distinguish and select each device connected to the iterative automated testing server, which is usually done by each device's serial number. At the same time, because an Appium server can serve only one device per virtual port on a machine, a management module is needed to handle the creation, maintenance, and shutdown of the Appium server corresponding to each Android device. Utility classes needed throughout the process (such as formatted output and fixed-address access classes) are also placed in this module for unified management.
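A minimal sketch of such a manager is shown below, assuming the `adb` and `appium` command-line tools are available on the test server; the class, its method names, and the port-assignment scheme are illustrative, and error handling and Appium session configuration are omitted.

```python
# Illustrative per-device Appium server manager, assuming the `adb` and
# `appium` command-line tools are on PATH; error handling and session
# configuration are deliberately omitted.
import subprocess
from typing import Dict, List

BASE_PORT = 4723

def list_device_serials() -> List[str]:
    """Parse `adb devices` output into a list of attached device serial numbers."""
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
    return [line.split()[0]
            for line in out.splitlines()[1:]
            if line.strip().endswith("device")]

class AppiumManager:
    """Starts one Appium server per connected device, each on its own port."""

    def __init__(self) -> None:
        self.servers: Dict[str, subprocess.Popen] = {}

    def start_all(self) -> None:
        for index, serial in enumerate(list_device_serials()):
            port = BASE_PORT + 2 * index            # one virtual port per device
            self.servers[serial] = subprocess.Popen(["appium", "-p", str(port)])

    def stop_all(self) -> None:
        for proc in self.servers.values():
            proc.terminate()
        self.servers.clear()
```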
The automated testing module executes the iterative automated testing process. When a test user requests automated testing on the Web side, the system reads the existing model information of the application under test and generates an application control coverage tree (tree 1) to help the automated testing tool understand the framework of the application. The tool first performs an automated test of the application, records all test flows, and generates another application control coverage tree (tree 2). The system compares the two trees and queries for controls in the application model that have not yet been covered. It then reproduces the recorded user actions from the coverage tree, drives the application to an uncovered control, and hands control back to the automated testing logic. After several iterations, when the known application model has been completely covered, the system considers the automated test finished, integrates the test flows recorded throughout the process, generates an updated model, and uploads it to the database for storage.
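The comparison of the two coverage trees can be pictured as follows; here each tree is simplified to a mapping from control id to the recorded step sequence that reaches it, which is an assumption about the representation rather than the system's actual data structure.

```python
# Hypothetical comparison of the two coverage trees; each tree is simplified
# here to a mapping from control id to the step sequence that reaches it.
from typing import Dict, List

def uncovered_controls(model_tree: Dict[str, List[str]],
                       run_tree: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """Controls present in the known application model (tree 1) but not reached
    in the current automated run (tree 2), with the recorded path to each."""
    return {cid: path for cid, path in model_tree.items() if cid not in run_tree}

# Example: the known model contains a settings switch the automated run never
# reached; its recorded path is replayed so the next pass starts from there.
model_tree = {
    "search_btn": ["open app", "click search_btn"],
    "dark_mode_switch": ["open app", "click menu", "click settings",
                         "click dark_mode_switch"],
}
run_tree = {"search_btn": ["open app", "click search_btn"]}

for control, path in uncovered_controls(model_tree, run_tree).items():
    print(f"replay {path} to reach uncovered control {control!r}")
```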
The operation flow modeling module manages all information related to operation flows. In this system, every piece of information is ultimately based on user operation flows: after a user's operations are recorded, operation flow files are generated, and these files can be transformed into the coverage model of the application under test. This module converts models into storable files, converts those files back into models for reading, and reproduces operations from the models. After a test is complete, the module also outputs the test results; besides any bug information found, the operation flow information generated during the test is displayed as well. After the multiple operation paths traversed during testing are fused and de-duplicated, an application coverage model containing the information of each operation can be generated, as shown in the figure. By examining these nodes, testers can understand the current state of the test.
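The fusion and de-duplication step can be sketched as below, where each flow is reduced to a list of (page, control, action) steps and duplicate steps collapse into a single entry of the model; this representation is an illustrative assumption, not the system's actual model format.

```python
# Illustrative fusion of several operation paths into one de-duplicated
# coverage model: pages become keys and each distinct (control, action) pair
# is stored once per page, no matter how many flows contain it.
from typing import Dict, List, Set, Tuple

Flow = List[Tuple[str, str, str]]           # (page, control, action) per step

def fuse(flows: List[Flow]) -> Dict[str, Set[Tuple[str, str]]]:
    model: Dict[str, Set[Tuple[str, str]]] = {}
    for flow in flows:
        for page, control, action in flow:
            model.setdefault(page, set()).add((control, action))   # duplicates collapse
    return model

flows: List[Flow] = [
    [("Main", "search_box", "input"), ("Main", "search_btn", "click")],
    [("Main", "search_btn", "click"), ("Main", "menu", "click"),
     ("Settings", "dark_mode", "click")],
]
for page, operations in fuse(flows).items():
    print(page, sorted(operations))
```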
III. Project benefits
While a user tests or simply uses an application, the system continuously records the related operation information in the background. This information keeps accumulating until a tester requests automated testing for that application, at which point the accumulated information helps the automated testing tool master the application's framework, improving the tool's testing effect so that it covers more pages and triggers more bugs.
To verify the effectiveness of the system in practice, this paper selected 10 well-known mobile applications and, for each, 50 user operation flows of 15 click actions (repetition allowed), and measured the application coverage achieved by the system after the user operation flows were introduced. The results were compared with those of Monkey, Google's official testing tool. The experimental results are shown in Figure 3, where AC (Activity Coverage) is the percentage of pages of the application under test that were covered, and CC (Code Coverage) is the percentage of its code that was covered. With the user operation flows introduced, the system's automated test results improved significantly compared with testing without them: with the test time set to one hour, the average code coverage increased by 13.98 percentage points to 37.83%, exceeding Monkey's average code coverage of 28.90% under the same conditions. By comparing the control information in the coverage process, we found that the test results obtained after introducing the user operation flows completely included the controls covered when the test users and the tool tested separately, with no coverage omissions. On this basis, the tool extended the original test results and found more controls, which demonstrates the usability of the system and shows that it can effectively enhance the normal automated testing logic, giving test users a greater chance of finding defects in the application under test.
This article was written by Youran Xu, a master's student (class of 2018) in the Intelligent Software Engineering Laboratory, School of Software, Nanjing University.