Lecturer: Pan Zhigang, head of the Quality and Efficiency Department at Soundnet, with more than 14 years of cross-industry experience spanning servers, mobile devices, audio and video codecs, and automotive electronics. He is responsible for building test infrastructure and automated testing solutions, and led the establishment of the continuous integration testing ecosystem. As head of the Quality and Efficiency Department, he drives continuous optimization of quality and efficiency, focuses on technical innovation that enables the team to ensure software quality, and explores optimal product-delivery solutions through the efficient combination of software and hardware.

Preface

SDK testing differs from APP testing in that it serves not only end users but also APP developers. Faced with the needs of different industries, how to make quality airtight is a track that keeps exploring the unknown. This issue brings you “SDK Testing Best Practices — Building an Integrated Application Platform for Quality Assurance” by Pan Zhigang, head of quality and efficiency at Soundnet, sharing the evolution of the integrated application platform, how its basic capabilities are integrated, how efficient test execution and delivery are ensured, and how quality and efficiency are improved.

1.0 GUI-Driven Test

An SDK (software development kit) is Soundnet's main delivered product: a collection of development tools used to create application software for a particular software package, software framework, hardware platform, or operating system. Unlike an APP, a peripheral application, or any product perceived by end customers in the traditional sense, it is invisible to the end user.

In the early days, to ensure SDK quality, testers needed to build GUI demos according to the APIs delivered with the SDK. For example, in a real-time communication scenario, users need to join a channel to carry out audio and video communication. The demo interface is designed with corresponding buttons, drop-down lists, and small icons, and each element reflects the implementation of a corresponding API. As shown in the figure below, the four buttons at the bottom are the microphone, camera, screen-share, and hang-up buttons, corresponding to the APIs enableLocalAudio, enableLocalVideo, startScreenCapture and leaveChannel. The signal-bar icon in the upper right corner corresponds to the onNetworkQuality callback. With such a simple demo, the tester designs test cases to ensure that each interface is called correctly, and on this basis the quality standards of the initial iterations were guaranteed.
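
To make the 1.0-era workflow concrete, here is a minimal sketch (in Python, with hypothetical control names and expected results) of the kind of checklist a tester would walk through, mapping each control in the GUI demo to the SDK API it exercises:

```python
# A minimal sketch of the 1.0 GUI-driven approach: each control in the demo
# is mapped to the SDK API it exercises, and the tester walks the list by hand.
# Control names and expected results are illustrative, not the real demo's.
GUI_TEST_MATRIX = [
    ("microphone button",   "enableLocalAudio",   "remote side gains/loses local audio"),
    ("camera button",       "enableLocalVideo",   "remote side gains/loses local video"),
    ("screen-share button", "startScreenCapture", "remote side receives the shared screen"),
    ("hang-up button",      "leaveChannel",       "local user leaves the channel"),
    ("signal-bar icon",     "onNetworkQuality",   "icon reflects the reported network quality"),
]

def print_checklist(matrix=GUI_TEST_MATRIX):
    """Render the manual checklist a tester follows for one iteration."""
    for control, api, expectation in matrix:
        print(f"[ ] Tap {control:<20} -> {api:<20} expect: {expectation}")

if __name__ == "__main__":
    print_checklist()
```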

However, as the number of delivery platforms grew, delivery had to cover desktop, mobile, and Web terminals: desktop includes Windows, macOS, and Linux, while mobile includes Android and iOS. Designing a demo for each platform inevitably requires testers to invest more and more resources, so automation became inevitable.

2.0 GUI Demo Test Automation

Phase 2.0 was GUI Demo Test Automation, in which the developers built the test platform in layers.

As shown in the figure above, iOS, OSX, Android, and so on are the delivered platforms, and the third-party open-source tools used by each platform, such as Appium and Selenium, sit in the middle layer. The purpose is to improve test efficiency and cover all delivered platforms with a single set of test cases. Reaching 70% automation saved the team half of its time and greatly improved testing efficiency.
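
As a rough illustration of that middle layer, the sketch below uses the Appium Python client (2.x-style capabilities; newer clients pass an Options object instead) to run the same test body against Android and iOS demo builds. App paths, device names, and accessibility IDs are assumptions:

```python
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

# Only the capability block differs per platform; the test body is shared.
ANDROID_CAPS = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "deviceName": "android-node-01",
    "app": "/builds/ApiExampleDemo.apk",   # hypothetical demo build
}
IOS_CAPS = {
    "platformName": "iOS",
    "automationName": "XCUITest",
    "deviceName": "iPhone 13",
    "app": "/builds/ApiExampleDemo.ipa",   # hypothetical demo build
}

def mute_then_leave(caps):
    """One shared GUI case: toggle local audio, then leave the channel."""
    # Appium Python client 2.x style; client 3+ passes an Options object instead.
    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
    try:
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "btn_local_audio").click()
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "btn_leave_channel").click()
    finally:
        driver.quit()

if __name__ == "__main__":
    for caps in (ANDROID_CAPS, IOS_CAPS):
        mute_then_leave(caps)
```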

3.0 API Demo Test

Phase 3.0 is the API Demo Test. The test subject at Soundnet is the SDK, which focuses on API functional implementation, platform adaptation, being developer-oriented, performance, power consumption and package size, and integrated build packages; APPs focus on business functions, user interaction, end users, interface manipulation, and application installation. Given these two completely different test emphases of the SDK and the APP, Soundnet redesigned an automated test framework for the SDK: the Wayang Testframework.

Wayang takes its name from an Indonesian puppet show in which strings and dexterous hands control the puppets' movements. The system has three kinds of objects: on the left is the Test Client, in the middle the Test Server, and on the right the corresponding test demo. The Test Client is like the performer behind the puppet show, who defines the test requirements and designs the corresponding test cases. The test demo is like the puppet at the front of the stage: it makes calls based on continuous requests from the test side, and all interface calls, active or passive, together with their callbacks are driven from the code side, regardless of how the interface is implemented. Compared with automated 2.0 and manual 1.0, more than 100 API tests can be completed every day, and automated test coverage can exceed 80%.
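
The exact Wayang protocol is not described in the talk, but a minimal sketch of the idea might look like the following, assuming the Test Server exposes a simple HTTP endpoint and JSON message shape (both hypothetical) through which the Test Client asks the demo to invoke one SDK API and get back the reported callbacks:

```python
import requests  # assumes the Test Server exposes a plain HTTP endpoint (hypothetical)

WAYANG_SERVER = "http://127.0.0.1:8080"   # hypothetical Test Server address

def call_api(device_id, api, params=None, timeout=10):
    """Ask the Test Server to have the demo on `device_id` invoke one SDK API,
    then return the result plus whatever callbacks the demo reported back."""
    payload = {"device": device_id, "api": api, "params": params or {}}
    resp = requests.post(f"{WAYANG_SERVER}/invoke", json=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json()   # e.g. {"result": 0, "callbacks": [...]}

def test_join_and_mute():
    """A test case is just a sequence of API invocations plus assertions on the replies."""
    assert call_api("android-01", "joinChannel", {"channel": "wayang-demo"})["result"] == 0
    assert call_api("android-01", "enableLocalAudio", {"enabled": False})["result"] == 0
    call_api("android-01", "leaveChannel")

if __name__ == "__main__":
    test_join_and_mute()
    print("wayang smoke case passed")
```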

4.0 AIO

After the Wayang practice was completed, Soundnet kept considering whether further optimization was possible. As product delivery iterations get tighter and more requirements get involved, it is not enough to think about testing only from a testing perspective; you need to think about the entire product delivery: pre-build, package, test, and deliver. Therefore, the concept of AIO is introduced here.

AIO is a combination of a box garden and a sandbox. So what are the box and the sandbox? The box provides the infrastructure, the equivalent of the traps, enemies, or treasure chests in a game, which can be captured or defeated as you choose. The sandbox means opening up the smallest atomic unit to the user; examples include Minecraft and Lego, where the smallest unit is a single block. Soundnet's testing unit is the API, so across the whole delivery, the box is responsible for keeping the infrastructure (network, power, test environment) running stably, and the sandbox is divided into three parts: build, test, and launch.

Soundnet's integrated AIO architecture includes a series of corresponding modules.

The AIO architecture includes a device cluster: because delivery on different platforms necessarily covers a wide variety of situations, compatibility between devices needs to be considered. The dispatch center ensures that all devices run on time with the expected settings, which is why a service gateway is needed. The data center analyzes the explicit log output of SDK artifacts. The last part is build and release: the ACCS platform includes compilation, publishing, crash reporting, data analysis, automated testing, and other functional modules. Beneath them, the basic capabilities represent the lower-level elements, such as link simulation, physical connection control, and human-computer interaction.
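
As a rough sketch of the dispatch center's job described above (assigning queued test jobs to idle devices of the matching platform), assuming a toy in-memory model rather than the real service:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Device:
    serial: str
    platform: str
    busy: bool = False

class DispatchCenter:
    """Toy model of the dispatch center: match queued test jobs (name, platform)
    to idle devices of the same platform, keeping unmatched jobs queued."""
    def __init__(self, devices):
        self.devices = devices
        self.pending = deque()

    def submit(self, name, platform):
        self.pending.append((name, platform))

    def schedule_once(self):
        assigned, still_waiting = [], deque()
        while self.pending:
            name, platform = self.pending.popleft()
            idle = next((d for d in self.devices if not d.busy and d.platform == platform), None)
            if idle:
                idle.busy = True
                assigned.append((name, idle.serial))
            else:
                still_waiting.append((name, platform))
        self.pending = still_waiting
        return assigned

if __name__ == "__main__":
    center = DispatchCenter([Device("A1", "android"), Device("I1", "ios")])
    center.submit("api_smoke", "android")
    center.submit("api_smoke", "ios")
    print(center.schedule_once())   # [('api_smoke', 'A1'), ('api_smoke', 'I1')]
```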

Returning to the Wayang feature: you need a client that corresponds to a demo, and the Client, as the performer, knows what needs to be done and directs the demo to do it. Based on this, Soundnet made further improvements. Through the API Driven Test, Soundnet built an independent soloWayang app, in which a test iterator generator continuously produces test API calls. soloWayang runs on all devices through test-farm-based concurrent testing, so every corresponding API is exercised to ensure that problems are found and handled.
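
A minimal sketch of the "test iterator" idea behind soloWayang: a generator keeps producing API invocations that the on-device app would execute one after another. API names and parameter choices here are illustrative, not the SDK's full surface:

```python
import random

# API names and parameter choices are illustrative; the real soloWayang app
# iterates over the SDK's own API surface.
API_POOL = [
    ("enableLocalAudio",   [{"enabled": True}, {"enabled": False}]),
    ("enableLocalVideo",   [{"enabled": True}, {"enabled": False}]),
    ("startScreenCapture", [{}]),
    ("leaveChannel",       [{}]),
]

def api_iterator(seed=None, max_calls=None):
    """Yield an endless (or bounded) stream of (api, params) pairs to execute on-device."""
    rng = random.Random(seed)
    produced = 0
    while max_calls is None or produced < max_calls:
        api, choices = rng.choice(API_POOL)
        yield api, rng.choice(choices)
        produced += 1

if __name__ == "__main__":
    for api, params in api_iterator(seed=42, max_calls=5):
        print(f"invoke {api} with {params}")
```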

The test stage generates a large amount of data, including SDK data, demo data, test data, and server data. How can this data be mined proactively in a reasonable and effective way?

In the traditional model, the value of data lies in analyzing it after something goes wrong. Reversing that thinking, data collection and pre-analysis can proactively identify and address risks before they are exposed. The Soundnet data analysis platform cleans the data from different platforms through Beats and Logstash to eliminate invalid information, and Kibana can list the corresponding problems through appropriate filtering. For example, after running 400 devices overnight, a log anomaly is found on one device. Kibana can give an early warning and quickly show whether the problem really exists on only that one device or is common across several machines. In the past, when mining the data of a few devices manually, it was hard to tell whether a problem was related to a certain system, a certain chip, or a specific network scenario. Through reasonable filtering on the data analysis platform, problems can be found effectively and solved as soon as possible by pulling together all kinds of evidence.
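
Since the pipeline uses Beats, Logstash, and Kibana, the cleaned logs presumably land in Elasticsearch. The sketch below (index and field names are assumptions) queries Elasticsearch's REST search API to count matching anomaly lines per device, which is how a one-off failure can be separated from a systemic one:

```python
import requests  # talk directly to Elasticsearch's REST search API

ES_URL = "http://localhost:9200"
INDEX = "sdk-device-logs-*"   # hypothetical index written by Logstash

def error_counts_by_device(keyword="ANR", top_n=50):
    """Count log lines matching `keyword` per device, to tell a one-off anomaly
    from a problem shared across many machines."""
    query = {
        "size": 0,
        "query": {"match": {"message": keyword}},
        "aggs": {"by_device": {"terms": {"field": "device_id.keyword", "size": top_n}}},
    }
    resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=30)
    resp.raise_for_status()
    buckets = resp.json()["aggregations"]["by_device"]["buckets"]
    return {b["key"]: b["doc_count"] for b in buckets}

if __name__ == "__main__":
    for device, count in sorted(error_counts_by_device().items(), key=lambda kv: -kv[1]):
        print(f"{device}: {count} matching log lines")
```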

Q: For mobile APP testing, if hundreds of mobile phones need to be connected at the same time to build a performance test environment, the capacity of a single computer is limited and may hit its limit with a dozen or so phones connected. How can the environment be scaled horizontally?

A: For Android phones, we had an early node connected to 30 Android devices at the same time, which proved feasible; it is suggested to confirm the configuration of the node and its peripherals. More device connections can be handled by deploying the test farm as a cluster. In addition, independent performance-testing components can be designed in the corresponding test app, which facilitates horizontal expansion of performance testing.
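
As a rough sketch of how a single node in such a cluster could drive its attached Android devices concurrently with adb (the instrumentation package name is hypothetical, and per-node capacity should be tuned to the host):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

MAX_DEVICES_PER_NODE = 30   # one node driving 30 Android devices proved feasible; tune per host

def list_android_devices():
    """Return serials of devices currently attached to this node via adb."""
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True).stdout
    return [line.split()[0] for line in out.splitlines()[1:] if line.strip().endswith("device")]

def run_case_on_device(serial):
    """Run the (hypothetical) on-device performance test app on one phone."""
    cmd = ["adb", "-s", serial, "shell", "am", "instrument", "-w",
           "io.example.perftest/androidx.test.runner.AndroidJUnitRunner"]
    return serial, subprocess.run(cmd, capture_output=True, text=True).returncode

if __name__ == "__main__":
    devices = list_android_devices()[:MAX_DEVICES_PER_NODE]
    with ThreadPoolExecutor(max_workers=max(len(devices), 1)) as pool:
        for serial, code in pool.map(run_case_on_device, devices):
            print(f"{serial}: {'ok' if code == 0 else 'failed'}")
```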
