Author: A Xiang

Preface

The Jingxi project (formerly JD Shopping), a strategic business of JD.com, is a traffic entrance serving tens of millions of users. To keep online services running stably, front-end disaster recovery (DR) drills are carried out monthly, covering both the mini program and the H5 version. Under abnormal conditions, every module on every page is required to degrade gracefully, avoiding blank windows, broken styles, and unreasonable error messages. In the original DR drill process, the mini program (switching the communication mode to HTTPS) and the H5 page modified interface responses through Whistle to simulate abnormal situations, and testers verified that the degradation behavior of each module on each page met expectations. A DR drill is long-term, continuous work covering many page functions and scenarios, and manually switching scenarios to simulate exceptions made the drills inefficient. We therefore developed automated test tools to improve R&D efficiency and make DR drills easy to run at any time. Because the Jingxi H5 and mini program scenarios differ considerably, the road to automated testing is split into two parts, H5 and mini program, with H5 as the starting point.

To sum up, we hope that the Jingxi H5 automated test tool can provide the following functions:

  1. Visit the target page and take screenshots of the page;
  2. Set the UA (simulate different channels: WeChat, mobile QQ, other browsers, etc.);
  3. Simulate user clicks and page-sliding operations;
  4. Intercept network requests and simulate abnormal situations (interface response code 500, abnormal interface response data);
  5. Manipulate cached data (simulate scenarios with and without cache, etc.).

Technology selection

When it comes to automated testing on the Web, many people are familiar with Selenium 2.0 (Selenium WebDriver), which supports multiple platforms, languages, and browsers (driven by various browser drivers) and provides a rich API. With the development of front-end technology, however, Selenium 2.0 has gradually shown its disadvantages: complex environment installation, unfriendly API calls, and low performance. Puppeteer, a new-generation automation tool, is simpler to install, performs better, and runs more efficiently than the Selenium WebDriver environment. It also provides a simpler API for executing JavaScript in the browser, as well as network interception.

Puppeteer is a Node library that provides a high-level API for controlling Chromium or Chrome via the DevTools Protocol. Puppeteer runs in headless mode by default, but can be run in headful (windowed) mode by modifying the launch configuration.

Features of official description:

  • Generate screenshots and PDFs of pages;
  • Grab SPA (single page application) and generate pre-rendered content (i.e., “SSR”, server-side rendering);
  • Automatic form submission, UI testing, keyboard input, etc.
  • Create an up-to-date automated test environment that executes tests directly in the latest version of Chrome, using JavaScript and the latest browser capabilities;
  • Capture the site's timeline trace to help analyze performance issues;
  • Test the browser extension.

Puppeteer provides a way to launch a Chromium instance. When Puppeteer connects to a Chromium instance, a Browser object is created via puppeteer.launch or puppeteer.connect, and a Page instance is created from the Browser; the script then navigates to a URL and saves a screenshot. A Browser instance can hold multiple Page instances. The following is a typical example of Puppeteer automation:

const puppeteer = require('puppeteer');
puppeteer.launch().then(async browser => {
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'screenshot.png'});
  await browser.close();
});

To sum up, we chose Puppeteer to develop the automated test tool for the Jingxi homepage DR drills. Through the APIs Puppeteer provides, the process of visiting the target page, simulating abnormal scenarios, and generating screenshots is automated. Finally, the screenshots are compared manually to judge whether the page degradation behavior meets expectations and the user experience is friendly.

Implementation scheme

We divided the DR drill into an automated process and manual operations.

Automated processes:

  1. Simulate user page-visit operations;
  2. Intercept network requests, modify interface response data, and simulate abnormal scenarios (interface returning 500, abnormal data, etc.);
  3. Generate screenshots.

Manual operation:

After the automated script has run, the screenshots of each scenario are compared manually to determine whether they meet expectations.

Solution flow chart:

Development

Installing Puppeteer, and things you might encounter

After initializing the project with npm init, the Puppeteer dependency can be installed in one of the following ways:

npm i puppeteer: automatically downloads the latest version of Chromium during installation.

or

npm i puppeteer-core: does not download Chromium during installation (so screenshots cannot be generated until a browser executable is supplied).

npm i --save puppeteer --ignore-scripts: prevents the Chromium download; Chromium is then downloaded manually.

After the manual download, you need to configure the executable path in index.js:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    // Path to the Chromium or Chrome executable (relative path)
    executablePath: './chrome-mac/Chromium.app/Contents/MacOS/Chromium',
    headless: false
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'screenshot.png'});
  await browser.close();
})();

Quickly create test cases

In order to improve the maintainability and extensibility of the test scripts, we put the test case information into JSON configuration files, so that when writing a test script we only need to focus on implementing the test flow.

The test case JSON configuration includes public (global) data and private data:

Public (global) data: data required by every test case, such as the address, name, and description of the target page to visit, and the simulated device type.

Private data: data specific to each test case, such as test module information, API address, test scenario, expected results, screenshot names, etc.

{
  "global": {
    "url": "https://wqs.jd.com/xxx/index.shtml",
    "pageName": "index",
    "pageDesc": "Home page",
    "device": "iPhone 7"
  },
  "homePageApi": {
    "id": 1,
    "module": "home_page_api",
    "moduleDesc": "Home page main interface",
    "api": "https://wqcoss.jd.com/xxx",
    "operation": "Simulate response code 500",
    "expectRules": [
      "1. Display abnormal information and refresh button",
      "2. Click the refresh button to display abnormal information",
      "3. Restore the network, click the refresh button to display normal data"
    ],
    "screenshot": [
      {
        "name": "normal",
        "desc": "Normal scenario"
      },
      {
        "name": "500_cache",
        "desc": "Cached - return 500"
      },
      {
        "name": "500_no_cache",
        "desc": "No cache - return 500"
      },
      {
        "name": "500_no_cache_reload",
        "desc": "No cache - return 500 - click refresh button"
      },
      {
        "name": "500_no_cache_recover",
        "desc": "No cache - return 500 - restore network"
      }
    ]
  },
  ...
}
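As an illustration only (this helper is not part of the actual tool), such a configuration could be consumed by merging the global fields into every private test case:

```javascript
// Sketch: split the config into the shared `global` block and the private
// cases, then merge the shared fields into each case object.
function casesFromConfig (config) {
  const { global: shared, ...cases } = config
  return Object.values(cases).map(c => ({ ...shared, ...c }))
}
```

Each resulting case object then carries both its own module/api fields and the shared url, pageName, and device values.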

Writing test scripts

We take the main-interface test case of the Jingxi home page as an example: by making the module's interface return a 500 response code, we verify whether the main interface's exception handling mechanism is complete and the user experience is friendly.

Expected effect:

  • If there is a cache, the cached data is displayed
  • If there is no cache, abnormal information and a refresh button are displayed
  • Clicking the refresh button (while still abnormal) displays the abnormal information again
  • After the network is restored, clicking the refresh button displays normal data

Test process:

Scenario implementation:

According to the test process and the configured test case information, write test scripts that implement the test case scenarios:

  1. Access the page

await page.goto(url)
  2. Generate screenshots

await page.screenshot({
  path: './screenshot/index_home_page_500.png'
})
  3. Intercept interface requests

async function test () {
  ...
  // Create a Page instance and visit the home page
  await page.setRequestInterception(true) // enable request interception
  page.on("request", interceptionEvent)   // listen for the request event, which fires when a request is issued
  ...
  // Refresh the page to trigger the interception and generate the test-scene screenshots
}

If a test case needs to intercept different requests or simulate multiple scenarios, multiple request listeners must be set up. After one event handler has served its scenario, you must remove its listener before adding the next one.

Add an event listener: page.on("request", eventFunction)

Remove an event listener: page.off("request", eventFunction)

// Enable request interception
await page.setRequestInterception(true)

// Add the listener for scenario 1 (mockBody1 is the scenario-specific mock data)
const iconInterception1 = requestInterception(api, "body", mockBody1)
page.on("request", iconInterception1)
await page.goto(url)
await page.screenshot({
  path: './screenshot/1.png'
})
// Remove the listener for scenario 1
page.off("request", iconInterception1)

// Add the listener for scenario 2
const iconInterception2 = requestInterception(api, "body", mockBody2)
page.on("request", iconInterception2)
await page.goto(url)
await page.screenshot({
  path: './screenshot/2.png'
})
// Remove the listener for scenario 2
page.off("request", iconInterception2)
  4. Simulate abnormal data scenarios by generating mock data.
function requestInterception (api, setProps, setValue) {
  let mockData
  switch (setProps) {
    case "status":
      mockData = {
        status: setValue
      }
      break
    case "contentType":
      mockData = {
        contentType: setValue
      }
      break
    case "body":
      mockData = {
        body: getMockResponse(setValue)
      }
      break
    default:
      break
  }
  return async req => {
    // If this is an API that needs to be intercepted, replace the returned
    // data via req.respond(mockData); otherwise let the request continue
    if (req.url().includes(api)) {
      req.respond(mockData) // respond with the mock data
      return false          // once a request has been handled, exit without continuing it
    }
    req.continue()
  }
}

Simulating an interface returning 500:

  const interception500 = requestInterception(api, 'status', 500)
  page.on("request", interception500) // fires when a request is issued

Simulating abnormal data:

 const iconInterception = requestInterception(api, "body", { 
     "data": {
       "modules": [{
          "tpl": "3000",
          "content": []
        }]
      }
 })
 page.on("request", iconInterception)
 

There are two ways to generate mock data; choose depending on the situation:

  • Generate mock data by directly modifying the actual data returned by the interface; this requires first obtaining the interface's real-time response
  • Store a complete copy of the interface data locally and generate mock data by modifying this locally stored data (the examples in this article are based on this scheme)

If you choose the first solution, you need to intercept the interface request, obtain the interface's real-time response data, and modify that data according to the test scenario to produce the mock data.
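The modification step of this first approach is a pure transformation of the live JSON. A minimal sketch, assuming the modules structure shown in the mock example above (the function name is illustrative, not part of the actual tool):

```javascript
// Illustrative: given the real-time JSON returned by the interface, produce
// scenario-specific mock data by emptying a matching module's content.
function emptyModuleContent (realData, tpl) {
  const mock = JSON.parse(JSON.stringify(realData)) // deep copy, keep the original intact
  for (const m of mock.data.modules) {
    if (m.tpl === tpl) m.content = [] // simulate the "abnormal data" scenario
  }
  return mock
}
```

The transformed object is then what the interception handler would pass to req.respond().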

Because the Jingxi H5 page's interfaces return data in JSONP format, the JSONP callback name must be extracted from the intercepted request before the simulated data is returned.
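For example, the callback extraction and wrapping can be sketched as two small helpers (names are illustrative; the actual getUrlParam / getResponseMockLocalData implementations may differ):

```javascript
// Extract a query parameter (e.g. the JSONP callback name) from a request URL.
function getUrlParam (name, url) {
  return new URL(url).searchParams.get(name) || ''
}

// Wrap mock data in the JSONP callback so the page's handler fires normally.
function wrapJsonp (callback, data) {
  return `${callback}(${JSON.stringify(data)})`
}
```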

 function requestInterception (api, setProps, setValue) {
    let mockData
    switch (setProps) {
      case "status":
        mockData = {
          status: setValue
        }
        break
      case "contentType":
        mockData = {
          contentType: setValue
        }
        break
      default:
        break
    }
    return async req => {
      if (req.url().includes(api)) {
        if (setProps === "body") {
          const callback = getUrlParam("callback", req.url()) // get the JSONP callback name
          mockData = {
            body: getResponseMockLocalData(localData, setValue, callback, api) // generate mock data from the locally stored copy
          }
        }
        req.respond(mockData) // set the returned data
        return false
      }
      req.continue()
    }
  }
  5. Clear the cache

await page.evaluate(() => {
    try {
      localStorage.clear()
      sessionStorage.clear()
    } catch (e) {
      console.log(e)
    }
})
  6. Click the refresh button

// waitFor accepts a CSS selector, or a wait time in milliseconds
await page.waitFor(".page-error__refresh-btn")
await page.click(".page-error__refresh-btn")

Before simulating the refresh-button click, wait for the button to finish rendering, then trigger the click. (This prevents errors caused by the DOM element not being found before rendering completes after a page refresh.)

  7. Cancel the interception and restore the network

await page.setRequestInterception(false)

Run scripts and debug

Since the test tool has not been platformized in this first stage, the automated test flow is started by typing commands in the terminal to run the script.

In your project’s package.json file, use the scripts field to define the script command:

 "scripts": {
    "test:real": "node ./pages/index/index.js",
    "test:mock": "node ./pages/index-mock/index.js"
  },

Run:

To start the test tool and run a test script, open the terminal in the project root directory and enter one of the following commands:

npm run test:real // test against the actual data returned by the interface
npm run test:mock // test against local mock data

Debug:

Before starting debug mode, you need to know about Headless Chrome.

Headless Chrome lets you run test scripts from the command line without opening a browser window. You can perform all user operations just as in a real browser, without worrying about external interference or needing any display device, which makes automated testing more stable.

Puppeteer runs in headless mode by default.

To enable debug mode, headless mode must be turned off so the automated test runs with the browser window open. Therefore, "cancel headless mode" and "open developer tools" parameters were added to the npm script command, and the test script decides whether to enable debug mode based on the parameters it receives.

const headless = process.argv[2] !== 'head' // cancel headless mode when 'head' is passed
const devtools = process.argv[3] === 'dev'  // open DevTools when 'dev' is passed
const browser = await puppeteer.launch({
  executablePath: browserPath,
  headless,
  devtools
})

To enable debug mode and run the test script, open the terminal in the project root directory and enter:

npm run test:mock head     // open the Chromium window
npm run test:mock head dev // open the Chromium window and the developer tools window
  • head parameter: cancels headless mode and runs the script with the Chromium window open;
  • head dev parameters: runs the script with the Chromium window open and also opens the DevTools window, i.e. debug mode.
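The parameter handling can be isolated into a pure function, which makes the logic above easy to unit-test (the function name is an assumption, not part of the actual script):

```javascript
// Map the CLI arguments to Puppeteer launch options:
// `head` cancels headless mode; `head dev` additionally opens DevTools.
function launchOptionsFromArgs (argv) {
  const headless = argv[2] !== 'head'
  return {
    headless,
    devtools: !headless && argv[3] === 'dev'
  }
}
```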

Test results

Manual comparison screenshot results:

Example of running a script:

More test scenario implementations

1. Capture the area from the top of the page down to a specified DOM element (the content may be longer than one screen)

Puppeteer provides four ways to take screenshots:

(1) Capture one screen of content (the default screenshot);
(2) Capture a specified DOM element;
(3) Capture the full page;
(4) Specify a clipping area: x, y, width, and height can be set, where x and y are relative to the top-left corner of the page. However, only one screen of content can be captured; anything beyond one screen is cut off.

Our approach is adapted from the fourth method:

  1. Get the x and y coordinates of the specified DOM element through the native JavaScript getBoundingClientRect() method;
  2. Reset the viewport height with page.setViewport();
  3. Call the screenshot API to generate a screenshot.
async function screenshotToElement (page, selector, path) {
    try {
      await page.waitForSelector(selector)
      let clip = await page.evaluate(selector => {
        const element = document.querySelector(selector)
        let { x, y, width, height } = element.getBoundingClientRect()
        return {
          x: 0,
          y: 0,
          width: Math.ceil(width),
          height: Math.ceil(y) // capture from the page top down to the top of the element (rounded up)
        }
      }, selector)
      await page.setViewport(clip)
      await page.screenshot({
        path: path,
        clip: clip
      })
    } catch (e) {
      console.log(e)
    }
  }
  • height: y: capture down to the top of the specified DOM element, excluding the element;
  • height: y + height: capture down to the bottom of the specified DOM element, including it;
  • The position and size values returned by the native JavaScript getBoundingClientRect() method may be decimals, while Puppeteer's setViewport() method does not support decimals, so the DOM position information obtained needs to be rounded.
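The rounding caveat can be captured in a small helper (illustrative; flooring the origin and ceiling the size keeps the whole element inside the clip):

```javascript
// setViewport() rejects fractional values, so normalize the rect returned by
// getBoundingClientRect() to integers before handing it to Puppeteer.
function toIntClip (rect) {
  return {
    x: Math.floor(rect.x),
    y: Math.floor(rect.y),
    width: Math.ceil(rect.width),
    height: Math.ceil(rect.height)
  }
}
```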

2. Simulate different channels, such as the mobile QQ scenario:

// Set the UA
await page.setUserAgent("Mozilla/5.0 (iPhone; CPU iPhone OS 10_2_1 like Mac OS X) AppleWebKit/602.4.6 (KHTML, like Gecko) Mobile/14D27 QQ/6.7.1.416 V1_IPH_SQ_6.7.1_1_APP_A Pixel/750 Core/UIWebView NetType/4G QBWebViewType/1")

3. Scroll

 await page.evaluate((top) => {
    window.scrollTo(0, top)
 }, top)

page.evaluate(pageFunction, ...args): executes JavaScript code in the context of the current page instance.

4. Listen for page crashes

// Triggered when the page crashes
page.on('error', (e) => {
    console.log(e)
})

Conclusion

The first stage of H5 automation has come to an end, and the DR drill is now semi-automated: test scripts run from the terminal simulate abnormal scenarios and automatically generate screenshots, which are then compared manually to judge whether the drill results meet expectations. The tool is already used in the monthly DR drills.

As the Jingxi business iterates, pages will be updated and redesigned, so the test cases need continuous maintenance. In the future, the automation tools will keep being optimized: sharing test scripts, automatically comparing screenshot results, storing data, converting test results into documents, sending emails automatically, and so on. Starting from DR drills, automated testing can also be extended to monitoring ad slots and data reporting...

The road of automated testing for the Jingxi home page is far from over; there are still many places to optimize and extend. In the next stage we will continue to improve the automated test tools, so stay tuned!

Links

Puppeteer


Welcome to the AOTU Labs blog: aotu.io

Or follow the AOTU Labs WeChat public account (AOTULabs), which pushes articles from time to time.