1 Introduction

This paper studies four questions: what automated testing is, why we should automate testing, which projects are suitable for it, and how to carry it out. In the process of answering these four questions, a complete front-end automated testing scheme is assembled, covering unit testing, interface testing, functional testing, and benchmarking.

2 What is automated testing

Wikipedia defines it this way:

In software testing, test automation is a testing method that uses dedicated software to control the execution of tests and to compare actual results with expected results. By automating tests, the necessary steps of a formal testing process can be repeated, and tests that are difficult to perform manually can be handed over to software.

The greatest advantage of test automation is the ability to test quickly and repeatedly.

To sum up: automated testing means having software, instead of a human, carry out the tests, so that they can be run quickly and repeatedly.

There is an automated testing pyramid that divides testing, from top to bottom, into UI (user interface) tests, Service tests, and Unit tests. As the pyramid shows, the lower a layer sits, the more efficient the tests are, the stronger the quality guarantee they provide, and the lower their cost. How should we understand this? The UI of a front-end project changes frequently; once it changes, UI test cases can no longer be executed or maintained, so UI automation costs much and yields little. Compared with UI testing, Service testing is simpler, more direct, and changes less frequently. Unit testing mainly targets common functions and methods; its test cases are highly reusable and best guarantee code quality.

  • Unit testing
  • Interface testing
  • Functional testing
  • Benchmarking

3 What are the benefits of implementing automated testing

The most important purpose of testing is to verify code correctness and ensure project quality. For example, suppose one day I write a function with complex logic that is called in many places. A month later I have forgotten its specific logic, but a new requirement forces me to add a few features and modify its code. How do I make sure that my change does not break the other callers? Or, how can I quickly find out which places are affected and which are not? The answer is automated testing: run the test cases.

If we don’t automate testing, how do we verify that the code is correct? The usual manual methods a front-end developer uses are the console, alert, breakpoints, and logging. But manual testing is a one-off: the next time someone changes the code’s functionality, we have to test by hand all over again, and full coverage is hard to guarantee. With automated testing, however, test cases written once can be reused and run many times. If the test cases are well written and semantic, developers can also quickly understand project requirements by reading them. Implementing automated tests further drives developers to make better abstractions in code design and to write testable code. Taking the testing of common utility methods as an example, the methods under test should have no side effects: they should neither depend on external variables nor mutate global state.
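As a hedged illustration (the function names here are hypothetical, not from the project under discussion), a pure function is far easier to test than one that leans on global state:

```javascript
// Hypothetical example: a function that mutates module-level state is hard
// to test, because each call depends on what ran before it.
let total = 0
const impureAdd = (n) => {
  total += n // side effect: mutates external state
  return total
}

// A pure function always maps the same inputs to the same output,
// so a single assertion verifies it with no setup or teardown.
const pureAdd = (a, b) => a + b

console.log(impureAdd(1)) // 1 on the first call...
console.log(impureAdd(1)) // ...but 2 on the second: same input, different output
console.log(pureAdd(2, 3)) // always 5
```

The impure version needs its global reset between test cases; the pure version can be asserted in isolation.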

To summarize, there are four benefits to implementing automated testing:

  • Can verify the correctness of the code, ensure the quality of the project

  • Test cases can be reused, written once and run many times

  • You can quickly understand requirements by looking at test cases

  • Drive development, direct design, and ensure that the code written is testable

4 What kind of project is suitable for automated testing

If automated testing is so good, is it appropriate for every project? No, because it has costs. Before implementing automated testing, the software development process should be analyzed to judge, based on input and output, whether automation is worthwhile. A project implementing automated testing usually needs to meet all of the following conditions:

  • Requirements change infrequently
  • The project cycle is long enough
  • Automated test scripts can be reused
  • The code is standardized and testable

If requirements change too frequently, the cost of maintaining test scripts becomes too high; if the project cycle is short, there is not enough time for automated testing to pay off; if test scripts are rarely reused, the effort spent outweighs the value created; and if the code is non-standard and hard to test, automated testing is difficult to implement.

5 How to do automated testing

5.1 Original test method

For example, suppose we have the following sum method:

const sum = (a, b) => { return a + b }

How do we prove that the sum method is correct? We usually test it with code like the following:

// test/util.test.js
const sum = (a, b) => { return a + b }
if (sum(1, 1) === 2) {
  console.log('sum(1,1)===2 passed')
} else {
  console.log('sum(1,1)===2 failed')
}

The console output after executing the test code is as follows

The test result is correct. Now suppose we change the sum method by adding 1 to the result:

const sum = (a, b) => { return a + b + 1 }

The console output after executing the test code is as follows

Although this output hints at the error, there is no obvious distinction between a correct and an incorrect result, so the test conclusion is not intuitive. We therefore modify the test code:

// test/util.test.js
const sum = (a, b) => { return a + b + 1 }
if (sum(1, 1) === 2) {
  console.log('sum(1,1)===2 passed')
} else {
  throw new Error('sum(1,1)===2 failed')
}

When this code executes, it actively throws an error if the method's result is not as expected.

In this way, we can see the test result more intuitively. As a further refinement, we use the assertion module provided by Node.js to write the test:

const assert = require('assert')
const sum = (a, b) => { return a + b + 1 }
assert.equal(sum(1, 1), 2)

The console results after executing the test code are as follows

The output is similar to before: an error is actively thrown when the result is not as expected. assert achieves the same effect with less code and clearer semantics.

5.2 Using a testing framework

The above method can complete the testing, but is there a better way? When we develop projects, we usually use frameworks and libraries: frameworks constrain code style and ensure maintainability and extensibility, while utility libraries improve development efficiency. Likewise, when implementing automated testing, we choose testing frameworks and libraries. The popular front-end testing frameworks on the market, such as Mocha, QUnit, Jasmine, and Jest, are briefly introduced below.

A framework can output a more intuitive test report for us, such as the following, showing both passing and failing results.

You can also output test reports with a document structure, such as the following

5.3 Technology selection for the test scheme

The technical selection of the automated test scheme discussed in this paper is as follows:

  • Test framework: Mocha
  • Assertion library: Chai
  • Test report: Mochawesome
  • Test coverage: Istanbul
  • Test browser: Chrome
  • Browser driver: Selenium-WebDriver/Chrome
  • HTTP request assertions for interface tests: supertest
  • React component testing: Enzyme
  • Benchmarking: Benchmark.js

Mocha was chosen because it:

  • Lean, flexible, and highly extensible
  • A mature community with many users
  • Sample test cases of all kinds can be found in the community

Here’s a test case to see what Mocha is capable of:

From it you can see Mocha's four core capabilities:

  • Test case grouping
  • Lifecycle hook
  • Compatible with different style assertions
  • Support for both synchronous and asynchronous tests

The describe block in the code is called a "test suite" and represents a group of related tests. It is a function whose first parameter is the name of the test suite ("Test sum method") and whose second parameter is the function actually executed. Grouping keeps test-case code structured and easy to maintain. The it block, called a "test case," represents a single test and is the smallest unit of testing. It is also a function whose first argument is the name of the test case ("1 plus 1 should equal 2") and whose second argument is the function actually executed.

Chai was chosen as the assertion library because it provides two styles of assertion: BDD style (behavior-driven development) and TDD style (test-driven development), where the BDD style is closer to natural language. It works freely and flexibly with Mocha. Below are the two assertion styles shown on the Chai website.

5.4 Test scheme code

Let’s start by sorting out the complete automated test solution. The overall directory structure is as follows:

5.4.1 Unit tests

(1) Unit test the following method

// /src/client/common/js/testUtil.js
export const sum = (a, b) => {
  return a + b
}

Write test cases

import { sum } from '../../src/client/common/js/testUtil.js'
const { expect } = require('chai')

describe('Unit test: sum(a, b)', function () {
  it('1 plus 1 should equal 2', function () {
    expect(sum(1, 1)).to.be.equal(2)
  })
})

// skip can mark a group to be skipped
describe.skip('Unit test: formatMoney(s, type), which separates amounts with thousands commas', function () { ... })

The test case is then executed using Mocha, with the following output

You can see that of the two test groups, one passed and the other was actively skipped. When executing the test cases with Mocha, because we specified the report format with the --reporter mochawesome parameter, the test report is also output in the following HTML format.
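The exact file layout and script names below are assumptions, but an npm scripts block wiring Mocha to the mochawesome reporter typically looks like this (nyc is Istanbul's command-line client, used for the coverage report discussed next):

```json
{
  "scripts": {
    "test": "mocha \"test/**/*.test.js\" --reporter mochawesome",
    "cover": "nyc --reporter=html mocha \"test/**/*.test.js\""
  }
}
```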

To analyze how much of the source code the current test cases cover, we use Istanbul to generate a test coverage report, which measures:

  • Statement coverage: whether every statement is executed
  • Branch coverage: whether every branch of each if block is executed
  • Function coverage: whether every function is called
  • Line coverage: whether every line is executed

These correspond to Statements, Branches, Functions, and Lines in the figure above. Click the links on the left to view per-file test details; the green parts indicate source code covered by tests.

Regarding test coverage, it is important to emphasize that coverage should not be used as the standard for judging project quality, only as a reference. The real value of code coverage is to help developers find problems in code design: why certain code is not covered by tests, whether the design is at fault, or whether useless code has been added. It guides us toward better abstractions and testable code.
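As a hypothetical illustration (this function is not from the project), here is how the four metrics apply to a single small function:

```javascript
// Hypothetical example: which coverage metric each part of this function affects.
const classify = (n) => {
  if (n > 0) {            // branch coverage: both the true and the false path must run
    return 'positive'     // statement/line coverage: this line must execute at least once
  }
  return 'non-positive'   // a suite calling only classify(1) leaves this line uncovered
}

// classify(1) alone gives 100% function coverage but only 50% branch coverage;
// adding classify(-1) covers the remaining branch and line.
console.log(classify(1), classify(-1))
```

A gap like the uncovered branch above is exactly the kind of design signal the paragraph describes: either the test is missing, or the branch is dead code.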

(2) React component test

There is now the following React component

// /src/client/components/Empty/index.jsx'
import React, { Component } from 'react'
import { Icon } from 'antd'

const Empty = (props) => {
  const placeholder = props.placeholder

  return (
    <div>
      <Icon type='meh-o' />
      <span>{placeholder || 'data is empty'}</span>
    </div>
  )
}

module.exports = Empty

Write test cases to test it:

import React from 'react'
import { expect } from 'chai'
import Enzyme, { mount, render, shallow } from 'enzyme'
import Adapter from 'enzyme-adapter-react-15.4' // Install the adapter that matches your React version
import Empty from '../../src/client/components/Empty/index.jsx'
import { spy } from 'sinon' // Wraps a function and records its calls

Enzyme.configure({ adapter: new Adapter() }) // First adapt Enzyme to the React version

describe('Test the React component: <Empty />', () => {
  it("When no property is passed in, the span text in the component is 'data is empty'", () => {
    const wrapper = render(<Empty />)
    expect(wrapper.find('span').text()).to.equal('data is empty')
  })

  it("When placeholder 'I am a placeholder' is passed in, the span text in the component is 'I am a placeholder'", () => {
    const wrapper = render(<Empty placeholder='I am a placeholder' />)
    expect(wrapper.find('span').text()).to.equal('I am a placeholder')
  })
})

Executing the test case with Mocha generates the following test report and the test passes

The test coverage is reported below

5.4.2 Interface Testing

Write test cases and implement the interface tests using supertest:

const request = require('supertest')
const { expect } = require('chai')
const BASE_URL = 'http://127.0.0.1:1990'

describe('Interface test: merchant login test case', function () {
  it('Login interface /api/user/login', function (done) {
    request(BASE_URL)
      .post('/api/user/login')
      .set('Content-Type', 'application/json') // Set the header content
      .send({ // Send the body content
        user_code: 666666, password: 666666
      })
      .expect(200) // Assert the expected HTTP status code
      .end(function (err, res) {
        // console.info(res.body) // The returned result
        expect(res.body).to.be.an('object')
        expect(res.body.data.user_name).to.equal('merchant AAAAA')
        done()
      })
  })
})

Execute the interface test case to generate the following test report

5.4.3 E2E testing

Write E2E test cases and use selenium-webdriver to drive the browser for functional testing:

const { expect } = require('chai')
const { Builder, By, Key, until } = require('selenium-webdriver')
const chromeDriver = require('selenium-webdriver/chrome')
const assert = require('assert')

describe('E2E test: end-to-end test cases for the merchant system', () => {
  let driver

  before(function () {
    // Executed before all test cases in this block
    driver = new Builder()
      .forBrowser('chrome')
      // Enable headless (no-UI) testing
      // .setChromeOptions(new chromeDriver.Options().addArguments(['headless']))
      .build()
  })

  describe.skip('Login-related traditional use cases - skip', function () { ... })

  describe('Log in to the merchant system', function () {
    this.timeout(50000)
    it('Login jump', async () => {
      await driver.get('http://dev.company.home.ke.com:1990/login') // Open the merchant login page
      await driver.findElement(By.xpath('//*[@id="root"]/div/div[2]/div/ul/li[1]/input')).sendKeys(666666) // Enter the user name
      await driver.findElement(By.xpath('//*[@id="root"]/div/div[2]/div/ul/li[2]/input')).sendKeys(666666) // Enter the password
      await driver.findElement(By.xpath('//*[@id="root"]/div/div[2]/div/div/button')).click() // Click the login button
      await driver.sleep(2000) // Wait for the page jump before reading the title
      const currentTitle = await driver.getTitle()
      expect(currentTitle).to.equal('Merchant Management System')
    })
  })

  after(() => {
    // Executed after all test cases in this block
    driver.quit()
  })
})

Executing the E2E test case using Mocha produces the following test report

The following figure shows selenium-webdriver automatically driving Chrome to perform the functional tests.

5.4.4 Benchmarking

Suppose we need to compare the performance of the regular expression test method and the string indexOf method. A naive approach is to run each method 1000 times and compare which takes longer.

// Compare the performance of reg.test and str.indexOf for checking whether a character is present in a string
const testPerf = (count) => {
  var now = new Date() - 1
  var i = count
  while (i--) {
    /o/.test('Hello World!')
  }
  console.log(`The test method executed ${count} times takes`, new Date() - 1 - now)
}

const indexOfPerf = (count) => {
  var now = new Date() - 1
  var i = count
  while (i--) {
    'Hello World!'.indexOf('o') > -1
  }
  console.log(`The indexOf method executed ${count} times takes`, new Date() - 1 - now)
}

testPerf(1000)
indexOfPerf(1000)

The test results are as follows. Because the code executes so quickly, both methods report 0 ms over 1000 runs, so their execution efficiency cannot be judged accurately this way.
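One stdlib-only workaround (a sketch; the run count of one million is arbitrary) is to raise the iteration count and use Node's high-resolution timer so that even a sub-millisecond operation accumulates a measurable duration:

```javascript
// Sketch: time many iterations with process.hrtime.bigint() so a fast
// operation still accumulates a measurable total duration.
const timeIt = (fn, runs = 1e6) => {
  const start = process.hrtime.bigint()
  for (let i = 0; i < runs; i++) fn()
  const ns = process.hrtime.bigint() - start
  return Number(ns) / 1e6 // total milliseconds for all runs
}

const regexMs = timeIt(() => /o/.test('Hello World!'))
const indexOfMs = timeIt(() => 'Hello World!'.indexOf('o') > -1)
console.log(`test: ${regexMs.toFixed(2)} ms, indexOf: ${indexOfMs.toFixed(2)} ms`)
```

This gives a rough comparison, but it still lacks the warm-up and statistical sampling that a dedicated benchmarking tool provides.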

A scientific statistical approach requires running the code many times and sampling the results of a large number of executions. We can let a tool do this for us; below is a test using Benchmark.js.

// Compare the performance of reg.test and str.indexOf for checking whether a character is present in a string
const Benchmark = require('benchmark')
const suite = new Benchmark.Suite()

suite
  // Add tests
  .add('Regular expression test method', function () {
    /o/.test('Hello World!')
  })
  .add('String indexOf method', function () {
    'Hello World!'.indexOf('o') > -1
  })
  // Add listeners
  .on('cycle', function (event) {
    console.log(String(event.target))
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'))
  })
  // Run asynchronously
  .run({ 'async': true })

When the test code is executed, the result shows that indexOf runs an order of magnitude more operations per second than test, so indexOf performs better.

6 Summary

Having worked through the concrete implementations of unit testing, interface testing, functional testing, and benchmarking, we can draw the following conclusions from the characteristics of automated testing:

Whether the front end should adopt automated testing must be judged by the characteristics of the specific project; automated tests are worthwhile for code that meets the following conditions:

  • Core function modules and functions
  • UI components that will not change in the short term
  • Interfaces provided for external calls
  • Methods whose performance needs benchmarking

Finally, it’s important to emphasize that automated testing is just one way to make your code robust, maintainable, and efficient.

7 Reference Materials

  • Test framework Mocha: www.ruanyifeng.com/blog/2015/1…
  • Code coverage: www.ruanyifeng.com/blog/2015/0…
  • selenium-webdriver: www.npmjs.com/package/sel…
  • Benchmark: github.com/bestiejs/be…
  • Front-end tools: ashleynolan.co.uk/blog/fronte…