This article shares some advanced tips for front-end testing and demonstrates, with plenty of demo code, how to test a fairly complex command line tool.
For details on each API used in the test cases, please refer to the official documentation.
Why do you need test cases?
Advantages
1. Code quality assurance & increased trust
Looking across GitHub as a whole, a mature tool library must have:
- Complete test cases (Jest/Mocha…)
- A friendly documentation system (official website/demos)
- Type declaration files (d.ts)
- A continuous integration environment (GitHub Actions/CircleCI…)
Without these elements, users will run into bugs they obviously never wanted to see, which makes it hard for them to accept your product.
The most important role of test cases is to improve code quality and give others the confidence to use the tool you are developing (the trust that grows from that confidence is crucial for software engineers).
In addition, test cases double as a ready-made debugging environment, and writing them gradually uncovers cases that were missed during requirements analysis.
2. A safety net for refactoring
Good test cases make a big difference when code needs a major overhaul.
Design the test cases as black-box tests: care only about inputs and outputs, not about what happens inside.
When refactoring, if the API ultimately exposed to users has not changed, the previous test cases can be reused with little or no modification.
Therefore, good test cases greatly boost refactoring confidence: you no longer need to worry that code changes will break the old functionality.
3. Make code readable
Reading test cases is an efficient way for developers to understand a project's source code.
Test cases can intuitively show the function of the tool and the behavior in various cases
In other words, test cases are “documentation” for software developers
```ts
// Vue.js test cases
// https://github.com/vuejs/core/blob/main/packages/reactivity/__tests__/computed.spec.ts#L24
it('should compute lazily', () => {
  const value = reactive<{ foo?: number }>({})
  const getter = jest.fn(() => value.foo)
  const cValue = computed(getter)

  // lazy
  expect(getter).not.toHaveBeenCalled()

  expect(cValue.value).toBe(undefined)
  expect(getter).toHaveBeenCalledTimes(1)

  // should not compute again
  cValue.value
  expect(getter).toHaveBeenCalledTimes(1)

  // should not compute until needed
  value.foo = 1
  expect(getter).toHaveBeenCalledTimes(1)

  // now it should compute
  expect(cValue.value).toBe(1)
  expect(getter).toHaveBeenCalledTimes(2)

  // should not compute again
  cValue.value
  expect(getter).toHaveBeenCalledTimes(2)
})
```
Disadvantages
Every coin has two sides. With the advantages covered, here are the disadvantages, to help you judge whether a project should have test cases.
Don’t have the time
It is common for developers not to write test cases, and the most common reason is that they feel too cumbersome:
"By the time I've written the test cases, the code could already be done. Tests? Leave them to QA, that's their job."
This attitude is completely understandable: when regular development time is already tight, how can you spare extra time for test cases?
So judge the value of writing test cases by the type of project.
I do not recommend writing test cases for projects with frequent UI changes and short life cycles, such as official websites and promotional activity pages.
They are generally time-sensitive, frequent changes to the page structure force frequent changes to the test cases, and QA resources are usually available to guarantee a certain level of project quality (self-testing is still necessary).
On the other hand, for tool libraries and component libraries, where functionality rarely changes and there are generally no QA resources, adding test cases is recommended once a certain number of users exists.
Can’t write
Writing test cases requires learning the syntax of a test framework, so there is a certain learning cost (and lack of time to learn is itself a reason they don't get written).
Fortunately, the mainstream testing frameworks are similar, their overall ideas converge, and they introduce few breaking changes. An average developer can get started in one week and become proficient in two, then apply the same skills to any front-end project (learn once, write everywhere).
Compared to the learning cost of Vue or React, and combined with the advantages above, isn't that a very good deal?
Having weighed the pros and cons, I'll now share some of my experience writing test cases for a complex command-line tool.
Run integration tests on command line tools
I test with integration tests. The difference from unit tests is granularity: an integration test is broader, a unit test is finer, and an integration test can be composed of multiple unit tests.
Since it's a command-line tool, the first thing to consider is how to simulate the user invoking it from the command line.
Run via the command line
My initial understanding of test cases was to simulate the user's original input as faithfully as possible, so my first idea was to run the command line tool directly inside the test case.
```js
// cli.js
const { program } = require("commander")

program.command('custom').action(() => {
  // do something for test
})

program.parse(process.argv)
```
```js
// index.spec.js
const execa = require("execa")

const cli = (argv = "") =>
  new Promise((resolve, reject) => {
    const subprocess = execa.command(`node cli.js ${argv}`)
    // print the child process output through the parent process
    subprocess.stdout.pipe(process.stdout)
    subprocess.stderr.pipe(process.stderr)
    Promise.resolve(subprocess).then(resolve, reject)
  })

test('main', async () => {
  await cli(`custom`)
  expect(...)
})
```
Run the tool in a child process, pipe the child process's output to the parent process, and assert whether the printed result matches expectations.
Advantages: closest to the user's real command line interface.
Disadvantages: because it depends on child processes, debugging performs very poorly once the debugger is attached. Test cases often time out, and errors may even be swallowed or irrelevant system errors printed.
Run via a function
The drawbacks of the previous approach were severe enough to force a search for another solution.
Since the Node.js command-line tool is implemented with Commander, a test case essentially just needs to make the action behind a command execute.
As mentioned in the Commander documentation, an action callback can be triggered by calling the parse method with command-line arguments.
So we expose a startup function called bootstrap that receives command line arguments and passes them to parse.
```js
// cli.js
const { program } = require("commander")

const bootstrap = (argv = process.argv) => {
  program.command('custom').action(() => {
    // do something for test
  })
  program.parse(argv)
  return program
}

module.exports = { bootstrap }
```
```js
// index.spec.js
const { bootstrap } = require('./cli.js')

test('main', async () => {
  const program = bootstrap(['node', 'cli.js', 'custom'])
  expect(program.commands.map(i => i._name)).toEqual(['custom']) // pass
  expect(...)
})

test('main2', async () => {
  const program = bootstrap(['node', 'cli.js', 'custom'])
  // fail, received ['custom', 'custom']
  expect(program.commands.map(i => i._name)).toEqual(['custom'])
  expect(...)
})
```
Advantages: test cases run directly in the current process with no child process; the debugger works fine, removing the performance bottleneck.
Disadvantages: the code has side effects. All test cases share the same program instance, so a single test case works in isolation, but multiple test cases may interfere with each other (as the failure in main2 above shows).
Run via a factory function
Learning from the previous attempt, we instead expose a factory function that generates a fresh command-line program.
```js
// cli.js
const { Command } = require("commander")

const createProgram = () => {
  const program = new Command()
  program.command('custom').action(() => {
    // do something for test
  })
  return program
}

module.exports = { createProgram }
```
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('main', () => {
  const program = createProgram()
  program.parse(['node', 'cli.js', 'custom'])
  expect(program.commands.map(i => i._name)).toEqual(['custom']) // pass
  expect(...)
})
```
This creates a separate program for each test case run, isolating the test cases from each other.
With command line tool initialization resolved, let's move on to a few special test scenarios for the command line.
Testing the help command
When you test the help command (--help, -h) or pass invalid command-line arguments, Commander prints the prompt text as an error log and calls process.exit to exit the current process.
This would end the test run prematurely, so the behavior needs to be overridden.
Commander provides the exitOverride method, which throws a JS error instead of exiting the process.
```js
// https://github.com/tj/commander.js/blob/master/tests/command.help.test.js
test('when specify --help then exit', () => {
  // Optional. Suppress normal output to keep test output clean.
  const writeSpy = jest
    .spyOn(process.stdout, 'write')
    .mockImplementation(() => {})
  const program = new commander.Command()
  program.exitOverride()
  expect(() => {
    program.parse(['node', 'test', '--help'])
  }).toThrow('(outputHelp)')
  writeSpy.mockClear()
})
```
After overriding the exit behavior, validating the text printed by the help command additionally requires configureOutput, also provided by Commander.
Then modify the test case:
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('help option', () => {
  const program = createProgram()
  // override exit
  program.exitOverride().configureOutput({
    writeOut(str) {
      // assert the message of the help command
      expect(str).toEqual(`Usage: index [options] [command]

Options:
  -V, --version  output the version number
  -h, --help     display help for command

Commands:
  custom
`)
    },
  })
  expect(() => {
    program.parse(['node', 'cli.js', '-h'])
  }).toThrow('(outputHelp)') // assert the behavior of the help command
})
```
Testing asynchronous cases
Command line tools can have asynchronous action callbacks, so the test cases need to support asynchronous scenarios.
The good news is that Jest supports asynchronous test cases out of the box. Again using the help command as the example:
```diff
// index.spec.js
+ test('help option', async () => {
  const program = createProgram()
  program.exitOverride().configureOutput({
    writeOut(str) {
      expect(str).toEqual(`Usage: index [options] [command]

Options:
  -V, --version  output the version number
  -h, --help     display help for command

Commands:
  custom
`)
    },
  })
+  try {
+    // use `parseAsync` for async action callbacks
+    await program.parseAsync(['node', 'cli.js', '-h'])
+  } catch (e) {
+    // use the error code to distinguish the help command's
+    // expected error from other code errors
+    if (e.code) {
+      expect(e.code).toBe('commander.helpDisplayed')
+    } else {
+      throw e
+    }
+  }
})
```
For asynchronous test cases, it is advisable to set a timeout to prevent a coding mistake from leaving the test waiting forever for a result.
Jest's default timeout is 5000 ms, which can be overridden in the configuration file or in test cases.
```js
jest.setTimeout(10000) // 10 seconds

test('main', async () => {
  await sleep(2000) // the test fails with a timeout error if it exceeds the limit
  expect(...)
})
```
```js
// jest.config.js
module.exports = {
  testEnvironment: 'node',
  testTimeout: 10000
}
```
In addition to the timeout, pinning down the number of assertions is another way to guarantee that asynchronous test cases succeed.
expect.assertions specifies how many assertions must run within a single test case, which is especially useful when testing code paths that catch exceptions.
If the actual number of assertions does not match the expected count, or the test times out, the test case fails.
```diff
// index.spec.js
+ jest.setTimeout(10000) // set timeout

test('help option', async () => {
+  // expect the assertions to fire twice, so a wrong number
+  // of triggers (or a timeout) will fail the test case
+  expect.assertions(2)
  const program = createProgram()
  program.exitOverride().configureOutput({
    writeOut(str) {
+      // first trigger
      expect(str).toEqual(`Usage: index [options] [command]

Options:
  -V, --version  output the version number
  -h, --help     display help for command

Commands:
  custom
`)
    },
  })
  try {
    // use `parseAsync` for async action callbacks
    await program.parseAsync(['node', 'cli.js', '-h'])
  } catch (e) {
    if (e.code) {
+      // second trigger
      expect(e.code).toBe('commander.helpDisplayed')
    } else {
      throw e
    }
  }
})
```
Testing variables at runtime
In a normal test scenario, a variable's value can be verified through the return value of an exported function.
```js
// index.js
exports.drinkAll = function drinkAll(callback, flavour) {
  if (flavour !== 'octopus') {
    callback(flavour)
  }
}
```

```js
// index.spec.js
const { drinkAll } = require("./index.js")

describe('drinkAll', () => {
  test('drinks something lemon-flavoured', () => {
    const drink = jest.fn()
    drinkAll(drink, 'lemon')
    expect(drink).toHaveBeenCalled()
  })

  test('does not drink something octopus-flavoured', () => {
    const drink = jest.fn()
    drinkAll(drink, 'octopus')
    expect(drink).not.toHaveBeenCalled()
  })
})
```
However, a command-line tool may depend on context (arguments, options), so it isn't practical to disassemble it and export every internal function. How, then, do you test the value of a variable at runtime?
I used debug + jest.doMock + toHaveBeenCalledWith.
```js
// cli.js
const debug = require("debug")('cli')
const { Command } = require("commander")

const createProgram = () => {
  const program = new Command()
  program.command('custom <arg1>').action((arg1) => {
    debug(arg1)
    // ...
  })
  return program
}

module.exports = { createProgram }
```
```js
// index.spec.js
test('main', () => {
  const f = jest.fn()
  // mock the debug module
  jest.doMock("debug", () => () => f)
  // require createProgram after debug has been mocked
  const { createProgram } = require("./cli.js")
  const program = createProgram()
  program.parse(["node", "cli.js", "custom", "foo"])
  expect(f).toHaveBeenCalledWith("foo") // pass
})
```
- Print the values that need to be verified with the debug module (slightly code-intrusive, but the debug module also doubles as a logger)
- During the test run, hijack the debug module with jest.doMock so that executing debug calls a jest.fn
- Verify the jest.fn with toHaveBeenCalledWith
Why use jest.doMock instead of jest.mock? jest.mock calls are hoisted to the top of the file at compile time, which makes it impossible for the factory to reference the external variable f: github.com/facebook/je…
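As a small sketch of the difference, reusing the f variable from the example above (the error message in the comment is Jest's actual hoisting guard):

```js
const f = jest.fn()

// jest.mock is hoisted above the imports at compile time, so its
// factory would run before `f` exists; Jest rejects it with:
// "The module factory of jest.mock() is not allowed to reference
// any out-of-scope variables"
// jest.mock("debug", () => () => f)

// jest.doMock is not hoisted — it runs in place, after `f` is defined
jest.doMock("debug", () => () => f)

// note: variables prefixed with `mock` (e.g. `mockF`) are the one
// exception Jest allows inside a jest.mock factory
```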
Simulating command line interaction
Command line tools that prompt the user interactively are a very common scenario.
Inspired by vue-cli, simulating user input in test cases turns out to be very simple and requires no code intrusion.
- Create __mocks__/inquirer.js to hijack and proxy the inquirer prompt module, adding Jest assertions to the reimplemented prompt function
```js
// __mocks__/inquirer.js
// inspired by vue-cli
// https://gist.github.com/yyx990803/f61f347b6892078c40a9e8e77b9bd984
let pendingAssertions

// create data
exports.expectPrompts = assertions => {
  pendingAssertions = assertions
}

exports.prompt = prompts => {
  // throw an error if there is no data
  if (!pendingAssertions) {
    throw new Error(`inquirer was mocked and used without pending assertions: ${prompts}`)
  }

  const answers = {}
  let skipped = 0
  prompts.forEach((prompt, i) => {
    if (prompt.when && !prompt.when(answers)) {
      skipped++
      return
    }

    const setValue = val => {
      if (prompt.validate) {
        const res = prompt.validate(val)
        if (res !== true) {
          throw new Error(`validation failed for prompt: ${prompt}`)
        }
      }
      answers[prompt.name] = prompt.filter
        ? prompt.filter(val)
        : val
    }

    const a = pendingAssertions[i - skipped]
    if (a.message) {
      const message = typeof prompt.message === 'function'
        ? prompt.message(answers)
        : prompt.message
      // consume data
      expect(message).toContain(a.message)
    }

    if (a.choices) {
      // consume data
      expect(prompt.choices.length).toBe(a.choices.length)
      a.choices.forEach((c, i) => {
        const expected = a.choices[i]
        if (expected) {
          expect(prompt.choices[i].name).toContain(expected)
        }
      })
    }

    if (a.input != null) {
      // consume data
      expect(prompt.type).toBe('input')
      setValue(a.input)
    }
  })

  // consume data
  expect(prompts.length).toBe(pendingAssertions.length + skipped)
  pendingAssertions = null

  return Promise.resolve(answers)
}
```
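One caveat: the test below stages a choose index, which the simplified mock above never consumes. The full vue-cli gist also handles selection for list-type prompts; a hypothetical, simplified sketch of that branch (it would sit inside prompts.forEach, after the a.input check) might look like this:

```js
// sketch: consume a `choose` assertion for list-type prompts
// (simplified illustration; not the exact vue-cli code)
if (a.choose != null) {
  const choice = prompt.choices[a.choose]
  // inquirer choices may be plain strings or { name, value } objects
  setValue(typeof choice === 'string' ? choice : choice.value != null ? choice.value : choice.name)
}
```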
- Before running the test case, call expectPrompts to stage the simulated user questions and answers (creating the data the assertions will consume)
```js
// If we want to mock Node's core modules (e.g. fs or path),
// explicitly calling e.g. jest.mock('path') is required;
// for a user module like inquirer it is not
// jest.mock('inquirer')
const { expectPrompts } = require('inquirer')
const { createProgram } = require('./cli.js')

test('migrate command', () => {
  // create user input data
  expectPrompts([
    {
      message: 'select project',
      choices: ['project1', 'project2', 'project3', 'sub-root'],
      choose: 1,
    },
  ])
  const program = createProgram()
  // when inquirer.prompt is triggered it consumes the
  // user input data staged in __mocks__/inquirer.js
  program.parse(['node', 'cli.js', 'custom'])
})
```
- When the code calls inquirer.prompt, the call is proxied to __mocks__/inquirer.js, and the questions and answers created by expectPrompts are matched one by one (consuming the data)
- Finally, the proxied prompt returns the same answers object a real prompt would, so the end behavior stays consistent
Summary
Make sure test cases are written to be independent of each other, free of side effects, and idempotent.
You can start from the following angles:
- Create a new Commander instance each time a test case runs
- A singleton may be used within a single test case, but multiple test cases must not share the same singleton
- Isolate the file system
Other Test Techniques
Mocking the working directory
```js
jest.spyOn(process, 'cwd').mockImplementation(() => mockPath)
```

jest.spyOn tracks calls to process.cwd, and mockImplementation overrides its behavior; together they simulate the working directory.
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('mock current work directory', () => {
  // when process.cwd() is called, return mockPath instead
  const cwdSpy = jest.spyOn(process, 'cwd').mockImplementation(() => mockPath)
  const program = createProgram()
  program.parse(['node', 'cli.js', 'custom'])
  expect(cwdSpy).toHaveBeenCalledTimes(2) // assert process.cwd() was called twice
})
```
It is also possible to pass the mock working directory as an argument to the createProgram factory function, without relying on the Jest API:
```js
// cli.js
const { Command } = require("commander")

const createProgram = (cwd = process.cwd()) => {
  const program = new Command()
  // use cwd instead of process.cwd()
  program.command('custom').action(() => console.log(cwd))
  return program
}

module.exports = { createProgram }
```
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('mock current work directory', () => {
  const program = createProgram(mockPath)
  program.parse(['node', 'cli.js', 'custom'])
  expect(...)
})
```
Mocking the file system
Reading and writing files is also a side effect: a command line tool may modify files, so there is no guarantee that each test run starts from a clean environment. To keep test cases independent of each other, you need to mock the real file system.
memfs is chosen here; it translates operations on the real file system into operations on virtual files in memory.
Create a new __mocks__ folder in the project root directory.
Export the memfs module from an fs.js file inside the __mocks__ folder:
```js
// __mocks__/fs.js
const { fs } = require('memfs')
module.exports = fs
```
By default, Jest treats files in the __mocks__ folder as manual mocks for the corresponding modules.
Calling jest.mock('fs') in a test case hijacks the fs module and proxies it to __mocks__/fs.js.
By proxying fs to memfs, the file system side-effect problem is solved.
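As a sketch of what a test might look like once fs is proxied to memfs (the custom command writing /output.txt is a hypothetical example; vol is the in-memory volume memfs exports, reset before each test to keep cases isolated):

```js
// index.spec.js
jest.mock('fs') // proxied to __mocks__/fs.js, i.e. memfs

const fs = require('fs')
const { vol } = require('memfs')
const { createProgram } = require('./cli.js')

// start every test from an empty in-memory "disk"
beforeEach(() => vol.reset())

test('writes to the virtual file system', () => {
  const program = createProgram()
  // hypothetical: assume the custom command writes /output.txt
  program.parse(['node', 'cli.js', 'custom'])
  // the write landed in memory, never on the real disk
  expect(fs.readFileSync('/output.txt', 'utf8')).toBe('done')
})
```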
Silencing error logs
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('silence error log', () => {
  const stderrSpy = jest.spyOn(process.stderr, 'write').mockImplementation()
  const program = createProgram()
  program.parse(['node', 'cli.js', 'custom'])
  // stay silent when an error happens, but still assert on it
  expect(stderrSpy).toBeCalledWith('something error')
})
```
Calling mockImplementation with no arguments turns the mocked function into a silent no-op.
This way no error logs are printed while the tests run, keeping the output clean.
Suppressing the log does not mean swallowing the error; it can still be validated with try/catch:
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('unknown command', () => {
  // silence standard error
  jest.spyOn(process.stderr, 'write').mockImplementation()
  try {
    const program = createProgram()
    program.exitOverride() // make commander throw instead of exiting
    program.parse(['node', 'cli.js', 'unknown'])
  } catch (e) {
    expect(e.message).toEqual("error: unknown command 'unknown'")
  }
})
```
You can also override the error log with the program.configureOutput method mentioned earlier:
```js
// index.spec.js
const { createProgram } = require('./cli.js')

test('unknown command', () => {
  jest.spyOn(process.stderr, 'write').mockImplementation()
  try {
    const program = createProgram()
    // override exit
    program.exitOverride().configureOutput({
      // override commander's error output
      writeErr: str => {},
    })
    program.parse(['node', 'cli.js', 'unknown'])
  } catch (e) {
    expect(e.message).toEqual("error: unknown command 'unknown'")
  }
})
```
Test case lifecycle hooks
Jest provides the following hooks
- beforeAll
- beforeEach
- afterEach
- afterAll
Lifecycle hooks fire before/after each/all test cases; moving shared setup code into them reduces duplication.
For example, run the mock setup before every test case with the beforeEach hook, and restore it afterwards with jest.restoreAllMocks:
```js
// index.spec.js
describe('main', () => {
  const f = jest.fn()
  // mock debug in every test case
  beforeEach(() => jest.doMock("debug", () => () => f))
  // remove the mock for debug
  afterEach(jest.restoreAllMocks)

  test('main', () => {
    const { createProgram } = require("./cli.js")
    const program = createProgram()
    program.parse(["node", "cli.js", "custom", "foo"])
    expect(f).toHaveBeenCalledWith("foo") // pass
  })

  test('main2', () => {
    const { createProgram } = require("./cli.js")
    const program = createProgram()
    program.parse(["node", "cli.js", "custom", "bar"])
    expect(f).toHaveBeenCalledWith("bar") // pass
  })
})
```
TypeScript support
Adding TypeScript support to the test cases gives stronger type hints and lets you type-check the code before the test cases run.
- Add the ts-jest, typescript, and @types/jest type declaration packages:

```bash
npm i ts-jest typescript @types/jest -D
```
- Add a tsconfig.json file and add the previously installed @types/jest to the types list:

```json
{
  "compilerOptions": {
    "types": ["jest"]
  }
}
```
- Rename index.spec.js to index.spec.ts and convert CommonJS to ESM
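The test runner also needs to know how to compile TypeScript. A minimal jest.config.js using the ts-jest preset might look like this:

```js
// jest.config.js
module.exports = {
  preset: 'ts-jest',       // transform .ts test files with ts-jest
  testEnvironment: 'node',
}
```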
Test coverage
Test coverage reports visualize which code the test cases exercised, which code never ran, and the line counts.
Add the coverage flag to the test command:

```bash
jest --coverage
```

Running it generates a coverage folder containing the test coverage reports.
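Coverage can also be enabled and enforced from the Jest config instead of the CLI flag; a sketch using Jest's built-in options (the 80% threshold is an arbitrary example):

```js
// jest.config.js
module.exports = {
  collectCoverage: true,          // same effect as passing --coverage
  coverageDirectory: 'coverage',  // where the report is written
  coverageThreshold: {
    global: {
      lines: 80,  // fail the run if line coverage drops below 80%
    },
  },
}
```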
In addition, test coverage can be integrated with a CI/CD platform so that a coverage report is generated and uploaded to the CDN after each release of the tool, making it possible to track the coverage trend across releases.
Conclusion
Writing test cases is a large upfront investment of time (learning the test syntax) with a large payoff later on (maintained code quality and greater confidence in refactoring).
It suits products with few changes and few QA resources, such as command line tools and tool libraries.
One tip for writing test cases: refer to the GitHub test cases of the corresponding tool; the official test cases are often the most complete.
Integration tests for a command line tool must keep test cases isolated from each other to guarantee idempotency.
Expose a factory function that creates a Commander instance, so a brand new instance is created on every test run.
Using Jest's built-in APIs such as jest.spyOn, mockImplementation, and jest.doMock to proxy npm modules or built-in functions keeps code intrusion low.
Resources
Blog:
- zhuanlan.zhihu.com/p/55960017
- juejin.cn/post/684490…
- What is the best way to unit test a commander cli? itnext.io/testing-wit…
Stack Overflow:
- stackoverflow.com/questions/5…
GitHub:
- github.com/shadowspawn…
- github.com/tj/commande…