Unit tests are important in software development, but in a real project, no matter how you organize them or how many kinds of tests you add (unit tests, integration tests), they add no value unless they can be trusted, are easy to maintain, and are easy for others to read. Three pillars therefore measure whether a test is a good unit test:

  • Reliability. Developers want the tests in their projects to be reliable, so that whether they are maintaining a legacy project or a new one, they can boldly modify or refactor code, trusting that the tests will catch any break in existing functionality. A reliable test has no defects of its own and tests the right code.
  • Maintainability. Tests that can't be maintained are a nightmare: they delay the project, get tossed aside when deadlines are tight, and no one wants to add tests for new features. If changing a test takes too long, developers simply stop maintaining and repairing the tests.
  • Readability. Tests are written not only for yourself but for others to read, and readable tests also make it easier to locate problems in the code when they are detected. Without readability, the other two pillars, reliability and maintainability, quickly collapse: a test that can't be understood is hard to maintain and hard to trust.

Here’s how to write good unit tests from a reliability standpoint.

Write reliable tests

A reliable test has several characteristics: when it passes, you can be confident that the code works correctly in that scenario. In short, reliable tests make you feel in control of your project's code and able to handle any problem that arises. Here are some guidelines:

  • Decide when to remove or modify tests;
  • Avoid logic in tests;
  • Test only one concern per test;
  • Separate unit tests from integration tests;
  • Ensure code coverage with code reviews.

By following these principles, your tests will be more reliable and will continue to find real bugs in your code.

Decide when to remove or modify tests

Once tests are written and passing, you should not normally modify or delete them. These tests are your code's safety net: they tell you whether a change has broken existing functionality. That said, there are situations where changing or deleting tests makes sense, and you need to recognize them. Here are some possible reasons.

  • Project defects. The production code under test has a defect: if a change to the production code makes an existing test fail, a defect has been introduced, and you should fix the production code so the test passes again rather than altering the test.
  • Test defects. If the test itself is flawed, you have to fix the test. Test bugs are notoriously hard to spot, because tests are assumed to be correct, and you can't easily tell whether a failure comes from the test or from the code. TDD lets you "test the test" (write it, watch it fail, then make it pass), which is one reason so many people love TDD.
  • Semantic or API changes. The semantics of the code under test have changed, but its functionality has not. Take this example:
```typescript
test('Semantics Change', () => {
  const logan: LogAnalyzer = new LogAnalyzer()
  expect(logan.isValid('abc')).toBeFalsy()
})
```

The semantics of the LogAnalyzer class have changed: it must now be initialized by calling the init method before any other API is used. This is when you need to modify the test:

```typescript
test('Semantics Change', () => {
  const logan: LogAnalyzer = new LogAnalyzer()
  logan.init()
  expect(logan.isValid('abc')).toBeFalsy()
})
```

Test failures caused by changes in code semantics are one of the worst experiences developers face when writing and maintaining unit tests. Of course, if you have many tests for the LogAnalyzer class, you can refactor them with a factory method:

```typescript
function createLogAnalyzer(): LogAnalyzer {
  const logan: LogAnalyzer = new LogAnalyzer()
  logan.init()
  return logan
}

test('Semantics Change', () => {
  const logan: LogAnalyzer = createLogAnalyzer()
  expect(logan.isValid('abc')).toBeFalsy()
})
```

This way, if LogAnalyzer semantics change, we only need to modify the factory method.

  • Conflicting or invalid tests. If a new feature added to production code directly conflicts with an existing test, you have a test conflict. In this case the test has not found a defect; it has found conflicting product requirements. Take the issue back to the product manager.
  • Renaming or refactoring. Unreadable test code causes more problems than it solves, because it prevents you from understanding your tests and finding bugs in your code. If a test name is unclear or misleading, or the test could be made more readable, change the test code.
  • Duplicate tests. On a development team, different developers may write multiple tests for the same feature. Duplicate tests have some benefits: the more tests you run, the more problems you find, and you can read different implementations or interpretations of the same test. They also have many drawbacks: 1) maintaining several tests for one feature is difficult; 2) test quality is uneven and requires a full review to ensure correctness; 3) one problem may cause several test failures; 4) similar tests must have different names, or the tests end up scattered across classes.

Therefore, sometimes it is necessary to remove duplicate tests.

Avoid logic in tests

As the amount of logic in your tests grows, the chance of test defects grows almost exponentially. Tests should be as simple as possible. Do not add logic to tests, including operations such as generating random numbers, creating threads, or reading and writing files, which turn tests into mini test engines. If a test includes any of the following statements, it contains logic that shouldn't be there:

  • switch, if, or else statements;
  • forEach, for, or while loops;

Even try…catch statements should not be used in tests. Logic in tests causes a lot of problems:

  • The test is difficult to understand or read;
  • Tests are difficult to reproduce;
  • Tests tend to contain bugs and become difficult to debug;
  • It is difficult to name a test because it performs many tasks.

Test only one concern

A test concern is the end result of a unit of work: a return value, a change in system state, or a call to a third-party object. If your unit test asserts on multiple objects, or tests both an object's return value and other state changes in the system, it may be testing multiple concerns. Testing multiple concerns raises two problems: how to name the test appropriately, and what to do when the first assertion fails.

Naming a test may seem simple, but when a test covers multiple concerns you are forced to give it a generic name, which in turn forces readers to study the test source to learn what it actually tests. With a single concern, naming is much easier.

Most testing frameworks do not execute subsequent assertions after the first assertion fails, so you may not detect other defects in time. One way to decide: if the first assertion fails, do you still care about the outcome of the subsequent assertions? If you do, split them into separate tests.

Separate unit tests from integration tests

It's important to create a separate "green safe zone" for unit tests: team members won't run tests they can't trust to be fast and stable. Creating one is simple: separate unit tests from integration tests, and keep in the unit-test suite only those tests that are stable, easy to run, and repeatable. A suite that stays green and runs easily gives team members more confidence in your tests.
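One way to carve out that safe zone is a split Jest configuration. This is a sketch; the `test/unit` and `test/integration` paths are assumptions about project layout:

```typescript
// jest.config.ts: split the suite into two projects so the fast,
// repeatable unit tests can run on their own.
import type { Config } from 'jest'

const config: Config = {
  projects: [
    {
      displayName: 'unit',
      testMatch: ['<rootDir>/test/unit/**/*.test.ts'],
    },
    {
      displayName: 'integration',
      testMatch: ['<rootDir>/test/integration/**/*.test.ts'],
    },
  ],
}

export default config
```

Running `npx jest --selectProjects unit` then executes only the green safe zone, while the slower integration tests run separately, for example on CI.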

Ensure code coverage with code reviews

100% test coverage on its own doesn't mean much: without code reviews, some tests may not even contain assertions, merely chasing coverage goals with no regard for test quality. What does 100% coverage plus code reviews mean? It means good tests that provide a safety net for your project, prevent silly mistakes, and benefit everyone by spreading knowledge during review.

To maintain your test coverage, check it frequently with tools. This is easy in Jest: add the --coverage flag to the command to print a coverage report. You can also delete a piece of code, or flip a branch condition, and see whether any test fails; if the tests still pass, you need to add or improve tests.
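For example, with Jest's real `--coverage` flag (the `coverageThreshold` values below are illustrative assumptions, not a recommendation):

```shell
# Print a per-file coverage report for the whole suite.
npx jest --coverage

# Coverage floors can also be enforced via the coverageThreshold
# option in jest.config, e.g.:
#   coverageThreshold: { global: { branches: 80, lines: 80 } }
# so the run fails when coverage drops below the chosen floor.
```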

Conclusion

Reliable tests let your team members trust the suite and ensure it keeps finding real problems in your code. If tests are unreliable and no one wants to run them, it is better not to write them at all: they consume development time without bringing any benefit. This article introduced five principles for writing reliable tests, and hopefully gives you a guide to writing good unit tests.