In recent years, domestic tech companies seem to have reduced code quality to a single rigid standard: unit test coverage. You could view this as just another expression of involution, since metrics such as bugs per thousand lines or test coverage are among the few "quantifiable" measures of code quality. But I believe many of those pushing this metric treat unit testing, to some degree, as a silver bullet for testing. This article analyzes the problems with that idea.

Starting with TDD

TDD stands for test-driven development. Frankly, given how few test cases most business teams have in their own projects, it is a rather lofty term for them. Many people also believe that if we could practice TDD, our development would be more efficient and the correctness of our code would be easier and more reliable to guarantee.

In reality, TDD is nearly impossible to implement in most business teams without the boss's approval. The reason is simple: TDD visibly lengthens a project's lead time, while its benefits only appear in the middle and late stages and are hard to quantify.

To practice TDD, we first need to spend effort designing and writing test cases, and then, after the bulk of development is done, refactor the code based on how the cases execute. In this process, the first step overlaps with QA's responsibilities, wasting both QA and R&D manpower. The refactoring phase is even more troublesome: many products schedule development around an MVP, and in the first phase both the company and the PM want to launch as soon as possible to observe results. As long as code problems don't affect core functionality, they can supposedly be optimized gradually later, so it is hard to use technical rationality as a reason to force the resolution of this kind of scheduling conflict.

If we compromise and postpone refactoring until after the first release, TDD isn't really working: the test cases merely validate functionality rather than drive development. And a later phase does not mean more abundant time. After several such compromises, the relationship between the code and the test cases becomes tangled and their actual connection is no longer close. TDD quietly turns into DDT, what I call "development-driven testing": developers start back-filling test cases to match the existing code, and such work usually yields little return.

Why back-filled unit tests yield little benefit

The first thing to understand is that many business codebases lack test cases at the code level, most directly reflected in the absence of unit tests. When we try to supplement historical code with cases, our starting point is usually to design the inputs and outputs of a given code path so that the functional requirements of a few scenarios are met, which means the process and the resulting cases have the following characteristics:

  1. To obtain the output of the original logic, we must construct specific inputs, which in business code often depend on the persistence layer, such as data in the DB; in other words, the input is not guaranteed to be stable.
  2. Such cases fit only the current historical code logic. When the logic changes, the cases must change with it, but since the number of cases is typically large, each adjustment requires screening them one by one.
  3. Back-filled cases only attempt to ensure that the functionality is correct; they contribute nothing to the rationality of the implementation or the architecture.
  4. Back-filling usually assumes the current logic is stable and reliable, which makes it hard to spot defects already present in the current functionality.
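To make the first characteristic concrete, here is a minimal sketch (all names hypothetical, not from any real codebase) of back-filling a test for logic whose output depends on persistence-layer state:

```python
class FakeDb:
    """Stand-in for the real database. The test must reconstruct by
    hand whatever state the production data happened to have."""
    def __init__(self, order_counts):
        self.order_counts = order_counts

    def count_orders(self, user_id):
        return self.order_counts.get(user_id, 0)

def order_discount(db, user_id):
    """Hypothetical business logic: the discount depends on data in
    the DB, so the real 'input' is not the arguments alone."""
    return 0.1 if db.count_orders(user_id) >= 10 else 0.0

# These cases only pass because we froze the DB state here; against a
# live database the same call could return a different answer, which
# is exactly the input instability described above.
def test_discount_for_loyal_user():
    assert order_discount(FakeDb({"alice": 12}), "alice") == 0.1

def test_no_discount_for_new_user():
    assert order_discount(FakeDb({"bob": 2}), "bob") == 0.0

test_discount_for_loyal_user()
test_no_discount_for_new_user()
```

Note that the fake DB itself is extra code to maintain, and it must track the shape of the real persistence layer forever.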

As we can see, back-filled unit tests have no autonomy: they must attach to the implementation logic of the business code. As a result, any modification of the business code forces modification of the unit tests, and such tests can only serve as a functional baseline for future changes. That is, future changes that still pass the old cases preserve functionality at the previously known level of correctness, but the new boundary conditions and input-output combinations introduced by new logic are hard to verify. Such unit tests are essentially not much different from calling the interface directly.

On the other hand, the dependency on persistence-layer data makes writing such unit tests no easier than calling the interface directly. If you try to fix a stable set of inputs and outputs by mocking the data source, you end up maintaining an additional test environment, and it is hard to guarantee that future iterations won't diverge from the main test environment and the online logic, so maintenance costs spike.

How we should view unit testing

Testing is a discipline and a science, and unit testing is only one of many testing techniques, each with its own applicable scenarios. If TDD cannot be fully practiced, then we should apply unit tests in the scenarios where they fit, and combine them with incremental testing, integration testing, regression testing, smoke testing, and other methods to build a systematic test plan, keeping our code as high-quality and rational as possible.

So what scenarios are appropriate for unit testing? In my view, if you follow the layering ideas of DDD's tactical patterns, any logic at the domain level is suitable for unit testing, regardless of whether the domain model is rich or anemic. The main reason is that in a properly layered design the domain logic is not directly coupled to the persistence layer, so unit tests there are easy to write and maintain. At the same time, the domain layer lies deep beneath the presentation layer where the interfaces reside, yet it contains the most fundamental business logic. Other testing methods may not be able to cover it fully, so adding unit tests helps uncover hidden problems and maintain the robustness of the code. Because R&D tests only the underlying core business logic, this also avoids overlapping with the QA team's work and wasting manpower. In addition, good domain design splits the domain logic into fine-grained pieces, which happens to satisfy one of the basic prerequisites of TDD, so TDD is easier to practice for this kind of logic.
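As a contrast to the back-filled example earlier, here is a sketch (hypothetical names, assuming DDD-style layering) of what a domain-level unit looks like: plain values in, plain values out, no persistence layer anywhere, so the tests are cheap to write and survive changes in the outer layers.

```python
from dataclasses import dataclass

# Hypothetical domain value object: pure business rules, no DB access.
@dataclass(frozen=True)
class Money:
    cents: int

def apply_coupon(total: Money, coupon_off_cents: int) -> Money:
    """Hypothetical domain rule: a coupon can never push the total
    below zero."""
    return Money(max(0, total.cents - coupon_off_cents))

# Unit tests for the domain layer depend only on the rule itself, not
# on any persistence or presentation detail.
def test_coupon_reduces_total():
    assert apply_coupon(Money(500), 200) == Money(300)

def test_coupon_never_goes_negative():
    assert apply_coupon(Money(100), 200) == Money(0)

test_coupon_reduces_total()
test_coupon_never_goes_negative()
```

The "never below zero" case is exactly the kind of hidden boundary condition that interface-level testing can easily miss but a domain-level unit test pins down.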

The other layers often involve the interaction of multiple domain models, depend for their execution on persistence-layer data such as the DB, or merely contain DTO transformations for other layers. Their verification can be handed entirely to other testing methods, with quality guaranteed through a sound, well-designed testing process. In particular, the presentation layer is the most direct embodiment of product logic and multi-end interaction, so it is best verified by designing and executing cases around the input and output parameters at the interface level, rather than by developers using various mocking tools to write complex test code as so-called unit tests.
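The interface-level approach described above can be sketched as table-driven cases over request and response parameters. The handler here is hypothetical and deliberately trivial; in practice QA would drive the real endpoint, but the point is that the case design lives at the interface boundary, not inside mocks:

```python
# Hypothetical presentation-layer handler: it only translates between
# DTOs and the layers below, so we exercise it end to end by pairing
# inputs with expected outputs rather than mocking its internals.
def create_user_handler(request: dict) -> dict:
    name = request.get("name", "").strip()
    if not name:
        return {"code": 400, "error": "name is required"}
    return {"code": 200, "data": {"name": name}}

# Table of (request, expected response) pairs: readable by PM and QA,
# and independent of how the handler is implemented internally.
CASES = [
    ({"name": "alice"}, {"code": 200, "data": {"name": "alice"}}),
    ({"name": "  "},    {"code": 400, "error": "name is required"}),
    ({},                {"code": 400, "error": "name is required"}),
]

def run_cases():
    for request, expected in CASES:
        assert create_user_handler(request) == expected

run_cases()
```

When the product logic changes, only the table changes; there is no mock scaffolding to rewrite.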

About unit test coverage

Returning to coverage: the code of modern business projects is particularly complex, especially in Java-like languages. A blanket requirement to raise unit test coverage across an entire project forces developers to spend more time on simple or unimportant logic, and in most companies the reality is that unit tests are an afterthought with little effect.

The time spent writing redundant unit tests is better spent on the refactoring and optimization that truly matter to the project, while ensuring unit test coverage of the core domain logic early on, striking the right balance between efficiency and quality to fuel the business.

Blog link: Easonyang.com/2021/06/15/…
