Developer testing is a cornerstone of modern software engineering. Agile development, trunk-based development, and similar advanced project management methods and processes all rest on solid developer testing. When a monthly or even weekly release is being delivered, it is impossible to devote a large number of test engineers to large-scale system-level testing; most of the testing pyramid has to be automated.
Let's talk about developer testing. What is "developer testing"? We used to have a clear division between development and testing: writing code was the developers' job, testing was the test engineers' job, and much of the time the two sides faced off like opposing red and blue teams. That was exactly the situation in my R&D team more than ten years ago. In today's software engineering, dedicated test engineers are rare. Many companies have dev-to-test ratios above 10:1, and some departments have no test engineers at all. The test engineer's role is no longer that of a manual-test-case "laborer" but that of a "test expert" or "test coach": managing the product's test system, planning product testing, analyzing and organizing functional-test mind maps, designing test cases, and leading the R&D team's testing work.

As a simple example, my previous product was an online video-conferencing collaboration product. Our daily online meetings ran on our own product, and we used our new-feature test sites to hold "site meetings". Beyond a small amount of time spent on daily updates, the test experts led the team (PO, architect, SM, devs) through focused, planned half-hour test sessions. So "developer testing" really means developers doing the testing, and many traditional test engineers face three paths: grow, transform, or become obsolete. "Test experts" also carry real weight in projects. My previous company used trunk-based development with a "one in, one out" review, and the team's test expert had veto power in that review. The company even had PE-level test experts (equivalent to our level 20-21 technical experts).
Entry ("one in"): to decide whether a feature may enter the release branch, turn on its feature toggle in the release branch and test it at release level.
Exit ("one out"): at release time, confirm that the feature's quality is acceptable and that its feature toggle may be enabled in production.
II. There is no test that cannot be automated
Returning to the testing pyramid: code-level white-box testing is extremely valuable when evaluated along four dimensions of testing: development cost, execution cost, test coverage, and problem localization.
Development cost: the cost of implementing a test case.
Execution cost: the cost of running a test case.
Test coverage: line coverage and branch coverage.
Problem localization: how efficiently a failing test pinpoints the underlying problem.
Evaluating the test pyramid along these four dimensions, we can conclude:
- Do as many low-level tests (LLT) as possible: they are faster and more stable than the upper layers of the pyramid and can run many times a day. LLT is generally wired into continuous-integration build tasks, or even into the merge-request pipeline, to guard the quality of code entering the repository.
- Run a certain amount of IT, ST, and UI tests, with automation in place: they are slower, more environment-dependent, and less stable, so they are usually run nightly to periodically check code quality and report problems.
- Do as few large-scale manual tests as possible: they are slower and less stable than LLT, their labor cost is high, they cannot run many times a day, and each run takes a long time to produce feedback. They are, however, closest to real user scenarios, so run them regularly or at critical milestones to assure software quality.

Many companies now iterate on ever-shorter releases, some as short as two weeks. Manual testing clearly does not fit this development pattern, and automating manual test cases through various technical means is the only way forward. At the code level, as long as the architecture is reasonable, UT can cover everything from the underlying business code up to the UI code. Even top-level UI interaction tests can run automatically (most UI frameworks support automated UI testing through their accessibility interfaces). Consider that even our phone hardware can automate the extreme "drop the phone" test; why shouldn't software do the same? Some of the industry's leading technology companies deliver products in days, from merge request to production. Testing such a product cannot, and need not, be done manually.
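The split between fast LLT on every merge request and slow suites at night can be sketched in code. This is a minimal illustration, not the article's own setup: the function, class names, and the `NIGHTLY` environment variable are all hypothetical.

```python
import os
import unittest

def add(a, b):
    """Trivial stand-in for real business logic under test."""
    return a + b

class FastUnitTest(unittest.TestCase):
    """LLT: millisecond-fast, no external dependencies -- run on every MR."""
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

@unittest.skipUnless(os.environ.get("NIGHTLY") == "1",
                     "slow, environment-dependent suite runs in the nightly job")
class SlowIntegrationTest(unittest.TestCase):
    """IT/ST: slower and environment-dependent -- run nightly only."""
    def test_end_to_end(self):
        pass  # real environment setup and checks would go here
```

The MR pipeline runs the whole file but skips the slow class; the nightly job sets `NIGHTLY=1` and runs everything.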
III. Developer testing: "benefit now" and "win the future"
Many people think that low-level developer testing costs a lot of time and test code just to confirm a function is correct, and that the test code must be modified every time the function or the structure of the code changes; manual debugging and validation feel more efficient. It is true that verifying code through UT or API tests is not very different from stepping through it in a debugger yourself, and developer tests do safeguard the quality of the current iteration, but their more important value lies elsewhere. Our bug taxonomy includes two terms: build regression bug and release regression bug.
Build regression bug: a feature that worked in the previous build has a bug in a new build. Release regression bug: a feature that worked in the previous production release has a bug in the new release. Every time we commit code toward production, no one can guarantee it is 100% problem-free, and in the rapid iteration of Agile development full manual functional testing is unrealistic. Developer tests, especially the underlying UT, API, and integration tests, catch exactly these problems. That is why developer testing "benefits the present" and "wins the future".
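A unit test that pins a previously fixed behavior is what catches build regressions on the very next CI run. A minimal sketch, with a hypothetical `parse_version` function standing in for real business code:

```python
import unittest

def parse_version(s):
    """Hypothetical function under test: parse 'major.minor' into a tuple."""
    major, _, minor = s.partition(".")
    return (int(major), int(minor or 0))

class VersionRegressionTest(unittest.TestCase):
    # Pins a behavior that (in this made-up story) once shipped broken:
    # "2" with no minor part used to raise instead of defaulting to (2, 0).
    # If a later commit reintroduces the bug, this test fails in the very
    # next build -- a build regression caught before it becomes a release
    # regression.
    def test_missing_minor_defaults_to_zero(self):
        self.assertEqual(parse_version("2"), (2, 0))

    def test_full_version(self):
        self.assertEqual(parse_version("1.5"), (1, 5))
```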
IV. TDD does not have to mean writing test code first
With TDD, the conventional wisdom is to write the test code first and the implementation code second, which is both right and wrong. It is conceptually correct, but following it strictly is not necessarily the most efficient way to work, and that is one reason TDD is so hard to roll out. Divide coding into three activities: implementation code, test code, and debugging. The textbook TDD cycle is test, code, debug. But when we start implementing, we cannot think everything through up front or get every interface definition exactly right; if we strictly follow test-code-debug, the test code keeps changing along with the code. That is not a big problem in itself, and in practice many people first build the code skeleton and the test skeleton, then fill in code and tests, and debug once the tests are complete. So, taking a pragmatic ("grey-scale", in Huawei's terms) view, as long as the unit tests come before debugging, it can be called TDD. Incidentally, BDD is in vogue now; my point here is that if a team cannot even do TDD in this relaxed sense, it should not be considering BDD.
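The textbook red-green-refactor loop looks like this in miniature. The `slugify` function is an invented example; the point is only the ordering of the steps:

```python
import unittest

# Step 1 (red) -- write the test first; it fails because slugify
# does not exist (or does not work) yet.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_with_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_extra_spaces(self):
        self.assertEqual(slugify("  a  b  "), "a-b")

# Step 2 (green) -- write the minimal implementation that makes
# the tests pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3 (refactor) -- improve the code with the tests as a safety net.
```

The article's relaxed criterion is satisfied as long as these tests exist before you start debugging, even if you wrote the skeletons of both sides together.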
Behavior-driven development (BDD) combines the general techniques and principles of TDD with ideas from domain-driven design (DDD). BDD is a design activity in which you build up blocks of functionality incrementally, guided by expected behavior. BDD focuses on the language and interactions used during software development. Behavior-driven developers use their natural language together with the ubiquitous language of DDD to describe the purpose and benefit of their code. Teams using BDD can produce extensive "functional documentation" in the form of user stories with executable scenarios or examples attached. BDD often helps domain experts understand the implementation rather than exposing code-level tests. Scenarios are usually written in the GWT format: Given-When-Then.
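The Given-When-Then structure can be expressed even in a plain unit test, without a BDD framework. A minimal sketch with an invented `transfer` domain operation:

```python
import unittest

def transfer(accounts, src, dst, amount):
    """Hypothetical domain operation: move money between accounts."""
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

class TransferBehavior(unittest.TestCase):
    def test_transfer_moves_money_between_accounts(self):
        # GIVEN two accounts with known balances
        accounts = {"alice": 100, "bob": 0}
        # WHEN alice transfers 40 to bob
        transfer(accounts, "alice", "bob", 40)
        # THEN the balances reflect the transfer
        self.assertEqual(accounts, {"alice": 60, "bob": 40})
```

Dedicated BDD tools express the same scenario in natural language so that domain experts can read it, but the Given-When-Then skeleton is identical.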
V. 100% UT coverage is actually a bad sign
In unit testing we all focus on one metric: coverage. Whether module, function, line, or branch coverage, a certain percentage is required. But driving every one of them to 100% deserves a poor rating. It is not that it cannot be done; it is a matter of cost and return. Take branch coverage, the hardest to achieve: if every memory-allocation failure and fault-tolerance branch has to be exercised to reach 100%, your test-case count may roughly double while adding no value. Some conditional branches are never even executed during the lifetime of the program.
Module coverage: cover business-module code with UT and architecture-module code with IT; from a UT-coverage point of view there is no need to test architecture code.
Function coverage: do not write UT for code with no logic. For example, some functions are nothing but get/set accessors over a single variable; a UT for such a function exists only for the coverage number and has no real meaning.
Line coverage: generally speaking, 80% line coverage is a reasonable target; some code can be at 0% and some needs 100%. Pushing all code above 90% is high cost for low return and is not recommended.
Branch coverage: the more complex the business logic, the more test cases are needed to cover it, while some branches, such as memory-allocation failures, can be judged correct without testing.
VI. Test-driven architecture and code quality
When we talk about test-driven architecture and code quality, we are talking about making your code fully testable. What is testability? Simply put, it is decoupling the relationships between classes and between modules by programming against interfaces, with dependencies passed in through injection rather than fetched actively. When the program runs normally, the injected interface parameters are real business objects; under test, you can inject a fake or mock implementation. Of course, not every dependency needs this treatment: business-neutral utility libraries, or certain concrete data-object implementations, can be called directly.
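Interface programming plus injection looks like this in miniature. The `Clock`/`Session` names are invented for illustration; `typing.Protocol` stands in for whatever interface mechanism your language offers:

```python
import time
from typing import Protocol

class Clock(Protocol):
    """The interface the business code programs against."""
    def now(self) -> float: ...

class RealClock:
    """Production implementation: the real system clock."""
    def now(self) -> float:
        return time.time()

class FakeClock:
    """Test double: fully controllable, no real waiting."""
    def __init__(self, t: float = 0.0):
        self.t = t
    def now(self) -> float:
        return self.t
    def advance(self, seconds: float) -> None:
        self.t += seconds

class Session:
    """Depends on the Clock *interface*; the implementation is injected,
    not fetched actively inside the class."""
    def __init__(self, clock: Clock, ttl: float = 30.0):
        self.clock = clock
        self.ttl = ttl
        self.started = clock.now()
    def expired(self) -> bool:
        return self.clock.now() - self.started > self.ttl
```

In production you build `Session(RealClock())`; in a test you inject a `FakeClock` and drive time forward deterministically.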
Beyond fake and mock there is the broader concept of test doubles, which covers five kinds: dummy, stub, spy, mock, and fake. The precise definitions are easy to look up.
At present most of our developers use mock objects (many of which are really stubs whose return values are controlled by input parameters). Conceptual quibbles aside, mocking the code is feasible, but in practice heavy mocking usually signals that our code is tightly coupled, that modules have heavy explicit dependencies, and that module portability is poor, especially in C. As a result, many modules cannot be unit-tested at all and fall back to integration testing.
Why does this happen? Our senior architects focus on system-level architecture and usually design the relationships between system modules and applications reasonably and clearly, but the design and implementation of the concrete application business is left to junior architects. The code inside these modules is not small; many run to hundreds of thousands or even millions of lines. At that point the architect's skill determines the clean-code quality of the code. Many of our company's current code problems are not system-architecture problems but problems of concrete business implementation lacking strict requirements and sound architectural design. If an architectural scheme governed the application level too, module interfaces and module-to-module interactions could be at least as clear as in the system design; the part that cannot be fully prescribed is the thousands of lines inside each submodule.
The reason to propose test-driven architecture and code quality is this: when a high bar is set for testing, we are forced to solve testing's problems at the architecture level, and once testing's problems are solved, Clean Code L3 follows naturally.
VII. From "writing the code my tests depend on" to "writing the code others' tests depend on"
Strange as it may sound, this is the fundamental way to solve unit testing at the root. Dependencies between modules cannot be eliminated by any amount of architectural skill, whether you test with mocks or fakes; what good design can do is decouple them. The first mindset, "writing the code my tests depend on", means that when I implement my module and want to test it, I must work out myself how to stand in for my dependencies. The second mindset, "writing the code others' tests depend on", means that when I implement my module, I also think about how the modules that depend on me will satisfy that dependency in their tests, and I ship fake objects and implementations to solve the test-dependency problem for them.
This is the shift in thinking behind test-driven design: when developing a module, do not first ask how to test yourself; ask how to make testing easier for whoever depends on you. The module's provider ships not only the module code but also a reusable fake object (supporting call verification, return-value control, parameter verification, parameter handling, behavior simulation, and so on). The module's author writes this fake implementation, and because it is reusable, most of the work is done once by the author; dependent modules add only the small amount of code their particular business needs, roughly following the 80/20 rule. The architecture decouples dependencies, interfaces are programmed against and injected, and developer tests use the fake implementations. By the time the test code is written, all the dependency interfaces are essentially in place, and the focus shifts to the test cases themselves rather than to wrangling dependencies.
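Concretely, "writing the code others' tests depend on" means the module author ships a fake next to the real class. Everything below is an invented illustration (`Mailer`, `FakeMailer`, `notify_signup` are not from the article):

```python
class Mailer:
    """The real module: in production, send() would talk to an SMTP
    server (omitted here)."""
    def send(self, to: str, subject: str, body: str) -> bool:
        raise NotImplementedError("real SMTP delivery")

class FakeMailer(Mailer):
    """Shipped *by the Mailer author* alongside the real class.
    It records calls (call verification), exposes the captured
    parameters (parameter verification), and can simulate failure
    (behavior simulation) -- so every dependent module reuses this
    one fake instead of each writing its own mock."""
    def __init__(self, fail: bool = False):
        self.sent = []     # call + parameter verification
        self.fail = fail   # failure simulation
    def send(self, to: str, subject: str, body: str) -> bool:
        if self.fail:
            return False
        self.sent.append((to, subject, body))
        return True

def notify_signup(mailer: Mailer, user_email: str) -> bool:
    """A dependent module: it just uses whichever Mailer is injected."""
    return mailer.send(user_email, "Welcome!", "Thanks for signing up.")
```

The author of `Mailer` did 80% of the test-support work once; each dependent module's test only constructs a `FakeMailer`, runs its own logic, and inspects `sent`.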