1. Clearing up the name “unit test”

I used to think that unit tests were only for functions, and that any test that went beyond a single function was not a unit test.

In fact, the definition of “unit” is up to you. In functional programming, a unit most likely refers to a function: your unit test calls this function with different arguments and asserts that it returns the expected results. In object-oriented languages, a unit can be anything from a single method to an entire class. What really matters is the intent.

We have unit tests, incremental tests, integration tests, regression tests, smoke tests, to name a few. Facing this “hundred schools of thought” situation, Google created its own naming and uses only three categories: small tests, medium tests, and large tests.

· Small tests verify a single function, focusing on its internal logic and mocking all required services.

Small tests lead to better code quality, better exception handling, and clearer error reporting.

· Medium tests verify the interaction between two or more clearly defined modules of an application.

· Large tests, also known as “system tests” or “end-to-end tests”, run at a higher level and verify that the system works as a whole.



Conclusion: a unit test case can target a single function or a chain of functions.
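As a minimal illustration (the Add function and file layout below are hypothetical, not from the News code base), a case for a single function simply calls it with different inputs and asserts on the result:

// add.go
package calc

// Add returns the sum of two integers.
func Add(a, b int) int {
    return a + b
}

// add_test.go
package calc

import "testing"

func TestAdd(t *testing.T) {
    if got := Add(1, 2); got != 3 {
        t.Errorf("Add(1, 2) = %d, want 3", got)
    }
}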

2. The pyramid model

Before the pyramid model there was the ice-cream-cone model: lots of manual testing, a layer of end-to-end automated testing, and only a few unit tests. As the product grows, manual regression testing takes longer and longer and quality becomes hard to control; the automated cases fail frequently, and every failure means tracing a long call chain. What went wrong? Unit tests are few and far between.



In his book “Succeeding with Agile,” Mike Cohn proposed the test pyramid. The metaphor is vivid: it tells you at a glance that testing needs to be layered, and it also tells you roughly how many tests to write at each layer.

The test pyramid itself is a good rule of thumb, and we would do well to remember the two things Cohn's pyramid tells us:

· Write tests of different granularity

· The higher the level, the fewer tests you should write



At the same time, our understanding of the pyramid must not stop there; we should go one step further:

I read the pyramid model as the ice cream melting. The “manual tests” at the top should in theory all be automated, melted down, and pushed first into unit tests. Whatever cannot be covered by unit tests goes into the middle layer (layered tests), and whatever the middle layer cannot cover goes into the UI layer. So for UI-layer cases, if you can do without them, do without them: they are slow and unstable. According to Steve Jobs, all automated cases should be considered as a whole and should not be redundant: if a unit test already covers something, that case should be removed from the service layer or the UI layer.

The lower the test level, the narrower its concern; the higher the level, the broader its scope. A unit test, for example, focuses on one unit and nothing else, so it can pass as soon as that unit is written. An integration test puts several units together, so it only passes after all of them are written and it takes longer than a unit test. A system test connects all the modules of the whole system and needs all kinds of data prepared before it can pass.

In addition, because so many modules are involved, a change in any one of them may break a high-level test, so high-level tests are usually fragile. In real projects, some high-level tests also depend on external systems, which pushes the complexity up further.

3. Why write unit tests

This is a question we cannot get around. News was one of the main drivers of this change in the R&D model, and the top-down push made it less of a battle: the decision was already made. Even so, there are plenty of reasons people give for not writing unit tests:

(real complaints we collected)

· Unit testing wastes too much time

· Unit tests simply prove what the code does

· I am a great programmer, can I not do unit testing?

· Later integration tests will catch all bugs

· Unit tests are not cost-effective

· I’ve written all the tests, so what do the testers do?

· I was hired to write code, not tests

· It is not my job to test the correctness of code

The value of unit testing

· Unit testing is very important to the quality of our products.

· Unit testing sits at the bottom layer of all kinds of testing. It is the first link in the chain and one of the most important, and it is the only stage where test coverage of the code can realistically reach 100%. It is the basis and precondition of the whole software-testing process, it prevents bugs from piling up and getting out of control late in development, and it offers the best cost-performance ratio.

· According to statistics, about 80% of errors are introduced during the software design and coding phase, and the cost of fixing a defect grows as the software life cycle progresses: the longer a mistake goes undiscovered, the more expensive it becomes to fix, and the growth is exponential. The developer, who is also the primary practitioner of unit testing, is the person best placed to keep defects out of the program.

· Pushes toward code standards, optimization, and testable code

· Refactoring with confidence

· Can be executed automatically, thousands of times over

Here are some statistics from Microsoft: a bug found in unit testing takes on average 3.25 hours to deal with, versus 11.5 hours if it is found in system testing.



The figure below illustrates two points: 85% of defects are introduced during the code design phase, and the later a bug is found, the more it costs to fix, with the cost growing exponentially. So it pays to find bugs early, in unit testing: it saves time and effort, and the problem is dealt with once and for all.



Are unit tests too time-consuming?

You can’t judge this by looking at the length of the unit-testing phase alone.

I surveyed developers on the News client and backend teams. To be clear, unit testing certainly does add development work and lengthen the coding phase.



In the book The Art of Unit Testing, there is an example of two teams with similar skills implementing similar requirements. The team that wrote unit tests doubled its coding phase, from 7 days to 14, but its integration-testing phase went very smoothly, with few bugs and fast bug localization. In the end, that team had both the shorter overall delivery time and the lower defect count.



Unit tests earn their keep. On the one hand, their benefit has to be observed across the whole iteration cycle, not just the coding phase; on the other hand, writing unit tests is itself a skill: people who write them well spend less time and deliver higher code quality (in other words, writing unit tests is not the same as writing good unit tests).

Who writes the unit tests?

· Developers write the unit tests

· Testers are capable of writing unit tests too, but should focus on building scaffolding and on layered / end-to-end testing

Incremental code or existing code?

· Write unit test cases for incremental (new) code

· When a large refactoring of existing code puts its quality at serious risk, that is a good time to push for filling in its unit tests

4. The stages of unit testing

1. In the broad sense, unit testing refers to the organic combination of three parts:

· Code review

· Static code scanning

· Unit test case writing

2. Combined with our practice at News, I divide the growth of unit testing into four goals:

· Able to write: everyone on the team can write unit tests

· Writes them well: starts paying attention to testability problems and pilots solutions for them

· Identifies testability problems and refactors them skillfully; identifies code architecture and design issues; cases are written in sync with the business code

· TDD. This last goal is an aspiration, not a requirement.



As of the date of publication, News is at the third stage: each iteration produces a large number of high-quality cases with good requirement coverage, we keep an eye on testability, and we refactor continuously.

5. Unit test metrics

Awkwardly, there is no direct metric for the effect of unit testing. We are often asked: “How do you prove that unit testing actually works for News?”

· Bug metrics (indirect): the trend of the total bug count across iterations, the trend of new bugs per iteration, and bugs per thousand lines of code

· Requirement coverage of unit tests (above 50%) and developer participation rate (above 80%)

· Trend of the total number of unit test cases, and the incremental trend of test code lines

· Line coverage of incremental code (80% for the access layer, 30% for the client)

· Cyclomatic complexity of a single function (below 40), lines of code in a single function (below 80), and the number of static-scan alarms

Under the constraint that iterations must keep delivering at high throughput, take the News iOS data as an example:









6. Choosing a Go unit test framework

Testify + Gomonkey

Additional: httptest + sqlmock
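As a rough illustration of where these two fit (the handler and the query below are hypothetical, not from the News code): httptest from the standard library fakes HTTP requests and response recorders for handler tests, and go-sqlmock fakes a *sql.DB so data-access code can be tested without a real database.

package example

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
)

func TestPingHandler(t *testing.T) {
    // httptest: build a fake request and capture the handler's response.
    req := httptest.NewRequest(http.MethodGet, "/ping", nil)
    rec := httptest.NewRecorder()
    http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("pong"))
    }).ServeHTTP(rec, req)
    if rec.Body.String() != "pong" {
        t.Errorf("got %q, want %q", rec.Body.String(), "pong")
    }
}

func TestCountUsers(t *testing.T) {
    // sqlmock: a fake *sql.DB that returns canned rows for the expected query.
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatal(err)
    }
    defer db.Close()
    mock.ExpectQuery("SELECT COUNT").
        WillReturnRows(sqlmock.NewRows([]string{"count"}).AddRow(5))

    var n int
    if err := db.QueryRow("SELECT COUNT(*) FROM users").Scan(&n); err != nil {
        t.Fatal(err)
    }
    if n != 5 {
        t.Errorf("got %d, want 5", n)
    }
}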





Prerequisites

· The test file ends with _test.go and is placed in the same directory as the file under test

· The test function name begins with Test followed by an uppercase letter or an underscore, such as TestParseReq_CorrectNum_TableDriven

· A test function takes the parameter t *testing.T; a benchmark function takes b *testing.B

· Run it with the go test command line; my earlier article, “Go test command line”, explains it in depth
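Putting those rules together, a minimal test file might look like this (parse_test.go and the ParseReq names are hypothetical, used only to illustrate the layout):

// parse_test.go lives in the same directory and package as the code under test.
package parse

import "testing"

// Test function: name starts with Test plus an uppercase letter or underscore,
// and takes *testing.T.
func TestParseReq_CorrectNum_TableDriven(t *testing.T) {
    // call the code under test and assert on the result
}

// Benchmark function: takes *testing.B.
func BenchmarkParseReq(b *testing.B) {
    for i := 0; i < b.N; i++ {
        // code to measure
    }
}

Run it with go test ./... or, for a single case, go test -run TestParseReq_CorrectNum_TableDriven.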

Testify usage

https://github.com/stretchr/t…

Testify is built on top of Go's standard testing package, so its syntax and command line are fully compatible with go test

It supports a number of handy assertion APIs, such as:

assert.Equal: an ordinary equality comparison; here the two values are turned into []byte for a strict comparison

assert.Nil: asserts that an object is nil; often used to assert that err is nil

assert.Error: checks the specific type and content of an err

assert.JSONEq: very handy when comparing maps; in this example I wrap a small helper that converts a struct to a JSON string.
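Roughly, a case using these asserts could look like this (the Comment type and the toJSONString helper are hypothetical stand-ins for the helper mentioned above):

package comment

import (
    "encoding/json"
    "testing"

    "github.com/stretchr/testify/assert"
)

// Comment is a made-up struct used only for this illustration.
type Comment struct {
    ID   int    `json:"id"`
    Text string `json:"text"`
}

// toJSONString stands in for the struct-to-JSON helper mentioned above.
func toJSONString(v interface{}) string {
    b, _ := json.Marshal(v)
    return string(b)
}

func TestAsserts(t *testing.T) {
    var err error
    got := Comment{ID: 1, Text: "hi"}

    assert.Nil(t, err)         // err is expected to be nil
    assert.Equal(t, 1, got.ID) // ordinary value comparison
    // Compared as JSON, so key order and whitespace do not matter.
    assert.JSONEq(t, `{"id":1,"text":"hi"}`, toJSONString(got))
}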



· Suite support for managing sets of test cases (a sketch follows after this list)

· At run time, a specific suite (case set) can be selected for execution



· A built-in mock tool, but it only supports mocking interface methods and is relatively complex to use
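Here is a minimal sketch of the suite support mentioned above (CommentSuite and its methods are made-up names, not from the News code):

package comment

import (
    "testing"

    "github.com/stretchr/testify/suite"
)

// CommentSuite groups related cases into one case set.
type CommentSuite struct {
    suite.Suite
}

// SetupTest runs before every test method in the suite.
func (s *CommentSuite) SetupTest() {
    // prepare shared fixtures here
}

func (s *CommentSuite) TestCreate() {
    s.Equal(1, 1)
}

func (s *CommentSuite) TestDelete() {
    s.True(true)
}

// TestCommentSuite is the entry point that go test discovers.
func TestCommentSuite(t *testing.T) {
    suite.Run(t, new(CommentSuite))
}

A whole suite, or a single method inside it, can then be picked at run time, e.g. go test -run TestCommentSuite or go test -run 'TestCommentSuite/TestCreate'.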

Table-driven tests
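The usual table-driven pattern looks like this (Sum is a hypothetical function under test; the point is one slice of cases driving one loop of asserts):

package calc

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

// Sum is a stand-in for the function under test.
func Sum(nums []int) int {
    total := 0
    for _, v := range nums {
        total += v
    }
    return total
}

func TestSum_TableDriven(t *testing.T) {
    cases := []struct {
        name string
        in   []int
        want int
    }{
        {"empty", nil, 0},
        {"single", []int{3}, 3},
        {"several", []int{1, 2, 3}, 6},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            assert.Equal(t, tc.want, Sum(tc.in))
        })
    }
}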



Gomonkey usage

https://github.com/agiledrago…

https://studygolang.com/artic…

· Supports stubbing a function

· Supports stubbing a member method

· Supports stubbing a global variable

· Supports stubbing a function variable

· Supports stubbing a specific output sequence for a function

· Supports stubbing a specific output sequence for a member method

· Supports stubbing a specific output sequence for a function variable

· Supports defining a series of stubs in a table-driven way
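Two of the capabilities above, stubbing a global variable and stubbing a function variable, are not shown in the examples later on, so here is a minimal sketch (maxRetries and now are hypothetical variables; the import path may need to be the gomonkey/v2 module depending on the version you use):

package stubdemo

import (
    "testing"

    "github.com/agiledragon/gomonkey" // or .../gomonkey/v2, depending on version
)

// Hypothetical package-level variable and function variable to be stubbed.
var maxRetries = 3
var now = func() int64 { return 0 }

func TestStubVariables(t *testing.T) {
    // Stub a global variable.
    p1 := gomonkey.ApplyGlobalVar(&maxRetries, 10)
    defer p1.Reset()

    // Stub a function variable.
    p2 := gomonkey.ApplyFuncVar(&now, func() int64 { return 1234567890 })
    defer p2.Reset()

    if maxRetries != 10 || now() != 1234567890 {
        t.Fatal("stubs did not take effect")
    }
}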

Note that the go test command needs extra flags for stubs on inlinable functions to take effect (see the official documentation), so my command line always includes -gcflags=all=-l to disable inlining.
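In practice, the invocation looks roughly like this (the ./... package pattern is just a placeholder for your own packages):

go test -v -gcflags=all=-l ./...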



I’ve set up some code templates for Goland and attached them.

ApplyFunc stubs a package-level function (i.e., not a method of a type)

/* Usage: gomonkey.ApplyFunc(function to stub, replacement function with the same signature)
 * Example:
 * patches := gomonkey.ApplyFunc(fake.Exec, func(_ string, _ ...string) (string, error) {
 *     return outputExpect, nil
 * })
 */
patches := gomonkey.ApplyFunc(lcache.GetCache, func(_ string) (interface{}, bool) {
    return getCommentsResp()
})
defer patches.Reset()

ApplyMethod stubs a method of a struct (class). There are two ways to handle this: 1) use an enhanced gomonkey; 2) do not stub the method at all and step into the function instead, which we will come back to when we talk about mocks.

/* Usage: gomonkey.ApplyMethod(reflect.TypeOf of the struct, "method name", replacement function with the same signature)
 * Example:
 * var s *fake.Slice
 * patches := ApplyMethod(reflect.TypeOf(s), "Add", func(_ *fake.Slice, _ int) error {
 *     return nil
 * })
 */
var ac *auth.AuthCheck
patches := gomonkey.ApplyMethod(reflect.TypeOf(ac), "PrepareWithHttp",
    func(_ auth.AuthCheck, _ http.Request, _ ...auth.AuthOption) error {
        return fmt.Errorf("prepare with nil object")
    })
defer patches.Reset()

ApplyMethodSeq makes the same stubbed method return different results on successive calls.

/* Usage: gomonkey.ApplyMethodSeq(reflect.TypeOf of the struct, "method name", output cells)
 * Params{info1} holds the return values of one stubbed call; Times is how many calls that cell stays valid for.
 * Example:
 * e := &fake.Etcd{}
 * info1 := "hello cpp"
 * info2 := "hello golang"
 * info3 := "hello gomonkey"
 * outputs := []OutputCell{
 *     {Values: Params{info1, nil}},
 *     {Values: Params{info2, nil}},
 *     {Values: Params{info3, nil}},
 * }
 * patches := ApplyMethodSeq(reflect.TypeOf(e), "Retrieve", outputs)
 * defer patches.Reset()
 */
conn := &redis.RedisConn{}
patch1 := gomonkey.ApplyFunc(redis.NewRedisHTTP, func(serviceName string, _ string) *redis.RedisConn {
    conn := &redis.RedisConn{
        redis.RedisConfig{},
        &redis.RedisHelper{},
    }
    return conn
})
defer patch1.Reset()

// mock redis data
outputCell := []gomonkey.OutputCell{
    {Values: gomonkey.Params{"12", nil}, Times: 1},
    {Values: gomonkey.Params{"", nil}, Times: 1},
}

patches := gomonkey.ApplyMethodSeq(reflect.TypeOf(conn.RedisHelper), "Get", outputCell)
defer patches.Reset()

More details can be found in the articles linked above. To stub a method, you must find the actual struct the method is defined on. For example:

// The function under test contains the following code, and we want to stub out the Get method:
readCountStr, _ := conn.Get(redisKey)
if len(readCountStr) == 0 {
    return 0, nil
}

// conn is a RedisConn, and Get actually lives on the embedded *RedisHelper:
type RedisConn struct {
    RedisConfig
    *RedisHelper
}

patches := gomonkey.ApplyMethod(reflect.TypeOf(conn.RedisHelper), "Get",
    func(_ *redis.RedisHelper, _ string, _ []string) ([]string, error) {
        return info, err_notNil
    })
defer patches.Reset()

In the next post, we'll go deeper into mocks and how not to abuse them. See you there.