As enterprise development shifts from traditional monolithic product delivery to fast-paced microservice architectures, software testers must adapt their testing methods and tools accordingly, improving test coverage as quickly and efficiently as possible and discovering potential defects as early as possible, so that strict enterprise quality requirements can still be met under rapid iteration.
In this article, I will show how to quickly build a testing pipeline for microservices, combining the theoretical perspectives of industry experts such as Martin Fowler and Rick Osowski with my own experience in DevOps and automated testing. The article is intended for development teams and testers planning or already adopting a microservices architecture. I will not attempt to cover everything; instead I will focus on practical experience and avoid rehashing concepts readers are already familiar with, in the hope that everyone takes away something useful or inspiring.
What are we talking about when we talk about microservices?
There has been a lot of talk about microservices, and I believe readers are more or less familiar with them. So what do microservices look like from a tester's perspective?
(1) Each service takes on certain responsibilities: "As small as possible, but as big as necessary."
On the question-and-answer site Quora, there is a famous question: what are the biggest time wasters programmers encounter? The top answer mentions "unnecessary microservices." This illustrates a pitfall enterprises often fall into when moving to microservice architectures. "Micro" matters, of course, but the first duty is to provide a "service"; that is what gives a "microservice" its value. Blindly slicing up functionality does not achieve decoupling; it only increases the cost of maintenance and testing. After all, every additional service means an additional pipeline and additional testing requirements.
(2) Microservices are usually connected via REST over HTTP.
The most common way for services to interact is to call each other's APIs with HTTP verbs such as POST, GET, PUT, and DELETE, passing parameters as JSON. This simple, unambiguous style of interaction is the basis for contract tests, as described in the Introduction to Contract Testing section of this article.
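As a minimal sketch of this style of interaction, using Python's requests library against a hypothetical order service (the URL, endpoint, and response shape are invented for illustration):

```python
import requests

BASE = "http://localhost:8080"  # hypothetical order service

# Create a resource (POST with a JSON body); assume the service
# returns 201 and an "id" field on creation
resp = requests.post(f"{BASE}/orders", json={"item": "book", "qty": 2})
assert resp.status_code == 201
order_id = resp.json()["id"]

# Read, update, and delete it with the remaining HTTP verbs
assert requests.get(f"{BASE}/orders/{order_id}").status_code == 200
requests.put(f"{BASE}/orders/{order_id}", json={"item": "book", "qty": 3})
requests.delete(f"{BASE}/orders/{order_id}")
```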
(3) Each service may not provide a user interface.
This means that testing each service does not necessarily have to be done from the UI. Instead, it calls for integration testing at the API level, as described in the Understand Integration Testing section of this article.
(4) Microservices can also be divided into smaller modules.
As shown in the figure below, a typical microservice can be divided into these modules: resources, business logic, data storage interface, external communication interface, etc.
What does microservices architecture mean for software testing?
What are the requirements for testing based on the above features of microservices?
Any testing strategy adopted by the development team should aim to provide comprehensive test coverage of the integrity of each module within a service, as well as of the interactions between modules and between services, while keeping tests lightweight and fast.
Consider a typical development team working on multiple functional modules at the same time, with different development schedules and delivery deadlines, while the team as a whole must keep delivering a deployable, usable product at fixed intervals (once a month, once a sprint, or even once a day). This means the old approach, in which the product manager and line of business provide requirements, developers develop, and only then testers run integration tests, no longer provides sufficient test granularity or fast enough response times.
In summary, compared with traditional testing methods for monolithic architectures, the microservices architecture presents the following challenges for testing:
Complex dependencies exist between services/modules/layers:
This means that if you want to test a service, or a module within a service, in isolation, you have to strip away its dependencies on everything else. This can be done with mocks, as described below.
Different services may run in different environments/settings:
Some back-end services in particular may run in environments very different from those of front-end services. When setting up an automation pipeline for each service, the environment configuration must be tailored accordingly.
End-to-end UI testing involving multiple services (E2E testing) is very error-prone:
Because each service has its own development schedule, end-to-end tests that integrate multiple services can fail because of a minor change in any one of them. Such failures are a distraction testers want to avoid, so end-to-end tests must be designed with strategies to resist interference and false positives, as detailed in the Optimization Strategies for End-to-End Testing section of this article.
Test results may depend on network stability:
In particular, if interactions with data storage and external communication are not isolated from network factors during testing, random false positives may occur and interfere with the test results.
Cost of communication between development teams with different delivery cycles:
This has nothing to do with technology, but it causes real trouble for testers. Because development is split across teams responsible for different services, testers often spend a significant amount of time each day tracking the progress of different teams. If regression tests must also be run manually, testers eventually become overwhelmed.
Automation, therefore, is the path that must be taken.
To address these challenges, I have summarized the following three principles:
Automation:
The increase in testing tasks requires testers to focus their efforts on automating tests and getting rid of the heavy burden of manual testing. Of course, automated tests must be stable and robust enough to avoid frequent false positives, which can lead to high maintenance costs.
Hierarchical:
This means a layered approach to testing, ranging from fine to coarse and from small to large. The following illustrates the relationship between the main levels:
At the bottom are unit tests, which are the finest-grained, fastest, and cheapest to maintain. Above them are tests of the various modules and business flows within each service. At the top are tests driven through the front-end UI, which are the coarsest and broadest in scope (they cover most services) but the most expensive to maintain, because the scripts may need tweaking after minor UI changes; and because they drive the front end, they must allow for response and wait times, which also makes them the slowest.
Visualization:
The best way to reduce communication costs is to visualize all test results. This means building, testing, and deploying all of these related tasks in a pipeline so that all team members can monitor the progress of the project and find the bottlenecks that are holding it back. This article explains how to build such a pipeline in detail in the Uncover the Mysteries of the Test Pipeline section.
The main test methods used in a microservices architecture are described below, layer by layer, as shown in the figure. They include:
- Unit Test
- Integration Test
- Component Test
- End to End Test
- Exploratory Test (Manual Test)
How to unit test a microservices architecture?
The purpose of unit testing is to execute the smallest testable unit of a software program and verify that it runs as expected.
There are many tools for unit testing, such as:
- C++: GoogleTest, GoogleMock
- Java: JUnit, TestNG, Mockito, PowerMock
- JavaScript: QUnit, Jasmine
- Python: unittest
- Lua: luaunit
The implementation generally follows the pattern: Setup -> Exercise -> Verify -> Teardown.
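As a minimal sketch of these four phases, using nothing beyond Python's built-in unittest module (the function under test is invented for illustration):

```python
import unittest

def apply_discount(price, rate):
    """The smallest testable unit: a pure piece of business logic."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    def setUp(self):                                  # Setup
        self.price = 100.0

    def test_applies_rate(self):
        result = apply_discount(self.price, 0.25)     # Exercise
        self.assertEqual(result, 75.0)                # Verify

    def tearDown(self):                               # Teardown
        self.price = None

if __name__ == "__main__":
    unittest.main()
```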
Defining test boundaries is the first step toward efficient testing. The purpose of a test is to verify that the "black box" inside the boundary behaves as expected: we feed data into the black box and verify that the output is correct. In unit testing, the black box is a function or class method, and the test checks the behavior of that particular block of code in isolation. In a microservice architecture, however, the output of the black box often depends on other functions or services; that is, there are external dependencies.
Stubs, also known as mocks, are used to generate input data without relying on external conditions. This can be done through dependency injection or method swizzling. The testing framework ensures that calls to the underlying dependencies are redirected to stubs while the function under test runs, so unit tests can execute without external services, which keeps them fast and immune to network conditions. There are many tools for creating stubs, including Sinon.JS and testdouble.js for Node.js/JavaScript, mock (unittest.mock) for Python, and so on.
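A minimal dependency-injection sketch with Python's unittest.mock (the client object and its fetch_user method are hypothetical stand-ins for an external service):

```python
import unittest
from unittest import mock

def get_display_name(user_id, client):
    """Business logic under test: formats data fetched from a dependency."""
    user = client.fetch_user(user_id)
    return f"{user['name']} ({user['age']})"

class DisplayNameTest(unittest.TestCase):
    def test_formats_name(self):
        # The stub replaces the real client, so no network access is needed
        client = mock.Mock()
        client.fetch_user.return_value = {"name": "Alice", "age": 30}
        self.assertEqual(get_display_name(1, client), "Alice (30)")
        client.fetch_user.assert_called_once_with(1)

if __name__ == "__main__":
    unittest.main()
```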
It is worth emphasizing that testers should record and visualize unit test coverage as a key monitoring indicator. For example, CI tools such as TeamCity or Jenkins can use dotCover to measure unit test coverage in the pipeline and display the results on task pages as text or HTML reports. Coverage and test-result data can also be exported automatically to code quality monitoring tools such as SonarQube, so that the build can be failed when tests fail or coverage falls short of expectations.
High unit test coverage is the first and most important barrier protecting code quality. In terms of the division of labor, testers may not develop or maintain the unit tests themselves, but they should help developers ensure unit tests are deployed with adequate coverage, which is a prerequisite for the effectiveness of all subsequent testing.
Understand integration testing
In a microservices architecture, the main purpose of integration testing is to assemble several sub-modules so that they work as a "subsystem," to ensure they collaborate in the expected way, and to check the communication and interaction between different modules in order to verify whether there are problems at the interfaces.
The most common integration tests examine the communication between a microservice's external-facing modules and external services, as well as its interaction with external databases, shown inside the yellow dotted line in the figure below.
When testing communication with the outside world, note that the purpose of an integration test is to check whether the communication works, not to run functional acceptance tests on the external modules, so only the basic "critical path" needs to be checked. Such tests help detect errors at the protocol level, such as missing HTTP headers, SSL usage errors, and request/response mismatches.
Also, because most integration testing involves network connectivity, you must make sure the service or module handles network failures gracefully. If you need to test the module's behavior when the external service enters a special state, you can use the stubs described above to simulate that state, such as a response timeout. Tests of the database link ensure that the data schema used by the microservice conforms to the definition in the database. A sketch of both kinds of check follows.
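A minimal sketch in Python, assuming a hypothetical order service reachable at the URL below; the first test checks the protocol-level critical path against the live dependency, and the second stubs the network call to simulate a timeout:

```python
import unittest
from unittest import mock

import requests

ORDER_SERVICE = "http://localhost:8080"  # hypothetical external service

def fetch_orders(base_url):
    """Hypothetical client wrapper whose failure handling we want to test."""
    try:
        return requests.get(f"{base_url}/orders", timeout=2).json()
    except requests.exceptions.Timeout:
        return None  # degrade gracefully instead of crashing

class OrderServiceIntegrationTest(unittest.TestCase):
    def test_critical_path(self):
        # Protocol-level check only: status code and content type,
        # not a functional acceptance test of the external service
        resp = requests.get(f"{ORDER_SERVICE}/orders", timeout=5)
        self.assertEqual(resp.status_code, 200)
        self.assertIn("application/json", resp.headers.get("Content-Type", ""))

    @mock.patch("requests.get", side_effect=requests.exceptions.Timeout)
    def test_survives_timeout(self, _mock_get):
        # The stub makes every requests.get raise Timeout
        self.assertIsNone(fetch_orders(ORDER_SERVICE))

if __name__ == "__main__":
    unittest.main()
```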
Building on successful unit tests, integration tests further improve our coverage: we now know not only that the modules inside each microservice work (from the unit test results), but also that they can work together and communicate and interact with the outside world.
Next, we need to know if the individual microservices will work, which brings us to the Component Test.
Component test details
A component is a self-contained, independently operable part of a larger system. In a microservices architecture, a component is effectively the microservice itself.
The essence of a component test is to simulate all the other services and resources a microservice depends on, and then check, from the perspective of the service's external "users," whether it provides the expected output.
There are usually two ways to simulate these dependencies. One is to put all the services and call relationships in one process and simulate the dependencies with tools such as inproctester (for Java environments) and plasma (for .NET environments). The advantage is reduced complexity; the disadvantage is that it requires changes to production code. The other approach is to put the simulated dependencies outside the microservice's process and call the service's external API over a real network connection. This suits highly complex microservices, but simulating the dependencies is much harder. Tool choices here include Moco, stubby4j, and Mountebank.
Mountebank, for example, can simulate a virtual API for a microservice to call. Given the following imposter definition:
```json
{
  "port": 4545,
  "protocol": "http",
  "stubs": [{
    "responses": [{
      "is": {
        "statusCode": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "body": ["Australia", "Brazil", "Canada", "Chile", "China", "Ecuador", "Germany", "India", "Italy", "Singapore", "South Africa", "Spain", "Turkey", "UK", "US Central", "US East", "US West"]
      }
    }],
    "predicates": [{
      "equals": {
        "path": "/country",
        "method": "GET"
      }
    }]
  }, {
    "responses": [{
      "is": {
        "statusCode": 400,
        "body": {
          "code": "bad-request",
          "message": "Bad Request"
        }
      }
    }]
  }]
}
```
After running a short script, you can open http://localhost:4545/country in your browser (4545 is the imposter port defined above; 2525 is Mountebank's admin port) and get the country list back:
```sh
#!/bin/sh
set -e

MOUNTEBANK_URI=http://localhost:2525

# Start the Mountebank container if it is not already running
if [ "$(docker ps | grep hasanozgan/mountebank | wc -l)" -eq 0 ]; then
    docker run -p 2525:2525 -p 4545:4545 -d hasanozgan/mountebank
fi

# Wait until the Mountebank admin API (port 2525) is reachable
until curl -s "$MOUNTEBANK_URI/imposters" > /dev/null; do
    sleep 1
done

# Replace any existing imposter on port 4545 with the stub definition above
curl -X DELETE "$MOUNTEBANK_URI/imposters/4545"
curl -X POST -H 'Content-Type: application/json' -d @stubs.json "$MOUNTEBANK_URI/imposters"
```
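Once the imposter is up, a component test can treat it as the real dependency. As a minimal check of the stub itself, assuming Python's requests library:

```python
import requests

def test_country_stub():
    # The imposter defined in stubs.json answers on port 4545
    resp = requests.get("http://localhost:4545/country")
    assert resp.status_code == 200
    assert resp.headers["Content-Type"] == "application/json"
    assert "China" in resp.json()
```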
At this point, we’ve finished testing the service itself. Next, how do we ensure that the different services work together properly? This introduces the concept of Contract Test.
Introduction to Contract Testing
Contract testing is also known as Consumer-Driven Contract testing (CDC). We can divide services into a consumer side and a producer (provider) side. The core idea of CDC is that the consumer defines the required data formats and interaction details, from the perspective of its own business implementation, and generates a contract file. The producer then implements its logic against that contract file and continuously verifies the implementation in a continuous integration environment.
Note that CDC contract testing has several core principles:
- CDC is an interface contract proposed by the consumer and delivered to the service provider to implement, with the contract enforced by test cases. As long as the test cases are satisfied, the service provider can change its interface or architecture implementation without affecting consumers.
- CDC is a test of an external service interface that verifies the service meets the contract the consumer expects. In essence, it starts from the goals and motivations of the stakeholders and maximizes the business value delivered to the consumer side.
- Contract tests are not component tests. They do not examine the functionality of the service in depth, but only check whether the input and output of the service request contain the necessary data structures and attributes, and whether the response latency, speed, and so on are within the expected range.
In the following example, we use the contract test to check whether the packets (usually in JSON form) exchanged between the consumer and the service provider contain the three items (ID, name, and age) and whether the data structure of the three items is as expected.
Currently, the most common tool for contract testing is Pact. Its workflow can be summarized in two steps:
- On the consumer side, you write a unit test that sends requests to the interface. When the unit test runs, Pact automatically replaces the service provider with a MockService and automatically generates the contract file, which is stored as JSON.
- Contract validation tests run on the service provider side. Once the provider service is started, a command can be run through the Pact plug-in; for example, with Maven:
mvn pact:verify
It then automatically generates the interface request according to the contract and verifies that the interface response meets the expectations in the contract.
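The consumer-side step can be sketched with pact-python (the service names, port, endpoint, and payload below are invented for illustration; pact-jvm offers the same flow for Java). Running this test spins up a mock provider, verifies the interaction, and writes the JSON contract file:

```python
import atexit

import requests
from pact import Consumer, Provider

# Pact stands in for the real provider on the given port
pact = Consumer("UserClient").has_pact_with(Provider("UserService"), port=1234)
pact.start_service()
atexit.register(pact.stop_service)

def test_get_user_contract():
    expected = {"id": 1, "name": "Alice", "age": 30}
    (pact
     .given("user 1 exists")
     .upon_receiving("a request for user 1")
     .with_request("GET", "/users/1")
     .will_respond_with(200, body=expected))

    with pact:  # verifies the interaction and records the contract
        result = requests.get(f"{pact.uri}/users/1").json()

    assert result == expected
```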
As you can see, validation similar to an integration test is accomplished without ever starting the real service provider on the consumer side. This is the most powerful part of Pact, and it has some other notable features:
- Test decoupling: the service consumer and provider are decoupled; you can even begin testing on the consumer side before a provider implementation exists.
- Consistency: the tests ensure that the contract stays consistent with reality.
- Tests shift left: they can run at development time and as part of CI, even locally with a single command, making it easier to spot problems early and reducing the cost of fixing them.
- Pact provides a tool called Pact Broker to manage contract files. With Pact Broker, contract uploading and validation can be done by command, and contract files can be versioned.
- Pact Broker can also automatically generate a service invocation diagram, giving the team a global view of service dependencies.
- A framework like Pact helps teams reduce the cost of integration testing between services and find out early whether a change to the provider's interface breaks the data format the consumer expects.
- Pact currently supports only REST/HTTP communication; it does not support RPC mechanisms.
Optimization strategies for end-to-end testing
Contract testing addresses our testing of collaboration between microservices. The final step in automated testing is the so-called end-to-end Test, which verifies that the entire system functions as expected.
Most of the testing so far has been at the back-end or API level, but end-to-end testing should be done from the UI, to ensure that users see the interface behaving as expected. As you have no doubt experienced, however, UI tests are often fragile and unstable, and can fail after only minor UI changes. To ensure end-to-end tests complement the other tests and improve coverage without frequent false positives, note the following:
- End-to-end tests should be as concise as possible. By "concise" I mean they should cover the core paths users actually take, not too many branch paths. Keep UI tests lightweight to reduce maintenance costs; otherwise the entire test team gets bogged down updating front-end scripts.
- Choose your testing area carefully. If a particular external service or interface is prone to random test errors, consider taking those uncertainties out of end-to-end testing and compensating for them with other forms of testing.
- Improve repeatability of test environments through automated deployment (Infrastructure-as-code). When testing different versions or branches of products, automated tests often give different test results depending on the test environment. This requires a repeatable environment, and the solution is to automate deployment through scripting to avoid the impact of manual deployment.
- Minimize dependence on pre-existing test data. One of the most common challenges of end-to-end testing is managing test data. Some teams import existing data to speed up tests and avoid creating new data, but as production code changes, this pre-prepared data must change with it or the tests may fail. For this reason, I prefer to create fresh data during the test run; it takes some time, but it avoids data maintenance costs and exercises user behavior more completely.
There are many UI testing frameworks and tools. For web testing, the most common combination today is Protractor + Selenium Server + the Jasmine testing framework, as shown in the figure below.
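To keep this article's sketches in one language, here is the same idea with Selenium's Python bindings instead of the Protractor/Jasmine stack: a single core-path check (the URL and element IDs are hypothetical), using an explicit wait rather than a fixed sleep to keep the test fast and stable:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Core-path check only: log in and confirm the dashboard renders
driver = webdriver.Chrome()
try:
    driver.get("http://localhost:3000/login")  # hypothetical app URL
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("demo")
    driver.find_element(By.ID, "submit").click()
    # Explicit wait: poll up to 10 s for the dashboard to become visible
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "dashboard"))
    )
finally:
    driver.quit()
```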
Uncover the mysteries of the test pipeline
Above we have described the main types of testing for microservices architecture. So how do you choose a testing strategy that works for you? Now let’s review the characteristics of these tests:
- Unit testing: Examining the smallest testable pieces of production code to see if they meet expectations.
- Integration test: Check whether the combination of modules works and whether the module communicates with external services and resources properly.
- Component testing: Test the functionality of a single microservice by isolating it from the outside world through internal interfaces and external simulations.
- Contract testing: On the interfaces between microservices, check that their interactions meet the expected standards.
- End to end testing: An end to end inspection of the entire product/system to determine compliance with external requirements and achievement of its objectives.
In summary, from top to bottom, the granularity of the test goes from fine to coarse. The coarser the granularity of a test, the more parts it involves, the more vulnerable it is to false positives, and the more expensive it is to implement and maintain.
Once the test strategy is selected, scheduling tools such as TeamCity or Jenkins can be used to establish a continuous integration/continuous delivery pipeline. A common pipeline can be expressed as:
The next step is triggered only when the previous step succeeds. After the testing of a single microservice is complete, the next step, end-to-end testing combining multiple microservices, is triggered.
The above describes the stages of automated testing for microservices; the final step is manual testing. If automated testing is done well, manual validation can actually be quite simple. The key at this stage is to bring in domain experts to explore the product's functionality from the user's perspective. You can use tools such as Application Insights from Microsoft Azure or Google Analytics to record these experts' behavior as use-case references for future automated tests.
Cloud testing is different from local testing
Most development teams begin the development phase by deploying the product in a local environment for testing. But many products are ultimately released to the cloud, whether Microsoft Azure, Google Cloud, Amazon AWS, or Alibaba Cloud in China. So can the testing process and code executed locally smoothly carry over to a product deployed in the cloud?
The main differences between the two test environments include the following:
- Login mechanism: In a local environment the login mechanism can be simple, because most access happens inside the enterprise network. In a public cloud environment, however, cloud providers impose a series of login mechanisms for security reasons, which may invalidate local test code. To account for this difference, developers should take cloud testing into account during the development stage and provide API-level access paths. Front-end UI tests can usually still reach the application by simulating clicks and entering an account, but they then face the security question of whether to store login passwords in test code.
- Network conditions: In a local enterprise network, network conditions are very predictable, but in a public cloud, the configuration of networks and virtual machines is often uncertain, so a test may fail for unknown reasons. This means local tests should also simulate network failures and configuration errors to check how the production code handles such situations.
Of course, cloud testing also brings useful capabilities. Cloud providers generally offer comprehensive monitoring and diagnostic tools that testers and maintainers can use to analyze service health and search logs.
Tool selection for performance/capacity testing
Performance/capacity testing is an indispensable part of microservice testing. Especially for web applications, whether the product can remain stable under heavy traffic is something every product manager needs to know.
Performance testing includes load testing, stress testing, spike testing, endurance (soak) testing, scalability testing, and so on. It can show whether the system meets its expected performance indicators (SLA) and also pinpoint the parts of the system that cause performance degradation.
Its overall process includes:
- Determine the test environment
- Determine performance acceptance criteria (SLA)
- Plan and design test solutions
- Configure the test environment
- Deploy test scenarios
- Perform the test
- Analyze test results
The main tools currently available include:
- Microsoft Visual Studio Load Testing
- HP LoadRunner
- NeoLoad
- Apache JMeter
- Rational Performance Tester
- Silk Performer
- Gatling
The one I use most is Microsoft Visual Studio's load testing. It works entirely at the HTTP level, so no browser is required: it is independent of any front-end JavaScript and simply records HTTP requests. In other respects it resembles end-to-end UI testing, in that it is based on request and response, extracting validation rules from the returned results to determine success. It has comprehensive parameterization, data source management, and custom validation rules that cover most situations. As an added bonus, the scripts it records can also be reused for manual testing.
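If you prefer a code-driven, open-source option, here is a minimal load profile in Locust, a Python tool not in the list above; the /country endpoint is borrowed from the Mountebank example, and the task weights are arbitrary:

```python
from locust import HttpUser, task, between

class SiteUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between tasks

    @task(3)  # weighted: runs three times as often as home_page
    def list_countries(self):
        self.client.get("/country")

    @task(1)
    def home_page(self):
        self.client.get("/")
```

Run it with, for example, `locust -f loadtest.py --host http://localhost:4545 --users 100 --spawn-rate 10` and watch throughput and latency in Locust's web UI.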
The evolving role of the tester in the new era
Finally, consider how the role of QA (the tester) evolves under the new development architecture. The diagram below shows the relationship between development, testing, and operations. DevOps is a very popular concept at present, and TestOps, QA plus Ops, is in my eyes the direction of the future. The reason is that as automation deepens and release frequency increases, testers who simply take products and run manual or automated tests can no longer meet the needs of the enterprise.
In the traditional working model, developers release code, testers test it, and operations staff deploy the product and take it to market. The downside of this model is the high cost of communication between the teams, as shown in the figure below.
In a future TestOps model, TestOps engineers will be responsible for testing, continuous integration/delivery, and, ultimately, release.
This pattern puts higher demands on the skill level of the tester, but the benefits are very obvious.
For the team:
- Promoting cooperation and reducing communication costs;
- More effective control of the continuous delivery lifecycle;
- High quality continuous integration.
For the testers themselves:
- Can master operation and maintenance skills;
- Can apply automated testing skills in service of continuous delivery;
- Take active control of the development lifecycle and have a big say in the overall team.