At the GOPS conference in Shenzhen this April, I gave a talk titled “Difficulties in Implementing Microservices and How to Implement Microservices Effectively”, a summary of a project I started in April 2017 that I later published on my blog and on “ThoughtWorks Insights”.
This case comes from client R, whose project I joined shortly after joining ThoughtWorks in 2013; five years have passed since then. After the National Day holiday in 2013, I joined one of client R’s product teams. The team ran three projects: one was long-term routine maintenance (BAU); one developed new functionality; and the third transformed the existing Java legacy system by re-exposing some of the Java application’s functionality, previously reached through the ESB and internal invocation, as external HTTP calls built with Sinatra (a RESTful API framework for Ruby).
I didn’t know we were doing microservices at the time; it just felt like we were decoupling the application at low risk through automated testing and continuous delivery. This reduced the complexity of the system, reduced the amount of code to maintain, and kept other teams working in the code base unencumbered. It also reduced the risk of failures and releases in the production environment.
I worked on the project for eight months, breaking that “piece of functionality” apart. We did not have a separate Ops team at the time: all operations work was done within the team, and we didn’t differentiate between development, testing, and operations. Different people simply claimed different tasks, and if they couldn’t do something, they learned it themselves or consulted colleagues who could. This was my first encounter with DevOps: a cross-functional, end-to-end product team.
In 2014 we began deploying with Docker. Docker was very new at the time and there was little material about it online, so we wrote our own orchestration tools to deploy Docker at scale. Around the same time we were introduced to contract testing and applied it to our microservices, and we started using Scala and the Play framework to break up other applications. With contract testing, serial integration tests become unit tests: because the contract constrains both sides, an integration test can be downgraded to a unit test, which greatly improves testing efficiency and reduces testing cost.
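To show the mechanics behind that claim, here is a minimal sketch of consumer-driven contract testing in plain Scala, rather than a specific contract-testing library such as Pact (mentioned below); the names (Contract, /orders/42, and so on) are illustrative only. The consumer records the request and the response fields it depends on as a contract, and the provider verifies its handler against that contract in isolation, so neither side needs the other running.

```scala
// The contract the consumer publishes: the request shape plus the response fields it relies on.
case class Contract(method: String, path: String, expectedStatus: Int, requiredFields: Set[String])

// A simplified provider response.
case class Response(status: Int, body: Map[String, String])

object ContractCheck {
  // Provider-side verification: replay the contract's request against the handler
  // and check that the response satisfies what the consumer depends on.
  def verify(contract: Contract, handler: (String, String) => Response): Boolean = {
    val response = handler(contract.method, contract.path)
    response.status == contract.expectedStatus &&
      contract.requiredFields.subsetOf(response.body.keySet)
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical contract for an order-lookup endpoint (illustrative names only).
    val orderContract = Contract("GET", "/orders/42", 200, Set("id", "status"))

    // A stand-in for the provider's real routing logic.
    val providerHandler: (String, String) => Response = {
      case ("GET", path) if path.startsWith("/orders/") =>
        Response(200, Map("id" -> path.split("/").last, "status" -> "shipped"))
      case _ => Response(404, Map.empty)
    }

    println(s"Contract satisfied: ${verify(orderContract, providerHandler)}")
  }
}
```

Because each side only ever runs against the shared contract, a change that breaks the other team fails fast in a unit-level check rather than in a slow, serial integration environment.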
You will find the Pact tool mentioned in various microservices books and practices. It was developed for client R by another consulting firm, and that work is what defined contract testing for us. It wasn’t until the end of 2014, when we had split out the interfaces, that I realized this was microservices and understood what DevOps was.
I left the team in November 2014 and began bringing that experience to other clients, which deepened my understanding of DevOps and microservices. At the same time, client R started replicating our earlier success and carrying out a comprehensive microservices transformation across the whole group.
In April 2017 I returned to this client’s project and stayed until September 2018. During this period I interviewed other microservices teams, which gave me a more direct view of the effects and experience of five years of microservices improvement.
This series consists of four parts: “How Do We Measure the Success of a Microservices Implementation”, “Organizational Evolution of a Successful Microservices Implementation”, “Technological Evolution of a Successful Microservices Implementation”, and “Experience and Reflection in the Evolution of a Microservices Architecture”. This Chat is the first, “How Do We Measure the Success of a Microservices Implementation”. For confidentiality, the names of specific customers, projects, and people are all pseudonyms.
How the maintenance costs of an application architecture grow
We can set up a coordinate system with the scale of the architecture (measured, say, by the number of features or lines of code) on one axis and the cost of maintenance (people, money, time) on the other, and make a simple comparison:
At the beginning (point O), the cost of building a monolithic application is relatively low because nothing needs to be distributed. An architecture that starts out as microservices carries an extra up-front cost (point O1) due to the complexity of distribution.
As the application grows, maintenance costs grow with its scale. As scale rises, there is bound to be a point X at which the maintenance costs of the monolith and of the microservices architecture are equal. This also means that before that point, the advantages of the monolithic application are very pronounced.
As an architectural model, a monolithic application’s maintenance cost grows roughly exponentially with scale: the dependencies that come with scale concentrate risk, which slows delivery and raises deployment risk. A microservices application, by contrast, is composed of multiple simple systems with low interdependency and controllable delivery efficiency and deployment risk, so its maintenance cost grows roughly logarithmically.
Because of this, once the two architectures grow beyond point X their maintenance costs diverge sharply, and there is bound to be a maintenance-cost ceiling, point X’, which caps how large the monolithic application can grow.
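To make the shape of this argument concrete, one hedged way to write the model down (the exponential and logarithmic forms come from the description above; the coefficients are illustrative assumptions, not measurements from client R) is:

$$C_{\text{mono}}(s) \approx c_O\, e^{\alpha s}, \qquad C_{\text{micro}}(s) \approx c_{O_1} + \beta \ln(1 + s), \qquad c_{O_1} > c_O,\ \ \alpha, \beta > 0$$

where $s$ is the application scale, point X is the scale at which $C_{\text{mono}}(X) = C_{\text{micro}}(X)$, and point X’ is the scale at which $C_{\text{mono}}$ reaches the affordable maintenance-cost ceiling $C_{\max}$. Below X the monolith is cheaper; beyond X the gap widens quickly, and X’ caps how far the monolith can grow.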
This, in practice, is the driving force behind most businesses’ move to microservices: the old application’s growth model can no longer handle the scale.
So most companies start at point O, follow the monolith’s curve (the red line) up to point X’, and only then begin a microservices transformation in exchange for further growth in application scale.
Starting with a microservices architecture means investing more cost up front in exchange for larger scale in the future. From a lean perspective, however, that is waste unless you are actually aiming for a large-scale application, and most applications were not expected to reach that scale when they were first built.
So the ideal model is to start at point O and begin the microservices transformation at point X.
However, this simply does not happen, for two reasons:
- People don’t build two identical applications in parallel just to compare them and find the growth limit.
- In practice, no one builds metrics up front to measure maintenance cost against application scale.
So you find yourself somewhere between X and X’, and you start adopting DevOps techniques to shorten development cycles, improve delivery, and measure the delivery state of your application. This calls for a metric on the architecture itself, which I call MSROI (Marginal Scale Return on Investment): the return (cost reduction, risk reduction, revenue increase, and so on) that a newly built function generates over a future period, relative to the cost (people, time, resources) consumed to build it. If that rate of return keeps falling while costs keep rising, you need to consider reducing the cost of maintaining the application.
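As a hedged formalization of that definition (my own reading of it, not a formula from client R), MSROI for a new function $f$ over a future period $T$ could be written as:

$$\mathrm{MSROI}(f, T) = \frac{\text{return generated by } f \text{ over } T}{\text{cost of building } f} = \frac{\text{cost reduction} + \text{risk reduction} + \text{revenue increase}}{\text{people} + \text{time} + \text{resources}}$$

When this ratio keeps falling release after release while the denominator keeps growing, that is the signal described above: maintenance cost has become the thing to reduce.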
Applying the locality principle to architecture
For a running system, the maintenance process is governed by two locality principles. I call this the DevOps locality assumption for application systems, and it contains one locality assumption each for Dev and for Ops:
Development locality assumption: maintenance is development work on parts of a running application system, so its impact on the development organization as a whole should only be local.
Operations locality assumption: in a running application system, a local change should carry only local risk and local impact.
These two assumptions come with one constraint: the combined cost of the localized development must be less than the total cost of replacing the entire system. In other words, the local piece you rebuild must be smaller than the original system; otherwise you are effectively implementing a new system.
What we usually experience, however, is the opposite: a local change depends on many other development teams and people and has cascading effects across the application system.
So, if your application has a microservices architecture, it should satisfy the microservices DevOps hypothesis and pass the microservices DevOps locality test:
Any local change to the application has only a local effect.
To reduce system risk, we need to isolate the risk of application changes. Hence the DevOps corollary of a microservices architecture:
Locality of maintenance concerns at run time is what makes locality of independent development possible.
That is, independent development and independent deployment are possible only when the operational risks of the application architecture are isolated. In essence, the transformation from a monolith to microservices converts the application’s high-risk internal dependencies into low-risk external dependencies; it is a transition from internal complexity to external complexity. That is why most of the cost of a microservices transformation lies in handling communication between services.
The DevOps locality principle for application systems implies that the goals of microservices are essentially the same as the goals of DevOps. This also matches my earlier experience with microservices adoption: once improvements to deployment time and change risk have gone as far as they can within the single application, consider breaking the application apart to practice DevOps further.
In other words, a microservices architecture is the result of an organization continuously deepening and optimizing its DevOps practice.
How do we measure the impact of a microservices transformation
Our main demand of microservices is to reduce management costs while the system grows in scale; that is, the application architecture and the organizational structure should satisfy the assumptions above. This management cost has two aspects:
- Personnel management costs are reduced. Building flat, cross-functional teams around the system reduces the cost of communication and coordination between organizational units.
- Technology management costs are reduced. A loosely coupled architecture strategy makes the application architecture both stable and flexible, so technical debt can be cleaned up, problems fixed, and new features added at low cost and low risk.
Personnel management costs are reduced
As the business expands, more requirements need to be developed and maintained, so with time and quality held constant, demand can only be met by adding resources. But the law of diminishing marginal returns tells us that, other things being equal, the return from adding ever more of any single factor will diminish. To keep returns growing, other factors have to change as well, above all the structure of the organization itself.
For large applications, the main resource we add is people. So the management cost of a large project rises as headcount rises, until adding people no longer generates additional return. This is a problem we see in large applications, especially Internet applications.
The old model of throwing a crowd of people at requirements by breaking complex problems down extensively is not sustainable. On one hand, growing complexity requires architects who can decompose problems to a reasonable size, and such architects are expensive to acquire, whether by recruitment or by training. On the other hand, as development and maintenance headcount grows, managing those people adds further cost.
The emergence of microservices changed the application architecture and, through Conway’s law, changed the shape of the organization. With self-organizing, cross-functional agile teams, the communication around delivery is simplified. Using automation to institutionalize best practices improves delivery quality and reduces training costs. Breaking one problem into separate, independent problems avoids the extra cost of unclear boundaries.
From an economic perspective, the organizational structure of microservices itself involves two parts: a property-rights system and the defining of those property rights.
If you do microservices well then, according to Conway’s law, your architecture will mirror your organizational structure. With self-governing, self-improving teams that turn best practices into fully empowering systems and rules, you do not need deep hierarchy, heavy reporting lines, or extra layers of guidance. The structure is flat and loose: each team completes its tasks independently under the same rules, with no organizational process blockades, yet achieves the same quality and efficiency.
This eventually produces a software development culture and system that can be replicated and extended throughout the organization, further reducing personnel management costs.
A microservices architecture can therefore significantly reduce an organization’s management cost. We see teams improving themselves and people growing, with less management needed, as demonstrated by the Huawei DevCloud team’s case sharing at DevOpsDays Beijing 2018.
So in a successful microservices implementation we can expect the following organizational characteristics:
- A flatter organizational structure with less management, which means lower administrative costs.
- The organization is more resilient and more tolerant of risk; the departure of any individual does not make a big difference.
- Delivery quality and output are high, with high-quality releases at least once a day, so no change is left blocked.
- Training costs drop, principles and culture are unified, and anyone new to the team can get up to speed quickly (producing output within two iterations).
- There is no blocking: teams are independent, monitor the organization’s capacity in real time, and can adjust resources and output dynamically.
Technology management costs are reduced
Another mark of a successful microservices implementation is the reduction of technology management costs, which come in two parts: development costs on one hand and maintenance costs on the other.
The main reason is that in the era of the monolith, when change requests are frequent, the single application becomes a mutually exclusive resource. To borrow from DevOps and the Theory of Constraints (ToC), the monolith’s code base becomes the constraint point. Every change to this code base must be carefully planned and designed, or it will have a system-wide impact; the architect has to weigh every aspect before the next stage can proceed. At this point we are forced to “batch” changes to the application, and if a batch contains too many changes, the failure of one change fails the whole batch. That is why the industry has adopted a “baby steps” approach of frequent releases to reduce failure rates, supported by continuous delivery and related DevOps practices.
So we need some way to remove the constraint point. Thanks to the DevOps locality assumptions and corollary discussed above, we can divide the application into distinct parts, managed in separate code bases and maintained by different teams, achieving local development and operations and enabling continuous delivery.
This has several benefits:
First, you get a clear architecture. The architecture of most of our systems is chaotic and full of “dark knowledge”: uncertain knowledge that can only be recovered by reading the code, tracing logs, and observing actual operation. Try drawing a layered architecture diagram of such a system and the diagram will not correspond well to the real system; it is confusing, and very often the deployment architecture and the logical architecture are inconsistent.
Second, you get faster and more consistent release feedback, which is really a benefit of DevOps. It lets you deploy the application more frequently, and more stably, with less downtime during deployments and updates. You will find a lot of automation appearing in this process, and as the microservices multiply, the biggest problem becomes managing them. For a fixed set of externally required functionality, the size of each microservice determines how many microservices there are, which in turn determines the number of teams and the complexity of management. So what matters is not the size of a microservice but how many people you have to support that complexity.
We know that when the development workload grows, the only way to hold cost and development time constant is to cut requirements so that the final delivery date can still be met. In a microservices environment, however, you no longer have to live with a single big deadline, and you find you can solve these problems without adding people.
When you find yourself managing so many microservices that extra automation becomes necessary, an automation mindset takes hold within the team.
And automation brings more than just efficiency and stability.
With a continuous delivery pipeline, we automate both functional and non-functional testing and build them into the pipeline.
In this way, quality improvement is carried out as a system, avoiding the omissions and quality decline that come with manual quality management.
Keep in mind that frequent releases require continuously rising quality. If the bar for quality is lowered, continuous delivery degenerates into mere frequent releases. As the saying goes, “quality must not be degraded; once degraded, it falls apart.”
With high quality as a release gate and requirement, teams work toward that standard and improve their capabilities, and the risk and impact of each release become smaller and smaller.
Any release you lack confidence in has a quality shortfall; the question is how many teams are willing to address it in an institutionalized way that does not depend on individuals and manual testing.
Therefore, from a technical perspective, we can see that a successful microservices implementation has the following characteristics:
- Many code bases, each with its own one-to-one pipeline.
- Applications can be deployed at any time without waiting.
- Lots of automated testing.
- Fewer change incidents.
- Lower release risk.
- You can scale as needed.
- More automation.
Finally
Once we know how to measure the impact of microservices, we can use these criteria as a reference to check whether our organizational and technical microservices practices are actually producing these results. Next, we will illustrate the microservices transformation by comparing the organization and technology of this microservices architecture. Look out for the next article: “Technological Evolution of a Successful Microservices Implementation”.