Today we will discuss some problems often encountered in practicing microservice architecture. Some come from our own microservice transformation projects, and some from customers' on-site microservice implementation projects and pre-sales solution discussions.
This article focuses on the following key issues:
- Database splitting under microservices
- Selection of a microservice development and technology framework
- Convergence of microservices with DevOps and containers
- Microservice gateways and registries
- Key related technologies, such as rate limiting and circuit breaking, security, and service call-chain monitoring
Basic knowledge of microservices is easy to find online and will not be repeated here; for the differences among SOA, microservices, and mid-platform architecture, you can also refer to my previously published articles.
Must databases and microservices be split one to one?
From the very beginning of our discussion of microservice architecture, we have said that to keep each microservice autonomous and loosely coupled, it should be split vertically, from the database through the logic layer to the front end.
That is, database splitting is a key part of microservice architecture design.
The key observation from microservice practice is that the top-level microservice components are usually split at a much finer granularity: it is normal for even an uncomplicated business system to break down into 20 or 30 microservice components.
It is not reasonable to split the database into 20 separate databases to match.
On the one hand, this increases the management complexity of the databases themselves; on the other, an overly fine split introduces more distributed-transaction problems and makes cross-database association queries inconvenient. So the best suggestion here is to introduce the concept of a business domain, namely:
The database can be split by business domain, where each business domain is quite independent and corresponds to a separate database, but the business domain itself can have multiple upper-layer microservice module components. Microservices within the same business domain are still accessed and invoked through the registry.
That is, microservices within the same business domain are decoupled at the logic layer and may only access and invoke one another through API interfaces, which facilitates distributed deployment; the database layer itself, however, is not split, and they share the same database.
For example, in our project the 4A and process-engine microservices share one database, while the independent expense-reimbursement, travel-reimbursement, and loan-reimbursement microservices share another.
Although complete database decoupling is not achieved this way, decoupling at the logic layer still gives us fine-grained management of microservice deployment packages. At the same time, when business logic changes, only the corresponding microservice module needs to change, minimizing the impact of the change.
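To make the idea concrete, here is a minimal sketch in plain Java of the mapping this implies: several microservices resolve to one shared datasource per business domain rather than one database each. The JDBC URLs and hostnames are hypothetical placeholders, not real project configuration.

```java
import java.util.Map;

// Sketch: microservices map to one shared database per business domain.
// Service names follow the example in the text; URLs are illustrative only.
public class DomainDataSources {
    // Which business domain each microservice belongs to
    static final Map<String, String> SERVICE_TO_DOMAIN = Map.of(
            "expense-reimbursement", "reimbursement",
            "travel-reimbursement", "reimbursement",
            "loan-reimbursement", "reimbursement",
            "4a", "platform",
            "process-engine", "platform");

    // One database per business domain, not per microservice
    static final Map<String, String> DOMAIN_TO_JDBC_URL = Map.of(
            "reimbursement", "jdbc:mysql://db-reimbursement:3306/reimbursement",
            "platform", "jdbc:mysql://db-platform:3306/platform");

    static String dataSourceUrlFor(String service) {
        return DOMAIN_TO_JDBC_URL.get(SERVICE_TO_DOMAIN.get(service));
    }

    public static void main(String[] args) {
        // The three reimbursement microservices all resolve to the same database
        System.out.println(dataSourceUrlFor("travel-reimbursement"));
        System.out.println(dataSourceUrlFor("process-engine"));
    }
}
```

The point of the sketch is that a change inside one microservice never forces a datasource change for its siblings, while the domain still keeps a single schema for convenient in-domain queries.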
Should you use the full Spring Cloud stack?
If the Spring Cloud microservice framework is adopted, it provides all the corresponding capabilities out of the box: service registry, rate limiting and circuit breaking, microservice gateway, load balancing, configuration center, security, and declarative invocation.
All you then need to do is develop the microservice components with the Spring Boot framework.
Of course, there is another approach we use in projects: use Spring Boot only for developing the individual microservice components, and then combine and integrate the current mainstream open-source microservice components yourself:
- Service registry: Nacos
- Service configuration center: Apollo
- Rate limiting and circuit breaking
- Service call-chain monitoring
- API gateway: the Kong gateway, used for API integration, management, and governance
Of course, you can also set Spring Boot aside and use the Dubbo open-source framework, which supports more efficient RPC calls, for microservice component development and integration.
For building a microservice technology platform quickly, the easiest path is of course to adopt the entire Spring Cloud open-source stack, which basically meets all needs. Its performance is perfectly adequate for everyday traditional enterprise applications; after all, not every project faces Internet-scale data volumes and high concurrency.
If a variety of open-source components are used and you integrate the technical framework yourself, there is unavoidable up-front work to build and verify the base technical platform, and it adds complexity to the overall infrastructure. For example, if you use a Nacos registry, you also need to deploy it as a cluster to meet high-availability requirements.
Based on the above description, the summary is as follows:
- If you want to save effort and have no extreme performance requirements, use the Spring Cloud stack directly
- If performance requirements are high and your technical reserves are sufficient, integrate open-source components yourself
In our actual microservice implementation projects, however, we also see a third scenario.
For example, in a group enterprise, a planning management system is divided into 10 microservice modules after preliminary architecture design, and three software vendors are invited for custom development, each required to build on a microservice architecture.
Here we found a key problem. It is fine for each vendor to adopt its own microservice framework, but from the perspective of the whole application, we need unified microservice governance and control across all 10 modules: a shared API gateway, a shared service configuration center, and so on.
These components should not reuse the technical components embedded in each vendor's Spring Cloud stack; they need to be extracted from the individual microservice architectures to form a shared service capability. In such cases our recommendation is to integrate third-party open-source components for governance.
In the example above, the three vendors can keep their own internal technical configurations.
That is, when vendor A develops microservice modules A, B, and C, it can use Eureka+Feign+Ribbon internally to handle integration, API registration, and invocation among the three components it develops itself.
However, when modules from the three vendors need to cooperate, the externally built shared technical service platform is used uniformly.
- For example, API interfaces are registered to a unified Kong gateway managed by the platform integrator
- For example, common configuration involving all three vendors is moved from Spring Cloud Config to the Apollo configuration center
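The "declarative" style that Feign provides, where a vendor defines an annotated Java interface and never writes HTTP plumbing by hand, can be illustrated with a plain-Java dynamic proxy. This is a conceptual sketch with a made-up `@Get` annotation and a fake transport function, not Feign's actual implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;
import java.util.function.Function;

// Hypothetical annotation standing in for Feign's @RequestLine / Spring's @GetMapping
@Retention(RetentionPolicy.RUNTIME)
@interface Get { String value(); }

// The consumer only declares the interface; no HTTP code is written by hand
interface UserClient {
    @Get("/users/{id}")
    String findUser(String id);
}

public class DeclarativeClientSketch {
    // `transport` stands in for the real HTTP + client-side load-balancing layer
    static <T> T build(Class<T> api, Function<String, String> transport) {
        return api.cast(Proxy.newProxyInstance(
                api.getClassLoader(), new Class<?>[]{api},
                (proxy, method, args) -> {
                    Get get = method.getAnnotation(Get.class);
                    // Fill the single path variable from the first argument
                    String path = get.value().replace("{id}", String.valueOf(args[0]));
                    return transport.apply(path);
                }));
    }

    public static void main(String[] args) {
        UserClient client = build(UserClient.class, path -> "GET " + path);
        System.out.println(client.findUser("42")); // prints: GET /users/42
    }
}
```

The real framework does the same thing at a higher level: it generates the proxy from the interface, resolves the target instance through the registry, and performs the HTTP call.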
So, to summarize: when evaluating whether to adopt the full Spring Cloud solution, you also need to evaluate whether there is collaboration across separately managed teams, or, as in group enterprises, microservice integration across multiple business systems. If so, the generic technical service capabilities must be extracted and built independently.
How does the development team split up?
When we implement microservice and cloud-native transformation, it is not just the IT systems that are split into microservices; more importantly, the business organization and teams themselves need to be broken down the same way, into highly autonomous business teams.
Each team is staffed with its own front-end developers, back-end developers, requirements staff, and testers, and operates with a high degree of autonomy.
How do we then ensure the conceptual consistency and architectural integrity of a large application or product after splitting into multiple business teams? Our proposal: overall product planning and overall architecture design remain centralized and unified, and the work is then split and distributed to each microservice development team.
So what does architectural design involve here? Details are as follows:
- The function list of each microservice module
- The interface list of each microservice module
- The database split, and the ownership (Owner) of each table
These three points are the most important architecture design work to complete up front. Once they are clear, work can be assigned to each microservice team. The teams are then highly autonomous and flat, and can cooperate and communicate with one another directly, without adding communication paths through an architect's coordination.
In other words, product planning and the architect play a role very similar to the registry and control center in a microservice architecture. This is also why we often say that technical microservice decomposition is really a precursor to restructuring and separating the responsibilities of the business organization.
First of all, splitting into 20 microservices does not mean splitting into 20 development teams. The concept of domain partitioning still applies: the 20 microservices should be categorized and grouped, in one of two ways.
- Method 1: categorize by vertical business domain, as in the database split described earlier
- Method 2: split horizontally by layer, for example a platform team, a mid-platform team, and a front-end/APP team

After the team split, each development team must be staffed with front-end developers, back-end developers, and testers. Requirements staff can remain centralized rather than being split into the development groups; alternatively, each development team can have its own requirements refiner, with a single product manager producing product-level requirements for the whole effort, while refinement is done inside each development team.
Why so much emphasis on splitting the development teams?
Simply put, the internal work of each development team should be invisible to the other teams; teams stay highly autonomous and deliver to one another only through coarse-grained interfaces.
If the development teams themselves are not split, you will find that when one team manages multiple microservice modules, the microservice development specifications we formulated earlier are easily broken, and the post-hoc auditing and rework to fix the violations take a lot of time.
For a simple example, when two databases resulting from a split are managed by the same developer, it is tempting to solve problems with cross-database association queries between them, which the microservice development conventions do not allow.
Of course, from the perspective of the software enterprise's own IT governance, this is also the best approach. In a large project or application system, not every developer should see the source code of all modules; for components they do not own, they can only consume the interfaces, and everything else is invisible.
Service registry and API gateway selection
I have covered the details of service registries and API gateways in previous articles.
When do you need an API gateway?
In a microservice architecture, even if there is no interaction or integration with external applications, the application itself typically has an APP client, developed with front-end/back-end separation and accessed over the Internet. That alone requires a unified API access point, and further security isolation from the internal microservice modules needs to be considered.
At this point you will notice that what we call the API gateway's service proxy or pass-through capability means essentially the same thing as Nginx's reverse proxy or routing.
If you only want a unified access point for API interfaces, plus security isolation similar to a DMZ zone, you do not need to introduce an API gateway in the early stage of the architecture; simply use Nginx as the service routing proxy. In this architecture, the API consumers and providers are all developed by one team, so problem analysis and troubleshooting are quite convenient, and API security access can be handled uniformly through JWT or OAuth 2.0 without much complexity.
Open capabilities or integration with multiple external applications require API governance
However, when we integrate with multiple external applications, or open our API service capabilities to multiple external partners, the requirements for API management and control naturally increase.
In other words, on top of conventional service proxy routing, capabilities such as load balancing, security, logging, and rate limiting with circuit breaking must be added; and we do not want these concerns baked into each API's implementation, but rather configured uniformly and flexibly when the API is attached to the gateway.
This is where the API gateway comes in.
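As a sketch of the rate-limiting side of this, here is a minimal token-bucket limiter in plain Java. It illustrates the kind of per-route policy a gateway such as Kong applies through configuration; it is not Kong's implementation.

```java
// Minimal token-bucket rate limiter: each request consumes a token, tokens
// refill over time; when the bucket is empty the request is rejected
// (a gateway would answer HTTP 429).
public class TokenBucket {
    private final long capacity;        // maximum burst size
    private final double refillPerMs;   // sustained rate, tokens per millisecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefill = System.currentTimeMillis();
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Refill proportionally to elapsed time, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(3, 1.0); // burst of 3, then ~1 req/s
        for (int i = 1; i <= 5; i++) {
            System.out.println("request " + i + " allowed=" + bucket.tryAcquire());
        }
    }
}
```

The value of putting this in the gateway rather than in each API implementation is exactly the point made above: the burst size and sustained rate become configuration, adjustable per route without touching service code.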
Multiple development teams collaborate and service governance standardization is required
This is the second scenario that, as I understand it, requires an API gateway; it is somewhat similar to the need for an ESB service bus in a traditional IT architecture. When there are multiple development teams, the API services that each team registers and accesses need unified management, and an API gateway is needed to achieve this.
That is, unified control of cross-team API integration and delivery is taken over by the API gateway, including security, log auditing, flow control, and so on. When multiple teams cooperate, these concerns can no longer rely on one team's internal technology choices and development conventions; a unified standard is needed.
When multiple development teams collaborate and integrate, there must also be a unified integration party to resolve collaboration problems. Even in a Service Mesh architecture, we can see that there is still a control center for coordination.
Choosing technical components after introducing an API gateway
Note that the API gateway itself provides load balancing, rate limiting and circuit breaking, and service proxy capabilities.
So the capabilities of the registry stack, Eureka+Feign+Ribbon+Hystrix, could in principle all be moved to the API gateway. However, in the complete microservice architecture of an application, one API interface may need both to satisfy consumption calls from internal components and to be exposed to external applications through the API gateway.
Through the API gateway, HTTP REST APIs are exposed externally, and those external calls are not made in Feign declarative mode. (Diagram: microservice A consumed internally through the registry and externally through the API gateway.)
That is, microservice A must satisfy both internal microservice B, which makes consumption calls through the service registry, and external APPs, which make consumption calls through the API gateway.
As a result, there is effectively no single unified entry point for traffic into the microservice A cluster.
In this scenario, Hystrix rate limiting and circuit breaking only governs consumption calls among the internal microservice components; to limit and break traffic from external APPs, the gateway's own rate limiting and circuit-breaking functions must still be enabled.
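The circuit-breaking behavior referred to here can be sketched in a few lines of plain Java: after a threshold of consecutive failures the breaker opens and short-circuits calls to a fallback until a reset window elapses. This is a conceptual sketch in the spirit of Hystrix, not its actual rolling-window implementation:

```java
import java.util.function.Supplier;

// Minimal circuit breaker: CLOSED -> OPEN after `threshold` consecutive
// failures; while OPEN, calls short-circuit to the fallback until
// `resetMillis` have elapsed, after which one trial call is allowed.
public class CircuitBreaker {
    private final int threshold;
    private final long resetMillis;
    private int consecutiveFailures = 0;
    private boolean open = false;
    private long openedAt = 0;

    public CircuitBreaker(int threshold, long resetMillis) {
        this.threshold = threshold;
        this.resetMillis = resetMillis;
    }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (open) {
            if (System.currentTimeMillis() - openedAt < resetMillis) {
                return fallback.get();       // short-circuit: do not hit the backend
            }
            open = false;                    // half-open: allow a trial call
            consecutiveFailures = 0;
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                open = true;
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2, 60_000);
        Supplier<String> failing = () -> { throw new RuntimeException("backend down"); };
        System.out.println(breaker.call(failing, () -> "fallback"));    // failure 1
        System.out.println(breaker.call(failing, () -> "fallback"));    // failure 2 -> opens
        System.out.println(breaker.call(() -> "ok", () -> "fallback")); // short-circuited
    }
}
```

Note that this only protects the internal call path; as the text says, the same policy has to be enabled separately at the gateway for external APP traffic.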
Microservice architecture with container clusters: service discovery and load balancing
Finally, let's discuss service discovery and load balancing after combining a microservice architecture with Kubernetes and Docker containers.
As mentioned earlier, with the Eureka service registry we can start multiple instances of the same microservice module A on different port numbers. Eureka handles automatic registration and discovery of the services once they start, and Ribbon then provides load balancing for service access.
That is to say, we added and deployed the instances of microservice module A manually.
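Ribbon's default behavior in this setup can be sketched as simple client-side round-robin over the instance list the registry returns. The instance addresses below are hard-coded placeholders for what Eureka would supply dynamically:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side round-robin load balancing over registered instances,
// in the spirit of Ribbon's default rule (sketch only).
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    public String choose() {
        // floorMod keeps the index valid even after integer overflow
        return instances.get(Math.floorMod(next.getAndIncrement(), instances.size()));
    }

    public static void main(String[] args) {
        // In practice this list would come from the Eureka registry for module A
        RoundRobinBalancer balancer = new RoundRobinBalancer(
                List.of("host-a:8081", "host-a:8082", "host-b:8081"));
        for (int i = 0; i < 4; i++) {
            System.out.println(balancer.choose());
        }
    }
}
```

Because the balancing happens in the consumer, adding an instance means registering it so every consumer's list is refreshed, which is exactly the manual step discussed above.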
However, with DevOps continuous integration in place and Kubernetes+Docker containers implemented, we can scale microservice node resources dynamically through Kubernetes. Traffic to the scaled-out Pods is balanced by Kubernetes itself, so that external access goes simply through node + port number.
At this point there are really several possible approaches.
Practice 1: No longer use Eureka for service registration and discovery
In this case, Eureka is no longer used for registration and discovery. Instead, dynamically deployed services are accessed through the VIPs exposed by Kubernetes, and Kubernetes load-balances across the backend nodes.
At this point we consume services as ordinary HTTP REST API calls, and the original Feign declarative invocation may no longer be appropriate. In other words, in this scenario you use Spring Boot only to develop standalone microservices that expose HTTP REST APIs, and you no longer use Eureka+Feign+Ribbon from the Spring Cloud framework.
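In that case, a consumer simply builds an ordinary HTTP request against the stable Kubernetes Service name, and the cluster DNS resolves it to the Service VIP and balances across Pods. A sketch using the JDK's built-in HTTP client; "microservice-a" is a hypothetical Service name:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// With Kubernetes handling discovery and load balancing, the consumer
// targets the stable Service DNS name instead of using Feign+Ribbon.
public class PlainRestCall {
    static HttpRequest buildRequest(String serviceName, int port, String path) {
        // Inside the cluster, "microservice-a" resolves to the Service VIP
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + serviceName + ":" + port + path))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("microservice-a", 8080, "/api/users/42");
        System.out.println(request.method() + " " + request.uri());
        // HttpClient.newHttpClient().send(request, BodyHandlers.ofString())
        // would perform the actual call; omitted so the sketch runs anywhere.
    }
}
```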
Practice 2: Use Eureka in place of the Kubernetes Service
In this scenario, the clustering function of Kubernetes itself is not used. Instead, the dynamically deployed microservice modules automatically register with the Eureka service registry for unified management. In other words, the architecture is still built on the traditional Spring Cloud framework.
In this way, the key Spring Cloud capabilities of rate limiting, fault tolerance, and heartbeat monitoring are retained.
Practice 3: Move further toward a Service Mesh
Further thinking leads to a fully decentralized microservice governance solution along the lines of Istio. In this model, the Sidecar better realizes governance and control capabilities such as registration, discovery, rate limiting and circuit breaking, and security.
If all microservice modules are deployed into Docker containers through Kubernetes, then the Sidecar can be attached alongside each deployment package during Kubernetes image building and container deployment to achieve the integration.
To put it simply:
When developing a microservice module, we do not need to consider distributed API integration at all; with the integration of Kubernetes and Service Mesh, we gain distributed interface invocation and integration, along with API security, logging, and rate-limiting/circuit-breaking management capabilities.
This is why it is often said that Service Mesh is the last piece of the Kubernetes puzzle for supporting microservices.