Service Mesh is a new stage in the evolution of cloud native and microservices. This is a speech delivered at the Service Mesh Meetup hosted by the ServiceMesher community and Ant Financial on Sunday, November 25, 2018. The first half of the content comes from Ant Financial's Ao Xiaojian, and the second half comes from Ali UC's Long Shi.
The original address: www.servicemesher.com/blog/ant-fi…
Hi, everyone. Today's speech is "An Incremental Migration Scheme for Ant Financial's Service Mesh". I will introduce the Service Mesh migration scheme for Ant Financial's main site. Today's presentation is a bit special: it is a collaboration between two speakers. I am Ao Xiaojian from the Middleware team of Ant Financial, and the other speaker is Long Shi from the Basic R&D Department of UC.
Today’s content will consist of four main sections:
- Evolution routes of Service Mesh: an introduction to Ant Financial's plan for implementing Service Mesh on its main site. Because of the huge volume and scale of existing applications, and the requirement that the migration be smooth, our implementation is considerably more complex than the community solution.
- The keys to smooth migration: several key approaches to smooth migration in the overall migration solution, after which we will expand in detail on one of them: the DNS addressing scheme.
- The evolution of DNS addressing schemes: a detailed look at how DNS addressing has evolved across Kubernetes, Istio, and SOFAMesh.
- Follow-up plans for the DNS addressing scheme: our subsequent plans for the DNS addressing scheme.
The first two sections will be presented by me, and the last two by my colleague Long Shi.

Before diving into the content, let's look at the background of the Service Mesh rollout on Ant Financial's main site:
- Goals: we need to meet our long-term goals. Specifically, communication between services should go through the Service Mesh, in the Istio form with a complete control plane; the infrastructure should be built on K8s; and applications should move closer to microservices.
- Status: the reality poses many challenges. First, many applications have not been turned into microservices; our K8s penetration is insufficient, with many applications not running on Kubernetes; and Istio's maturity is also somewhat lacking. The bigger challenge is that Istio cannot yet support Ant Financial's scale, and we are still working to improve and extend it. Finally, there is a very practical constraint: the vast number of applications in the existing system cannot all be migrated overnight.
- Key requirement: therefore, a very important requirement for the rollout is smooth migration. Put simply, microservices + Service Mesh + Kubernetes is our goal, but we must provide practical guidance on how to move smoothly and steadily from the existing system toward that goal.
The content of today's speech is the evolution plan Ant Financial has chosen, against this background, for rolling out Service Mesh on its main site. The plan is expected to be fully rolled out in early 2019.

The implementation principles of the main-site rollout, distilled from our practice over the past six months, are:
- Plan for the long term: be sure to have a clear long-term goal and know the overall direction for the future, so as to avoid detours and wasted investment. Ideally, each step in the plan provides a solid foundation for the next. Even if you have to compromise or detour for some reason, you should have a clear idea of how to come back; starting over midway is a cost too high to bear.
- Move step by step: recognize reality. A change this large must proceed step by step; do not harbor illusions of doing it in one stroke. The realistic, feasible way is to take small, fast steps: break the whole process into several large stages, and keep the workload and complexity of each stage within an acceptable range, so that every step is simple, convenient, and feasible.
- Stay operable: at the operational level there must be enough flexibility; the work in each stage should be doable in batches. Proceed incrementally, gradually expanding the gains, and avoid an all-or-nothing switch.

In the evolution routes below, you will see how these three principles guide the actual implementation.
This diagram is information-dense. It describes the possible evolution paths for rolling out Service Mesh and K8s.

Let's start at the bottom. This is the current status of most applications on Ant Financial's main site: applications are deployed off K8s and are not in Service Mesh form. Looking at the top, this is the final form we expect for Ant Financial's main site: applications are deployed on K8s and have migrated to Service Mesh form.

In particular, we break the Service Mesh form down into two modes:
- Sidecar mode: there is only a Sidecar and no control plane; all integration with external systems happens in the Sidecar. This is the first generation of Service Mesh: Linkerd, Envoy, Huawei's mesher based on ServiceComb, Sina Weibo Mesh, and our own MOSN-based Mesh solution at Ant Financial for replacing multi-language clients.
- Istio mode: the second generation of Service Mesh, such as Istio and Conduit/Linkerd 2.0, with a complete control plane that provides powerful control capabilities and is separated from the data plane.

The reason for subdividing Service Mesh this way is our special background: native Istio currently cannot support Ant Financial's scale, so we have to stay in Sidecar mode for a while, until we manage to improve Istio. Another reason is that, considering the migration of existing applications, having Sidecar mode as an intermediate buffer makes the migration process much smoother.

Now let's walk through the four evolution routes shown in the figure:
- Route 1, on the left: migrate applications to K8s deployment first, then to Service Mesh. The greatest benefit of this route is that the vast majority of the investment at each stage is ultimately preserved, because every stage conforms to the long-term goal of K8s + Service Mesh.
- Route 2, on the right: skip K8s, migrate to Service Mesh mode first, evolve all the way to Istio mode, and finally migrate to K8s.
- Route 3, in the middle: Istio's default path. In other words, Istio does not consider migration at all: by default the customer already has a complete K8s cluster, and modified applications are deployed directly on Istio. This route is of course unrealistic for the complex scenario of Ant Financial's main site. (Note: it is merely unsuitable for Ant Financial's main site. For most companies, whose scale is not so huge, which carry no historical burden, and which already have a K8s foundation, it is entirely feasible.)
- There is also a special route 4, which zigzags: it first migrates to Sidecar mode like route 2, then switches back to route 1, moves onto K8s, and continues evolving to Istio mode with K8s support.
Below we analyze in detail the advantages, disadvantages, and preconditions of each evolution route.

The core difference between route 2 and route 1 is whether to move to K8s or to Service Mesh first. Moreover, route 2 evolves Service Mesh all the way to the target Istio mode without K8s, which means a large deviation between the process and the final goal.

The advantage of route 2 is that the first step is very natural:
- There is no K8s prerequisite, so it does not depend on infrastructure and is easy to implement; after all, K8s adoption is itself a big hurdle.
- By wrapping a proxy around the original client SDK of the intrusive framework and reusing the SDK's capabilities, a basic, usable Sidecar can be built very quickly.
- Apart from the added proxy, few new concepts or ideas are introduced, which matches the mental model of existing developers and operations staff and is easy to accept.

Therefore, route 2 is particularly easy to land: it achieves short-term goals quickly and reaps some of the benefits of Service Mesh, such as multi-language support and easy library upgrades.

The problem with this approach, however, emerges further down the road: when you start building out Service Mesh functionality toward Istio mode, you have to do a great deal of work to provide K8s-like functionality without K8s underneath. In particular, for Istio's non-K8s support, the official solution is basically just a demo with no production readiness at all, requiring a great deal of work to complete. The key point is that once you migrate to K8s, all of that investment is abandoned, because it duplicates functionality K8s provides.

Therefore, in light of our principles (conform to the long-term plan; waste no investment), route 2 is not suitable for the rollout on Ant Financial's main site.
Evolution route 4 is a very special route, best understood as a short-term compromise version of route 1 (K8s first, then Service Mesh). Route 1 presupposes a large-scale K8s rollout, with existing applications migrated onto K8s, before the evolution to Service Mesh can continue; for companies that have not yet adopted K8s, that is a very high bar to entry.

So, if you do not yet have K8s but do not want to stand still, route 2 looks like the only way forward. As analyzed above, although route 2 reaps a quick short-term dividend in its first step, it runs into trouble later because it deviates from the long-term goal. What to do?

Route 4 is the compromise for this scenario: in the first step, before K8s is rolled out, follow route 2 and take the dividend of a quick rollout in non-K8s Sidecar mode. In the second step, to avoid the pitfall of continuing to evolve toward Istio mode without K8s, switch to route 1 and return to the long-term goal.
The benefits are clear:
- Before K8s is rolled out, you can take a step forward instead of standing still.
- As with route 2, the first step pays a quick short-term dividend.
- After the subsequent switch to route 1, no investment in the later evolution is wasted, because it conforms to the long-term plan.

The disadvantage is a small amount of wasted investment: some of the work done in non-K8s Sidecar mode will have to change after the migration to K8s. But the changes are minor, and the dividend makes them worthwhile.

During route 4 there is one variable: migrating existing applications to Sidecar-mode Service Mesh takes time, and K8s adoption may well begin during that migration. Whether this variable materializes depends on which spreads faster: Sidecar-mode Service Mesh or K8s.

The analysis shows that route 4 suits a particular moment: the window before K8s becomes widespread.
Having analyzed the four possible evolution routes, we now introduce Ant Financial's final choice in detail.

To be honest, our evolution path has swung and been revised a few times over the past six months, and the path we announce today differs somewhat from what we have revealed at meetups, tech conferences, and blog posts in recent months. The main reasons are that our understanding of Service Mesh has deepened over the past half year, and Ant Financial's K8s situation has changed.

First, at the beginning of this year, when we settled on Service Mesh as our general direction, K8s was not yet available at Ant Financial, and there was no clear timetable. So, after some research, we chose to walk on two legs:
- In non-K8s environments, carry out a small-scale rollout in Sidecar mode, mainly to replace the original multi-language clients (the short-term dividend).
- Develop SOFAMesh: integrate MOSN into Istio, add support for multiple RPC protocols, and improve compatibility with RPC service patterns (in preparation for the end goal).

At the first Service Mesh offline Meetup in Hangzhou at the end of June this year, we announced the SOFAMesh project, and I gave a talk on exploring Service Mesh under a large-scale microservice architecture. Interested readers can revisit our background, requirements, and design from that time.

Around September this year, we completed an in-depth study of running Istio off K8s and concluded that a great deal of work would be needed to make that mode viable. Meanwhile, with a deeper understanding of Service Mesh, we defined the strategic direction of sinking traditional middleware capabilities, via Service Mesh, into the infrastructure layer represented by K8s. During this period, the general direction of K8s adoption was also settled internally. Combining these two important inputs, we chose to give up the idea of continuing to evolve along route 2 (that is, Istio on non-K8s). Those interested can read my October QCon presentation, "A Long Road Ahead: Ant Financial's Practical Exploration of Service Mesh".

Recently, the timetable for K8s adoption was clearly moved up again: Ant Financial will begin large-scale K8s adoption in the near future. As a result, our evolution path has changed once more. The current path looks like this:
- Applications that have not yet been migrated (at the bottom of the roadmap) follow route 1: first onto K8s, then into Sidecar-mode Service Mesh.
- Applications that have already been migrated (the first step of routes 2/4: Sidecar mode deployed off K8s) follow route 4 and merge into route 1.
- Because of the sheer number of applications, the K8s + Sidecar-mode migration is expected to take a long time. During this period, we will improve Istio in parallel and work with the Istio community to enable very large-scale deployments.
- Finally, migrate to the end goal (of course, much about this step remains to be determined; we keep working).

It should be emphasized that this evolution path targets the special scenario of Ant Financial's main site and is not universal. Once you understand the thinking and trade-offs behind our path, you can decide based on your own situation. For example, in our rollout at UC, we migrated directly from "deployed on K8s" plus "not in Service Mesh form" to the final state, because UC has full K8s support and the current rollout scale is not that large. We expect the same for rollouts on the financial cloud, since those customers will not be at the same scale.

To summarize: above we presented several possible evolution routes for migrating applications to Service Mesh and K8s, and analyzed the pros and cons of each. Using Ant Financial's main site as an example, we introduced the background of our migration and our choice of evolution route, hoping to help you better understand Service Mesh rollout practice and to serve as a reference when you design your own rollout plan.
Earlier, we introduced the Service Mesh evolution route of Ant Financial's main site, during which we touched on the smooth migration of existing applications. The second part of today's talk covers some of the key practices for achieving that smooth migration.

The first key is to ensure that services remain reachable to each other before, during, and after the migration.

Taking the migration to K8s as an example: in a non-K8s environment, typical inter-service access looks like this:
- Each service registers itself with a registry.
- Before initiating access, the client obtains the target service's instance list, such as IP addresses and ports, from the registry.

In the migration to K8s, our approach is to ensure that the networks inside and outside K8s are open to each other; that is, service IP addresses (pod IPs, in the K8s case) are directly reachable in both directions. On this premise, services migrating to K8s need no changes to their original service registration, service discovery, or request initiation logic: whether an instance runs on K8s, and whether its address is a pod IP, is completely transparent to the existing service system.

Therefore, the migration to K8s can be very smooth and essentially imperceptible to business applications.
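To make the point concrete, here is a small illustrative sketch (the registry contents and addresses are made up): the client-side discovery logic is exactly the same whether an instance address is a VM IP or a pod IP, as long as both networks are mutually routable.

```python
# Illustrative sketch: the service registry simply holds addresses, and
# the client neither knows nor cares whether an address is a VM IP or a
# K8s pod IP. The addresses below are hypothetical.
import random

registry = {
    "user-service": [
        "192.168.10.21:12200",  # instance still on a VM
        "172.16.3.45:12200",    # instance already migrated: a pod IP
    ],
}

def discover(service_name):
    """Pick one instance of the target service, exactly as before migration."""
    return random.choice(registry[service_name])

addr = discover("user-service")
print("calling", addr)  # the calling code path is unchanged by the migration
```

Because instances of both kinds sit side by side in the same registry entry, individual instances can be moved onto K8s one at a time with no client changes.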
Transparent interception plays a key role in the migration.

For service-a accessing service-b, there are four combinations before and after each side migrates to Sidecar-mode Service Mesh:
- Neither service-a nor service-b has migrated to the Service Mesh: requests go directly from service-a to service-b. We call this direct connection; it is the standard way applications work before migrating to the Service Mesh.
- service-a has migrated to the Service Mesh, service-b has not: requests sent by service-a are hijacked to the Outbound Sidecar deployed alongside service-a. There is only one Sidecar on the link; we call this single-hop (client-side).
- service-b has migrated to the Service Mesh, service-a has not: when a request from service-a arrives at service-b, it is hijacked to the Inbound Sidecar deployed alongside service-b. There is only one Sidecar on the link; we call this single-hop (server-side).
- Both service-a and service-b have migrated to the Service Mesh: requests sent by service-a are hijacked twice, passing through the Outbound Sidecar and then the Inbound Sidecar. There are two Sidecars on the link; we call this double-hop. This is Istio's standard working mode, and our final working mode after the migration.

In all four scenarios, the network requests and request payloads are exactly the same: whether or not traffic is hijacked to a Sidecar has no impact on the payload, so both the side sending the request and the side receiving it are completely unaware of the difference.

As a result, during the migration, individual services, and even individual instances of a service, can be migrated one by one without modifying the applications themselves.
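The four combinations above can be enumerated in a few lines. The sketch below (purely illustrative; the labels are ours) makes the invariant explicit: only the number of Sidecars on the link changes, never the request payload.

```python
# Illustrative sketch: the four migration states for a call from
# service-a to service-b. Only the number of sidecars on the link
# varies; the request payload itself is identical in every case.

def link_mode(client_meshed, server_meshed):
    """Return the link description for a given migration state."""
    if not client_meshed and not server_meshed:
        return "direct connection (0 sidecars)"
    if client_meshed and not server_meshed:
        return "single-hop, client-side (outbound sidecar only)"
    if not client_meshed and server_meshed:
        return "single-hop, server-side (inbound sidecar only)"
    return "double-hop (outbound + inbound sidecars)"

for a in (False, True):
    for b in (False, True):
        print("a meshed=%s, b meshed=%s -> %s" % (a, b, link_mode(a, b)))
```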
Before moving on to the third key point, let's consider: what would the ideal client look like in the age of Service Mesh?

The figure lists the functions contained in the client of a traditional intrusive framework. In an intrusive framework, most functions are implemented in the client, so it contains a great deal: basic functions such as service discovery and load balancing, as well as advanced functions such as encryption, authentication, and routing. After applications migrate to the Service Mesh, these functions sink into the mesh, so the client can be greatly simplified into a new, lightweight client.

We want this lightweight client to be as light and universal as possible: simple to implement, easy to write in any programming language, and therefore easy to support across languages. And the simpler it is, the less likely it is to need upgrading, so client upgrades can be avoided.

So what is left in this lightweight client?

Of the three items listed in the figure, the most important and most essential is the identity of the target service: no matter how much we simplify, the client must at least be told whom to call. Next comes serialization, which RPC-style calls definitely need, although for HTTP/REST many languages have standard implementations built in. Then there is distributed tracing, which requires a little work to propagate parameters such as SpanId, though this too can be handled with automatic instrumentation. So the most ideal, thinnest client may retain only one final piece of information: the identity of the target service.

In an intrusive framework, the target service identifier is tied directly to service registration and service discovery. The identifier is usually the service name, and the service discovery mechanism maps the name to service instances. Under Service Mesh, since service discovery sinks into the mesh, the identifier of the target service is no longer limited to a service name, as long as the underlying Service Mesh can support it.

The question, then, is: what is the simplest, most universal, and most widely supported addressing method for clients? DNS!

In our migration scheme, we therefore introduce DNS addressing. Besides DNS being the best-supported and most commonly used addressing method, available in every programming language and on every platform, we also see DNS addressing as a long-term direction for our future products:
- In SOFAMesh and SOFAMosn, we have implemented a DNS universal addressing scheme based on what we call X-Protocol, to solve access for traditional SOA service models such as Dubbo/HSF/SOFA under Service Mesh. (Note: for details, see my blog series "X-Protocol Introduction (1): DNS Universal Addressing Scheme".)
- In our future Serverless product, we hope to provide DNS addressing support for the functions running on it.
- There may be other, broader usage scenarios.

So, during our evolution, our thinking for the client SDK is:
- On the one hand, simplify the original SDK, removing whatever duplicates the Sidecar (meeting the short-term need).
- On the other hand, since a change of client SDK is inevitable anyway, introduce universal DNS-based addressing while simplifying, so that the mechanism is in place for future migrations and feature extensions (in line with the long-term goal).

As the figure shows, under Service Mesh the client specifies the target service by domain name, and the DNS resolution mechanism ties together the detailed underlying mechanisms: service registration, DNS record updates, transparent hijacking that passes the original information through, and the Sidecar looking up the routing target.

I will stop at this sketch and not expand on all the details here. In the following section, my colleague Long Shi from the Basic R&D Department of UC will elaborate on the implementation details of the DNS addressing scheme.
Hello everyone, I am Long Shi from the Basic R&D Department of UC. Thank you, Xiaojian, for introducing the evolution route of the Service Mesh co-built by Ant and UC and the keys to smooth migration.
Let me share with you the evolution of DNS addressing schemes that are key to smooth migration.
You can see the evolution of DNS addressing schemes in the figure above; let's look at the background of each service addressing scheme in turn.

From SOA addressing, to Kubernetes addressing, to Istio addressing, and finally to our SOFAMesh DNS universal addressing scheme.

We will look in detail at how these addressing schemes differ and how the overall scheme has evolved.

First, addressing based on service registration and service discovery in an SOA architecture.

As the figure shows, SOA here is in fact single-process, multi-interface, relying on SOA's service registration and service discovery.

Now let's look at Kubernetes' DNS addressing, which, as the name suggests, works through DNS.

From the figure, we can see that a UserService deployed on K8s generates a DNS record pointing to its K8s ClusterIP.

When we initiate a request inside a Pod, the name is completed via the DNS search-domain rules and the ClusterIP is looked up from DNS. We can see that Kubernetes' addressing model is single-process, single-interface.
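As a rough illustration of that completion rule, the sketch below mimics what a stub resolver does with the search list Kubernetes injects into a Pod's /etc/resolv.conf. The search domains and the ndots value shown are the typical Kubernetes defaults, used here as assumptions.

```python
# Sketch of resolv.conf search-domain completion inside a Pod.
# Kubernetes typically injects
#   search <ns>.svc.cluster.local svc.cluster.local cluster.local
#   options ndots:5
# into each Pod's /etc/resolv.conf.

def candidate_fqdns(name, search_domains, ndots=5):
    """Return lookup candidates in the order a stub resolver tries them."""
    if name.endswith("."):                 # already fully qualified
        return [name]
    candidates = []
    if name.count(".") >= ndots:           # "dotty" names tried absolute first
        candidates.append(name + ".")
    candidates += ["%s.%s." % (name, d) for d in search_domains]
    if name.count(".") < ndots:
        candidates.append(name + ".")      # bare name tried last
    return candidates

search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(candidate_fqdns("userservice", search))
```

So a client that simply asks for "userservice" ends up resolving userservice.default.svc.cluster.local and receives the Service's ClusterIP.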
Having looked at Kubernetes' service discovery, let's move on to Istio's.

From the figure, we can see that the earlier steps are the same as in K8s. The difference is Istio's Sidecar: it takes the ClusterIP, matches it against the rules from the VirtualHost, and forwards the request to the target Pod's address.
Finally, let’s look at SOFAMesh’s DNS universal addressing scheme.
- From the SOA and Kubernetes addressing schemes analyzed above, we can see that if our microservices are to move into the Service Mesh without being split up and rewritten, we need to support multiple interfaces per Pod, as in the SOA days.
- What the diagram shows is that we need interfaces such as com.alipay.userservice.interface1 and com.alipay.userservice.interface2 to resolve to the ClusterIP, and we know that a K8s Service does not support this.
- So what do we do? We have to work at the DNS level and modify DNS records to make it happen. With that settled, let's look at the implementation details of the DNS addressing scheme we designed.
Take a look at this picture:
- We use a CRD to define an RPCService, which carries the same selector labels as the ordinary Service.
- An RPCService Controller then watches the RPCService resources. When an RPCService is updated, the controller takes the interface names, such as com.alipay.userservice.interface1, and writes them as records into CoreDNS.
- The interface names themselves are exposed, in the Dubbo case, by the Register Agent running inside the Pod.
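A minimal sketch of the record derivation this implies. The RPCService shape and field names below are hypothetical stand-ins for the real CRD, which is internal to SOFAMesh; the point is simply that every declared interface maps to the backing Service's ClusterIP.

```python
# Hypothetical sketch of the RPCService -> DNS record derivation.
# In the real system a controller watches RPCService custom resources
# and writes the records into CoreDNS's backend; here we only compute
# the records that would be written.

def derive_dns_records(rpc_service, cluster_ip):
    """Map every interface of an RPCService to the Service's ClusterIP."""
    return {iface: cluster_ip
            for iface in rpc_service["spec"]["interfaces"]}

user_service = {
    "metadata": {"name": "userservice"},
    "spec": {"interfaces": [
        "com.alipay.userservice.interface1",
        "com.alipay.userservice.interface2",
    ]},
}

records = derive_dns_records(user_service, "10.96.0.42")
for name, ip in records.items():
    print("%s -> A %s" % (name, ip))
```

With these extra A records in place, multiple SOA-style interface names resolve to the same ClusterIP, which a plain K8s Service cannot express.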
With the details of the plan settled, the remaining issues are minor, except one: updating DNS records dynamically requires support from the DNS server itself.
In the beginning, our K8s cluster used kube-dns for DNS addressing, but look at this kube-dns architecture diagram.

The cost of modifying kube-dns is relatively high, and all DNS records live in the same domain, which is a high-risk factor: one bad change would inevitably affect K8s' own Services and cause online failures.
- At this point, we turned to the community's CoreDNS project. Looking at CoreDNS's specific architecture: it adopts the web server Caddy as its server framework and extends Caddy's plugin mechanism, which greatly increases CoreDNS's flexibility.
- The plugin mechanism is also very simple: all plugins are registered into a map, and at call time they are pulled out of the map as functions sharing the same interface. If you are interested, take a look at Caddy's plugin implementation.
- Its DNS protocol library is the Go DNS library developed by Google engineer Miek Gieben, who is also a developer of SkyDNS.
- The backend can use UDP/TCP, TLS, or gRPC for data queries. A Google engineer has written an example CoreDNS plugin that uses gRPC for backend data queries; interested readers can take a look.

OK, since CoreDNS plugins are this powerful, can we use them to implement the DNS update mechanism we just mentioned? The answer is clearly yes.

As the figure above shows, a CoreDNS plugin is very simple to write: you only need to implement the interface above. CoreDNS has a tutorial on writing plugins, so I will not go into it here.
- This brings us to the most critical point: how should we update DNS? In fact, the CoreDNS community already had a request to provide an interface for updating DNS in the form of a REST API.
- The IETF has also defined the standard DNS UPDATE in RFC 2136, and both Google Cloud and AWS have corresponding implementations.
- The CoreDNS community did implement such an interface, but its backend storage is file-based and the data is not persisted. Ant and UC extended the etcd plugin's interface and implemented the corresponding DNS UPDATE interface, writing the DNS data into etcd.
- From the figure, we can see that the rpc.cluster.local domain and the K8s domain cluster.local sit on different plugin chains. The dynamic-update plugin is absent from the K8s domain's chain, so DNS records in the K8s domain cannot be updated at all. This removes the impact that the earlier kube-dns modification would have had on the K8s domain, and is much safer.
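Put together, a Corefile in this spirit might look like the sketch below. The plugin names, ports, and etcd layout are illustrative assumptions (the dynamic-update plugin was not yet merged at the time); the point is simply that the two domains get two independent plugin chains.

```
# Two server blocks = two independent plugin chains.
rpc.cluster.local:53 {
    dynapi          # dynamic-update API, only for the RPC domain
    etcd {
        path /skydns
        endpoint http://etcd:2379
    }
    cache 30
    log
}

cluster.local:53 {
    kubernetes      # standard K8s Service records, no dynamic updates
    cache 30
    log
}
```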
If we look at the CoreDNS backend storage interface, it is not very different from the interfaces we used before for data manipulation.

The CoreDNS DynAPI code has not yet been merged into the main repository; the DynAPI will later become a separate plugin project. You can follow the CoreDNS community for the progress of the DynAPI plugin.

OK, let's look at the effect of our DynAPI implementation of DNS updates. We can see a domain name being updated in record.json: through the DynAPI we successfully updated the DNS record in record.json, and DNS resolution works properly. At this point, we have solved the DNS update requirement through a CoreDNS plugin.
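For illustration, a client of such a REST-style update interface might look like the sketch below. The endpoint path and payload shape are assumptions, since the DynAPI interface was still being finalized at the time; only the general shape of an A-record upsert is shown.

```python
# Hypothetical sketch of a REST-style dynamic DNS update in the spirit
# of CoreDNS's DynAPI proposal. Endpoint path and payload fields are
# assumptions for illustration, not the finalized interface.
import json
from urllib import request

def build_update(zone, name, ip, ttl=30):
    """Build the (url, body) pair for an A-record upsert."""
    url = "http://coredns-dynapi:8080/zones/%s/records" % zone
    body = json.dumps({"name": name, "type": "A", "ttl": ttl, "value": ip})
    return url, body

def send_update(url, body):
    """Send the upsert; only works where the DynAPI endpoint is reachable."""
    req = request.Request(url, data=body.encode(), method="PUT",
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

url, body = build_update("rpc.cluster.local",
                         "com.alipay.userservice.interface1", "10.96.0.42")
print(url)
print(body)
```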
CoreDNS has many interesting plugins that can enrich its functionality and improve its performance. Take the autopath plugin shown in the middle: it performs the search-domain completion lookups on the server side, avoiding multiple round trips between client and server. Those interested can take a look at "A Deep Dive into CoreDNS (2018)".
After we finished the CoreDNS feature development, many people were concerned about its performance before going live. We ran a simple benchmark: there is still some gap between the performance of CoreDNS and BIND, currently the most common DNS server.

However, we can see from the figure above that at a given QPS, CoreDNS's latency is very low: all latencies fall within 4 ms.

To address the QPS limits, we scale CoreDNS horizontally with the Kubernetes HPA.

At first we scaled CoreDNS on the CPU dimension alone, but found the result somewhat volatile, so we switched to scaling on the QPS dimension.
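A sketch of what QPS-based scaling can look like with the HPA. The metric name and target value here are assumptions: exposing a QPS metric requires a custom-metrics adapter (for example one backed by Prometheus), and the API version shown is the one current at the time.

```yaml
# Illustrative HPA sketch scaling CoreDNS on a QPS metric.
# "dns_queries_per_second" and the adapter wiring are assumptions.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: coredns
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: dns_queries_per_second
      targetAverageValue: "5000"
```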
From Kubernetes 1.13 onward, CoreDNS will be the default DNS service in Kubernetes. We will follow the community, implement our plan, and feed the results back to the community.
Let's look at some of our plans going forward.

As you can see, our DynAPI currently lacks security. We will further enhance the security of the DynAPI by moving from HTTP to HTTPS.

In addition, when the CoreDNS backend watches for updates, a watch covering too wide a range returns too much data, which hurts watch performance. CoreOS added a gRPC proxy in etcd 3.2, which lets us watch different etcd keyspaces separately and greatly improves watch performance.

Finally, we recommend adding IDC information to the cluster's domain suffix when creating a Kubernetes cluster. Later, using the kubernetai plugin together with an IDC-level cache, we can stitch together the domains of different Kubernetes clusters and speed up cross-IDC DNS access.

To summarize: at the macro level, Xiaojian walked us through the incremental evolution route of Service Mesh on Ant Financial's main site and the several keys to smooth migration; at the implementation level, we solved SOFAMesh's DNS addressing problem through a focused breakthrough on CoreDNS.

Thank you very much, and I hope you found this talk useful.