Authors: Kui Yu, Ten Sleep
How to implement grayscale release, observability, and rollback for safe production, so that rapid iteration and careful verification can coexist as the business grows quickly, is a problem every enterprise must face as it goes deeper into microservices. In today's cloud native era, this problem has some new ideas and solutions.
Kubernetes Ingress gateway
Let’s start with the Ingress gateway and talk about configuring routing through Ingress.
The network inside a Kubernetes cluster is isolated from the outside world, which means services inside the cluster cannot be reached directly by external clients. How, then, can services inside a Kubernetes cluster be exposed to external users? The Kubernetes community offers three solutions: NodePort, LoadBalancer, and Ingress. Briefly:
- NodePort exposes each Service on a dedicated port of every node, which is simple but hard to manage as the number of services grows.
- LoadBalancer provisions a separate load balancer for each Service, which is convenient but adds cost per service.
- Ingress exposes many services through a single entry point, with Layer-7 routing by host and path.
By comparison, Ingress is the more suitable option for business use: it supports more complex secondary route distribution and is currently the mainstream choice among users.
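For instance, a single Ingress resource can fan requests out to different back-end Services by path. A minimal sketch (the host and Service names here are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-routes
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /order                # requests under /order go to the order Service
            pathType: Prefix
            backend:
              service:
                name: order-svc
                port:
                  number: 80
          - path: /cart                 # requests under /cart go to the cart Service
            pathType: Prefix
            backend:
              service:
                name: cart-svc
                port:
                  number: 80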
As cloud native applications become more deeply microservice-oriented, users need complex routing rule configuration, support for multiple application-layer protocols (HTTP, HTTPS, and QUIC), secure service access, and traffic observability. Kubernetes intended Ingress to standardize how cluster traffic rules are defined, but real business needs go beyond what the Ingress specification covers. To meet these growing demands and make traffic management of cloud native applications easier for users, a variety of Ingress providers have emerged that extend the Ingress standard.
How various Ingress providers route and forward traffic
Next, I will briefly introduce the implementation of various Ingress gateways under Kubernetes and how to configure routing and forwarding rules.
Nginx Ingress
Nginx Ingress consists of three parts: the Ingress resource, the Ingress Controller, and Nginx itself. The Ingress Controller assembles Ingress resource instances into an Nginx configuration file (nginx.conf) and reloads Nginx to make the changed configuration take effect. ingress-nginx is the Ingress controller provided by the Kubernetes community. It is the easiest to deploy, but it is limited in performance and relatively simple in functionality; in addition, every configuration update requires an Nginx reload.
1. Configure routing and forwarding based on Nginx Ingress Controller
In a Kubernetes cluster with the Nginx Ingress Controller deployed, traffic can be routed and forwarded by domain name and path, and simple annotation-based grayscale traffic management is supported, for example by weight or by request header. Given the current trends, Nginx Ingress is still the most widely used option.
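For example, ingress-nginx supports canary annotations that split traffic between a stable and a gray Service by weight or by request header. A minimal sketch (host and Service names are hypothetical, and it assumes a regular Ingress for the stable Service already exists for the same host):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # mark this Ingress as a canary rule
    nginx.ingress.kubernetes.io/canary-weight: "20"   # route 20% of traffic to the gray backend
    # Alternatively, route by header instead of weight:
    # nginx.ingress.kubernetes.io/canary-by-header: "x-gray"
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc-gray     # the gray version of the Service
                port:
                  number: 80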
ALB Ingress
1. Introduction to ALB products
Application Load Balancer (ALB) is a load balancing service launched by Alibaba Cloud for HTTP, HTTPS, and QUIC application-layer scenarios. It offers strong elasticity and large-scale Layer-7 traffic processing capability.
2. ALB characteristics
- Elastic automatic scaling: ALB provides both domain names and virtual IP addresses (VIPs) and distributes traffic to multiple cloud servers, expanding the service capacity of application systems and improving their availability by eliminating single points of failure. ALB lets you customize the combination of availability zones and supports elastic scaling across zones, avoiding resource bottlenecks in a single zone.
- Advanced protocols: ALB supports QUIC, enabling faster access and more secure, reliable transmission in mobile Internet scenarios such as real-time audio and video, interactive live streaming, and gaming. ALB also supports the gRPC framework, enabling efficient API communication between large numbers of microservices.
- Advanced content-based routing: ALB can identify specific business traffic and forward it to different back-end servers based on rules such as HTTP headers, cookies, and HTTP request methods. It also supports advanced operations such as redirection, rewriting, and custom HTTPS configuration.
- Security support: ALB provides Distributed Denial of Service (DDoS) protection and one-click integration with Web Application Firewall (WAF). It supports full-link HTTPS encryption for interaction with clients and back-end servers, efficient and secure encryption protocols such as TLS 1.3 for encryption-sensitive services under a zero-trust security architecture, and both predefined and custom security policies.
- Cloud native applications: in the cloud native era, PaaS platforms sink into the infrastructure and become part of the cloud. As cloud native matures, many industries, from Internet to finance to enterprise, choose cloud native deployment for new services or carry out cloud native transformation of existing services. ALB is deeply integrated with Alibaba Cloud Container Service for Kubernetes (ACK) and is Alibaba Cloud's official cloud native Ingress gateway.
- Flexible public network billing: ALB provides public network access through Elastic IP Addresses (EIPs) and shared bandwidth, and adopts a more advanced pricing scheme based on load balancer capacity units (LCUs), which better suits elastic business peaks.
3. Configure route forwarding based on ALB Ingress Controller
The ALB Ingress Controller obtains changes to Ingress resources through the API Server, dynamically generates AlbConfig, and then creates the ALB instance, listeners, routing and forwarding rules, and back-end server groups in turn. Service, Ingress, and AlbConfig in Kubernetes are related as follows:
- A Service is an abstraction of the real back end; one Service can front multiple identical back-end instances.
- Ingress is a set of reverse proxy rules that specify which Service HTTP/HTTPS requests should be forwarded to, for example forwarding requests to different Services depending on the Host and URL path of the request.
- AlbConfig is a CRD provided by the ALB Ingress Controller and is used to configure the ALB instance and its listeners. One AlbConfig corresponds to one ALB instance.
ALB Ingress, built on Alibaba Cloud's Application Load Balancer (ALB), provides a more powerful mode of Ingress traffic management. It is compatible with Nginx Ingress, can handle complex business routing and automatic certificate discovery, and supports the HTTP, HTTPS, and QUIC protocols, fully meeting the elasticity and large-scale Layer-7 traffic processing requirements of cloud native application scenarios.
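From the user's point of view, an ALB Ingress is written like a standard Kubernetes Ingress. A minimal sketch (host and Service names are hypothetical, and it assumes an IngressClass named alb has been set up for the ALB Ingress Controller, backed by an AlbConfig):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-alb
spec:
  ingressClassName: alb                 # assumed IngressClass handled by the ALB Ingress Controller
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80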
APISIX Ingress
The difference between APISIX Ingress and Kubernetes Ingress Nginx is that APISIX Ingress uses Apache APISIX as the data plane to carry business traffic. As shown in the figure below, when a user requests a specific service, API, or web page, the business traffic enters the Kubernetes cluster through an external proxy and is then processed by APISIX Ingress.
As the figure shows, APISIX Ingress has two parts. One is the APISIX Ingress Controller, which acts as the control plane and manages and distributes configuration through Custom Resource Definitions (CRDs); the other is the APISIX proxy Pod, which carries the business traffic. Besides its custom resources, APISIX Ingress also supports native Kubernetes Ingress resources.
1. Application routing based on APISIX Ingress Controller
As shown in the figure above, we deploy the APISIX Ingress Controller components in the cluster, which can configure routes based on both Ingress resources and the custom resource ApisixRoute. The controller listens for resource change events and calls the APISIX Admin API to persist the rules; the APISIX gateway, exposed through a LoadBalancer-type Service, synchronizes its configuration from etcd and forwards requests to the upstream services.
The APISIX Ingress Controller is based on Apache APISIX. It supports routing configuration through Kubernetes Ingress resources, and it also supports configuring routes, plugins, and upstreams in APISIX through custom resources. The APISIX cloud native gateway supports dynamic routing rules and hot-pluggable plugins, offers richer routing capabilities, and provides observability, fault injection, and tracing. APISIX configuration is stored and distributed through a highly reliable etcd cluster that serves as the configuration center.
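As a sketch of the custom-resource route mentioned above (names are hypothetical, and it assumes the apisix.apache.org/v2 version of the ApisixRoute CRD), a host/path rule forwarded to an upstream Service looks like this:
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: demo-route
spec:
  http:
    - name: demo-rule                   # a named HTTP routing rule
      match:
        hosts:
          - demo.example.com
        paths:
          - /*
      backends:
        - serviceName: demo-svc         # forward matched requests to this Service
          servicePort: 80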
MSE Cloud native gateway Ingress
MSE cloud native gateway is the next-generation gateway launched by Alibaba Cloud. It combines the traditional traffic gateway and the microservice gateway, providing users with refined traffic governance while cutting resource cost by 50%. It supports service discovery through ACK container service, Nacos, Eureka, fixed addresses, FaaS, and more; supports multiple authentication and login methods to quickly build a security defense line; and provides a comprehensive, multi-perspective monitoring system covering metric monitoring, log analysis, and tracing.
1. Application routing based on MSE cloud native gateway Ingress Controller
The following figure shows how MSE cloud native gateway manages traffic in multi-cluster mode. The gateway is deeply integrated with Alibaba Cloud Container Service for Kubernetes (ACK): it automatically discovers services and their endpoint information, and changes take effect dynamically within seconds. Users simply associate the corresponding ACK cluster on the MSE console and expose ACK services by configuring routes in the routing management module. Traffic can also be split or failed over at the cluster level, and additional policies, such as rate limiting, CORS, or rewriting, can be applied to business routes.
The traffic governance capability of MSE cloud native gateway is decoupled from the specific service discovery mode: no matter which discovery mechanism the back-end services use, the gateway provides a unified interactive experience, lowering the barrier to entry and meeting users' growing demands for traffic governance.
The route forwarding and configuration of Nginx Ingress, ALB Ingress, APISIX Ingress, and MSE cloud native gateway Ingress are described above. We can choose the appropriate Ingress implementation based on our business needs and complexity.
Assuming Ingress routing and forwarding is already configured, how can we easily achieve full-link grayscale across multiple applications?
Realizing full-link traffic grayscale based on Ingress
Based on our practice of full-link grayscale, we introduce the concept of the “swimlane”, which is not a new idea in distributed software.
Terminology
- Swimlane: a set of isolated environments defined for the same version of applications. Only request traffic that matches the lane's routing rules is routed to the tagged applications in that lane. An application can belong to multiple lanes, and a lane can contain multiple applications; the relationship between applications and lanes is many-to-many.
- Lane group: a collection of swimlanes, mainly used to distinguish between teams or scenarios.
- Baseline: the environment to which all business services are deployed. Untagged applications are the baseline, stable versions, that is, the stable online environment.
- Entry application: the traffic entrance of the microservice system. It can be an Ingress gateway, a self-built Spring Cloud Gateway or Netflix Zuul gateway, or a Spring Boot, Spring MVC, or Dubbo application, etc.
Why distinguish the entry application from the others? In the full-link grayscale scenario, routing and forwarding rules only need to be configured on the entry application; the downstream microservices simply route according to the tag that is passed through transparently, closing the loop of the “swimlane”.
As can be seen above, we created lane A and lane B, both involving the two applications, the transaction center and the product center, each tagged with its own lane label. Lane A receives 30% of the online traffic, lane B receives 20%, and the baseline environment (that is, the untagged environment) receives the remaining 50%.
Technical analysis of full-link grayscale
We configure the annotation alicloud.service.tag: gray on the Deployment to mark the gray version of the application, which is then registered in the registry with that label. With the full-link swimlane enabled on the gray version (traffic passing through these machines is automatically dyed), gray traffic automatically carries the X-MSE-Tag: gray header, and the Consumer's routing capability is extended so that traffic carrying the gray tag is forwarded to the target gray application.
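As a sketch under these assumptions (application name and image are hypothetical, and the exact placement of the tag annotation should follow the MSE documentation; here it is shown on the Pod template), the gray Deployment differs from the baseline one mainly by the alicloud.service.tag annotation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-cloud-a-gray                 # hypothetical gray version of application A
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-cloud-a
      version: gray
  template:
    metadata:
      annotations:
        alicloud.service.tag: gray          # marks this workload as the "gray" lane version
      labels:
        app: spring-cloud-a
        version: gray
    spec:
      containers:
        - name: spring-cloud-a
          image: registry.example.com/demo/spring-cloud-a:gray   # hypothetical image
          ports:
            - containerPort: 20001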
Full-link grayscale based on various Ingress gateways
With the full-link grayscale capability in place, full-link traffic grayscale can be realized by pairing it with a suitable entry gateway: ingress-nginx, APISIX Ingress, ALB, and MSE cloud native gateway can all serve as the traffic entrance.
Productizing full-link grayscale
Both functionality and ease of use must be considered when turning this into a product. We need to think end to end, from the user's perspective, about how to put the whole process into practice. On Alibaba Cloud, the following two products provide complete full-link grayscale solutions for microservices.
MSE full-link grayscale solution
As a core feature of MSE Microservice Governance Professional Edition, full-link grayscale has the following six characteristics:
- Full-link isolated traffic swimlanes
  - The desired traffic is “dyed” by configuring traffic rules, and the dyed traffic is routed to the grayscale machines.
  - Grayscale traffic carries its grayscale tag downstream, forming an exclusive grayscale traffic swimlane; applications with no grayscale environment fall back to the untagged baseline environment by default.
- An end-to-end stable baseline environment
  Untagged applications form the stable baseline version, that is, the stable online environment. When we publish a grayscale version of the code, we can configure rules to direct specific online traffic to it and keep the risk of the grayscale code under control.
- One-click dynamic traffic switching
  Once traffic rules are defined, they can be started, stopped, added, deleted, modified, or queried with one click as needed, and changes take effect in real time, making grayscale traffic steering more convenient.
- Low-cost access, implemented with Java Agent technology without modifying a single line of business code
  MSE microservice governance is implemented through Java Agent bytecode enhancement and seamlessly supports all Spring Cloud and Dubbo versions released in the past five years. Users can adopt it without changing any code or the existing business architecture, and can enable or disable it at any time without lock-in. You only need to enable MSE Microservice Governance Professional Edition, configure it online, and it takes effect in real time.
- Observability
  - Per-application observability at the lane level.
  - Full-link observability across applications, so traffic escape can be spotted from a global perspective and it is easy to check whether any traffic leaks out of the grayscale lane.
- Lossless online/offline for smoother releases
  With MSE microservice governance enabled, applications can go online and offline without traffic loss, so publishing, rollback, scaling out, and scaling in under heavy traffic do not lose requests.
1. Create a traffic swimlane group
The applications involved in the lanes need to be added to the corresponding lane group.
2. Create a traffic swimlane
By default, before using the swimlane feature, we already have a baseline (untagged) environment that includes all services.
We need to deploy the isolated versions of the applications as separate Deployments, tag them, and associate each swimlane one-to-one with the corresponding tag (for example, the test swimlane is associated with the gray tag).
We also need to configure Ingress rules for the traffic entrance, for example routing www.base.com to the base version of application A and www.gray.com to the gray version of application A:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: spring-cloud-a-base
spec:
  rules:
    - host: www.base.com
      http:
        paths:
          - backend:
              serviceName: spring-cloud-a-base
              servicePort: 20001
            path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: spring-cloud-a-gray
spec:
  rules:
    - host: www.gray.com
      http:
        paths:
          - backend:
              serviceName: spring-cloud-a-gray
              servicePort: 20001
            path: /
EDAS full-link grayscale solution
1. Introduction to EDAS products
EDAS (Enterprise Distributed Application Service) is a cloud native PaaS platform for application hosting and microservice management. It provides full-stack solutions for application development, deployment, monitoring, and operation and maintenance, and supports microservice runtimes such as Spring Cloud and Apache Dubbo to help you move applications to the cloud.
On the EDAS platform, users can quickly deploy applications to multiple underlying clusters using WAR packages, JAR packages, or images, which makes it easy to deploy both baseline and grayscale versions of an application without maintaining the clusters themselves. EDAS seamlessly integrates MSE microservice governance, so applications deployed on EDAS get advanced features such as lossless offline, canary release, and full-link traffic control without installing an extra agent and without any code intrusion.
At present, EDAS supports full-link grayscale with a microservice application as the entrance. The following briefly introduces how to configure full-link grayscale traffic in EDAS.
2. Create a traffic swimlane group and swimlanes
You need to select an entry type when creating a swimlane group. Currently, only an entry application deployed in EDAS can serve as the swimlane entry application. The baseline and gray versions involved in the swimlanes need to be added to the swimlane group.
When creating a swimlane, you can direct specific online traffic into it through path-based rules, and the tagged traffic forms a grayscale link. Swimlanes support traffic control conditions based on Cookie, Header, and Parameter.
After a swimlane is configured, you can select the target swimlane group on the full-link traffic control page to observe its traffic, including the overall application monitoring view, the untagged part, the tagged part, and the view of all applications in the swimlane group.
3. Use application routes as the traffic entrance to realize full-link grayscale
The EDAS platform allows users to create application routes for Kubernetes applications based on Nginx Ingress. Combined with EDAS's support for full-link traffic control, users can realize full-link grayscale directly in the EDAS console with Nginx Ingress as the traffic gateway.
After deploying the baseline application, the grayscale application, and the entrance application on EDAS, and creating the grayscale swimlane following the steps above, you can bind Kubernetes Service resources to the entrance application to provide a traffic entrance. You can either configure a LoadBalancer Service for the entrance application to expose it externally, or reuse the existing Nginx Ingress gateway in the Kubernetes cluster by configuring a ClusterIP Service for the entrance application and creating an application route, which avoids allocating an extra public IP address.
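As a sketch (the Service name, labels, and port are hypothetical), the ClusterIP Service bound to the entrance application could look like this:
apiVersion: v1
kind: Service
metadata:
  name: spring-cloud-a-entry                # hypothetical Service for the entrance application
spec:
  type: ClusterIP                           # cluster-internal only; exposed through the Nginx Ingress application route
  selector:
    app: spring-cloud-a                     # must match the labels of the entrance application's Pods
  ports:
    - port: 20001
      targetPort: 20001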
The following describes how to use application routes as the traffic entrance for the entrance application.
On the application details page, you can configure the application's access mode: either bind a LoadBalancer Service to the entrance application for direct access, or create a ClusterIP Service, as shown in the following figure:
After the configuration succeeds, click Create Application Route on the EDAS application routes page, and select the cluster, namespace, application name, Service name, and port created in the preceding steps to configure the Ingress resource, as shown in the following figure:
After the configuration succeeds, the domain name and path configured in the Ingress resource, together with the gray traffic rules configured in the swimlane, realize full-link grayscale.
4. Go one step further
Full-link traffic control in EDAS currently only supports an entry application deployed in EDAS as the traffic entrance. Following the cloud native trend led by Kubernetes, EDAS will support using Ingress as the traffic entrance, so users will no longer need to deploy an additional gateway application and can simply configure forwarding rules through Ingress resources. In addition, EDAS will also support ALB Ingress, APISIX Ingress, and MSE cloud native gateway Ingress, and on this basis the full-link grayscale capability will be further upgraded to support grayscale based on various Ingress provider gateways.
Conclusion
We find that under the cloud native Ingress abstraction, full-link grayscale becomes more standardized and simpler. This article introduced the routing and forwarding capabilities of various Ingress gateways and an Ingress-based full-link grayscale solution built on “swimlanes”, so that enterprises can quickly land this core capability.
Follow the [Alibaba Cloud Native] official account for more real-time cloud native information!