Author: Wei Wang, CODING DevOps back-end development engineer with years of R&D experience; a veteran fan of cloud native, DevOps, and Kubernetes; member of the ServiceMesher service mesh Chinese community; holder of the Kubernetes CKA and CKAD certifications.
Preface
The simplest way to implement grayscale (canary) releases for an application running in Kubernetes is to use the official Nginx-Ingress.
We deploy two sets of Deployments and Services, one for the grayscale environment and one for the production environment. A load-balancing algorithm distributes traffic between the two environments according to the grayscale ratio, which gives us a grayscale release.
The common practice is that, after the project builds a new image, we update the image version in the YAML file and run kubectl apply. If a grayscale release is needed, we then control it by repeatedly adjusting the weights in the configuration files of the two services, which can only be done by hand. When there are many projects and the grayscale window is long, the probability of human error rises sharply. Relying this heavily on manual execution is intolerable for a DevOps engineering practice.
So, is there a way to automate grayscale releases without human intervention? For example: after a code update, automatically publish to the pre-release and grayscale environments, automatically raise the grayscale ratio from 10% to 100% over the course of a day with the ability to stop at any time, and automatically publish to production once the grayscale phase passes?
The answer is yes, CODING DevOps can meet such needs.
Nginx-ingress architecture and principles
A quick review of the architecture and implementation of Nginx-Ingress:
Nginx-Ingress receives cluster traffic through a pre-configured LoadBalancer Service and forwards it to the nginx-ingress Pod, which checks the configured routing policies and then forwards the traffic to the target Service, and finally to the Service's backing containers.
Traditional Nginx requires us to write policies in a conf file. With nginx-ingress, however, the nginx-ingress-controller translates our YAML configuration (Ingress resources) into native nginx conf configuration: whenever we change a YAML policy, the controller converts it, dynamically updates the policy, and dynamically reloads Nginx in the Pod, achieving automatic management.
So how does nginx-ingress-controller dynamically sense policy changes in the cluster? There are several options, including an admission webhook interceptor, or interacting with the Kubernetes API through a ServiceAccount to watch for changes dynamically. nginx-ingress-controller uses the latter. That is why, when deploying nginx-ingress, we find that the Deployment specifies a ServiceAccount for the Pod and binds it with a RoleBinding, so that the Pod can interact with the Kubernetes API.
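A minimal sketch of that RBAC wiring (names and the resource list are illustrative, not the exact manifests shipped with nginx-ingress):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  # the controller watches Ingress/Service/Endpoints objects to rebuild its nginx conf
  - apiGroups: ["", "extensions", "networking.k8s.io"]
    resources: ["ingresses", "services", "endpoints", "pods", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
```

With this binding in place, the controller Pod's ServiceAccount token lets it list and watch Ingress resources, which is how it reacts to policy changes without polling configuration files.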
Implementation plan preview
To achieve this goal, we designed the following continuous deployment pipeline.
This continuous deployment pipeline mainly implements the following steps:
1. Automatic deployment to the pre-release environment
2. Whether to conduct an A/B test
3. Automatic grayscale release (three automatic rounds, gradually increasing the grayscale ratio)
4. Release to the production environment
At the same time, this example demonstrates the full flow from committing code with Git to automatically triggering continuous integration:

1. Continuous integration is triggered after the code is committed; the image is built automatically and pushed to the artifact repository
2. Continuous deployment is triggered and releases to the pre-release environment
3. Manual confirmation: run an A/B test (or skip directly to automatic grayscale)
During the A/B test, only requests whose Header contains location=shenzhen can reach the new version; all other users continue to access the old version in production.
4. Manual confirmation: whether to start the automatic grayscale release (3 automatic rounds that gradually increase the grayscale ratio, 30 seconds between rounds)
First round: the new version is grayscaled at 30%; from this point, about 30% of the traffic reaching the production environment enters the new version's grayscale environment:
Second round: the new version is grayscaled at 60%:
Third round of automatic grayscale, 60 seconds in: the new version is grayscaled at 90%:
In this example, automatic grayscale publishing is configured to run in 3 progressive steps, increasing the ratio by 30% each time, with each step lasting 30 seconds before automatically advancing to the next stage. At each successive stage, the probability that a request hits the new version increases. The progressive grayscale can be configured freely to fit the business; for example, it can run 10 automatic rounds over one day until the release reaches production, with no one in attendance.
5. Grayscale complete; 30 seconds later, the release goes to the production environment
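The three-round schedule above can be sketched as a small generator (a simplified illustration only; in the real pipeline, CODING's wait stages drive the timing, not application code):

```python
def grayscale_schedule(rounds=3, step=30, interval=30):
    """Yield (canary_weight, wait_seconds) pairs for a progressive rollout:
    with the defaults, 30% -> 60% -> 90%, waiting 30 s between rounds."""
    for i in range(1, rounds + 1):
        yield (min(step * i, 100), interval)

print(list(grayscale_schedule()))
```

The same generator with rounds=10 and step=10 describes the "10 rounds in one day" variant mentioned above, ending at a weight of 100.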
Project source code and principle analysis
The project source address: wangweicoding.coding.net/public/ngin…
```
├── Jenkinsfile                          # continuous integration script
├── deployment
│   ├── canary
│   │   └── deploy.yaml                  # grayscale release deployment file
│   ├── dev
│   │   └── deploy.yaml                  # pre-release environment deployment file
│   └── pro
│       └── deploy.yaml                  # production environment deployment file
├── docker
│   └── html
│       └── index.html                   # demo page
├── nginx-ingress-init
│   ├── nginx-ingress-deployment         # nginx-ingress deployment files
│   │   ├── RoleBinding.yaml
│   │   ├── clusterRole.yaml
│   │   ├── defaultBackendService.yaml
│   │   ├── defaultBackendServiceaccount.yaml
│   │   ├── deployment.yaml
│   │   ├── nginxDefaultBackendDeploy.yaml
│   │   ├── roles.yaml
│   │   ├── service.yaml
│   │   └── ServiceAccount.yaml
│   └── nginx-ingress-helm               # nginx-ingress helm package
│       └── nginx-ingress-1.36.3.tgz
└── pipeline                             # continuous deployment pipeline templates
    ├── gray-deploy.json                 # grayscale release pipeline
    ├── gray-init.json                   # grayscale release initialization (first run)
    └── nginx-ingress-init.json          # nginx-ingress initialization (first run)
```
The grayscale and production environments are mainly defined by deployment/canary/deploy.yaml and deployment/pro/deploy.yaml, each of which creates:
- Deployment
- Service
- Ingress
A/B testing and grayscale are controlled by the configured Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx                              # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"                      # enable canary
    nginx.ingress.kubernetes.io/canary-by-header: "location"        # A/B test Header key
    nginx.ingress.kubernetes.io/canary-by-header-value: "shenzhen"  # A/B test Header value
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```
The A/B test is controlled mainly by the annotations nginx.ingress.kubernetes.io/canary-by-header and nginx.ingress.kubernetes.io/canary-by-header-value, which match the Key and Value of the request Header.
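The routing decision those two annotations express can be sketched in Python (a simplified model, not the controller's actual implementation; the production Service name "nginx-pro" is illustrative):

```python
def route(headers, header_key="location", header_value="shenzhen"):
    """Mimic canary-by-header routing: a request whose header matches the
    configured key/value pair goes to the canary Service; everything
    else goes to the production Service."""
    if headers.get(header_key) == header_value:
        return "nginx-canary"   # new version (grayscale environment)
    return "nginx-pro"          # old version (production, name assumed)

print(route({"location": "shenzhen"}))  # matches -> canary
print(route({"location": "beijing"}))   # no match -> production
```

Only clients that explicitly send the agreed header see the new version, which is what makes this mode suitable for internal A/B testing before any real traffic is shifted.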
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx   # nginx=nginx-ingress | qcloud=CLB ingress
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
  name: my-ingress
  namespace: pro
spec:
  rules:
  - host: nginx-ingress.coding.pro
    http:
      paths:
      - backend:
          serviceName: nginx-canary
          servicePort: 80
        path: /
```
Grayscale, in turn, is controlled by the annotation nginx.ingress.kubernetes.io/canary-weight, whose value ranges from 0 to 100 and corresponds to the grayscale weight ratio. In Nginx-Ingress, the load balancing is implemented mainly with a weighted round-robin algorithm.
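As a rough illustration of what a weight of 30 means (a deterministic simplification; the controller's real weighted algorithm operates probabilistically over live traffic):

```python
def split_traffic(weight, requests=100):
    """Approximate canary-weight routing: out of every 100 requests,
    `weight` go to the canary and the rest go to production."""
    routes = ["canary" if i % 100 < weight else "production"
              for i in range(requests)]
    return routes.count("canary"), routes.count("production")

print(split_traffic(30))   # with canary-weight "30": (30, 70)
print(split_traffic(90))   # with canary-weight "90": (90, 10)
```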
The overall architecture is shown as follows:
Environment preparation
1. A Kubernetes cluster (Tencent Cloud's container service is recommended); 2. A CODING DevOps account, which provides the image building and pipeline deployment capabilities.
Practical steps
1. Clone the source code and push it to your own CODING Git repository
```
$ git clone https://e.coding.net/wangweicoding/nginx-ingress-gray/nginx-ingress-gray.git
$ git remote set-url origin https://you coding git
$ git add .
$ git commit -a -m 'first commit'
$ git push -u origin master
```
Note: before pushing, change the image in deploy.yaml under the deployment/dev, deployment/canary, and deployment/pro folders to your own artifact repository image address.
2. Create a continuous integration pipeline. Use the "Custom Build Process" to create a build plan and select the Jenkinsfile from the code repository.
3. Add a cloud account and create the continuous deployment pipelines, copying the project's pipeline JSON templates into the pipelines you create.
To make the templates easy to use, create a continuous deployment application named nginx-ingress.
Then create blank deployment processes, copy the JSON templates into them, and create three pipelines in total:
- nginx-ingress-init – used to initialize nginx-ingress
- gray-init – used to initialize the environment for the first time
- gray-deploy – used for the automatic grayscale release

Note: select your own cloud account for the pipelines above. In addition, in the gray-deploy pipeline, reconfigure "Start Required Products" and the "trigger".
4. Initialize nginx-ingress (first run). Running the nginx-ingress-init pipeline for the first time automatically deploys nginx-ingress for you. After the deployment succeeds, run `kubectl get svc | grep nginx-ingress-controller` to obtain the EXTERNAL-IP of nginx-ingress; this IP is the cluster's request entry point. Configure a Host entry for it for easy access.
5. Initialize the grayscale release (first run). Running the gray-init pipeline for the first time automatically deploys a complete environment; without it, the automated grayscale pipeline would fail.
Now try modifying the project's docker/html/index.html file and pushing; the push automatically triggers the build and continuous deployment. Once triggered, open the "Continuous Deployment" page to view the deployment details and progress.
Conclusion
We mainly take advantage of the wait stage of CODING continuous deployment. By setting wait times for the stages with different grayscale ratios, the grayscale stages run one after another automatically, finally achieving a fully automatic grayscale release with no one in attendance.
With the wait stage, the release process becomes smooth, requiring human intervention only when something goes wrong. Combined with the continuous deployment notification feature, the current release status can conveniently be pushed to WeCom (Enterprise WeChat), DingTalk, and other collaboration tools.
For ease of demonstration, the grayscale ratios and wait times are hardcoded in this example. You can also use a stage's "custom parameters" to control the grayscale ratio and wait time dynamically, entering them at release time according to the release level, making releases even more flexible.
Production advice
In this article, Nginx-Ingress is deployed in Deployment mode. As the edge gateway of the Kubernetes cluster, Nginx-Ingress carries all incoming traffic, so its high availability directly determines the high availability of the cluster.
When deploying Nginx-Ingress in a production environment, follow these guidelines:
- Deploying as a DaemonSet is recommended, to avoid a single point of failure when a node goes down.
- Use a label selector to deploy nginx-ingress-controller onto dedicated nodes (nodes with high CPU frequency, high network throughput, and high I/O, or nodes with a low load).
- If you deploy in Deployment mode, configure HPA horizontal autoscaling for Nginx-Ingress.
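A minimal HPA sketch for the Deployment mode (the namespace and Deployment name are illustrative; adjust them to match your install, and tune the replica bounds and CPU target to your traffic):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2          # keep at least two replicas for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU exceeds 50%
```

Note that HPA scaling on CPU requires resource requests to be set on the controller Deployment and metrics-server to be available in the cluster.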