The author, Lin Yuxin, is a product engineer on the Tencent Cloud Container team, currently responsible for the research and development of the Tencent Cloud TKE console.
Overview
In the open source community there are many implementations of the Kubernetes Ingress Controller, and Nginx Ingress Controller is only one of them, but it is the most widely used in the community: it is both powerful and highly performant. This article introduces how to deploy Nginx Ingress in various ways using Tencent Cloud Container Service, and briefly covers the implementation principle, pros and cons, and application scenarios of each approach.
What is Nginx Ingress
Nginx Ingress translates user-declared Kubernetes Ingress objects into Nginx forwarding rules via nginx-ingress-controller; its core job is traffic forwarding and north-south load balancing. The controller watches the API server (via Kubernetes Informers) for changes to Ingress, Service, Endpoint, Secret, and ConfigMap objects, updates the Nginx configuration accordingly, and the Nginx instances forward the traffic.
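As a minimal illustration of what the controller consumes (the hostname and backend Service name below are hypothetical), an Ingress object like this is watched by nginx-ingress-controller and rendered into Nginx forwarding rules:

```yaml
# Hypothetical Ingress: nginx-ingress-controller watches objects like this
# and renders them into nginx.conf server/location blocks.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # route via the Nginx controller
spec:
  rules:
  - host: example.mydomain.com             # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service          # hypothetical backend Service
            port:
              number: 80
```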
There are two main implementations of Nginx Ingress in the community:
- Implementation of Kubernetes open source community
- Nginx official implementation
Why Nginx Ingress
In the open source community there are many Ingress Controller implementations, each with its own applicable scenarios, advantages, and disadvantages. Why is nginx-ingress-controller recommended? Let's look at what happens if we don't use it.
Take the CLB-type Ingress recommended by default in the Tencent Cloud Container Service console (hereinafter TKE) as an example. It has the following problems:

- The capabilities of CLB-type Ingress cannot meet some existing service requirements; for example, multiple Ingresses cannot share the same public network entry, and a default forwarding backend is not supported.
- The existing business already uses `nginx-ingress`, the O&M team is used to configuring `nginx.conf`, and they do not want to make too many changes.
Nginx-ingress-controller is a good way to solve these problems.
What are the prerequisites
- The `nginx-ingress-operator` component has been deployed.
Component Deployment and Installation
Go to the Tencent Cloud Container Service console, select the cluster where you want to deploy Nginx Ingress, open Cluster > Component Management, and deploy and install the Nginx Ingress component, as shown below:
The components are installed and running
Deployment plan
TKE provides multiple deployment schemes for nginx-ingress-controller and multiple LB access modes, suitable for different service scenarios. They are described below.
Nginx-ingress-controller Deployment scheme
Solution 1: DaemonSet + Node pool
As the traffic access gateway, Nginx is a critical component, so it is not recommended to deploy it on the same nodes as other services. You can deploy Nginx on a dedicated node pool using taints. For details about node pools, see the Tencent Cloud Container Service node pool overview.
When using this deployment scheme, note the following:

- Prepare the node pool for nginx-ingress-controller in advance, and set a taint and a label on it to prevent other Pods from being scheduled onto it.
- Ensure that the nginx-ingress-operator component has been successfully deployed and installed; see the prerequisites above.
- Go to the component details page and create an `nginx-ingress-controller` instance (multiple instances can coexist in a single cluster).
- For the deployment mode, select DaemonSet deployment on the specified node pool.
- Set a toleration for the taint.
- Set the Request/Limit. The Request must be smaller than the node pool's machine spec (nodes reserve some resources, and insufficient resources would leave the instance unschedulable); the Limit is optional.
- Set other parameters based on service requirements.
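The steps above can be sketched as a DaemonSet manifest. This is only an illustration of the scheduling fields involved (the `dedicated=nginx-ingress` taint/label key, image tag, and resource values are assumptions; the console generates the actual manifest):

```yaml
# Sketch: DaemonSet pinned to a dedicated node pool via label + toleration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      nodeSelector:
        dedicated: nginx-ingress          # hypothetical label set on the node pool
      tolerations:
      - key: dedicated                    # tolerate the node pool's taint
        operator: Equal
        value: nginx-ingress
        effect: NoSchedule
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
        resources:
          requests:                       # keep below the node pool's machine spec
            cpu: "1"
            memory: 2Gi
```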
Solution 2: Deployment + HPA
With the Deployment + HPA scheme, you can configure affinity/anti-affinity and tolerations to spread the Nginx and business Pods across nodes according to your needs. With HPA, you can scale elastically on CPU and memory metrics.
When using this deployment scheme, note the following:

- Set a label on the cluster nodes where nginx-ingress-controller will be deployed.
- Ensure that the nginx-ingress-operator component has been successfully deployed and installed; see the prerequisites above.
- Go to the component details page and create an `nginx-ingress-controller` instance (multiple instances can coexist in a single cluster).
- For the deployment mode, select custom Deployment + HPA deployment.
- Set the HPA trigger policy.
- Set the Request/Limit.
- When setting the node scheduling policy, it is recommended to give nginx-ingress-controller dedicated nodes, to avoid unavailability caused by other services encroaching on its resources.
- Set other parameters based on service requirements.
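The HPA trigger policy mentioned above can be sketched as follows (the utilization thresholds and replica bounds are example values, not TKE defaults):

```yaml
# Sketch: HPA scaling the controller Deployment on CPU and memory utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2                      # keep at least two replicas for availability
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75        # example threshold
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80        # example threshold
```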
Front-end LB access modes for Nginx
The steps above deploy nginx-ingress-operator and nginx-ingress-controller into the TKE cluster. To receive external traffic, however, you still need to configure a front-end LB for Nginx. Now that TKE has completed production support for Nginx Ingress, you can choose one of the following deployment modes based on your business needs.
Solution 1: VPC-CNI clusters use CLB to pass through Nginx services (recommended)
Preconditions (either one is sufficient):

- The cluster's network plug-in is VPC-CNI.
- The cluster's network plug-in is Global Router with VPC-CNI support enabled (the two modes mixed).
Let's take a node pool deployment as an example. This scheme performs well: all the Pods use elastic network interfaces, and Pods with elastic network interfaces can be bound directly to the CLB as backends, bypassing NodePort. The CLB does not need to be maintained manually and supports automatic scale-out and scale-in, making this the most ideal scheme.
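A sketch of the Service for this scheme is shown below. The direct-access annotation reflects how TKE documents CLB-to-Pod binding, but the exact annotation name is an assumption here; verify it against your TKE version before use:

```yaml
# Sketch: CLB binds elastic-NIC Pods directly as backends, bypassing NodePort.
# The annotation name is an assumption based on TKE's direct-access feature;
# confirm it in the TKE documentation for your cluster version.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    service.cloud.tencent.com/direct-access: "true"   # CLB -> Pod directly
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```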
Solution 2: Global Router mode cluster using a regular LoadBalancer-mode Service
TKE's default implementation of a LoadBalancer Service is based on NodePort: the CLB binds each node's NodePort as its backend RS and forwards traffic to the nodes' NodePorts, and each node then routes requests through iptables or IPVS to the Service's corresponding backend Pods (the nginx-ingress-controller Pods).
If your cluster does not support the VPC-CNI network mode, you can use a regular LoadBalancer Service to access traffic. This is the easiest way to deploy Nginx Ingress on TKE: traffic is forwarded through a layer of NodePort, but the following problems may arise:
- The forwarding path is long: after traffic reaches a NodePort, it goes through Kubernetes-internal load balancing and is forwarded to Nginx via iptables or IPVS, which adds network latency.
- Passing through NodePort necessarily involves SNAT. If traffic is too concentrated, source-port exhaustion or conntrack insertion conflicts can cause packet loss and traffic anomalies.
- The NodePort of each node also acts as a load balancer. If the CLB is bound to the NodePorts of many nodes, load-balancing state is scattered across the nodes, which can lead to global load imbalance.
- The CLB health-checks the NodePorts, and the probe packets are ultimately forwarded to the Nginx Ingress Pods. If the number of nodes bound to the CLB far exceeds the number of Nginx Ingress Pods, the probe packets put excessive pressure on Nginx Ingress.
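The mechanism described above corresponds to a plain LoadBalancer Service, sketched below (names are illustrative; TKE allocates the NodePorts and binds them to the CLB automatically):

```yaml
# Sketch: default LoadBalancer Service.
# Path: CLB -> NodePort -> iptables/IPVS -> nginx-ingress-controller Pod.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # optional: preserves the client source IP,
                                 # at the cost of only routing to node-local Pods
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```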
Solution 3: Use HostNetwork and LB
Although Solution 2 is the simplest deployment mode, traffic passes through a layer of NodePort and may hit the problems described above. Instead, we can have Nginx Ingress use HostNetwork and bind the CLB directly to node IP + port (80/443). Because HostNetwork is used, nginx-ingress Pods must not be scheduled onto the same node, to avoid port-listening conflicts. Since TKE has not yet productized this solution, you can plan ahead: select some nodes for nginx-ingress-controller, label them, and deploy onto those nodes with a DaemonSet (see nginx-ingress-controller deployment Solution 1).
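A sketch of such a DaemonSet is shown below (the `nginx-ingress=true` node label and image tag are assumptions):

```yaml
# Sketch: controller Pods share the node's network namespace, so the CLB can
# bind node IP:80/443 directly. The DaemonSet places one Pod per labeled node,
# which also avoids two Pods competing for the same ports.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-hostnetwork
spec:
  selector:
    matchLabels:
      app: nginx-ingress-hostnetwork
  template:
    metadata:
      labels:
        app: nginx-ingress-hostnetwork
    spec:
      hostNetwork: true                   # listen on node IP:80/443
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS despite hostNetwork
      nodeSelector:
        nginx-ingress: "true"             # hypothetical label on the chosen nodes
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # example tag
        ports:
        - containerPort: 80
        - containerPort: 443
```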
How to Integrate Monitoring
TKE integrates the high-performance cloud-native monitoring service built by the Tencent Cloud container team (portal: console.cloud.tencent.com/tke2/promet…). To learn about Prometheus, Kvass, and Kvass-based Prometheus clustering, see the earlier article "How to Monitor a 100k-Container Kubernetes Cluster with Prometheus".
Bind monitoring Instances
Viewing Monitoring Data
How are logs collected and consumed
TKE integrates Tencent Cloud Log Service (CLS) to provide a complete set of log collection and consumption capabilities for nginx-ingress-controller. Note the following:
- Prerequisites: Ensure that the log collection function has been enabled for the current cluster
- Configure log collection options in the `nginx-ingress-controller` instance.
Conclusion
This article describes how to get the most out of Nginx Ingress with the Tencent Cloud Container Service console. It covers the two console deployment modes and recommendations for nginx-ingress-controller, and the three ways to connect a front-end LB. Besides one-click deployment of Nginx Ingress, TKE also provides production support for nginx-ingress-controller logging and monitoring. This article should serve as a useful reference and guide for using Nginx Ingress on TKE.