Preface
Ingress can be understood as a Service for Services: a new layer built in front of the existing Services that acts as the unified entry point for external traffic and routes requests to the right back end.
Put more simply, an Nginx or HAProxy sits at the front, forwards different hosts or URLs to the corresponding back-end Service, and the Service then forwards the traffic to its Pods. Ingress is a layer of decoupling and abstraction over that nginx/HAProxy setup.
Update history
- 20200628 – First draft – Li Zuo Cheng
- Original article – blog.zuolinux.com/2020/06/28/…
Why Ingress is needed
Ingress addresses some shortcomings of exposing Services to the Internet directly. For example, a plain Service cannot provide a unified entry point with layer-7 (host/URL) routing rules, and a single Service can only front one back-end application.
Ingress consists of two parts: an ingress-controller and Ingress objects.
The ingress-controller corresponds to the nginx/haproxy program itself and runs as Pods.
An Ingress object corresponds to the nginx/haproxy configuration file.
The ingress-controller uses the information described in Ingress objects to update the nginx/haproxy rules inside its Pods.
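Once the pieces below are deployed, both halves can be inspected with ordinary kubectl commands (a quick reference, not specific to any particular controller):

# The Ingress objects -- the "configuration" half
kubectl get ingress
kubectl describe ingress nginx-ingress

# The ingress-controller Pods -- the "program" half (namespace used later in this article)
kubectl get pods -n ingress-nginx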
Deploying Ingress
Preparing test resources
Deploy two services: accessing service 1 returns Version 1, and accessing service 2 returns Version 2.
Configuration for the two services (deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1.0
spec:
  selector:
    matchLabels:
      app: v1.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v1.0
    spec:
      containers:
      - name: hello-v1
        image: anjia0532/google-samples.hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v2.0
spec:
  selector:
    matchLabels:
      app: v2.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v2.0
    spec:
      containers:
      - name: hello-v2
        image: anjia0532/google-samples.hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-v1
spec:
  selector:
    app: v1.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: service-v2
spec:
  selector:
    app: v2.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
The containers listen on port 8080, and the Services expose port 8081.
Start the two services and their corresponding pods
# kubectl apply -f deployment.yaml
deployment.apps/hello-v1.0 created
deployment.apps/hello-v2.0 created
service/service-v1 created
service/service-v2 created
Check the startup status. Each Service has three Pods:
# kubectl get pod,service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/hello-v1.0-6594bd8499-lt6nn   1/1     Running   0          37s   192.10.205.234   work01   <none>           <none>
pod/hello-v1.0-6594bd8499-q58cw   1/1     Running   0          37s   192.10.137.190   work03   <none>           <none>
pod/hello-v1.0-6594bd8499-zcmf4   1/1     Running   0          37s   192.10.137.189   work03   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-9wr65   1/1     Running   0          37s   192.10.75.89     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-pnhr8   1/1     Running   0          37s   192.10.75.91     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-sx949   1/1     Running   0          37s   192.10.205.236   work01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/service-v1   ClusterIP   192.20.92.221   <none>        8081/TCP   37s   app=v1.0
service/service-v2   ClusterIP   192.20.255.0    <none>        8081/TCP   36s   app=v2.0
Check the Pod mount status on the Service backend
[root@master01 ~]# kubectl get ep service-v1
NAME         ENDPOINTS                                                      AGE
service-v1   192.10.137.189:8080,192.10.137.190:8080,192.10.205.234:8080   113s
[root@master01 ~]# kubectl get ep service-v2
NAME         ENDPOINTS                                                      AGE
service-v2   192.10.205.236:8080,192.10.75.89:8080,192.10.75.91:8080       113s
You can see that both services successfully mounted the corresponding Pod.
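As an optional sanity check before adding the Ingress layer, the Services can be curled from any cluster node using the ClusterIPs shown above (a quick sketch; your ClusterIPs will differ):

# expect Version: 1.0.0 in the response
curl 192.20.92.221:8081
# expect Version: 2.0.0 in the response
curl 192.20.255.0:8081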
Next, deploy the front-end ingress-controller.
Label work01/work02 so that they will run the ingress-controller:
kubectl label nodes work01 ingress-ready=true
kubectl label nodes work02 ingress-ready=true
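You can confirm the labels took effect before continuing:

kubectl get nodes -l ingress-ready=true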
The ingress-controller uses the official ingress-nginx manifest:
wget -O ingress-controller.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
Edit the manifest so that two ingress-controller replicas are started:
apiVersion: apps/v1
kind: Deployment
......
  revisionHistoryLimit: 10
  replicas: 2        # add this line
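The reason for labeling work01/work02 earlier is that the kind-provider manifest pins the controller Pods to labeled nodes via a node selector roughly like the following (verify against your downloaded file; surrounding fields may differ by version):

      nodeSelector:
        ingress-ready: "true"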
Switch the controller image to a domestic (China) mirror:
# vim ingress-controller.yaml
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          #image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
          imagePullPolicy: IfNotPresent
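Optionally, the replacement image can be pre-pulled on work01/work02 to confirm it is reachable before rolling out:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0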
Deploy the ingress-controller:
kubectl apply -f ingress-controller.yaml
Check the running status:
# kubectl get pod,service -n ingress-nginx -o wide
NAME                                            READY   STATUS      RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-ld4nt        0/1     Completed   0          15m   192.10.137.188   work03   <none>           <none>
pod/ingress-nginx-admission-patch-p5jmd         0/1     Completed   1          15m   192.10.75.85     work02   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-vxt4d   1/1     Running     0          15m   192.10.205.233   work01   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-zmjg2   1/1     Running     0          15m   192.10.75.87     work02   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
service/ingress-nginx-controller             NodePort    192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   192.20.80.208   <none>                        443/TCP                      15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
The ingress-nginx-controller Pods are running on work01/02.
Write the request-forwarding rules:
# cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v1
          servicePort: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v2
          servicePort: 8081
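A note for newer clusters: the networking.k8s.io/v1beta1 Ingress API used above was removed in Kubernetes 1.22. On a 1.19+ cluster the same rules can be written against the stable networking.k8s.io/v1 API, roughly as follows (a sketch; recent ingress-nginx releases may also expect spec.ingressClassName: nginx):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-v1
            port:
              number: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-v2
            port:
              number: 8081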
Apply the rules:
# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/nginx-ingress created
You can see that the nginx configuration inside the ingress-controller Pod has taken effect:
# kubectl exec ingress-nginx-controller-75f89c4965-vxt4d -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep -A 30 test-v1.com
server {
    server_name test-v1.com ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace      "default";
        set $ingress_name   "nginx-ingress";
        set $service_name   "service-v1";
        set $service_port   "8081";
        set $location_path  "/";
Now let's test access from outside the cluster.
First, resolve the domain names to work01:
# /etc/hosts on the client machine
192.168.10.15 test-v1.com
192.168.10.15 test-v2.com
Run the access test:
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8489-svjnf
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8489-zqjtm
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8489-www76
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j
You can see that requests for different domains go to different pods under the correct Service.
Now send the requests through work02 instead:
# /etc/hosts now resolves test-v1.com and test-v2.com to work02
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8489-www76
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8489-zqjtm
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862
No problem.
How to achieve high availability
High-availability access to work01/work02 can be achieved by placing two LVS + Keepalived machines in front of them. Alternatively, Keepalived can float a VIP directly across work01/work02; this needs no extra machines and saves cost.
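For the VIP approach, a minimal keepalived sketch might look like this (the NIC name, VIP, and password are placeholders; on work02 use state BACKUP and a lower priority):

# /etc/keepalived/keepalived.conf on work01 (MASTER)
vrrp_instance INGRESS_VIP {
    state MASTER
    interface eth0                # assumed NIC name
    virtual_router_id 51
    priority 100                  # lower on work02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ingress123      # placeholder
    }
    virtual_ipaddress {
        192.168.10.100/24         # hypothetical VIP; point test-v1.com/test-v2.com at it
    }
}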
Conclusion
This article deployed Ingress using a Deployment plus a NodePort Service.
A Deployment manages the ingress-controller Pods, and a NodePort Service exposes the ingress service.
Check the ingress service
# kubectl get service -o wide -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   NodePort   192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   22m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Port 30698 is exposed; accessing port 30698 on any node reaches the v1/v2 Pods.
However, this port is random and changes whenever the Service is recreated, so instead we can access port 80 directly on work01/02, where the ingress-controller Pods run.
Another set of LVS + Keepalived can then be placed in front of work01/02 for highly available load balancing.
On work01/02, run iptables -t nat -L -n -v to see that port 80 is opened via NAT.
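If you would rather keep the NodePort path, the port does not have to stay random: it can be pinned in the ingress-nginx-controller Service (a sketch; 30080/30443 are arbitrary values in the default 30000-32767 NodePort range, and targetPort should stay whatever your manifest already uses):

  ports:
  - name: http
    port: 80
    targetPort: http      # keep the existing targetPort from your manifest
    nodePort: 30080       # pinned NodePort instead of a random one
  - name: https
    port: 443
    targetPort: https
    nodePort: 30443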
Alternatively, the ingress-controller can be deployed with a DaemonSet plus hostNetwork.
In that case, port 80 on work01/02 is served directly on the host network, with no NAT mapping and therefore no NAT performance overhead.
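A sketch of the key changes for that variant (only the relevant fields; the rest of the controller manifest stays the same):

apiVersion: apps/v1
kind: DaemonSet                             # was: Deployment (and drop replicas)
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ......
  template:
    spec:
      hostNetwork: true                     # bind 80/443 directly on each node
      dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS working with hostNetwork
      nodeSelector:
        ingress-ready: "true"               # still only on work01/work02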
Contact me
Wechat official account: zuolinux_com