In this series of articles we have been learning the roles, functions and usage scenarios of the various components in a K8S cluster. So which components do you think should be used for today's task: setting up domain name resolution for Web services on K8S?
If you read my last article, "Do you know the ways that K8S exposes its services?", you can probably guess Ingress. But why not NodePort? In today's article we will explore these questions in detail:
- Why NodePort is not a suitable way to expose a service for domain name resolution.
- How to use Ingress to expose Web services (there will be a Demo to demonstrate).
- How to make Ingress highly available in a production cluster.
Why is NodePort not suitable for domain name resolution
NodePort-type Services are the most primitive and easiest to understand way to expose services outside the cluster. NodePort, as the name implies, opens a specific port on every node (host or VM), and any traffic sent to that port is forwarded to the Service.
The NodePort Service works as follows:
In the figure above, following the flow arrows from bottom to top, traffic enters the cluster through NodeIP:NodePort. Traffic arriving on port 30001 of any of the three nodes is forwarded to the Service, which then sends it to the back-end endpoint Pods.
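As a minimal sketch of what such a Service looks like (the Service name, label and ports below are made up for illustration, with nodePort matching the 30001 in the figure):

# Minimal NodePort Service sketch: exposes Pods labeled app=web on port 30001 of every node.
# The name, label and ports are illustrative, not taken from a real cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # forwards to Pods carrying this label
  ports:
  - port: 8080          # Service (ClusterIP) port
    targetPort: 8080    # container port on the Pod
    nodePort: 30001     # port opened on every node (must be in 30000-32767)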
NodePort Service has the advantage of being simple, easy to understand, and accessible via IP+ port, but its disadvantages are obvious, such as:
- Each exposed service takes up a port on all nodes, which can be difficult to manage.
- The NodePort port range is fixed; only ports 30000-32767 can be used.
- If the Node IP address changes, the load balancing agent needs to change the back-end IP address.
How to expose Web services using Ingress
Among the K8S components, Ingress is not a kind of Service. Instead, it sits in front of multiple Services and acts as an "intelligent routing proxy", the entry point of the cluster.
To put an Ingress in front of a Service, an ingress-controller is required in the cluster. In a self-built K8S cluster, nginx-ingress is typically used as the controller; it runs an Nginx server as a reverse proxy that routes traffic to the back-end Services.
The Ingress enables routing of back-end services based on domain names and URL paths. For example, you can send everything on foo.yourdomain.com to foo Service and everything on the yourdomain.com/bar/ path to bar Service.
We can compare this diagram with the NodePort schematic above to see the difference. The Ingress resource declaration file should look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:  # default backend: traffic that matches none of the rules falls back to this Service
    serviceName: other
    servicePort: 8080
  rules:
  - host: foo.mydomain.com
    http:
      paths:
      - backend:
          serviceName: foo
          servicePort: 8080
  - host: mydomain.com
    http:
      paths:
      - path: /bar/
        backend:
          serviceName: bar
          servicePort: 8080
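If you save the manifest above as, say, my-ingress.yaml (a file name chosen here just for illustration), applying and inspecting it looks like this:

kubectl apply -f my-ingress.yaml
kubectl describe ingress my-ingress   # lists the hosts, paths and backend Services the rules map to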
Practice Ingress locally
That was a lot of theory, so let's walk through a simple Demo. I use the K8S cluster that comes with Docker Desktop, for no reason other than that it is the simplest to set up.
First, install the ingress-controller in the cluster. Here we use the Nginx Ingress Controller, which can be installed with a single command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml
Copy the code
After installation, three Pods related to the nginx controller appear in the ingress-nginx namespace of the local cluster:
➜ ~ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-n59dn        0/1     Completed   2          49d
ingress-nginx-admission-patch-hft52         0/1     Completed   3          49d
ingress-nginx-controller-68649d49b8-nlch7   1/1     Running     77         49d
The first two admission-related Pods set up the nginx controller's configuration and exit after they finish. The third Pod is the running nginx-controller.
When I first tried Ingress locally it didn't work, because the nginx controller didn't start properly: pulling its image (…io/ingress-nginx/controller:v0.46.0@sha256:…) is very slow, so it is best to pull the image locally with docker pull before installing the controller.
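For example, the pre-pull might look like the sketch below; the registry, tag and digest are placeholders and should be copied from the deploy.yaml of the controller version you are installing:

# Pull the controller image ahead of time so the controller Pod doesn't hang waiting for it.
# <registry>, <tag> and <digest> are placeholders — take the full image reference from deploy.yaml.
docker pull <registry>/ingress-nginx/controller:<tag>@sha256:<digest>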
After installing the Ingress controller locally, for demonstration purposes I reused a Service I had created earlier while setting up Nacos locally. Nacos is a component from Alibaba for configuration management and service registration; it has a Web management interface, which makes it convenient for the demo.
I want to access nacos-service through dev.nacos.com and have nacos-service route the traffic to the back-end Pod running Nacos. The Ingress can be declared like this:
# ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nacos-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: dev.nacos.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nacos-service
          servicePort: 8848
After applying the Ingress in the cluster:
# Execute the following two commands
# kubectl apply -f nacos-ingress.yaml
# kubectl get ingress
➜ ~ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nacos-ingress <none> dev.nacos.com localhost 80 54d
Bind the domain name in the local hosts file:
127.0.0.1 dev.nacos.com  # local K8S cluster IP is 127.0.0.1
Now you can access the Nacos service management interface through the domain name.
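To verify from the command line that both the hosts entry and the Ingress rule work, a quick check could look like this (a sketch; /nacos/index.html is the console path in the Nacos versions I have used):

# Should return the Nacos console page via the local ingress controller on port 80
curl -i http://dev.nacos.com/nacos/index.html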
GitHub repo: LearningKubernetes
https://github.com/kevinyan815/LearningKubernetes
If you have any questions, please leave a message or add my WeChat fSG1233210 to discuss with me.
How to make Ingress highly available in a production cluster
We have covered how Ingress exposes services and practiced exposing a service with Ingress locally. Some of you may be wondering: how is Ingress made highly available in a production cluster, and how should domain name resolution be bound to it? Let me describe a simple resolution chain.
In a typical production environment, the Ingress is the public traffic entry point, so it is under heavy load and must be deployed with multiple replicas. In general, a few nodes in the cluster are dedicated to running only the ingress-controller; the ingress-controller can be deployed onto those nodes as a DaemonSet, another load-balancing layer is placed in front of them, and the domain names are resolved via DNS to the load balancer's IP address.
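As a rough sketch of that idea (the node label node-role: ingress and the image reference below are placeholders; a real setup would adapt the controller's official deploy.yaml rather than start from scratch), the DaemonSet part could look like this:

# Sketch: run the ingress controller as a DaemonSet on nodes dedicated to ingress traffic.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      nodeSelector:
        node-role: ingress        # schedule only onto the dedicated ingress nodes (hypothetical label)
      hostNetwork: true           # expose ports 80/443 directly on the node for the upstream load balancer
      containers:
      - name: controller
        image: <registry>/ingress-nginx/controller:<tag>   # placeholder image reference
        ports:
        - containerPort: 80
        - containerPort: 443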
To keep different lines of business from affecting each other, the forwarding rules of different business lines should also be split into separate Ingress resources. This can be done with the kubernetes.io/ingress.class annotation we declared on the Ingress above.
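For illustration only (the class names, hosts and Service names below are made up), two business lines pinned to different controllers might be declared like this:

# Each Ingress is picked up only by the controller configured to watch its ingress.class value.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: shop-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx-shop"   # handled by the controller serving the "shop" line
spec:
  rules:
  - host: shop.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: shop-service
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pay-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx-pay"    # handled by a separate controller deployment
spec:
  rules:
  - host: pay.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pay-service
          servicePort: 8080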
Of course, every company's setup is different, and if the resolution chain also goes through protections such as a WAF it becomes more complicated. I am not a professional operations engineer and only know the general ideas, so if any expert readers are out there, please leave a message and let us make progress together.
Finally
In fact, I've been meaning to write about this topic for a long time, and it took me two or three attempts, on and off, to get the Ingress story straight here. I hope today's article helps you sort out the knowledge around Ingress. To truly master it, you still need to read the article carefully, practice the demo yourself, and review the earlier material about Service.
If you like the content of this article, please follow my official account (WeChat search: webmaster bi bi), like it and share it with more people.
Related reading:
Combine learning and practice to quickly master Kubernetes Service
Do you know any of the ways that K8S exposes its services?