1. Introduction
We deploy an application to a Kubernetes cluster. To reach the deployed Pods, we go through a Service, and to expose the application outside the cluster, we use an Ingress.
2. Request process
The flow looks simple, but a lot happens along the way. Let's break it down and walk through each step.
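At a high level, a request travels roughly like this (matching the setup described in the rest of this article):

Client --(domain name, HTTPS)--> Ingress --(serviceName/servicePort)--> Service (Cluster IP) --(iptables)--> Pod (targetPort)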
3. Service accesses the Pod
We should not expect Kubernetes Pods themselves to be robust; instead, we assume the containers in a Pod can fail and die for all kinds of reasons. Controllers such as Deployment keep the application as a whole robust by dynamically creating and destroying Pods. In other words, Pods are fragile, but the application is robust. The catch is that every time a Pod is destroyed and recreated, its IP address changes. Kubernetes' answer to this problem is to access Pods through a Service.
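You can see the changing IPs for yourself. A quick sketch, assuming the Pods live in the codereview namespace used later in this article:

# Note the current Pod IPs
kubectl get pods -n codereview -o wide
# Delete one Pod; its Deployment immediately recreates it
kubectl delete pod <pod-name> -n codereview
# The replacement Pod comes up with a different IP
kubectl get pods -n codereview -o wide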
1. The Service selects Pods
A Service selects Pods by label. In the Service's YAML manifest, a selector picks out the Pods:
---
apiVersion: v1
kind: Service
metadata:
  name: codereviewapi
  namespace: codereview
spec:
  ports:
  - name: codereviewapip
    port: 80
    protocol: TCP
    targetPort: 8888
  selector:
    app: codereviewapi
    group: codereview
  sessionAffinity: None
  type: ClusterIP
The manifest above selects the Pods labeled app=codereviewapi and group=codereview, and exposes port 80, forwarding traffic to container port 8888.
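To confirm which Pods a Service has actually selected, you can list its Endpoints (the names below are the ones from the manifest above):

# List the Pod IP:port pairs the Service currently routes to
kubectl get endpoints codereviewapi -n codereview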
2. The Cluster IP's underlying implementation
Each Service is assigned a virtual Cluster IP. The Pods selected by the Service can be regarded as one virtual cluster, and the Cluster IP is that cluster's address.
To find a Service's Cluster IP, run kubectl describe svc {ServiceName} -n {Namespace}, substituting the Service name and namespace.
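If you only want the IP itself, a jsonpath query also works; for the Service above this might look like:

# Print just the Cluster IP of the Service
kubectl get svc codereviewapi -n codereview -o jsonpath='{.spec.clusterIP}'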
3. Service load balancing
Traffic sent to the Cluster IP is forwarded to the Pods behind the Service. This is implemented mainly by iptables rules on each Kubernetes node (when kube-proxy runs in iptables mode).
Run the following on any node: iptables-save | grep {ClusterIP}
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.111.72.52/32 -p tcp -m comment --comment "codereview/codereviewapi:codereviewapip cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.111.72.52/32 -p tcp -m comment --comment "codereview/codereviewapi:codereviewapip cluster IP" -m tcp --dport 80 -j KUBE-SVC-HTMIAQY6LYBBMDLQ
The above two rules indicate:
- Traffic to the Cluster IP whose source address is outside the Pod network (not in 10.244.0.0/16) first jumps to KUBE-MARK-MASQ, which marks it for masquerading (SNAT).
- All traffic to the Cluster IP on port 80 then jumps to the chain KUBE-SVC-HTMIAQY6LYBBMDLQ.
Next, inspect the KUBE-SVC-HTMIAQY6LYBBMDLQ chain: iptables-save | grep KUBE-SVC-HTMIAQY6LYBBMDLQ
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWNXRCMT35EIIJA7
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -j KUBE-SEP-6M3F5MB5GS3SEKLX
The above two rules indicate:
- With probability 1/2, jump to the chain KUBE-SEP-SWNXRCMT35EIIJA7.
- Otherwise (the remaining 1/2), jump to the chain KUBE-SEP-6M3F5MB5GS3SEKLX.
Let's run an experiment and increase the number of Pod replicas to five, as sketched below.
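Assuming the Pods are managed by a Deployment also named codereviewapi (the Deployment itself is not shown in this article), the scaling step might look like:

# Scale the (assumed) Deployment behind the Service to 5 replicas
kubectl scale deployment codereviewapi -n codereview --replicas=5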
Then check the iptables rules corresponding to the Service again:
iptables-save | grep 10.111.72.52
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.111.72.52/32 -p tcp -m comment --comment "codereview/codereviewapi:codereviewapip cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.111.72.52/32 -p tcp -m comment --comment "codereview/codereviewapi:codereviewapip cluster IP" -m tcp --dport 80 -j KUBE-SVC-HTMIAQY6LYBBMDLQ
iptables-save | grep KUBE-SVC-HTMIAQY6LYBBMDLQ
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-SWNXRCMT35EIIJA7
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-KBOJZ2BRSNPOQUXM
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-PNCHQMQ4RHNHGK4S
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-GDQMZT4WINOZF4G6
-A KUBE-SVC-HTMIAQY6LYBBMDLQ -m comment --comment "codereview/codereviewapi:codereviewapip" -j KUBE-SEP-6M3F5MB5GS3SEKLX
You can see that the KUBE-SVC-HTMIAQY6LYBBMDLQ chain has changed: there are now five rules, each with a different probability. Conclusion: iptables forwards traffic for a Service to the backend Pods with a random but statistically even, round-robin-like load balancing policy.
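The probabilities look uneven, but the rules cascade: each rule only sees the traffic that earlier rules did not match, so every Pod ends up with the same share. A quick check with the numbers above:

P(pod1) = 0.2
P(pod2) = (1 - 0.2) * 0.25   = 0.2
P(pod3) = (1 - 0.4) * 0.3333 = 0.2
P(pod4) = (1 - 0.6) * 0.5    = 0.2
P(pod5) = 1 - 0.8            = 0.2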
4. Ingress accesses the Service
1. DNS resolves the Service
Inside Kubernetes, a Pod can be reached through its Service's Cluster IP, but an IP address is hard to remember, so Kubernetes provides the CoreDNS component to resolve Service names.
CoreDNS is a DNS server. Each time a Service is created, a DNS record is added for it, and Pods in the cluster can reach the Service via <SERVICE_NAME>.<NAMESPACE_NAME> (the full form is <SERVICE_NAME>.<NAMESPACE_NAME>.svc.cluster.local).
You can inspect the resolution with nslookup from a Pod inside the cluster.
Command: nslookup codereviewapi
Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      codereviewapi.codereview.svc.cluster.local
Address:   10.111.72.52
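Once the name resolves, any Pod in the cluster can call the Service by name instead of by Cluster IP, for example:

# Call the Service by its DNS name from inside a Pod
curl http://codereviewapi.codereview.svc.cluster.local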
2. Ingress routes to the Service
In the Ingress's YAML manifest, serviceName and servicePort bind the domain name to the corresponding Service and port:
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: codereviewapi
  namespace: codereview
spec:
  rules:
  - host: xxx.xxx.xxx.cn
    http:
      paths:
      - backend:
          serviceName: codereviewapi
          servicePort: codereviewapip
        path: /
  tls:
  - hosts:
    - xxx.xxx.xxx.cn
    secretName: tls
host is the domain name, serviceName and servicePort identify the backend Service and its port, and the tls section configures the HTTPS certificate, which is stored in the Secret named tls.
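For the tls section to work, the certificate must already exist as a Secret in the same namespace. A minimal sketch, assuming the certificate and key are available as local files tls.crt and tls.key:

# Create the TLS Secret referenced by secretName: tls
kubectl create secret tls tls --cert=tls.crt --key=tls.key -n codereview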
5. Summary
This article walked through how, in Kubernetes, requests to a Service's Cluster IP are load balanced across Pods by iptables rules, and how Pods can also be reached by resolving the Service name through DNS. Hopefully it helps you understand how Kubernetes works under the hood.