1. How do I access a Pod through a Service
We know that a Pod's IP is not fixed: if a Pod fails, it is immediately replaced by a new Pod, which is assigned a new IP. Note that these addresses are virtual IPs. So how can Pods keep serving traffic reliably while their IPs keep changing?
The answer is: through a Service.
Create a Service
In K8s, the Pods created by a Deployment controller are exposed to callers through a Service.
Based on the previous example, we add a Service resource (service.yaml):
# The API version of the Service
apiVersion: v1
# Resource type
kind: Service
# The namespace and name of the Service
metadata:
  namespace: k8s-test
  name: service
spec:
  # Map the Service's port 8080 to port 30000 on the node
  ports:
  - name: "service-port"
    targetPort: 8080
    port: 8080
    nodePort: 30000
  # Pods with the label app=service-test are the backends of this Service
  selector:
    app: service-test
  # Expose the Service outside the cluster
  type: NodePort
Run:
kubectl apply -f service.yaml
View the Service:
[centos@wunaichi k8stest]$ kubectl get service -n k8s-test
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service   NodePort   10.111.35.139   <none>        8080:30000/TCP   18d
The Service is assigned the cluster IP 10.111.35.139, and the port mapping is 8080:30000.
View the relationship between the Service and its Pods:
[centos@wunaichi k8stest]$ kubectl describe service service -n k8s-test
Name:                     service
Namespace:                k8s-test
Labels:                   <none>
Annotations:              <none>
Selector:                 app=service-test
Type:                     NodePort
IP:                       10.111.35.139
Port:                     service-port  8080/TCP
TargetPort:               8080/TCP
NodePort:                 service-port  30000/TCP
Endpoints:                10.44.0.3:8080,10.44.0.4:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
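The Endpoints line above can also be queried on its own; a quick check, using the namespace and Service name from this example:

kubectl get endpoints service -n k8s-test
# The ENDPOINTS column should list the same two addresses: 10.44.0.3:8080,10.44.0.4:8080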
Endpoints lists the IP addresses and ports of the two Pods. We know that a Pod's IP is configured inside its container, but where is the Service's ClusterIP configured? And how is the ClusterIP mapped to the Pod IPs?
The answer is kube-proxy + iptables.
2. How Service works
A Service is implemented by the kube-proxy component together with iptables: the iptables rules map the Service's IP to the Pods' IPs.
View the iptables rules on your machine:
sudo iptables-save
Let’s take the key parts and read them
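As a rough sketch, the two relevant entries in the nat table's KUBE-SERVICES chain look something like this (the chain name, Service IP and port come from this example; the source CIDR in the first rule is a placeholder for your cluster's Pod network, and comments will vary by cluster):

-A KUBE-SERVICES ! -s 10.32.0.0/12 -d 10.111.35.139/32 -p tcp -m comment --comment "k8s-test/service:service-port cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.111.35.139/32 -p tcp -m comment --comment "k8s-test/service:service-port cluster IP" -m tcp --dport 8080 -j KUBE-SVC-IUAUJYFCIUUMOL2O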
The first rule jumps to KUBE-MARK-MASQ, which only marks the packet so it can be SNATed later; let's focus on the second one.
The second rule: if the destination is 10.111.35.139 (the Service IP) and the destination port is 8080, the packet jumps to KUBE-SVC-IUAUJYFCIUUMOL2O.
Let's keep following the jumps: KUBE-SVC-IUAUJYFCIUUMOL2O jumps to KUBE-SEP-3C4G3TP5TJKPOH3O with a probability of 50% and to KUBE-SEP-2ATDMDQBZCO4RJUM with a probability of 50%.
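Sketched out, the KUBE-SVC chain looks roughly like this (the statistic match gives the first rule a 50% chance; the second rule is the fallback):

-A KUBE-SVC-IUAUJYFCIUUMOL2O -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-3C4G3TP5TJKPOH3O
-A KUBE-SVC-IUAUJYFCIUUMOL2O -j KUBE-SEP-2ATDMDQBZCO4RJUM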
Take the jump to KUBE-SEP-3C4G3TP5TJKPOH3O as an example.
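Its KUBE-SEP chain looks roughly like this (again a sketch; the DNAT target 10.44.0.3:8080 matches one of the Endpoints listed earlier):

-A KUBE-SEP-3C4G3TP5TJKPOH3O -s 10.44.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3C4G3TP5TJKPOH3O -p tcp -m tcp -j DNAT --to-destination 10.44.0.3:8080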
You can see that KUBE-SEP-3C4G3TP5TJKPOH3O rewrites the destination address via DNAT to 10.44.0.3:8080, which is the IP address and port of one of the Pods.
DNAT here replaces the incoming destination IP and port with new ones, namely the IP and port of the Pod behind the Service.
At this point, the request to the Service has been handed over to the corresponding Pod for processing.
So the iptables rules forward traffic destined for the Service to the backend Pods with equal probability, a load-balancing strategy similar to round-robin. These rules are generated and maintained on each host by kube-proxy, which listens for Pod change events.
kube-proxy watches the Kubernetes control plane for Service and Endpoints objects being added and removed. For each Service, it installs iptables rules that capture traffic to the Service's clusterIP and port and redirect that traffic to one of the Service's backend Pods. For each Endpoints object, it installs iptables rules that select a backend Pod. In iptables mode, kube-proxy's default policy is to pick a backend at random.
As you can see, implementing a Service with iptables requires kube-proxy to set up quite a few iptables rules on each host. Moreover, kube-proxy has to keep refreshing these rules in its control loop to make sure they stay correct.
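To get a feel for how many rules a single Service generates on a node, you can grep the nat table for the Service's namespace and name (assuming kube-proxy's usual namespace/name:port comment format):

sudo iptables-save -t nat | grep "k8s-test/service"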
3. How to access a Service from outside the cluster
Besides being accessed from within the cluster, in many cases we also want to expose a Service to the outside world.
A Service serves requests through its cluster IP, but only cluster nodes and Pods can reach that address. So how does an external client access the Service? That is what NodePort is for. Let's go back to the YAML file from the first section:
# The API version of the Service
apiVersion: v1
# Resource type
kind: Service
# The namespace and name of the Service
metadata:
  namespace: k8s-test
  name: service
spec:
  # Map the Service's port 8080 to port 30000 on the node
  ports:
  - name: "service-port"
    # The port the Pod listens on
    targetPort: 8080
    # The port the Service listens on inside the cluster
    port: 8080
    # The port exposed on each node
    nodePort: 30000
  # Pods with the label app=service-test are the backends of this Service
  selector:
    app: service-test
  # Expose the Service outside the cluster
  type: NodePort
You can see type: NodePort, together with nodePort: 30000, the port that each node listens on.
Ultimately, requests received on the node's port and on the ClusterIP's port are both forwarded through iptables to the Pod's targetPort.
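As a quick check (here <node-ip> is a placeholder for the address of any one of your cluster nodes):

# From outside the cluster: hit any node on the nodePort
curl http://<node-ip>:30000
# From a node or a Pod inside the cluster: hit the ClusterIP on the Service port
curl http://10.111.35.139:8080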
How the external port takes effect
Let’s look at the iptables rules again
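As with the cluster IP, here is a sketch of what the NodePort rules in the nat table look like (the chain name and port are from this example; comments will vary by cluster):

-A KUBE-NODEPORTS -p tcp -m comment --comment "k8s-test/service:service-port" -m tcp --dport 30000 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "k8s-test/service:service-port" -m tcp --dport 30000 -j KUBE-SVC-IUAUJYFCIUUMOL2O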
You can see that port 30000 is matched: requests arriving on this port jump to KUBE-SVC-IUAUJYFCIUUMOL2O, which then jumps to one of the Pods with equal probability, exactly as with access through the cluster IP.
By now it should be clear how an external client reaches a Pod.
Reference
Kubernetes official documentation – Services
5 Minutes a Day to Play with Kubernetes