What is a Service?


A Service is an abstract way to expose an application running on a set of Pods as a network service.

In a nutshell, Kubernetes provides the Service object as a way to access Pods. As we said in K8s Network Model and Cluster Communication, every Pod (the smallest scheduling unit) in a Kubernetes cluster has its own IP address, so wouldn't it be easy to just access a Pod directly by its IP?

In fact, it is not. First, Pods in Kubernetes are not persistent: a Pod that is destroyed and recreated gets a new IP, so it is clearly unreasonable to expect clients to keep chasing a changing IP. Second, we need load balancing across multiple replicas. That is where the Service comes in.

So today we’re going to take a look at Service and see how it works.

Service, Endpoints, and Pods


When we create or modify a Service object through the API server, the Endpoints controller (watching via the informer mechanism) observes the Service object and creates an Endpoints object based on the Service's selector. This object records the Pod IPs and container ports and is stored in etcd, so the Service only needs to look up the Endpoints object of the same name to know its Pods' information.

Take a look at the image below:

Let's walk through an example, starting with the usual creation of a Deployment:

#deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mirrorgooglecontainers/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP

serve_hostname is an official debug image provided by Kubernetes: a web server that simply returns its own hostname. So here we create three Pods labeled app=nginx that return their hostname when accessed on port 9376.
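For instance, you could apply the manifest and check the Pods along these lines (the file name and label match the example above; the actual Pod names will differ per cluster):

kubectl apply -f deployment.yml
kubectl get pods -l app=nginx -o wide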

Next comes the Service manifest, where we specify the selector app=nginx:

#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-demo
spec:
  selector:
    app: nginx
  ports:
  - name: default
    protocol: TCP
    #service port
    port: 80
    #container port
    targetPort: 9376

After creating it, we get a Service whose cluster IP is 10.96.148.206.
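To verify, something like the following would show the assigned cluster IP (output omitted; the IP is allocated by the cluster):

kubectl apply -f service.yaml
kubectl get svc service-demo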

Once the Pods start successfully, an Endpoints object with the same name as the Service is automatically created, holding the addresses of all three Pods.
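You can inspect it with something like the command below; it should list the three Pod IPs with port 9376:

kubectl get endpoints service-demo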

If the Service does not specify a selector, no Endpoints object is created automatically; you must create one by hand to map network addresses to the Service, for example:

apiVersion: v1
kind: Endpoints
metadata:
  name: service
subsets:
- addresses:
  - ip: 10.96.148.206
  ports:
  - port: 9376

When we repeatedly access the Service's cluster IP:

# curl 10.96.148.206:80
deployment-demo-7d94cbb55f-674ns
# curl 10.96.148.206:80
deployment-demo-7d94cbb55f-674ns
# curl 10.96.148.206:80
deployment-demo-7d94cbb55f-674ns
# curl 10.96.148.206:80
deployment-demo-7d94cbb55f-lfrm8
# curl 10.96.148.206:80
deployment-demo-7d94cbb55f-8mmxb

You can see that the requests are routed to the backend Pods, each returning its hostname, and the load balancing mode is round robin.

How does traffic reach a Pod through the Service?

Service and kube-proxy


When it comes to traffic, of course, kube-proxy takes the stage!

kube-proxy is a network proxy that runs on each node in the cluster and is part of the implementation of the Kubernetes Service concept. It handles each host's subnet and exposes services to the outside world, forwarding requests across the cluster's various isolated networks to the correct Pod/container.

kube-proxy maintains network rules on nodes. These rules allow network communication to your Pods from network sessions inside or outside the cluster.

As shown below:

kube-proxy uses informers to watch Service and Endpoints objects, and then uses the cluster IP and port information on the Service to create iptables NAT rules for forwarding, or uses the IPVS kernel module to create virtual servers. Traffic sent to the cluster IP is then forwarded to the backend Pods.

iptables mode

First, look at the KUBE-SERVICES chain that kube-proxy hooks into the nat table's OUTPUT chain:

iptables -nvL OUTPUT -t nat

In the KUBE-SERVICES chain, there is a rule matching destination 10.96.148.206 (the cluster IP) that jumps to the chain KUBE-SVC-EJUV4ZBKPDWOZNF4:

iptables -nvL KUBE-SERVICES -t nat |grep service-demo

Looking at that chain, traffic jumps to one of three endpoint chains, each with a 1/3 chance:

iptables -nvL KUBE-SVC-EJUV4ZBKPDWOZNF4 -t nat

Finally, in the KUBE-SEP-BTFJGISFGMEBGVUF chain we find the DNAT rule:

iptables -nvL KUBE-SEP-BTFJGISFGMEBGVUF -t nat

The request is DNATed to 100.101.184.61:9376, which is one of our Pods.
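Putting the three steps together, the rules kube-proxy programs look roughly like this simplified sketch (not literal output from the cluster; the statistic-module probabilities are how traffic gets an equal split, and the second and third endpoint chain names here are hypothetical placeholders):

# traffic to the cluster IP jumps into the Service chain
iptables -t nat -A KUBE-SERVICES -d 10.96.148.206/32 -p tcp --dport 80 -j KUBE-SVC-EJUV4ZBKPDWOZNF4
# pick one of three endpoint chains: 1/3 overall, then 1/2 of the rest, then the remainder
iptables -t nat -A KUBE-SVC-EJUV4ZBKPDWOZNF4 -m statistic --mode random --probability 0.33333 -j KUBE-SEP-BTFJGISFGMEBGVUF
iptables -t nat -A KUBE-SVC-EJUV4ZBKPDWOZNF4 -m statistic --mode random --probability 0.50000 -j KUBE-SEP-PLACEHOLDER2
iptables -t nat -A KUBE-SVC-EJUV4ZBKPDWOZNF4 -j KUBE-SEP-PLACEHOLDER3
# the chosen endpoint chain DNATs to its Pod
iptables -t nat -A KUBE-SEP-BTFJGISFGMEBGVUF -p tcp -j DNAT --to-destination 100.101.184.61:9376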

IPVS mode

Compared with iptables mode, IPVS mode works in the kernel and delivers better performance when synchronizing proxy rules, along with higher network throughput and better scalability for large clusters.

To work in IPVS mode, after the Service above is created, kube-proxy first creates a virtual network interface named kube-ipvs0 on the host and assigns the Service VIP to it as an IP address, as shown in the figure.
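You could confirm this with a standard iproute2 command along these lines (output omitted):

ip addr show kube-ipvs0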

Then, through Linux's IPVS module, kube-proxy sets up a virtual server on that IP and adds the three Pods as its real servers, with round robin (rr) as the load-balancing policy between them.

Check it with ipvsadm:

ipvsadm -ln | grep -C 5 10.96.148.206

You can see that the real servers behind the virtual server are the Pod addresses, so traffic is directed to the destination Pods.
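Expressed as plain ipvsadm commands, the equivalent of what kube-proxy sets up would look roughly like this (only 100.101.184.61 comes from the iptables example above; the other two Pod IPs are hypothetical placeholders):

# create the virtual server on the Service VIP with round-robin scheduling
ipvsadm -A -t 10.96.148.206:80 -s rr
# add the three Pods as real servers in NAT (masquerade) mode
ipvsadm -a -t 10.96.148.206:80 -r 100.101.184.61:9376 -m
ipvsadm -a -t 10.96.148.206:80 -r <pod-ip-2>:9376 -m
ipvsadm -a -t 10.96.148.206:80 -r <pod-ip-3>:9376 -m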

To sum up, this article looked at the relationship between Service, Endpoints, and Pods, and at how kube-proxy's two modes use the Service IP to create iptables and IPVS rules that forward traffic to the Pods.


I hope this article is helpful to you. Please correct me if there are any mistakes in the content.

You are free to reprint, modify, and publish this article without my consent. [WeChat official account: Container Cloud Practice]