
【K8S Series】K8S Learning 19: Service (2)

In the previous article we took a quick look at how Services are used in K8S. Today we will dig into some of the details. Let's get started.

Why is there a Service?

A Service lets external clients connect to the application and have requests handled normally without caring how many servers there are or how many pods sit behind the Service.

Let's take an example: client -> front end -> back end

The client sends its request to the front-end Service, which forwards it to any of the front-end pods; a front-end pod then calls the back-end Service, which finally routes the request to any of the back-end pods.

At this point the client does not need to know which pod serves the request, nor the pods' addresses; it only needs the address and port of the front-end Service.

Create a Demo service

We can create a Service with a simple YAML file and deploy it. As before, we use labels to control which pods the Service resource targets.

For example, our minikube environment has three pods labeled app=xmt-kubia.

We can write the Service list like this:

kubia-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-service
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: xmt-kubia

The selector works the same way as in a ReplicaSet: the Service exposes port 80 and maps it to port 8080 on the pods.

We can also expose more than one port, but when exposing multiple ports each entry must be given a name.
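For example, a multi-port Service could look like the sketch below. Note the second port here (443 -> 8443) is purely illustrative; our demo app only listens on 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-multiport    # hypothetical name, for illustration only
spec:
  ports:
  - name: http             # names are required once more than one port is exposed
    port: 80
    targetPort: 8080
  - name: https            # assumed second port, not part of our demo app
    port: 443
    targetPort: 8443
  selector:
    app: xmt-kubia
```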

Run kubectl create -f kubia-service.yaml to deploy the Service resource.

Running kubectl get svc shows the following list of Service resources:

We can verify the Service works by accessing its cluster IP from inside a pod.

Pick a pod and run the curl command inside it to request the HTTP endpoint:

kubectl exec kubia-rs-sxgcq -- curl -s 10.106.228.254

The request goes through successfully.

Of course, we can also open a shell inside the pod and access the Service's address from there:

kubectl exec -it kubia-rs-sxgcq -- /bin/sh

The cluster IP is an internal, virtual IP that can only be used for access within the cluster. The Service we created serves other pods inside the cluster: accessing its IP, 10.106.228.254, reaches the set of pods the Service manages.
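For reference, ClusterIP is the default Service type. Spelling it out explicitly in the manifest above would look like this fragment (a sketch; behavior is identical to omitting the field):

```yaml
spec:
  type: ClusterIP    # the default; the service is reachable only inside the cluster
  ports:
  - port: 80
    targetPort: 8080
```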

From the logs we can see the request really does succeed: a pod calls the Service, the Service forwards the traffic to one of the pods it controls, and we get a response. A diagram makes this easier to follow.

Endpoint

Services and pods are not wired together directly; a key resource sits between them: the Endpoints.

Endpoints is a resource that holds the list of IP addresses and ports backing a service.

kubectl get endpoints kubia-service

Endpoints: 172.17.0.6:8080,172.17.0.7:8080,172.17.0.8:8080

As seen above, the Endpoints resource lists the IP addresses and ports behind the service.

When a client connects to the service, the service proxy selects one of these IP and port pairs to send the request
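K8S normally maintains the Endpoints automatically from the Service's selector, but the relationship becomes clearer when you see that a selector-less Service can be paired with a hand-written Endpoints object of the same name. This is a sketch reusing the pod IPs from the listing above; the name manual-svc is invented for illustration:

```yaml
# A Service with no selector...
apiVersion: v1
kind: Service
metadata:
  name: manual-svc
spec:
  ports:
  - port: 80
---
# ...gets its backends from an Endpoints object with the same name
apiVersion: v1
kind: Endpoints
metadata:
  name: manual-svc
subsets:
- addresses:
  - ip: 172.17.0.6
  - ip: 172.17.0.7
  - ip: 172.17.0.8
  ports:
  - port: 8080
```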

Exposing services

This IP address is a virtual IP that can only be accessed from within the cluster, together with the corresponding port.

So, how do we expose the port of the service for external clients or external services to call?

The request flow looks something like this:

Service resources can be exposed in the following three ways:

  • NodePort
  • LoadBalancer
  • Ingress

Each of the three ways has its advantages and disadvantages. Let’s take a look at them in detail

NodePort services

With NodePort, every cluster node opens the same port. An external client can reach our exposed service via any node's IP plus that port, and the traffic is then routed to any one of the pods the Service controls.

We can look at the current service type in my experimental environment

The services above are of type ClusterIP, so we either change the existing service to NodePort or create a new service with its type set to NodePort.

Write a NodePort Service

kubia-service-nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubia-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31200
  selector:
    app: xmt-kubia
  • This is a basic NodePort Service resource listing
    • The service type is set to NodePort
    • Note that ports is a list: multiple port mappings can be configured and multiple ports exposed. As mentioned earlier, when exposing multiple ports each entry needs a name, and the name must follow the K8S naming conventions
    • The exposed port is 31200 (external clients can reach the pods managed by the service, which listen on port 8080, via node IP + 31200)
    • If nodePort is not specified, K8S randomly assigns a port from the default range 30000-32767. The default range can be changed in /etc/kubernetes/manifests/kube-apiserver.yaml
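Changing that range means adding a flag to the API server command in the manifest. A sketch of only the relevant fragment (the range 20000-40000 is an arbitrary example; all other existing flags stay as they are):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-40000   # example custom NodePort range
```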

Check the result after running

At this point the service exposes port 31200, and we can test our cloud-server lab environment with telnet IP port from a Windows machine. Just remember to open port 31200 in the cloud server's firewall.

An external client requests a worker node's IP plus the exposed port, and the flow looks like this:

External client traffic arrives at port 31200 on a node and is redirected to one of the pods managed by the Service.

LoadBalancer services

What are the drawbacks of the NodePort approach?

NodePort exposes the service's port, but to access it we must pick a specific worker node's IP + port. If that node fails, external client requests get no response, unless the client knows the IPs of the other worker nodes.

So this is where the load balancer comes in

To create a LoadBalancer service, change type: NodePort to type: LoadBalancer in the NodePort resource listing above.
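Put together, the manifest would look like this sketch (the name kubia-svc-lb is chosen here for illustration, not taken from the original files):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia-svc-lb
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: xmt-kubia
```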

LoadBalancer

A load balancer placed in front of the nodes ensures that requests from external clients are always sent to a healthy node and never to a failed one.

With a LoadBalancer, the flow above looks like this:

At this point the client only needs a fixed IP + port, and its requests always land on a pod on a healthy worker node.

Ingress

Now let’s look at a third way to expose a service, using the Ingress controller

Why ingress?

Each LoadBalancer needs its own public IP address, and that IP can serve only one Service.

So with multiple services we would need to deploy multiple LoadBalancer load balancers, and this is where ingress comes into play.

What can Ingress do?

Ingress can front multiple services with a single public IP address. Like Nginx, it routes each incoming request to the right service.

A simple process might look like this:

In this way, ingress is much more convenient than LoadBalancer. Still, which approach to use depends on your needs.

Write a demo of ingress

Before writing the Ingress demo, let’s make sure that the Ingress controller is enabled in our environment

minikube addons list

It is enabled in my environment. If it is not in yours, enable the ingress controller with:

minikube addons enable ingress

Once it is enabled, we can check the ingress pods:

kubectl get po --all-namespaces

Start creating the Ingress resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-nodeport-svc
            port:
              number: 80

  • For the Ingress resource, apiVersion is networking.k8s.io/v1. On K8S versions before 1.17, apiVersion needs to be extensions/v1beta1 instead
  • ingressClassName: nginx specifies that the rules are handled by the nginx controller
  • rules is a list and can contain multiple entries, which is exactly how ingress provides access to multiple services through multiple routes
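To illustrate that last point, an Ingress with several rules might look like this sketch; the second host other.example.com and the service other-svc are invented here purely for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-rule-ingress   # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: hello.example.com            # first route -> our kubia service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-nodeport-svc
            port:
              number: 80
  - host: other.example.com            # assumed second route -> another service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: other-svc
            port:
              number: 80
```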

Create the ingress resource and view the effect

Now, on any client machine that can reach the K8S cluster, we can add a local hosts entry pointing hello.example.com at the ingress IP address.

Once that is configured, the client can happily send requests to hello.example.com.

How does an external client access the ingress address?

As we can see in the figure above, an external client accesses the domain name hello.example.com

  • External clients first go to DNS and get the IP address of hello.example.com (ingress controller IP address)
  • The client sends an HTTP request to the ingress controller. The Host header carries the domain name, and the controller looks up the service mapped to that domain (in the Ingress resource) to find the list of endpoint IPs and ports
  • Finally, the Ingress controller directs traffic directly to any pod in the set controlled by the Service

Note that the ingress controller sends traffic to the pod directly; it does not forward it through the Service.

That's all for today. We're all learning; if anything here is off, corrections are welcome.

Feel free to like, follow, and bookmark.

Friends, your support and encouragement are what keep me sharing and improving.

All right, that’s it for this time

Technology is open, and so should our mindset be. Embrace change, live in the sunlight, and keep moving forward.

I am Nezha, welcome to like, see you next time ~