Hello everyone, I'm Xiao Cai, a guy in the Internet industry who aspires to be more than a rookie. Soft on the outside but firm on the inside; likes are welcome, and lurkers are too! Remember to give me a like, a favorite, and a share!

This article mainly introduces networking in K8s: Service and Ingress

Refer to it if necessary

If you find it helpful, don't forget to give it a like, a favorite, and a share!

My WeChat official account, Xiao Cai Liang, is now live. If you haven't followed it yet, remember to follow me!

In previous posts we have already covered Namespace, Pod, PodController, and Volume in K8s, and hopefully you got a lot out of them. Today we continue our K8s class; in this section we'll talk about how to access a service once it has been built in K8s.

First, we need to understand what Service and Ingress are. In simple terms, both components handle traffic load balancing. So what is traffic load balancing? Once we have deployed our application in the cluster via Pods, what comes next? Letting users access our application, of course. That is the most important part; if you deploy an application that users cannot reach, it's useless ~

I. Service

In K8s, the Pod is the carrier of the application, and we can access an application through its Pod IP. But we have already made it clear that Pods have a life cycle: once a Pod fails, the Pod controller destroys it and rebuilds a new one, with a new IP. To solve this problem, K8s introduced the Service resource. Through this resource, multiple Pods are integrated behind a unified entry address; accessing that entry address accesses the Pods behind the Service!

A Service does not appear out of thin air. Do you remember the key component on each Node: kube-proxy? Let's look at an old picture to refresh our memory:

This diagram should be familiar from previous posts. Yes, kube-proxy plays the key role here. Each Node runs a kube-proxy process. When a Service is created, the information of the created Service is written to etcd through the api-server. kube-proxy detects such Service changes through the watch mechanism and then converts the latest Service information into the corresponding access rules.

At this point you should have a general idea of what a Service does; let's take a closer look at it.

1) Working mode

Kube-proxy supports three working modes, as follows:

1. userspace

This mode is stable, but inefficient! In userspace mode, kube-proxy creates a listening port for each Service. Requests sent to the cluster IP are redirected to this port by iptables rules; kube-proxy then selects a Pod according to the LB algorithm to provide the service and establishes the connection.

In this mode, kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds data copies between kernel space and user space, so efficiency is low.

2. iptables

In iptables mode, kube-proxy creates iptables rules for each Pod behind a Service, redirecting requests sent to the cluster IP directly to a Pod IP. In this mode kube-proxy does not act as a layer-4 load balancer; it only creates the iptables rules. The advantage is better efficiency than userspace mode, but it provides no flexible LB strategy and cannot retry when a backend Pod is unavailable.

3. ipvs

Similar to the iptables mode, kube-proxy monitors Pod changes and creates the corresponding IPVS rules. However, IPVS forwarding is more efficient than iptables, and IPVS supports more LB algorithms.

Practice

Those are the three working modes. Let's take a quick look at what IPVS can do. Start by preparing a resource manifest:
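The manifest itself isn't reproduced here, so below is a minimal sketch of what it might look like; the names, labels, and nginx image are assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment  # assumed name
  namespace: cbuc-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1  # assumed image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx  # assumed name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  type: ClusterIP
  ports:
  - port: 80        # Service port
    targetPort: 80  # container port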

The top half of this manifest creates a Pod controller (a Deployment), and the bottom half creates a Service.

To see the IPVS rule policy, enter the ipvsadm -Ln command:
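The output looks roughly like this (illustrative only; the cluster IP and Pod IPs depend on your cluster):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.108.230.12:80 rr
  -> 10.244.1.73:80               Masq    1      0          0
  -> 10.244.1.74:80               Masq    1      0          0
  -> 10.244.2.63:80               Masq    1      0          0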

Here 10.108.230.12 is the Service's cluster IP; kube-proxy distributes requests to one of the Pods according to the rr (round-robin) policy. These rules are generated on all nodes in the cluster at the same time, so the Service can be accessed from any node!

Note that this mode requires the IPVS kernel module to be installed on the node; otherwise kube-proxy falls back to iptables.

To enable ipvs:

#Edit the kube-proxy ConfigMap and set mode: "ipvs"
kubectl edit cm kube-proxy -n kube-system

Save the configuration and exit (:wq), then delete the existing kube-proxy Pods so they restart with the new mode, and verify the rules:

#Recreate the kube-proxy Pods and check the IPVS rules
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
ipvsadm -Ln

2) Service usage

We have covered kube-proxy's working modes; now let's move on to using Services. In the simple practice above we created a Deployment and a Service, and we could then access the application via serviceIp + port or nodeIp + nodePort.

However, that alone is not enough to master Services. Services come in five types, introduced one by one below.

1. ClusterIP

Let's look at the resource manifest for a ClusterIP Service:
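Since the original manifest is shown as an image, here is a minimal sketch; the selector label is the assumption from earlier, and the cluster IP is pinned to 10.96.10.10 so it matches the curl test below (omit clusterIP to let K8s assign one):

apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip  # assumed name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.96.10.10  # omit to auto-assign
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80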

After creating it, we can test access via clusterIp + port.

Looking at the IPVS rules, we can see that the Service forwards requests to the three corresponding Pods.

Next we can use the describe subcommand to see what information the Service carries:
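For example (the Service name and namespace follow the assumed manifest above):

kubectl describe svc svc-clusterip -n cbuc-test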

Two fields here are new to us: Endpoints and Session Affinity. What are they?

Endpoint

An Endpoint is a resource object in K8s, stored in etcd, which records the access addresses of all the Pods corresponding to a Service. It is generated according to the selector in the Service configuration file. A Service is backed by a group of Pods, which are exposed through Endpoints; the Endpoints are the collection of addresses and ports that actually implement the Service. In layman's terms, an Endpoint is the bridge between a Service and its Pods.

Since it's a resource object, we can view it with kubectl get endpoints -n cbuc-test:

Load distribution

We have now successfully accessed Pod resources through the Service, so let's make some changes: exec into each of the three Pods and edit the /usr/share/nginx/html/index.html file:

# pod01
Pod01: IP - 10.244.1.73
# pod02
Pod02: IP - 10.244.1.74
# pod03
Pod03: IP - 10.244.2.63

Then curl the Service address several times in a row:

curl 10.96.10.10:80

Did you notice how the responses rotate among the Pods? That is the load distribution strategy at work. For Service access, K8s provides two load distribution policies:

  • If no distribution policy is defined, kube-proxy's default policy is used, such as random or round-robin
  • Client-address-based session persistence: all requests from the same client IP are forwarded to a fixed Pod. This uses something we haven't seen before: sessionAffinity

Earlier, when we used ipvsadm -Ln to look at the distribution policy, there was an rr field in the output. Yes, rr means round-robin.

If we want to enable the session-persistence distribution policy, we simply add sessionAffinity: ClientIP to the spec.
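A sketch of the change, reusing the assumed Service spec from above:

spec:
  sessionAffinity: ClientIP  # requests from the same client IP stick to one Pod
  selector:
    app: nginx-pod
  clusterIP: 10.96.10.10
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80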

If you look at the distribution policy again with ipvsadm -Ln, you can see that the result has changed: the rule now carries a persistent flag.

Let’s take a quick test:

And with that, we've implemented the session-persistence (sticky session) distribution strategy!

Note: ClusterIP Services do not support external access. That means accessing them through a browser outside the cluster won't work; they can only be accessed from within the cluster.

2. Headless

Many products need to support customization, and Services are no exception. In some scenarios, developers don't want to use the load-balancing functionality a Service provides; they want to control the load-balancing policy themselves. K8s supports this too, with Headless Services: such a Service is not allocated a cluster IP and can only be accessed through the Service's domain name.

Let's look at a Headless Service resource manifest template:
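A minimal sketch under the same assumptions as before; the key line is clusterIP: None:

apiVersion: v1
kind: Service
metadata:
  name: svc-headless  # assumed name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  clusterIP: None  # this is what makes the Service headless
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80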

The only difference from a ClusterIP Service is the clusterIP: None property.

After creation, we can see that no cluster IP is assigned to the Service,

but its Endpoints are in place. We can then exec into any Pod and check how the Service domain name resolves:
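For example, using the assumed Service name above (10.96.0.10 is the cluster DNS address, as in the dig example later in this post):

dig @10.96.0.10 svc-headless.cbuc-test.svc.cluster.local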

You can see that the domain name resolves (for a Headless Service, directly to the Pod IPs). The default domain name format is serviceName.namespace.svc.cluster.local.

3. NodePort

The two Service types above can only be accessed from within the cluster, but we deploy services so that users can use them from outside. That brings us to the Service type we actually created earlier: the NodePort Service.

This type of Service works by mapping the Service's port to a port on each Node, so it can be accessed from outside via nodeIp + nodePort.

With the schematic making sense, let's see how to create one from a resource manifest:
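A minimal sketch under the same assumptions; nodePort is optional, and if set it must fall in the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport  # assumed name
  namespace: cbuc-test
spec:
  selector:
    app: nginx-pod
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001  # omit to let K8s pick a port automatically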

We create the Service from the above manifest and then access it:

We can see that it is accessible in both ways. We can also try it in the browser:

It turned out just as we had hoped!

But don't stop here just because users can now access the service; let's strike while the iron is hot and learn the remaining two types.

4. LoadBalancer

LoadBalancer means load balancer. This type is similar to NodePort in that both expose a port outside the cluster. The main difference is that a LoadBalancer Service provisions a load-balancing device outside the cluster, which requires support from the external environment. Requests sent to that device are forwarded into the cluster after the device balances the load.

The figure involves the concept of a VIP. Here VIP means Virtual IP: external users access this virtual IP, which distributes the load across the different services behind it, achieving load balancing and high availability.

5. ExternalName

A Service of the ExternalName type is used to introduce a service from outside the cluster. It specifies the address of an external service through the externalName attribute, and that external service can then be accessed from inside the cluster.

Resource manifest:
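The manifest isn't reproduced here; a minimal sketch follows, with the external domain chosen purely for illustration:

apiVersion: v1
kind: Service
metadata:
  name: svc-externalname
  namespace: cbuc-test
spec:
  type: ExternalName
  externalName: www.baidu.com  # any resolvable external domain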

After creation, we can verify that the domain name resolves successfully:

dig @10.96.0.10 svc-externalname.cbuc-test.svc.cluster.local

II. Ingress

1) Working mode

We have now covered the uses of all the Service types. Of these, two support access by external users: NodePort and LoadBalancer. But looking closely, each has drawbacks:

  • NodePort: occupies a port on every cluster machine for each Service, which becomes more and more of a problem as the number of Services grows
  • LoadBalancer: each Service requires its own LB, which is clumsy, wastes resources, and requires load-balancing devices beyond what K8s itself provides

We are certainly not the first to notice these shortcomings; the K8s designers recognized them long ago and introduced the concept of Ingress. An Ingress needs only one NodePort or one LB to expose many Services:

Ingress is effectively a layer-7 load balancer, K8s's abstraction over a reverse proxy. It works much like Nginx: you can think of the Ingress as holding many mapping rules, and the Ingress Controller listens to these rules and translates them into Nginx reverse-proxy configuration, which then serves requests from outside. Two important concepts are involved:

  • Ingress: a resource object in K8s that defines the rules for how requests are forwarded to Services
  • Ingress Controller: the program that implements the reverse proxy and load balancing. It parses the rules defined by the Ingress and forwards requests accordingly. There are many implementations, such as Nginx, Contour, HAProxy, etc.

The Ingress Controller has many possible implementations for request forwarding; we usually choose Nginx as the load. Taking Nginx as the example, let's first understand how it works:

  1. Users write Ingress rules, which specify which Service in the K8s cluster each domain name corresponds to
  2. The Ingress controller dynamically senses changes in the Ingress service rules and generates a corresponding Nginx reverse proxy configuration
  3. The Ingress controller writes the generated Nginx configuration to a running Nginx service and updates it dynamically
  4. The client then accesses the domain name, and Nginx actually forwards the request to the specific Pod, completing the request process

Now that we understand the working principle, let's put it into practice ~

2) Ingress use

1. Environment setup

Before using Ingress, we need to set up an Ingress environment

Step 1:

#Pull the resource manifests we need
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

Step 2:

#Create the resources
kubectl apply -f ./
Step 3:

Check whether the resources were created successfully:
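For example (mandatory.yaml puts everything under the ingress-nginx namespace):

kubectl get pod,svc -n ingress-nginx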

With that, our Ingress environment is ready; next comes the test section ~

We have prepared two Services, two Deployments, and six Pod replicas:

If you haven't prepared these resources yet, go back and do your homework first.

The general structure diagram is as follows:

So let’s prepare an Ingress now to achieve the following results

Prepare the Ingress resource manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  namespace: cbuc-test
spec:
  rules:
  - host: dev.cbuc.cn
    http: 
      paths:
      - path: /
        backend:
          serviceName: svc-nodeport-dev
          servicePort: 80
  - host: pro.cbuc.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-nodeport-pro
          servicePort: 80

We also need to add domain-name mappings to the hosts file on our local machine:
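A sketch; replace the <node-ip> placeholder with the IP of any node in your cluster:

#Append to /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
<node-ip> dev.cbuc.cn
<node-ip> pro.cbuc.cn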

We can then access the services in the browser by domain name + nodePort.

And with that, we have achieved access through Ingress!

END

So far we have walked through the whole K8s workflow, from the most basic Namespace all the way to this section's network configuration. Have you kept up? Now that the K8s series is drawing to a close, what will the next chapter bring? Remember to stay tuned ~

Work a little harder today, and tomorrow you'll have fewer favors to beg for!

I'm Xiao Cai, a guy who grows stronger together with you. 💋

My WeChat official account, Xiao Cai Liang, is now live. If you haven't followed it yet, remember to follow me!