Preface

The previous article introduced Kubernetes replicas (ReplicationControllers), which automatically keep your deployment running and healthy without any manual intervention. This article introduces another powerful Kubernetes resource, the Service, which provides a layer between clients and pods: a single access point that makes it easier for clients to reach pods.

Service

A Kubernetes Service is a resource that provides a single, unchanging access point to a set of pods providing the same functionality. While the service exists, its IP address and port do not change. Clients establish connections through that IP address and port, and those connections are routed to one of the pods backing the service.

1. Create a service

Connections to the service are load-balanced across all backing pods. Which pods belong to which service is determined by the label selector set when defining the service:

[d:\k8s]$ kubectl create -f kubia-rc.yaml
replicationcontroller/kubia created

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS              RESTARTS   AGE
kubia-6dxn7   0/1     ContainerCreating   0          4s
kubia-fhxht   0/1     ContainerCreating   0          4s
kubia-fpvc7   0/1     ContainerCreating   0          4s

The pods are created with the previous YAML file; in the template the label is set to app: kubia, so the YAML that creates the service must specify the same label (the kubectl expose command described earlier can also create a service):

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

port specifies the port the service exposes, targetPort specifies the port the pod process listens on, and the label selector at the end determines which pods belong to the service: all pods with a matching label are managed by it.

[d:\k8s]$ kubectl create -f kubia-svc.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.191.193   <none>        80/TCP    4s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fhxht

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.191.193
You've hit kubia-fpvc7

After creating the service you can see that kubia has been assigned a CLUSTER-IP, which is an internal IP. To test it, use the kubectl exec command to remotely execute a command in an existing pod container; the pod name can be any of the three. The pod running the curl command sends the request to the service, and the service decides which pod to forward it to. If you want all requests from a particular client to reach the same pod every time, set the service's sessionAffinity attribute to ClientIP.

1.1 Configuring session stickiness

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  sessionAffinity: ClientIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

Everything is the same as before except for the sessionAffinity: ClientIP line:

[d:\k8s]$ kubectl delete svc kubia
service "kubia" deleted

[d:\k8s]$ kubectl create -f kubia-svc-client-ip-session-affinity.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   6d15h
kubia        ClusterIP   10.96.51.99   <none>        80/TCP    25s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.51.99
You've hit kubia-fhxht

1.2 Exposing multiple ports for the same service

If a pod listens on two or more ports, the service can expose multiple ports as well:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
  selector:
    app: kubia

The Node.js application only listens on port 8080, so the service's two ports point to the same target port; let's check that both can be used:

[d:\k8s]$ kubectl create -f kubia-svc-named-ports.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          6d18h
kubia        ClusterIP   10.96.13.178   <none>        80/TCP,443/TCP   7s

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178
You've hit kubia-fpvc7

[d:\k8s]$ kubectl exec kubia-6dxn7 -- curl -s http://10.96.13.178:443
You've hit kubia-fpvc7

Both ports work for accessing the service.

1.3 Using named ports

If the target port number might change, you can name the port in the pod template and refer to the name in the service definition:

apiVersion: v1
kind: ReplicationController
metadata: 
   name: kubia
spec: 
   replicas: 3
   selector: 
      app: kubia
   template:
      metadata: 
         labels:
            app: kubia
      spec: 
         containers: 
         - name: kubia
           image: ksfzhaohui/kubia
           ports: 
           - name: http
             containerPort: 8080

The service's YAML then refers to the port by name:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: kubia

targetPort now uses the port name http directly:

[d:\k8s]$ kubectl create -f kubia-rc2.yaml
replicationcontroller/kubia created

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          66s
kubia-bm6rx   1/1     Running   0          66s
kubia-dh87r   1/1     Running   0          66s

[d:\k8s]$ kubectl create -f kubia-svc2.yaml
service/kubia created

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    10s

[d:\k8s]$ kubectl exec kubia-4m9nv -- curl -s http://10.96.106.37
You've hit kubia-dh87r

2. Service discovery

The service gives us a single, unchanging IP address for accessing pods. But must we create the service, look up its CLUSTER-IP, and hand that address to other pods every time? Kubernetes also provides other ways for clients to discover a service.

2.1 Discovering services through environment variables

When a pod starts running, Kubernetes initializes a set of environment variables pointing to each service that exists at that moment. If the service was created before the client pod, processes in the pod can obtain the service's IP address and port from these environment variables.

[d:\k8s]$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   7d14h
kubia        ClusterIP   10.96.106.37   <none>        80/TCP    14h

[d:\k8s]$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
kubia-4m9nv   1/1     Running   0          14h
kubia-bm6rx   1/1     Running   0          14h
kubia-dh87r   1/1     Running   0          14h

[d:\k8s]$ kubectl exec kubia-4m9nv env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-4m9nv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=7.10.1
YARN_VERSION=0.24.4
HOME=/root

Because these pods were created before the service, their environment contains no information about it. Delete the pods and let the ReplicationController create new ones:

[d:\k8s]$ kubectl delete po --all
pod "kubia-4m9nv" deleted
pod "kubia-bm6rx" deleted
pod "kubia-dh87r"deleted [d:\k8s]$ kubectl get pod NAME READY STATUS RESTARTS AGE kubia-599v9 1/1 Running 0 48s kubia-8s8j4 1/1 Running 0  48s kubia-dm6kr 1/1 Running 0 48s [d:\k8s]$ kubectlexec kubia-599v9 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=kubia-599v9 ... KUBIA_SERVICE_HOST = 10.96.106.37 KUBIA_SERVICE_PORT = 80...Copy the code

With the pods recreated after the service, the environment variables now include KUBIA_SERVICE_HOST and KUBIA_SERVICE_PORT, representing the kubia service's IP address and port. A process can read the service's IP and port from these variables.
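As a quick sketch of putting these variables to use (the variable names come from the output above; the response line is illustrative and names whichever pod handled the request):

[d:\k8s]$ kubectl exec kubia-599v9 -- sh -c 'curl -s http://$KUBIA_SERVICE_HOST:$KUBIA_SERVICE_PORT'
You've hit kubia-8s8j4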

2.2 Discovering services through DNS

The kube-system namespace contains a default service named kube-dns, backed by coredns pods:

[d:\k8s]$ kubectl get svc --namespace kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   9d

[d:\k8s]$ kubectl get po -o wide --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-h2cwn   1/1     Running   0          9d    172.17.0.3   minikube   <none>           <none>
coredns-7f9c544f75-x2ttk   1/1     Running   0          9d    172.17.0.2   minikube   <none>           <none>

DNS queries made by processes running in pods are answered by Kubernetes' own DNS server, which knows all the services running in the system. A client pod that knows the name of a service can access it through its fully qualified domain name (FQDN):

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default.svc.cluster.local
You've hit kubia-8s8j4

kubia is the service name, default is the namespace the service resides in, and svc.cluster.local is the configurable cluster domain suffix used in all cluster-local service names. If two pods are in the same namespace, you can omit svc.cluster.local and even the namespace, using just the service name:

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia.default
You've hit kubia-dm6kr [d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://kubia You've hit kubia-dm6kr
Copy the code
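The short names work because of the DNS search domains configured in each container's /etc/resolv.conf; a sketch of how to check (the exact search list depends on the cluster, and the nameserver should match the kube-dns CLUSTER-IP above):

[d:\k8s]$ kubectl exec kubia-599v9 -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local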

2.3 Running a shell in a pod

d:\k8s>winpty kubectl exec -it kubia-599v9 -- sh
# curl -s http://kubia
You've hit kubia-dm6kr
# exit

Running a shell in a pod's container with kubectl exec -it means you don't have to invoke kubectl exec for every command you want to run; in a Windows environment the winpty tool is needed for the interactive terminal.

Connecting to services outside the cluster

The services described so far have backends of one or more pods running in the cluster. But there are also cases where you want to expose an external service through the Kubernetes service feature; you can do that with an Endpoints resource or with an external service alias.

1. Endpoints

A service is not connected directly to its pods; there is a resource in between: the Endpoints resource.

[d:\k8s]$ kubectl describe svc kubia
Name:              kubia
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=kubia
Type:              ClusterIP
IP:                10.96.106.37
Port:              <unset>  80/TCP
TargetPort:        http/TCP
Endpoints:         172.17.0.10:8080,172.17.0.11:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>

[d:\k8s]$ kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
kubia-599v9   1/1     Running   0          3h51m   172.17.0.10   minikube   <none>           <none>
kubia-8s8j4   1/1     Running   0          3h51m   172.17.0.11   minikube   <none>           <none>
kubia-dm6kr   1/1     Running   0          3h51m   172.17.0.9    minikube   <none>           <none>

You can see that the Endpoints correspond to the pods' IPs and ports. When a client connects to a service, the service proxy selects one of these IP and port pairs and redirects the incoming connection to the server listening at that location.
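You can also inspect the Endpoints resource directly; a sketch whose addresses should match the describe output above:

[d:\k8s]$ kubectl get endpoints kubia
NAME    ENDPOINTS                                            AGE
kubia   172.17.0.10:8080,172.17.0.11:8080,172.17.0.9:8080   14h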

2. Manually configure the endpoint of the service (internal)

If you create a service without a pod selector, Kubernetes does not create the Endpoints resource. In that case you need to create an Endpoints resource yourself to specify the endpoint list for the service.

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
  - port: 80

The above definition does not specify a selector:

[d:\k8s]$ kubectl create -f external-service.yaml
service/external-service created

[d:\k8s]$ kubectl get svc external-service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
external-service   ClusterIP   10.96.241.116   <none>        80/TCP    74s

[d:\k8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

The external-service's Endpoints field shows <none> because no selector was specified. In this case you can manually configure the service's endpoints:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 172.17.0.9
  - ip: 172.17.0.10
  ports:
  - port: 8080

The Endpoints object must have the same name as the service and contains the list of target IP addresses and ports for the service:

[d:\k8s]$ kubectl create -f external-service-endpoints.yaml
endpoints/external-service created

[d:\k8s]$ kubectl describe svc external-service
Name:              external-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.241.116
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.17.0.10:8080,172.17.0.9:8080
Session Affinity:  None
Events:            <none>

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
You've hit kubia-dm6kr

Accessing external-service with kubectl exec now succeeds: the request is routed through the manually configured endpoints, which here point at two of the kubia pods.

3. Manually configure the service endpoint (external)

An IP and port outside of Kubernetes can be configured the same way, turning a service started outside the cluster into a service backend:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
- addresses:
  - ip: 10.13.82.21
  ports:
  - port: 8080

10.13.82.21:8080 is an ordinary Tomcat service started on the local machine:

[d:\k8s]$ kubectl create -f external-service-endpoints2.yaml
endpoints/external-service created

[d:\k8s]$ kubectl create -f external-service.yaml
service/external-service created

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service
ok

The test returns the response from the external service.

4. Create an external service alias

Besides manually configuring endpoints, you can expose an external service by giving it an alias; for example, suppose the domain name api.ksfzhaohui.com resolves to 10.13.82.21:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: api.ksfzhaohui.com
  ports:
  - port: 80

To create a service that aliases an external service, set the type field of the service resource to ExternalName and specify the external service's domain name in externalName:

[d:\k8s]$ kubectl create -f external-service-externalname.yaml
service/external-service created

[d:\k8s]$ kubectl exec kubia-599v9 -- curl -s http://external-service:8080
ok

The test again returns the response from the external service.

Exposing services to external clients

Kubernetes provides three ways to expose services to the outside: NodePort services, LoadBalancer services, and Ingress resources. Each is introduced and used below.

1. NodePort service

Create a service and set its type to NodePort. For a NodePort service, Kubernetes reserves a port on every node (the same port number on all nodes) and forwards incoming connections on that port to the backing pods:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia

The service type is NodePort and the node port is 30123:

[d:\k8s]$ kubectl create -f kubia-svc-nodeport.yaml
service/kubia-nodeport created

[d:\k8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        31d
kubia-nodeport   NodePort    10.96.59.16   <none>        80:30123/TCP   3s

[d:\k8s]$ kubectl exec kubia-7fs6m -- curl -s http://10.96.59.16
You've hit kubia-m487j

For external clients to access the pods, you need the IP of a node. The node used here is minikube; since minikube is installed on the local Windows system, its internal IP can be reached directly:

[d:\k8s]$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE              KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    master   34d   v1.17.0   192.168.99.108   <none>        Buildroot 2019.02.7   4.19.81          docker://19.3.5
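With the node's INTERNAL-IP and the node port, the service should now be reachable from outside the cluster; a sketch using the values above (the response names whichever pod handled the request):

[d:\k8s]$ curl http://192.168.99.108:30123
You've hit kubia-m487j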

2. LoadBalancer service

Whereas NodePort exposes the internal pods through a port (30123 above) on every node, a LoadBalancer service has its own unique, publicly accessible IP address. LoadBalancer is an extension of NodePort that makes the service accessible through a dedicated load balancer:

apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

With type set to LoadBalancer, no node port needs to be specified:

[d:\k8s]$ kubectl create -f kubia-svc-loadbalancer.yaml
service/kubia-loadbalancer created

[d:\k8s]$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP      10.96.0.1       <none>        443/TCP        31d
kubia-loadbalancer   LoadBalancer   10.96.207.113   <pending>     80:30038/TCP   7s
kubia-nodeport       NodePort       10.96.59.16     <none>        80:30123/TCP   32m

Although we did not specify a node port, node port 30038 was opened automatically after creation. On minikube the EXTERNAL-IP stays <pending> because no cloud load balancer is available, but the service remains reachable through that node port.


3. Understand and prevent unnecessary network hops

When an external client connects to a service through a node port, the randomly selected pod does not necessarily run on the node that received the connection. You can prevent this extra network hop by configuring the service to redirect external traffic only to pods running on the node that received the connection:

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport-onlylocal
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30124
  selector:
    app: kubia

This is done by setting the externalTrafficPolicy field in the spec section of the service.

4. Ingress

Each LoadBalancer service needs its own load balancer and its own unique public IP address, whereas an Ingress needs only one public IP to provide access to many services. When a client sends an HTTP request to the Ingress, the host name and path in the request determine which service the request is forwarded to.

4.1 Ingress Controller

Ingress resources only work if an Ingress controller is running in the cluster. Different Kubernetes environments use different controller implementations, and some provide no default controller at all. The minikube used here requires an add-on to be enabled before the controller is available:

[d:\Program Files\Kubernetes\Minikube]$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- helm-tiller: disabled
- ingress: disabled
- ingress-dns: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled

Listing all the add-ons shows that ingress is disabled, so it needs to be enabled:

[d:\Program Files\Kubernetes\Minikube]$ minikube addons enable ingress
* ingress was successfully enabled

Once enabled, view the pods in the kube-system namespace:

[d:\k8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
coredns-7f9c544f75-h2cwn                    1/1     Running             0          55d
coredns-7f9c544f75-x2ttk                    1/1     Running             0          55d
etcd-minikube                               1/1     Running             0          55d
kube-addon-manager-minikube                 1/1     Running             0          55d
kube-apiserver-minikube                     1/1     Running             0          55d
kube-controller-manager-minikube            1/1     Running             2          55d
kube-proxy-xtbc4                            1/1     Running             0          55d
kube-scheduler-minikube                     1/1     Running             2          55d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   0/1     ContainerCreating   0          8s
storage-provisioner                         1/1     Running             0          55d

A pod named nginx-ingress-controller is created, but it stays stuck pulling its image and reports the following error:

Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1": rpc error: code = Unknown desc = context canceled

This is because images under quay.io cannot be downloaded from within mainland China; the Aliyun mirror can be used instead:

image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

Modify the image in the ingress-nginx project's deploy/static/mandatory.yaml to the Aliyun mirror, then create it:

[d:\k8s]$ kubectl create -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

Look at the pods in the kube-system namespace again:

[d:\k8s]$ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-7f9c544f75-h2cwn                    1/1     Running   0          56d
coredns-7f9c544f75-x2ttk                    1/1     Running   0          56d
etcd-minikube                               1/1     Running   0          56d
kube-addon-manager-minikube                 1/1     Running   0          56d
kube-apiserver-minikube                     1/1     Running   0          56d
kube-controller-manager-minikube            1/1     Running   2          56d
kube-proxy-xtbc4                            1/1     Running   0          56d
kube-scheduler-minikube                     1/1     Running   2          56d
nginx-ingress-controller-6fc5bcc8c9-nvcb5   1/1     Running   0          10m
storage-provisioner                         1/1     Running   0          56d

nginx-ingress-controller is now Running, and Ingress resources can be created.

4.2 Ingress resources

With the Ingress controller running, an Ingress resource can be created:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

The resource type is Ingress, with a single rule: all requests for the host kubia.example.com are forwarded to port 80 of the kubia-nodeport service.

[d:\k8s]$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        53d
kubia-nodeport   NodePort    10.96.204.104   <none>        80:30123/TCP   21h

[d:\k8s]$ kubectl create -f kubia-ingress.yaml
ingress.extensions/kubia created

[d:\k8s]$ kubectl get ingress
NAME    HOSTS               ADDRESS          PORTS   AGE
kubia   kubia.example.com   192.168.99.108   80      6m4s

You need to map the domain name to the ADDRESS 192.168.99.108 by modifying the hosts file; then the domain name can be accessed directly, and the request is forwarded to the kubia-nodeport service.
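A sketch of the hosts entry (on Windows typically C:\Windows\System32\drivers\etc\hosts) and a test request; the response names whichever pod handled it:

192.168.99.108 kubia.example.com

[d:\k8s]$ curl http://kubia.example.com
You've hit kubia-7fs6m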

The request flow is as follows: the browser queries DNS for the domain name (here via the hosts file), which returns the IP of the Ingress controller; the client sends the HTTP request to the controller with kubia.example.com in the Host header; the controller determines from that header which service the client wants to access, looks up the pod IPs through the Endpoints object associated with the service, and forwards the request to one of the pods.

4.3 Ingress exposes multiple services

rules and paths are arrays, so multiple hosts and paths can be configured:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia2
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
      - path: /v2
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
  - host: kubia2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

Multiple hosts and paths are configured; for convenience they all map to the same service:

[d:\k8s]$ kubectl create -f kubia-ingress2.yaml
ingress.extensions/kubia2 created

[d:\k8s]$ kubectl get ingress
NAME     HOSTS                                  ADDRESS          PORTS   AGE
kubia    kubia.example.com                      192.168.99.108   80      41m
kubia2   kubia.example.com,kubia2.example.com   192.168.99.108   80      15m

The hosts file also needs an entry mapping kubia2.example.com to the same address. The tests are as follows:
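A sketch of the tests, assuming both host names are mapped in the hosts file; all three paths reach the same backing service, so the responses name whichever pods handled the requests:

[d:\k8s]$ curl http://kubia.example.com/v1
You've hit kubia-7fs6m

[d:\k8s]$ curl http://kubia.example.com/v2
You've hit kubia-m487j

[d:\k8s]$ curl http://kubia2.example.com/
You've hit kubia-q6z5w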

4.4 Configuring the Ingress to handle TLS traffic

The requests above all use plain HTTP; HTTPS requires a certificate. When a client opens a TLS connection to the Ingress controller, the controller terminates the TLS connection: traffic between the client and the controller is encrypted, while traffic between the controller and the pods is not. To enable this, attach a certificate and private key to the Ingress.

[root@localhost batck-job]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
..................................................................+++
...+++
e is 65537 (0x10001)

[root@localhost batck-job]# openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=kubia.example.com

[root@localhost batck-job]# ll
-rw-r--r--. 1 root root 1115 Feb 11 01:20 tls.cert
-rw-r--r--. 1 root root 1679 Feb 11 01:20 tls.key

Create a Secret from the two generated files:

[d:\k8s]$ kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key
secret/tls-secret created

You can now update the Ingress object so that it also accepts HTTPS requests for kubia.example.com:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  tls:
  - hosts: 
    - kubia.example.com
    secretName: tls-secret
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia-nodeport
          servicePort: 80

The tls section attaches the certificate through the secret:

[d:\k8s]$ kubectl apply -f kubia-ingress-tls.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ingress.extensions/kubia configured

Accessing the service over HTTPS from a browser now works.
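The same check can be done with curl; -k skips certificate verification since the certificate is self-signed (a sketch; the response names whichever pod handled the request):

[d:\k8s]$ curl -k https://kubia.example.com
You've hit kubia-q6z5w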

Pod readiness signals

As long as a pod's labels match the service's pod selector, the pod can act as a backend of the service; but if the pod is not ready, it cannot process requests. This is where readiness probes come in: they check whether the pod is ready, and only when the check succeeds does the pod serve as a service backend and receive requests.

1. Readiness probe types

There are three types of readiness probes:

  • Exec probe: executes a process inside the container and determines readiness from the process's exit status code;
  • HTTP GET probe: sends an HTTP GET request to the container and determines readiness from the HTTP status code of the response;
  • TCP socket probe: opens a TCP connection to the container's specified port; if the connection is established, the container is considered ready.

Kubernetes calls the probe periodically and acts on the result: if a pod reports that it is not ready, it is removed from the service; when it becomes ready again, it is re-added.
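For comparison with the exec probe used in the next section, here is a minimal sketch of the other two probe types in a container spec; the /healthz path is a hypothetical health endpoint, not something the kubia app actually serves:

# HTTP GET probe: the pod is ready when the request returns a success status code
readinessProbe:
  httpGet:
    path: /healthz   # hypothetical health endpoint
    port: 8080

# alternatively, a TCP socket probe: ready once the port accepts a connection
# readinessProbe:
#   tcpSocket:
#     port: 8080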

2. Add a readiness probe to the pod

Edit the ReplicationController and modify the pod template to add a readiness probe:

[d:\k8s]$ kubectl edit rc kubia
libpng warning: iCCP: known incorrect sRGB profile
replicationcontroller/kubia edited

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-m487j   1/1     Running   0          22d
kubia-q6z5w   1/1     Running   0          22d

The edited ReplicationController looks as follows, with a readinessProbe added:

apiVersion: v1
kind: ReplicationController
metadata: 
   name: kubia
spec: 
   replicas: 3
   selector: 
      app: kubia
   template:
      metadata: 
         labels:
            app: kubia
      spec: 
         containers: 
         - name: kubia
           image: ksfzhaohui/kubia
           ports: 
           - containerPort: 8080
           readinessProbe:
              exec:
                 command:
                 - ls
                 - /var/ready

The readiness probe periodically executes the command ls /var/ready inside the container. ls returns exit code 0 if the file exists and a non-zero code otherwise, so the probe succeeds if the file exists and fails otherwise. Editing the ReplicationController does not regenerate existing pods, which is why the pods above still show READY 1/1 and keep processing requests:

[d:\k8s]$ kubectl delete pod kubia-m487j
pod "kubia-m487j" deleted

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-cxz5v   0/1     Running   0          114s
kubia-q6z5w   1/1     Running   0          22d

After deleting a pod, a replacement with the readiness probe is created immediately. Because the /var/ready file does not exist in the container, the probe keeps failing and the new pod's READY column stays 0/1 for a long time.
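A sketch of making the probe succeed, following the book's approach: create the /var/ready file the exec probe checks for (the pod name comes from the listing above; the probe runs periodically, so READY flips to 1/1 shortly after):

[d:\k8s]$ kubectl exec kubia-cxz5v -- touch /var/ready

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-7fs6m   1/1     Running   0          22d
kubia-cxz5v   1/1     Running   0          5m
kubia-q6z5w   1/1     Running   0          22d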

Conclusion

This article first introduced the basics of services: how to create a service and how services are discovered. It then covered the Endpoints resource that connects services to pods, and finally highlighted the three ways of exposing services to external clients.

References

Kubernetes in Action
