Service

A Kubernetes Service defines an abstraction: a logical set of Pods and a policy for accessing them, a pattern often referred to as a microservice. The set of Pods a Service targets is determined by a Label Selector.

  • A Service provides load balancing, with one limitation: it offers only layer 4 load balancing, not layer 7. Layer 7 capabilities can be added on top of a Service with an Ingress.

Service Proxy mode

In a Kubernetes cluster, each Node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services of any type other than ExternalName. In Kubernetes v1.0 the proxy ran entirely in userspace. The iptables proxy was added in v1.1, though it was not yet the default; as of v1.2 iptables became the default mode. The IPVS proxy was added in v1.8.0-beta.0 and is the mode used by the cluster in this article (Kubernetes v1.14+).
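Which mode a cluster uses can be checked from the kube-proxy configuration. A hedged example, assuming a kubeadm-style cluster where kube-proxy reads its settings from the kube-proxy ConfigMap (an empty mode falls back to the platform default):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"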

Why use a service proxy instead of DNS polling?

  1. DNS implementations have a long history of not respecting record TTLs and of caching the results of name lookups after they should have expired.
  2. Some applications perform a DNS lookup only once and cache the result indefinitely.
  3. Even when applications and libraries re-resolve properly, low or zero TTLs on the DNS records impose a heavy load on DNS, which makes it hard to manage.

  • In a Kubernetes cluster, each Node runs a kube-proxy process.
  • kube-proxy watches the apiserver to monitor Services and Endpoints.
  • kube-proxy writes the address mappings and rules into iptables, which acts as the Service proxy.
  • When a client accesses the node, forwarding is implemented by those iptables rules.
  • kube-proxy determines, by matching Pod labels, which endpoint information is written into Endpoints; the Endpoints object contains references to all Pods matched by the Service's selector (see the example below).
  • kube-proxy distributes access across the Pods according to the configured load balancing policy.
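For example, the Endpoints object mirrors the Pods currently matched by the Service's selector (an illustrative listing; myapp-svc and the Pod addresses are the ones used in the ClusterIP example later in this article):

[root@k8s-master01 yaml]# kubectl get endpoints myapp-svc
NAME        ENDPOINTS                                      AGE
myapp-svc   10.244.1.29:80,10.244.1.30:80,10.244.2.25:80   11m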

1. userspace

2. iptables

3. ipvs (common)

In this mode, kube-proxy watches Kubernetes Service objects and Endpoints, calls the netlink interface to create IPVS rules accordingly, and periodically synchronizes the IPVS rules with the Service and Endpoints objects to ensure that the IPVS state matches expectations. When a Service is accessed, traffic is redirected to one of the backend Pods.

Like iptables, IPVS hooks into netfilter, but it uses a hash table as its underlying data structure and works in kernel space. This means IPVS can redirect traffic faster and has better performance when synchronizing proxy rules. In addition, IPVS provides more options for load balancing algorithms, such as the following (a configuration sketch follows the list):

  • rr: round robin
  • lc: least connections
  • dh: destination hashing
  • sh: source hashing
  • sed: shortest expected delay
  • nq: never queue
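The scheduler is chosen in the kube-proxy configuration. A minimal sketch using the KubeProxyConfiguration API (rr is used when no scheduler is set):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs        # use the IPVS proxy mode
ipvs:
  scheduler: rr   # one of rr, lc, dh, sh, sed, nq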

Note: to run kube-proxy in IPVS mode, you must ensure the IPVS kernel modules are installed on the node before kube-proxy starts. When kube-proxy starts in IPVS proxy mode, it verifies that the IPVS kernel modules are available; if they are not detected, kube-proxy falls back to running in iptables proxy mode.
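A typical pre-flight step on a node, assuming the standard IPVS modules shipped with most distributions:

# load the IPVS kernel modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# verify that they are loaded
lsmod | grep -e ip_vs -e nf_conntrack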

As shown below, this cluster uses the IPVS proxy:

[root@k8s-master01 yaml]# ipvsadm -Ln
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.66.10:6443           Masq    1      3          0

[root@k8s-master01 yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21d

Types of Service

1. ClusterIP

The default type. A virtual IP is automatically assigned that is reachable only within the cluster, so the Service can only be consumed by applications inside the cluster.

With ClusterIP, each node applies the configured proxy mode (take iptables as an example): the rules kube-proxy maintains match data sent to the ClusterIP's port, look up the addresses and ports of the Pods behind the Service, load balance across them, and forward the data to the chosen Pod's address and port (see the inspection sketch after the component list below).

To achieve this behavior, the following components mainly work together:

  1. apiserver: the user sends the command to create a Service to the apiserver via kubectl; after receiving the request, the apiserver stores the data in etcd.
  2. kube-proxy: each Kubernetes node runs a process called kube-proxy, which is responsible for sensing changes to Services and Pods and writing the changes into local iptables rules.
  3. iptables: uses NAT and related techniques to divert traffic destined for the virtualIP to an endpoint.
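As a rough sketch, these rules can be inspected on a node running in iptables mode; the chain names KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* are the ones kube-proxy creates, while the hash suffixes below are placeholders:

# each ClusterIP:port entry jumps to a KUBE-SVC-* chain
iptables -t nat -L KUBE-SERVICES -n

# the KUBE-SVC-* chain spreads traffic across KUBE-SEP-* chains
# using the "statistic" match (random, weighted selection)
iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n

# each KUBE-SEP-* chain DNATs to a single Pod IP:port
iptables -t nat -L KUBE-SEP-XXXXXXXXXXXXXXXX -n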

ClusterIP example

  1. First create a Deployment (it is reused by the Service types below), resource list svc-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: myapp-deploy		# the name of the Deployment
     namespace: default
    spec:
     replicas: 3
     selector:
       matchLabels:
         app: myapp
         release: stable
     template:
       metadata:
         name: myapp	# Pod name
         labels:
           app: myapp
           release: stable
       spec:
         containers:
         - name: myapp						# container name
           image: wangyanglinux/myapp:v2	# nginx
           imagePullPolicy: IfNotPresent
           ports:
           - name: http
             containerPort: 80
  2. Create a Service resource list to proxy the three Pods created above, myapp-svc.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-svc       # the name of the Service
    spec:
      type: ClusterIP		# the default type is ClusterIP
      selector:             # use the same labels as the backend Pod resource objects
        app: myapp
        release: stable
      ports:
      - name: http
        port: 80            # Service Port number
        targetPort: 80      # back-end Pod port number
        protocol: TCP       # Protocol used
  3. Access

    [root@k8s-master01 yaml]# kubectl get pod -o wide
    NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
    myapp-deploy-6998f78dfc-nt28j   1/1     Running   0          11m   10.244.1.29   k8s-node01   <none>           <none>
    myapp-deploy-6998f78dfc-p9bkc   1/1     Running   0          11m   10.244.1.30   k8s-node01   <none>           <none>
    myapp-deploy-6998f78dfc-xqwbk   1/1     Running   0          11m   10.244.2.25   k8s-node02   <none>           <none>

    # kubectl get service also works
    [root@k8s-master01 yaml]# kubectl get svc
    NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    myapp-svc   ClusterIP   10.101.140.64   <none>        80/TCP    6s

    # you can see that the load balancing policy is round robin (rr)
    [root@k8s-master01 yaml]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.101.140.64:80 rr
      -> 10.244.1.29:80               Masq    1      0          2
      -> 10.244.1.30:80               Masq    1      0          3
      -> 10.244.2.25:80               Masq    1      0          3
    [root@k8s-master01 yaml]# curl 10.101.140.64/hostname.html
    myapp-deploy-6998f78dfc-nt28j

Headless Service

Sometimes load balancing and a single Service IP are neither needed nor wanted. In that case you can create a headless Service by setting the cluster IP (spec.clusterIP) to None. Such a Service is not assigned a cluster IP, kube-proxy does not handle it, and the platform does no load balancing or proxying for it. Headless Services are mainly used to solve the problem of hostnames and Pod names changing; when creating a StatefulSet, a headless Service must be created first.

Usage Scenarios:

  • First, if a client wants to decide for itself which real server to use, it can query DNS to obtain the backend information.

  • Second, headless Services have another use (the feature we need here): every endpoint of a headless Service, i.e. every Pod, gets its own DNS domain name. When a Pod is deleted its IP changes, but its name does not, so Pods can reach each other by Pod name (a sketch follows).
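If this headless Service backed a StatefulSet (say, a hypothetical one named web), each Pod would additionally get a stable per-Pod record of the form pod-name.service-name.namespace.svc.cluster.local, for example:

# web-0 assumes a StatefulSet named "web"; 10.244.0.11 is the CoreDNS address used below
dig -t A web-0.myapp-headless.default.svc.cluster.local. @10.244.0.11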

  1. Create a headless Service, again matching the Pods of the Deployment created above (see the ClusterIP example):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-headless    # service object name
    spec:
      clusterIP: None    # set the clusterIP field to None, marking this as a headless Service resource object
      selector:
        app: myapp    # match the Pod resources defined above
      ports:
      - port: 80    # service port
        targetPort: 80    # Back-end POD port
        protocol: TCP    # protocol
  2. View the SVC

    # CLUSTER-IP is None
    [root@k8s-master01 yaml]# kubectl get svc
    NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    myapp-headless   ClusterIP   None            <none>        80/TCP    8s
  3. Query the A record of the domain name in DNS

    # find the CoreDNS IPs of the cluster
    [root@k8s-master01 yaml]# kubectl get pod -n kube-system -o wide
    NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
    coredns-5c98db65d4-5ztqn   1/1     Running   6          21d   10.244.0.11   k8s-master01   <none>           <none>
    coredns-5c98db65d4-pc62t   1/1     Running   6          21d   10.244.0.10   k8s-master01   <none>           <none>

    # dig is provided by bind-utils: yum -y install bind-utils
    # DNS server IP: pick either of the two CoreDNS IPs obtained above
    # default domain name format: SVC_NAME.NAMESPACE.svc.cluster.local
    [root@k8s-master01 yaml]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.0.11
    ;; ANSWER SECTION:
    myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.30
    myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.29
    myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.25
    # the resolved records correspond to the Pods created earlier, so the Pods can be reached through this domain name

2. NodePort

On the basis of ClusterIP, a port is bound on every node, so the Service can be accessed via NodeIP:NodePort.

NodePort example

  1. Create a NodePort Service that matches the Deployment created in the ClusterIP example:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp       # service object name
    spec:
      type: NodePort        # set the type to NodePort (the default is ClusterIP)
      selector:
        app: myapp          # match the Pod resources defined above
        release: stable
      ports:
      - port: 80            # service port
        targetPort: 80      # Back-end POD port
        nodePort: 30001     # node port, exposed on every physical machine
        protocol: TCP       # protocol
  2. View the SVC

    [root@k8s-master01 yaml]# kubectl get svc
    NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    myapp   NodePort   10.97.100.171   <none>        80:30001/TCP   7s

    Port 30001 can now be accessed externally (every node in the k8s cluster exposes this port):
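    A hypothetical check from outside the cluster (192.168.66.10 is the node IP used elsewhere in this article; hostname.html is served by the myapp image):

    [root@k8s-master01 yaml]# curl 192.168.66.10:30001/hostname.html
    myapp-deploy-6998f78dfc-nt28j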

3. LoadBalancer

LoadBalancer is the same as NodePort, except that on top of NodePort it works with the cloud provider to create an external load balancer (LB) that directs traffic to the nodes, forwarding requests to NodeIP:NodePort. A minimal manifest sketch follows the notes below.

  1. The LB is provided by the cloud supplier and is billed for.
  2. The nodes must be cloud servers.
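A minimal sketch of such a Service, assuming a cluster running at a cloud provider (the name myapp-lb is hypothetical; the selector matches the Deployment used throughout this article):

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb          # hypothetical name
spec:
  type: LoadBalancer      # the cloud provider provisions the external LB
  selector:
    app: myapp
    release: stable
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP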

4. ExternalName

Brings a service from outside the cluster into the cluster so that it can be used directly inside the cluster. No proxy of any kind is created. This is only supported by kube-dns in Kubernetes 1.7 and later.

This type of Service maps the Service to the contents of the externalName field (for example, a private registry: hub.zyx.com) by returning a CNAME record with that value. An ExternalName Service is a special case of a Service that has no selector and defines no ports or Endpoints; instead, for a service running outside the cluster, it provides access by returning an alias for the external service.

ExternalName example

apiVersion: v1
kind: Service
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.zyx.com

When the host my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is queried, the cluster's DNS service returns a CNAME (alias) record with the value hub.zyx.com. Accessing this Service works the same as for the others, except that the redirection happens at the DNS layer; nothing is proxied or forwarded.

[root@k8s-master01 yaml]# kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-service-1     ExternalName   <none>          hub.zyx.com   <none>         13s

[root@k8s-master01 yaml]# dig -t A my-service-1.default.svc.cluster.local. @10.244.0.11
;; ANSWER SECTION:
my-service-1.default.svc.cluster.local.	30 IN CNAME hub.zyx.com.

Ingress-Nginx

Ingress-nginx GitHub: github.com/kubernetes/…

Ingress-nginx official site: kubernetes.github.io/ingress-ngi…

1. Deploying Ingress-Nginx

Official docs: kubernetes.github.io/ingress-ngi…

The version installed here is controller-v0.40.2, using the bare-metal installation method:

The official method:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml

I downloaded the .yaml file first and then created the resources from it:

# get the yaml file; it can be used both to create and to delete the Ingress controller
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml

# the ValidatingWebhookConfiguration apiVersion in the yaml file was wrong during installation, so first check the version
[root@k8s-master01 yaml]# kubectl explain ValidatingWebhookConfiguration
KIND:     ValidatingWebhookConfiguration
VERSION:  admissionregistration.k8s.io/v1beta1

# then modify the apiVersion of the ValidatingWebhookConfiguration in the yaml file accordingly

# install the Ingress controller
kubectl apply -f deploy.yaml

# remove the Ingress controller
kubectl delete -f deploy.yaml

2. Ingress HTTP proxy access example

  1. Create two Pods and a ClusterIP Service to provide internal access to Nginx:

    # vim deployment-nginx.yaml
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
     name: nginx-dm
    spec:
     replicas: 2
     selector:
       matchLabels:
         name: nginx
     template:
       metadata:
         labels:
           name: nginx
       spec:
         containers:
         - name: nginx
           image: wangyanglinux/myapp:v1
           ports:
           - name: http
             containerPort: 80
    ---
    # define the Service for nginx
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      selector:
        name: nginx
  2. Then create the Ingress to expose the service externally:

    apiVersion: extensions/v1beta1		# kubectl explain ingress
    kind: Ingress						# type
    metadata:
      name: nginx-test					# the name
    spec:
      rules:							# rules: a list; multiple domains can be configured
      - host: www.zyx.com				# host domain
        http:
          paths:						# path
          - path: /						# Root path of the domain name
            backend:
              serviceName: nginx-svc	# name of the Service created above
              servicePort: 80			# SVC port
    # check ingress
    kubectl get ingress

    The spec.rules in the Ingress resource list are eventually rendered into nginx virtual host configuration, which can be viewed inside the ingress-nginx controller container:

    # enter the controller Pod (the Pod name will differ per cluster)
    kubectl exec ingress-nginx-controller-78fd88bd5-sbrz5 -n ingress-nginx -it -- /bin/bash

    # the controller's working directory is /etc/nginx
    cat nginx.conf
  3. Modify the hosts file to set domain name resolution

    192.168.66.10   www.zyx.com
  4. Check the port

    [root@k8s-master01 ingress]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.96.189.184   <none>        80:31534/TCP,443:31345/TCP   10h
  5. Access the domain name
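    A hypothetical check (31534 is the HTTP NodePort of the ingress-nginx-controller shown above; the Host header must match the Ingress rule, which the hosts entry takes care of):

    curl http://www.zyx.com:31534/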

3. Ingress HTTPS proxy access example

  1. Create a certificate and store it in a Secret

    # generate a private key and certificate
    openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

    # create the Secret resource; it will be used later
    kubectl create secret tls tls-secret --key tls.key --cert tls.crt

    # check the Secret resource
    kubectl get secret tls-secret
  2. Create the Deployment and Service, again using the Deployment created above

  3. Create the Ingress

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx-test-https
    spec:
      tls:
      - hosts:
        - www.zyx3.com					# hosts covered by the certificate
        secretName: tls-secret			# the secret created above
      rules:							# rules: a list; multiple domains can be configured
      - host: www.zyx3.com				# host domain
        http:
          paths:						# path
          - path: /						# Root path of the domain name
            backend:
              serviceName: nginx-svc	# name of the Service created above
              servicePort: 80			# SVC port
  4. Get the port for HTTPS connections

    [root@k8s-master01 https]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.96.189.184   <none>        80:31534/TCP,443:31345/TCP   11h
  5. Configure hosts and access the domain name
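    A hypothetical check (31345 is the HTTPS NodePort shown above; -k skips verification because the certificate is self-signed):

    curl -k https://www.zyx3.com:31345/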

4. Nginx BasicAuth

  1. Create a key file
    # htpasswd is provided by httpd
    yum install -y httpd

    # create the key file "auth" with user name "foo";
    # enter the password twice, it will be used for authentication later
    htpasswd -c auth foo

    # build the basic-auth secret from the file
    kubectl create secret generic basic-auth --from-file=auth
  2. Create the Ingress
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-with-auth
      annotations:
        # authentication type
        nginx.ingress.kubernetes.io/auth-type: basic
        # name of the secret (defined above)
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        # message displayed when authentication is required
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
    spec:
      rules:
      - host: auth.zyx.com
        http:
          paths:
          - path: /
            backend:
              serviceName: nginx-svc
              servicePort: 80
  3. Add the hosts entry for the domain name and visit it; the BasicAuth prompt shows that it works
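    A quick hypothetical check (31534 is the ingress-nginx HTTP NodePort from earlier; foo and the password are the credentials created above):

    # without credentials the controller answers 401 Authorization Required
    curl -I http://auth.zyx.com:31534/
    # with credentials the request reaches the backend
    curl -u foo:<password> http://auth.zyx.com:31534/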

5. Nginx rewrite

Name | Description | Values
---- | ----------- | ------
nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string
nginx.ingress.kubernetes.io/ssl-redirect | Indicates whether the location section is accessible only via SSL (defaults to true when the Ingress contains a certificate) | boolean
nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress does not have TLS enabled | boolean
nginx.ingress.kubernetes.io/app-root | Defines the application root that the controller must redirect to if it is in the '/' context | string
nginx.ingress.kubernetes.io/use-regex | Indicates whether the paths defined on an Ingress use regular expressions | boolean

Redirection example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: https://auth.zyx.com:31345
spec:
  rules:
  - host: re.zyx.com		# visiting this address redirects to https://auth.zyx.com:31345
    http:					# this can be configured either way
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
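To verify the rewrite, a hypothetical check (31534 is the ingress-nginx HTTP NodePort from earlier; the Location header should point at the rewrite target):

curl -sI http://re.zyx.com:31534/ | grep -i location
# expected: Location: https://auth.zyx.com:31345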