Kubernetes Tutorial: Exposing Services Externally

Original article: blog.piaoruiqing.com/2019/10/20/…

Preface

The previous articles, "Build K8S from Scratch with the Official Documentation" and "Application Deployment", should have given readers a basic understanding of installing Kubernetes and deploying applications. This article explains how to expose services to the outside world.

Read this article and you will learn:

  • Several options for exposing Kubernetes services, along with their advantages and disadvantages.

To read this article you need:

  • An understanding of basic Kubernetes commands.
  • A working Kubernetes environment.

Several ways to expose services to external clients

  • Through port-forward, as covered in a previous article. It is easy to use and well suited for debugging, but not suitable for production environments.
  • Through NodePort: every Node in the cluster listens on the specified port, and the service can be reached through that port on any Node. However, with many services this opens a large number of ports, which becomes difficult to maintain.
  • Through LoadBalancer: the LB is usually provided by a cloud service provider. If no LB is available in the environment, we usually use Ingress directly, or configure our own LB with MetalLB.
  • Through Ingress, which can expose multiple services. An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster; traffic is routed by rules defined on the Ingress resource. When the cloud provider does not offer an LB, we can use Ingress directly to expose services. (Combining LB + Ingress also avoids the cost of provisioning too many LBs.)

Preparation

Before starting, the author created a test application. The code is omitted here for brevity; see Appendix [1] and Appendix [2] for details.

We can view the pod list with kubectl get pods.

You can use the --namespace parameter to view the pods in a given namespace, or --all-namespaces to view pods across all namespaces.

Since no namespace was specified, the application was placed in the default namespace.

[root@nas-centos1 k8s-test]# kubectl get pods --namespace default
NAME                        READY   STATUS    RESTARTS   AGE
k8s-test-578b77cd47-sw5pd   1/1     Running   0          6m29s
k8s-test-578b77cd47-v6kmp   1/1     Running   0          6m29s

port-forward

port-forward forwards traffic directly to a pod. You can target a specific pod instance, which is simple, convenient, and well suited for debugging.

Run kubectl port-forward:

[root@nas-centos1 k8s-test]# kubectl port-forward --address 0.0.0.0 k8s-test-578b77cd47-sw5pd 9999:8080
Forwarding from 0.0.0.0:9999 -> 8080

At this point, we can reach port 8080 of k8s-test-578b77cd47-sw5pd by accessing port 9999 on the host machine.

/k8s-test/timestamp is the sample application's only API; it returns the current timestamp.

[root@nas-centos1 k8s-test]# curl http://10.33.30.95:9999/k8s-test/timestamp
1571151584224

NodePort

Each Node in the cluster listens on the specified port, and the service can be accessed through that port on any Node. However, many services will open a large number of ports, which is difficult to maintain.

Reference: kubernetes.io/docs/concep…

Create a Service and specify the type NodePort.

k8s-test-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: k8s-test-service
spec:
  selector:
    app: k8s-test
    env: test
  ports:
    - port: 80              # Service port, accessible within the cluster
      targetPort: 8080      # Port 8080 on the pod
      nodePort: 30080       # Node port, accessible from outside the cluster
      protocol: TCP
  type: NodePort

Execute kubectl apply -f k8s-test-service.yaml to publish the service.

Check the result with kubectl get services:

[root@nas-centos1 k8s-test]# kubectl get services
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
k8s-test-service   NodePort    10.244.234.143   <none>        80:30080/TCP   8s
kubernetes         ClusterIP   10.244.0.1       <none>        443/TCP        32d

We can see that port 30080 has been bound to port 80 of the service.

Try to access the application through a Node's IP:

[root@nas-centos3 ~]# curl http://10.33.30.94:30080/k8s-test/timestamp
1571152525336

LoadBalancer

Reference Documents:

  1. metallb.universe.tf/
  2. kubernetes.io/docs/concep…

A LoadBalancer is usually provided by a cloud service provider. If the environment does not support LB, the LoadBalancer Service will stay in the pending state:

[root@nas-centos1 k8s-test]# kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
k8s-test-service   LoadBalancer   10.244.29.126   <pending>     80:32681/TCP   13s
kubernetes         ClusterIP      10.244.0.1      <none>        443/TCP        33d

If we want to test LB in a local development environment, we can use MetalLB, a load-balancer implementation for bare-metal clusters. The installation steps are not covered here; see the official documentation for details.
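
As a minimal sketch only (not part of the original setup): the MetalLB releases current at the time of writing took a layer-2 configuration as a ConfigMap like the one below. The address range is an assumption chosen to match the 10.33.30.x network used elsewhere in this article, and newer MetalLB releases configure address pools with CRDs instead, so follow the official documentation for your version:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system       # namespace created by the MetalLB installation manifests
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.33.30.2-10.33.30.20    # assumed range; adjust to your local network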

When our environment supports LB, we can create the following Service to expose the Service:

apiVersion: v1
kind: Service
metadata:
  name: k8s-test-service
spec:
  selector:
    app: k8s-test
    env: test
  ports:
    - port: 80	        # service port
      targetPort: 8080  # Port 8080 on the pod
      protocol: TCP
  type: LoadBalancer

Run kubectl apply -f k8s-test-lb.yaml to publish the configuration. The Service is then assigned an EXTERNAL-IP (10.33.30.2), as shown below:

[root@nas-centos1 k8s-test]# kubectl get services
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
k8s-test-service   LoadBalancer   10.244.151.128   10.33.30.2    80:31277/TCP   2s
kubernetes         ClusterIP      10.244.0.1       <none>        443/TCP        33d

Try to access the application using the LB IP as follows:

[root@nas-centos1 k8s-test]# curl http://10.33.30.2/k8s-test/timestamp
1571235898264

Ingress

Ingress exposes HTTP and HTTPS routing from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Reference documents: kubernetes.github.io/ingress-ngi…

We use the Nginx implementation of the Ingress Controller for testing. First download the deployment file:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Edit mandatory.yaml (for example with vim) and add the following:

template:
  spec:
    hostNetwork: true
  • hostNetwork: when enabled, the ingress controller binds directly to ports 80 and 443 of the host.

  • There are several ways to reach the ingress controller: (1) bind it directly to ports 80 and 443 via hostNetwork; (2) expose it on the nodes with a NodePort Service (limited to the fixed NodePort range, 30000-32767 by default); (3) use a LoadBalancer. The latter two expose the controller through a Service and differ only in the Service type; a sketch of the NodePort variant appears after this list.

  • There is only one template block in mandatory.yaml, so it is easy to find.

  • Ingress Controller: a reverse proxy program that watches Ingress resources and applies the reverse-proxy rules they define.

  • Ingress: a set of reverse-proxy rules. Do not confuse the Ingress Controller with the Ingress resource itself.
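
As a minimal sketch of option (2) above, a Service of type NodePort can be placed in front of the controller. The namespace and pod labels below are assumptions based on the mandatory.yaml of this era; verify them against your own deployment file:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx                       # namespace used by mandatory.yaml (assumed)
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx        # controller pod labels assumed from the
    app.kubernetes.io/part-of: ingress-nginx     # mandatory.yaml of this era; verify locally
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30880                            # must be inside the NodePort range (30000-32767 by default)
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
      protocol: TCP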

After publishing the controller (by applying mandatory.yaml), we create an Ingress for testing, which looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /k8s-test
spec:
  rules:
    - http:
        paths:
          - path: /k8s-test
            backend:
              serviceName: k8s-test-service
              servicePort: 80

Try to access:

[root@nas-centos1 ingress]# curl http://10.33.30.94/k8s-test/timestamp
1571321623058

Summary

Typically, we consider combining LoadBalancer with Ingress. On the one hand, using only LBs can become expensive; on the other hand, a large number of LBs also increases maintenance costs. An LB in front of an Ingress can cleanly distinguish services by their paths (the same principle as exposing multiple services through Nginx).
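
To illustrate the idea, a single Ingress behind one LB could route two services by path. The second service and its path below are hypothetical, added only for illustration:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  rules:
    - http:
        paths:
          - path: /k8s-test
            backend:
              serviceName: k8s-test-service       # the service used throughout this article
              servicePort: 80
          - path: /another-app                    # hypothetical second service, for illustration only
            backend:
              serviceName: another-app-service
              servicePort: 80

With this layout, only one LB is consumed no matter how many services sit behind the Ingress.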

If this article is helpful to you, please give a thumbs up (~ ▽ ~)”

References

  • kubernetes.io

Appendix

[1] Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-test
  labels:
    app: k8s-test
spec:
  replicas: 2
  template:
    metadata:
      name: k8s-test
      labels:
        app: k8s-test
        env: test
    spec:
      containers:
        - name: k8s-test
          image: registry.cn-hangzhou.aliyuncs.com/piaoruiqing/k8s-test:0.0.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http-port
              containerPort: 8080
      imagePullSecrets:
        - name: docker-registry-secret
      restartPolicy: Always
  selector:
    matchLabels:
      app: k8s-test

[2] K8sTestApplication.java

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author piaoruiqing
 * @description: k8s test
 * @date: 2019/09/22 10:01
 * @since JDK 1.8
 */
@RestController
@RequestMapping(value = "/k8s-test")
@SpringBootApplication
public class K8sTestApplication {

    /**
     * get timestamp
     * @return current timestamp in milliseconds
     */
    @GetMapping(value = "/timestamp")
    public ResponseEntity<?> getTimestamp() {
        return ResponseEntity.ok(System.currentTimeMillis() + "\n");
    }

    public static void main(String[] args) {
        SpringApplication.run(K8sTestApplication.class, args);
    }
}

Series

  • Build K8S from scratch with official documentation
  • Kubernetes (2): Application deployment
  • How to access services from outside the cluster

Welcome to follow my WeChat official account (Code Like Poetry):

[Copyright Notice]

This article was published on PiaoRuiQing's blog. Non-commercial reprints are allowed, but must retain the original author, PiaoRuiQing, and a link to blog.piaoruiqing.com. For authorization, negotiation, or cooperation, please contact: [email protected].