The content to be shared includes:

  • K8s demo deployment example
  • Basic architecture and principles of K8s
  • K8s resource objects
  • K8s network model
  • Conclusion

K8s is like the cargo ship in the picture above: it manages all kinds of containers (in both the shipping sense and the software sense)

I. K8s demo deployment example

Notes

  • Get a feel for K8s with a Hello World program
  • The same program is deployed on the host, in a Docker container, and in K8s, so the differences can be compared
  • The code looks something like this:
    @RestController
    public class K8sDemoController {

        @GetMapping("/hello")
        public String hello() {
            return "hello k8s demo.";
        }
    }

1. Running on the host

  1. Use mvn to compile and package the code into a jar
  2. Run java -jar k8s-demo.jar &
  3. Visit http://10.1.69.101:8080/hello in a browser to access the service

2. Running in a Docker container

  1. Use mvn to compile and package the code into a jar
  2. Build the jar into a Docker image
  3. Run docker run --name k8s-demo -d -p 8080:8080 k8s-demo:0.0.1-SNAPSHOT
  4. Visit http://10.1.69.101:8080/hello in a browser to access the service

3. Running in K8s

This step just gives you a sense of what the YAML looks like; don't worry about the details of the file, which are covered later in the resource objects section. For now, get a general idea of the basic process of deploying a service to K8s.

  1. Use mvn to compile and package the code into a jar
  2. Build the jar into a Docker image
  3. Write a YAML file describing how to run the service
  4. Run kubectl apply -f k8s-demo.yaml
  5. Visit www.k8s-demo.com/hello in a browser to access the service
  • YAML file contents:
# The file name is k8s-demo.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
      - name: k8s-demo
        image: k8s-demo:0.0.1-SNAPSHOT
        ports:
          - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  type: NodePort
  selector:
    app: k8s-demo
  ports:
   - protocol: TCP
     port: 8888
     targetPort: 8080
     nodePort: 30003

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  rules:
  - host: www.k8s-demo.com
    http:
      paths:
      - path: /hello
        backend:
          serviceName: k8s-demo
          servicePort: 8888
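
All three resources above are created in the spring-test namespace. If that namespace does not exist yet, it has to be created first. A minimal sketch (the file name is only an example):

# Example file name: spring-test-namespace.yaml
# Creates the spring-test namespace referenced by the manifests above
apiVersion: v1
kind: Namespace
metadata:
  name: spring-test

Apply it with kubectl apply -f spring-test-namespace.yaml before applying k8s-demo.yaml.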

4. Summary of the three deployment modes

  • High-definition view of the three deployment modes

To keep the deployment simple, the demo code does not use Redis, but Redis is included in the example diagram

5. The deployment process keeps getting more complex, so why not deploy directly on the host?

5.1 What advantages do containers have over hosts?

  • Portability: a container provides a basic wrapper around the application it runs and can be deployed on any cloud that supports containers
  • Efficiency: a container needs only an image to start, and startup time is very short
  • Isolation: a host often has many services installed with conflicting dependencies; a container runs only one service
  • Version control: it is easy to trace differences between versions and to roll back quickly by simply replacing the image version, without the complex procedures required on a host
  • Low cost: containers are small and lightweight and do not require as many resources as a host or virtual machine

5.2 What problems does K8s solve?

  • Automatic scheduling: how to manage and schedule containers once their number grows large
  • Distributed solution: nodes can be scaled horizontally, and containers can easily be scaled out and in
  • Self-healing: failure detection and automatic recovery

We said that a container provides a basic wrapper around a single application and makes it portable. In the figure above, the host deployment uses an Nginx reverse proxy in the same role that the Ingress plays in the K8s deployment. In other words, K8s makes an entire distributed application portable.

II. Basic architecture and principles

  • K8s architecture diagram (simplified version)

  • K8s data flow diagram

1. Master node components

apiServer

  • Provides the single entry point for operations on resources, and handles API registration, discovery, authentication, and access control

etcd

  • A key-value database
  • Stores the state of the entire cluster

controller-manager

  • Responsible for maintaining cluster state, for example auto-scaling, failure detection, and rolling updates
  • Key components for cluster automation

scheduler

  • Responsible for resource scheduling
  • Schedules Pods that have not yet been assigned a node onto appropriate nodes

2. Node components

kubelet

  • Responsible for container lifecycle management, such as creation and deletion
  • Manages volumes and networking

kube-proxy

  • Provides service discovery and load balancing for Services

Container Runtime

  • The runtime environment for containers
  • Docker by default; other container runtimes are also supported

III. Resource objects

Overview

  • Most concepts in K8s, such as Node, Pod, and Service, can be viewed as resource objects
  • Describing a resource: a YAML or JSON file
  • Operating on a resource: objects can be created, deleted, updated, and queried through kubectl (or the API)
  • Storing a resource: the information is persisted in etcd

K8s automates control by continuously comparing a resource's “actual state” with the “desired state” recorded in etcd
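
As a concrete illustration of what such a description looks like, here is a minimal sketch using a ConfigMap, a simple resource type that is not part of the demo; the name and contents are made up:

# Minimal resource object sketch (illustrative, not part of the demo)
apiVersion: v1              # API version the object belongs to
kind: ConfigMap             # the resource type
metadata:                   # identifying information: name, namespace, labels
  name: demo-config
  namespace: spring-test
data:                       # resource-specific content; for a ConfigMap, plain key-value pairs
  greeting: "hello k8s demo."

Every object follows the same apiVersion / kind / metadata / body shape, which is why the resources below can be learned by analogy.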

  • Overview diagram of common resources

1. Pod

  • The Pod is the most important and most basic resource in K8s
  • A Pod is a layer of abstraction wrapped around one or more containers
  • The Pod is the basic unit of scheduling (not the Docker container)
  • Each Pod contains a special root container, Pause, plus one or more business containers
  • Each Pod has a unique IP address, and containers inside a Pod communicate with each other via localhost

Why introduce the Pod?

  1. When a group of containers is treated as a unit, it is hard to judge the state of, and act on, the group as a whole. Adding the business-independent Pause container as the root container solves this: its state stands in for the state of the whole Pod
  2. It simplifies communication and resource sharing between closely associated containers
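
Unlike Deployment, Service, and Ingress below, the original demo does not include a standalone Pod manifest, so here is a minimal sketch of one, assuming the same image and namespace as the demo:

# Minimal Pod sketch (not part of the original demo); assumes the demo image and namespace
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo-pod
  namespace: spring-test
  labels:
    app: k8s-demo
spec:
  containers:
  - name: k8s-demo
    image: k8s-demo:0.0.1-SNAPSHOT
    ports:
    - containerPort: 8080

In practice Pods are rarely created directly like this; the Deployment below creates and manages them for you.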

2. Deployment

  • Implements automatic orchestration of Pods: creation, deletion, scaling out, and scaling in
  • replicas controls the number of Pods, and template defines the Pod to be created
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
      - name: k8s-demo
        image: k8s-demo:0.0.1-SNAPSHOT
        ports:
          - containerPort: 8080

3. Service

  • When a Pod fails, it may be rescheduled to another machine, and its IP address then changes, so accessing the service directly by Pod IP is not reliable

3.1 Overview

  • One of the core resources in K8s, similar to a “microservice” in a microservices architecture
  • The front-end application accesses the Service through its entry address, and the Service connects to its backend Pods through a label selector, even if the Pods' IPs have changed
  • kube-proxy is responsible for forwarding requests to the backend Pods and for load balancing
  • The ClusterIP does not change during the Service's lifetime, so the address exposed to callers stays the same
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  type: NodePort
  selector:
    app: k8s-demo
  ports:
   - protocol: TCP
     port: 8888        # port exposed on the Service's ClusterIP
     targetPort: 8080  # port the container inside the Pod listens on
     nodePort: 30003   # port exposed on every Node (only for type NodePort)

4. Ingress

A Service provides access at the IP:port level, that is, it works at the TCP/IP layer. It cannot cover the HTTP layer, where different URLs need to be mapped to different backend services.

  • Ingress provides load distribution at the HTTP layer
  • An Ingress can route different requests, by host and path, to different backend Services
  • An Ingress definition only works together with an Ingress Controller; the two combined provide the complete function
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-demo
  namespace: spring-test
spec:
  rules:
  - host: www.k8s-demo.com
    http:
      paths:
      - path: /hello
        backend:
          serviceName: k8s-demo
          servicePort: 8888

4.1 Ingress Controller definition

  • You can use the Ingress Controller provided by a public cloud
  • You can also use the Ingress Controller provided by Google, which runs as a Pod and works as follows:
    • Listen on apiserver to get the definition of ingress
    • Generate the contents of the nginx configuration file based on the ingress definition
    • Run nginx -s reload to reload the configuration

4.2 Ingress definition

  • Create a YAML file of type Ingress
  • Configure spec.rules to map a hostname and URL path to a backend Service

IV. K8s network model

To understand how the Hello World service is actually accessed, we need to understand the K8s network model. Before that, let's first look at the Docker network model.

1. Docker network model

Docker network model

  • When Docker first starts, it creates a virtual bridge named docker0
  • A subnet is allocated to docker0
  • For each container Docker creates, it creates a pair of veth devices: one end is attached to the bridge, the other is placed inside the container using Linux network namespaces, and an IP address from the subnet is assigned to the container's eth0 device

2. Limitations of Docker network

  • The Docker network model, which favors simplicity, does not provide a solution for multi-host networking
  • Containers on the same machine can communicate directly, but containers on different machines cannot
  • To communicate across nodes, ports must be allocated on the host address and traffic forwarded or proxied through those ports to the container
  • Allocating and managing these port mappings is particularly difficult, especially when scaling horizontally

3. Overview of K8S network model

3.1 Principles of K8S network model:

  • Every Pod has a unique, independent IP address; this is called the IP-per-Pod model
  • All Pods are in one flat, connected network
  • Whether or not they are on the same Node, Pods can communicate directly by IP
  • A Pod can be treated like an independent physical or virtual machine

Native Docker and Kubernetes alone cannot provide container-to-container communication across nodes. To support this model, a third-party network plugin such as Flannel is required.

3.2 Reasons for designing this principle:

  • Users do not need to worry about how to establish connections between Pods
  • Users do not need to worry about mapping container ports to host ports
  • It is compatible with applications that previously ran directly on hosts or in VMs

3.3 Differences between IP-per-Pod and Docker Port Mapping

  • Mapping Docker ports to the host introduces the complexity of port management
  • With Docker, the IP and port a service is actually reached on differ from the ones the application exposes, which complicates configuration

4. Detailed explanation of K8S network model

K8s network implementation

4.1 Communication between containers

  • Containers in the same Pod share the same network namespace and the same Linux protocol stack (see the sketch after this list)
  • As if on the same machine, accessible through localhost
  • Can be analogous to the situation of different applications on a physical machine
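
A minimal sketch of what this looks like in practice (not part of the original demo; the sidecar image and command are illustrative): two containers in one Pod, where the second reaches the first over localhost.

# Hypothetical two-container Pod: both containers share one network namespace,
# so the sidecar can call the app on localhost:8080
apiVersion: v1
kind: Pod
metadata:
  name: demo-localhost
  namespace: spring-test
spec:
  containers:
  - name: app
    image: k8s-demo:0.0.1-SNAPSHOT   # listens on 8080
    ports:
    - containerPort: 8080
  - name: sidecar
    image: busybox:1.28
    command: ["sh", "-c", "while true; do wget -q -O- http://localhost:8080/hello; sleep 10; done"]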

4.2 Communication between POD and POD

Communication between Pods on the same Node
  • All Pods on the same Node are connected to the same docker0 bridge through veth pairs and share the same address segment, so they can communicate directly
Communication between Pods on different Nodes
  • The docker0 subnet and the host's physical network are different network segments, so Pods on different Nodes cannot communicate directly
  • Different Nodes can only reach each other through the hosts' physical network interfaces
  • As mentioned above, the K8s network model requires any two Pods to be able to communicate, so Pod IPs must not clash; this means the docker0 subnets have to be planned carefully when K8s is deployed
  • The cluster also has to track which Node each Pod's IP address lives on
  • For this, a number of open-source components exist that enhance Docker and K8s networking

5. The open-source network component Flannel

5.1 What it does

  • Helps K8s assign non-conflicting IP addresses to the Docker containers on each Node
  • Establishes an overlay network between these addresses, so that packets can be delivered unchanged to the target container

5.2 How it works

  • Flannel creates a bridge device called flannel0
  • One end of the flannel0 bridge connects to the docker0 bridge, and the other end connects to the flanneld process
  • On one side, flanneld connects to etcd, using it to manage the allocated IP address ranges, watch Pod addresses, and build a routing table from Pods to Nodes (see the sketch after this list)
  • On the other side, flanneld links docker0 to the physical network; together with the routing table, this delivers packets between Pods
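
To make "uses etcd to manage the allocated IP address ranges" concrete, here is an illustrative sketch of the small network configuration document that flanneld reads; the subnet and backend type are assumptions (the backend matches the UDP default mentioned below), not values from any real setup:

# Illustrative flannel network configuration (the value flanneld reads from etcd);
# the Pod network CIDR is an example and must not overlap the host network
{
  "Network": "10.244.0.0/16",
  "Backend": { "Type": "udp" }
}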

5.3 Drawbacks

  • Introducing extra network components adds network latency and overhead
  • By default it uses UDP as the underlying transport protocol, and UDP itself does not guarantee reliable delivery

Conclusion

  • This article started with a demo deployed in different environments, to give a first-hand feel for how K8s is used. Along the way, it is worth asking what problems K8s was created to solve: at its core, it provides a platform responsible for the automatic scheduling and orchestration of containers
  • With that intuition in place, the basic architecture of K8s was introduced and how its components interact; recalling the architecture diagram while you use K8s will deepen your understanding
  • Once the basic architecture is understood, deploying services to K8s requires some knowledge of the common resource objects, so the main ones were introduced next; the other resource objects in K8s can be learned by analogy
  • After a service is deployed to K8s, how do you verify and expose it? Really understanding how K8s exposes services takes some understanding of its network model, so the last part focused on the K8s network model

Reference

  • The Definitive Guide to Kubernetes