This is the 8th day of my participation in the August More Text Challenge

1. Evolution of application deployment modes

  1. Changes in application deployment
    • Traditional deployment: In the early days of the Internet, applications were deployed directly on physical machines

      Advantages: simple, no other technology required. Disadvantages: resource usage boundaries cannot be defined for applications, it is difficult to allocate computing resources reasonably, and programs can easily interfere with one another

    • Virtualized deployment: multiple VMs can run on one physical machine, and each VM is an independent environment

      Advantages: program environments do not affect each other, providing a degree of security. Disadvantages: each VM adds its own operating system, which wastes some resources

    • Containerized deployment: Similar to virtualization, but with a shared operating system

      Advantages:

      1. You can ensure that each container has its own file system, CPU, memory, process space, and so on
      2. The resources needed to run the application are wrapped in containers and decoupled from the underlying infrastructure
      3. Containerized applications can be deployed across cloud services and across Linux operating system distributions

  2. Problems with containerized deployment: container orchestration issues, such as container scheduling, deployment, cross-node access, and automatic scaling
  3. What a container orchestration engine needs: the ability to handle these orchestration issues automatically across a cluster

  4. Common container orchestration engine tools
    • Kubernetes: Google’s open-source container orchestration tool. Its goal is to eliminate the burden of orchestrating physical or virtual compute, network, and storage infrastructure, so that application operators and developers can focus on container-centric applications while optimizing cluster resource utilization. Kubernetes uses concepts such as Pod and Label to combine containers into interdependent logical units: related containers are grouped into Pods, which are then deployed and scheduled together to form services. This is also the biggest difference between Kubernetes and the other two scheduling systems.
    • Docker Swarm: Docker’s own product. It can schedule Docker containers directly and uses standard Docker API semantics to give users a seamless experience. Swarm is more developer-oriented, but its fault-tolerance support is weak.
    • Mesos: a distributed resource management platform that provides a framework registration mechanism. A framework must have a Framework Scheduler module, responsible for scheduling tasks within the framework, and a Framework Executor, responsible for starting and running those tasks. Mesos needs to be used in conjunction with Marathon.

2. Kubernetes profile

In essence, Kubernetes is a cluster of servers that runs specific programs on each node to manage the containers on that node. To automate resource management, it provides the following functions:

  • Self-healing: once a container crashes, a new container can be started quickly (in about a second)
  • Elastic scaling: The number of running containers in a cluster can be automatically adjusted as required
  • Service discovery: A service can find the services it depends on through automatic discovery
  • Load balancing: If a service starts multiple containers, the load balancing of requests can be implemented automatically
  • Version rollback: If problems are found with a newly released program version, you can immediately roll back to the original version
  • Storage orchestration: You can automatically create storage volumes based on container requirements
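Several of these functions come together in a Deployment. The following is a minimal sketch, not from the original text: the name web, the nginx image tag, and the replica count are all illustrative values.

```yaml
# Hypothetical Deployment manifest; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # elastic scaling: adjust this (or attach an autoscaler) to change the container count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # version rollback: reverting this tag rolls back to the previous version
```

If one of the three replicas crashes, the cluster starts a replacement automatically (self-healing); `kubectl scale` changes the replica count, and `kubectl rollout undo` performs a version rollback.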

3. Kubernetes components

A Kubernetes cluster is mainly composed of control nodes (Master), worker nodes (Node), and Addons, with different components installed on each type of node.

  • Master: the control plane of the cluster, responsible for cluster decision-making (management).
    1. ApiServer: the sole entry point for resource operations. It receives user commands and provides authentication, authorization, API registration, and discovery mechanisms
    2. Scheduler: responsible for scheduling cluster resources, assigning Pods to the appropriate nodes according to the configured scheduling policies
    3. Controller-manager: maintains cluster state, handling program deployment, fault detection, automatic scaling, rolling updates, etc.
    4. Etcd: stores information about the cluster's resource objects (including the current cluster status and configuration)
  • Node: The data plane of the cluster that provides the environment for the container to run (work)
    1. Kubelet: responsible for maintaining the container life cycle, i.e. creating, updating, and destroying containers by controlling the container engine (e.g. Docker)
    2. KubeProxy: provides service discovery and load balancing within the cluster
    3. Container engine: responsible for container operations on nodes (Docker, RKT…)
  • Addons: use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because they provide cluster-level features, the Kubernetes resources used by Addons are placed in the kube-system namespace.
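As a sketch of what an addon looks like, the DaemonSet below runs one agent on every node in the kube-system namespace. The name example-addon and the image are hypothetical placeholders, not a real addon.

```yaml
# Hypothetical addon DaemonSet; name and image are illustrative only.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-addon
  namespace: kube-system     # cluster-level addons live in the kube-system namespace
spec:
  selector:
    matchLabels:
      name: example-addon
  template:
    metadata:
      labels:
        name: example-addon
    spec:
      containers:
      - name: agent
        image: example/addon-agent:1.0   # placeholder image
```

Real addons such as the cluster DNS server or a network plugin follow this pattern: ordinary Kubernetes resources, deployed into kube-system.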

4. Kubernetes concepts

  • Master: the control node of a cluster. Each cluster requires at least one Master node to manage and control the cluster
  • Node: a workload node. The Master assigns containers to these worker nodes, and the container engine on each Node is responsible for running them. A Node is the host on which Pods actually run; it can be a physical machine or a virtual machine. To manage Pods, each Node must run at least a container runtime (such as Docker or rkt), the kubelet, and the kube-proxy service.
  • Pod: the basic scheduling unit of Kubernetes. Kubernetes manages containers through Pods, and each Pod can contain one or more tightly related containers
  • Controller: controllers manage Pods, for example starting, stopping, and scaling them. There are many types of controllers, each suited to different scenarios.
  • Service: an abstraction of an application service that provides load balancing and service discovery for applications via labels. A Service is the unified entry point through which a set of Pods is exposed, and it can front multiple Pods of the same class.
    • The IPs and ports of the Pods matching the Service's label selector form its endpoints, and kube-proxy load-balances requests to the Service IP across these endpoints.
    • Each Service is automatically assigned a cluster IP (a virtual address accessible only within the cluster) and a DNS name. Other containers can access the Service through this address or DNS name without needing to know anything about the back-end containers.
  • Label: a key/value pair attached to a Kubernetes object to identify and classify it. Labels are used to classify Pods; Pods of the same class carry the same labels.
  • NameSpace: an abstract collection of resources and objects, used to isolate the runtime environments of Pods.
  • Container: a portable, lightweight operating-system-level virtualization technology. It uses namespaces to isolate different software runtime environments and packages the runtime environment of the contained software into an image, making the container easy to run anywhere.
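The Pod, Label, and Service concepts above can be tied together in one sketch. All names, labels, and ports below are illustrative, not from the original text.

```yaml
# Hypothetical Pod and Service; names, labels, and ports are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx            # Label: a key/value pair that classifies this Pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx            # matches Pods carrying this label; their IP:port pairs become the endpoints
  ports:
  - port: 80              # the Service's cluster-internal port
    targetPort: 80        # the Pod port that traffic is forwarded to
```

With no type specified, the Service defaults to ClusterIP: it receives a virtual in-cluster address and a DNS name, and kube-proxy balances requests to it across every Pod labeled app: nginx.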