Why Kubernetes is necessary
Once an image is packaged, Docker can run it as a container, turning the packaged service into a running container. Docker containers offer strong isolation while remaining very lightweight, which is the main reason Docker became so popular.
However, a standalone Docker engine and individual container images only solve the packaging and testing problems of a single service. Running production-grade enterprise applications requires a container scheduling and management system. In this picture, Docker acts like the shipping container for system parts, delivering the standardized components of a cloud-native application to each enterprise's endpoints, while the container scheduling and management system is the enterprise application's assembly workshop, assembling, running, and maintaining those parts.
Kubernetes system architecture
A K8s cluster consists of distributed storage (etcd), a Master node, and a group of Nodes.
Kubernetes core technology concepts and API objects
In K8s, only the API Server communicates with the storage back end; all other modules read and update cluster state through the API Server. This has two benefits. First, it secures access to cluster state. Second, it decouples how cluster state is accessed from how the back-end storage is implemented: the API Server is the access interface and does not change when the underlying storage technology (etcd) changes. If etcd were replaced with another storage system, the K8s modules that depend on the API Server would not be affected.
API objects
API objects are the management units of a K8s cluster. Whenever the K8s system gains a new capability or adopts a new technology, a corresponding API object is introduced so that the new feature can be managed through the API.
Each API object has three broad categories of attributes: metadata, specification (spec), and status. The metadata identifies the API object; every object has at least three metadata fields: namespace, name, and UID. In addition, arbitrary labels can be attached to tag and match objects; for example, env=dev, env=testing, and env=production can mark deployments for different environments. The spec describes the desired state of the object in the distributed system; for example, a Replication Controller can set the desired number of Pod replicas to 3. The status reflects the actual state of the system; for example, if only 2 Pod replicas are currently running, the replication controller's logic is to automatically start a new Pod and drive the actual count toward the desired 3.
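As a rough sketch of how these attribute categories appear in a manifest (the object name, namespace, and label values here are only illustrative), the user writes metadata and spec, while status is filled in by the system:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx              # illustrative name
  namespace: default
  labels:
    env: dev                  # label used to tag and match objects
spec:                         # desired state, written by the user
  containers:
  - name: nginx
    image: nginx:1.25
# status:                     # actual state, written back by the system, not the user
#   phase: Running
```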
All configuration in K8s is done through the spec of API objects; that is, users change the system by declaring its ideal state. This is one of K8s's key design ideas: all operations are declarative rather than imperative. Declarative operations are more robust in a distributed system, because there is no harm in an operation being lost or executed multiple times. For example, "set the replica count to 3" gives the same result no matter how many times it runs, whereas "add 1 to the replica count" is imperative and produces the wrong result if it runs more than once.
Pod: A Pod is designed to let multiple containers share a network address and storage, so that they can cooperate through simple, efficient inter-process communication and shared data. Containers in the same Pod communicate with each other over localhost. When a container needs to talk to something outside the Pod, it uses the Pod's shared network resources, such as its ports. All containers in a Pod can access the Pod's shared storage volumes, which lets them share data.
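A minimal sketch of such a Pod, assuming illustrative names and images: two containers share an emptyDir volume, and they could also reach each other over localhost because they share the Pod's network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar       # illustrative name
spec:
  volumes:
  - name: shared-data          # volume shared by all containers in the Pod
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-generator    # writes files that the web container serves
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```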
A Pod is the basis of all workload types in the K8s cluster. It can be thought of as a small robot running in the K8s cluster, and different kinds of workloads need different kinds of robots to carry them out. Currently, workloads in K8s can be divided into long-running, batch, node-daemon, and stateful applications; the corresponding robot controllers are Deployment, Job, DaemonSet, and PetSet, which are introduced one by one later in this article.
Node: A Node is the physical or virtual machine on which application containers run. A kubelet runs on each Node and manages that Node's containers, images, storage volumes, and so on.
Replication Controller (RC): RC is the earliest API object in K8s for keeping Pods highly available. It monitors running Pods and ensures that a specified number of Pod replicas are running in the cluster; the specified number can be one or many. If fewer than the specified number are running, RC starts new Pod replicas; if more are running, RC kills the extras. RC is one of K8s's earlier concepts and is suitable only for long-running service workloads, such as controlling the robots that provide a highly available web service.
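A minimal RC sketch, with illustrative names: the controller keeps exactly three replicas of the Pod template running and selects its Pods with an equality-based label selector:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc                 # illustrative name
spec:
  replicas: 3                  # desired number of Pod replicas
  selector:
    app: web                   # RC only supports equality-based selectors
  template:                    # Pod template used to create new replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```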
Replica Set (RS): RS is the new generation of RC and provides the same high-availability guarantees. The main difference is that RS, being the newer object, supports more selector matching modes. RS objects are rarely used on their own; they normally serve as the desired-state parameter of a Deployment.
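The extra matching modes are set-based selectors. A sketch with illustrative names, where the RS selects Pods whose env label is either dev or testing, something an RC's equality-only selector cannot express:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                 # illustrative name
spec:
  replicas: 3
  selector:
    matchExpressions:          # set-based selector, not available in RC
    - key: env
      operator: In
      values: [dev, testing]
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: web
        image: nginx:1.25
```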
Deployment: A Deployment represents an update operation that a user applies to the K8s cluster. A Deployment is a broader API object than an RS: it can create a new service, update an existing one, or perform a rolling upgrade of a service. A rolling upgrade is actually a composite operation: create a new RS, gradually increase its replica count toward the desired state, and reduce the old RS's replica count to 0. Such a composite operation is hard to describe with a single RS, so the more general Deployment object is used to describe it.
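A sketch of a Deployment with a rolling-update policy, names and limits being illustrative; changing the container image in the template is what triggers the create-new-RS / shrink-old-RS sequence described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate        # replace Pods gradually via a new ReplicaSet
    rollingUpdate:
      maxSurge: 1              # at most one extra Pod during the rollout
      maxUnavailable: 1        # at most one Pod may be unavailable at a time
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # changing this image triggers a rolling upgrade
```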
Services: RC, RS, and Deployment only guarantee the number of Pods backing a service; they do not solve the problem of how to reach those Pods. A Pod is just one instance of a running service: it can be stopped on one node at any time and restarted on another node with a new IP address, so it cannot deliver a service at a fixed IP address and port. Providing a stable service requires service discovery and load balancing. Service discovery finds the back-end instances for the service a client wants to reach; in a K8s cluster, what a client accesses is the Service object. Each Service corresponds to a virtual IP address that is valid inside the cluster, and clients in the cluster reach the Service through this virtual IP. Load balancing for microservices in a K8s cluster is implemented by kube-proxy, the cluster-internal load balancer. It is a distributed proxy server with one instance on every Node. This design scales well: the more Nodes there are that need to reach services, the more kube-proxy instances there are to provide load balancing, and the higher the availability.
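A minimal Service sketch, with illustrative names: the Service gets a cluster-internal virtual IP and forwards traffic on port 80 to whatever Pods currently carry the app=web label, wherever they happen to be running:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                    # illustrative name
spec:
  selector:
    app: web                   # forwards traffic to Pods carrying this label
  ports:
  - port: 80                   # port exposed on the Service's virtual (cluster) IP
    targetPort: 80             # container port on the backing Pods
```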
Job: A Job is the API object K8s uses to control batch tasks. The main difference between batch workloads and long-running service workloads is that a batch workload runs from start to finish, while a long-running service runs indefinitely until the user stops it. Pods managed by a Job exit automatically once the Job completes successfully, according to the user's settings. What counts as successful completion depends on the spec.completions policy: a single-Pod task is complete when one Pod succeeds; a fixed-completion-count task is complete when N Pods have all succeeded; a work-queue task is marked successful based on a global success signal determined by the application itself.
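A sketch of a fixed-completion-count Job, with an illustrative name and workload: the Job is considered successful once five Pods have completed, running at most two at a time:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-batch               # illustrative name
spec:
  completions: 5               # the Job succeeds after 5 Pods complete successfully
  parallelism: 2               # run at most 2 Pods at a time
  template:
    spec:
      restartPolicy: Never     # batch Pods run to completion instead of restarting
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```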
DaemonSet: Long-running and batch services both focus on the business workload itself; some Nodes may run several Pods of the same workload while others run none. Background support services, by contrast, focus on the Nodes (physical or virtual machines) of the K8s cluster: they must guarantee that each Node runs one such Pod. The target Nodes may be all the Nodes in the cluster or a subset selected with a nodeSelector. Typical background support services include the storage, log collection, and monitoring services that must run on every Node to keep the cluster operating.
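A sketch of a DaemonSet for a per-Node log agent; the name, image reference, and nodeSelector label are illustrative. Omitting the nodeSelector would schedule one Pod on every Node; with it, only Nodes carrying the matching label get one:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent              # illustrative name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        disktype: ssd          # optional: restrict to Nodes carrying this label
      containers:
      - name: agent
        image: fluent/fluentd:v1.16-1   # illustrative image reference
```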
Stateful service set (PetSet): RC and RS mainly control stateless services; the Pods they manage have randomly assigned names, and when a Pod fails it is simply discarded and a new Pod is started somewhere else. All that matters is the total number of Pods. A PetSet, in contrast, controls stateful services: each Pod in a PetSet has a fixed, predetermined name that does not change.
Pods in an RC or RS generally do not mount their own independent storage; any state that matters is kept in shared storage, and the Pods themselves are interchangeable, like cattle in a herd. Pods in a PetSet, however, are like pets: each one mounts its own independent storage, and if a Pod fails, a Pod with the same name is started on another node, the original Pod's storage is re-attached, and the Pod continues to serve in its previous state. In essence, a PetSet binds a particular Pod to a particular piece of storage to guarantee continuity of state.
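A sketch using the current name of this controller (PetSet was later renamed StatefulSet in Kubernetes); all names and sizes are illustrative. Each replica gets a stable, ordered name (db-0, db-1, ...) and its own PersistentVolumeClaim, which is re-attached when the Pod is recreated elsewhere:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                     # Pods get stable names: db-0, db-1, ...
spec:
  serviceName: db              # headless Service that gives each Pod a stable identity
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each Pod gets its own independent volume claim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```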
Storage volume (Volume): A storage volume in a K8s cluster is similar to a storage volume in Docker, except that a Docker volume's scope is a single container, whereas a K8s volume's lifecycle and scope are those of a Pod. Volumes declared in a Pod are shared by all containers in that Pod. K8s supports a wide variety of volume types. In particular, it supports storage on several public clouds, including AWS, Google, and Azure; it supports distributed storage such as GlusterFS and Ceph; and it also supports easy-to-use local options such as hostPath directories and NFS. K8s additionally supports a logical storage layer through the PersistentVolumeClaim (PVC), which lets storage consumers ignore the actual storage technology behind it (such as AWS, Google, GlusterFS, or Ceph); configuring the actual storage technology is left to the storage administrator through PersistentVolume objects.
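A sketch of this separation of concerns, with illustrative names and sizes: the application side only declares a claim and mounts it, while which PersistentVolume (and which storage technology) satisfies the claim is decided by the storage administrator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim             # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi             # the consumer only states how much storage it needs
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage       # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim    # the backing storage is chosen by the admin via a PV
```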