What is Kubernetes?
Kubernetes is Google's open-source distributed container management platform, designed to make it easier to manage containerized applications on our servers.
Kubernetes is often abbreviated to K8S. Why this name? Because K and S are the first and last letters of Kubernetes, and there are eight letters between them; since "Kubernetes" is also a bit of a mouthful, the abbreviation K8S is generally used.
Kubernetes is a container orchestration tool, and also a new distributed architecture solution built on container technology. Based on Docker, it provides a whole chain of services, from creating applications > deploying applications > providing services > dynamic scaling > updating applications, which makes container cluster management far more convenient.
Why K8S came about
In the past, if we wanted to set up software on a single server, we had to install everything by hand, one item at a time. That seems acceptable: there is only one machine, and although it is a bit tedious, nothing gets held up.
But as technology developed and business needs grew, a single server could no longer satisfy everyday requirements. More and more companies need a cluster environment and multi-container deployments. If those were still deployed one by one, operations staff would go crazy; they could spend an entire day doing nothing but deploying machines, and sometimes a mistake in a single step means starting all over again, which is truly painful, as shown in the figure below:
Suppose I want to deploy the following machines:

```
3  - Nginx
5  - Redis
7  - ZooKeeper
4  - Tomcat
6  - MySQL
5  - JDK
10 - Servers
```
Deploying these one at a time would drive anyone mad; when would it ever end? And if it were twenty thousand machines, would you hand in your resignation on the spot? This is where K8S helps us: it manages and automates containerized application deployment, reduces repetitive work, and supports automated application deployment and fault self-healing.
K8S also has good support for microservices: the number of replicas of a microservice can be adjusted as the system load changes, and K8S's internal elastic-scaling mechanism for services can cope with sudden traffic spikes.
Container orchestration tool comparison
Docker-Compose
Docker-Compose is used to manage containers, acting as a container manager for the user. When we have N containers or applications that need to be started, doing it all by hand takes a lot of time; with Docker-Compose we only need a single configuration file to do it for us. However, Docker-Compose can only manage Docker containers on the current host and cannot manage services on other servers; in other words, it is a single-machine solution.
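As an illustration of that "single configuration file", here is a minimal sketch of a compose file; the service names, images, and ports are illustrative, not from the original article:

```yaml
# docker-compose.yml: starts two containers on the current host with
# a single command, `docker-compose up -d`
version: "3"
services:
  web:
    image: nginx:latest   # illustrative image choice
    ports:
      - "80:80"           # host port 80 -> container port 80
  cache:
    image: redis:latest   # illustrative image choice
```

Note that everything this file manages still lives on one host, which is exactly the single-machine limitation described above.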
Docker Swarm
Docker Swarm is a tool developed by Docker Inc. to manage Docker containers across a cluster, making up for Docker-Compose's single-node limitation. Docker Swarm can help us start containers and monitor their status; if a container service goes down, it restarts a new container to keep the external service available, and it provides load balancing between services. Docker-Compose supports none of this.
Kubernetes
Kubernetes plays the same role as Docker Swarm; that is, they are responsible for the same part of the container field, though each has its own characteristics. Kubernetes is Google's own product and, after plenty of practice and host-machine experiments, is very mature. Kubernetes has therefore emerged as the leader in container orchestration, with configurability, reliability, and community support surpassing Docker Swarm; as a Google open-source project, it also works with the entire Google Cloud platform.
The responsibilities of K8S
- Automated container deployment and replication
- Scaling containers up or down at any time
- Organizing containers into groups and providing load balancing between them
- Real-time monitoring: fault detection and automatic replacement
Basic concepts of K8S
In the figure below there is a K8S cluster containing three hosts; each square here is a physical or virtual machine. Together, these three machines form a complete cluster, which can be divided into two roles:
- One is the Kubernetes Master node, the manager of the whole cluster. It can manage every node in the cluster, dispatching functions such as container creation, automatic deployment, and automatic publishing to those nodes. All external data is received and distributed by the Kubernetes Master.
- The other is the Node, which can be an independent physical machine or a virtual machine. Inside each Node lives a very important concept unique to K8S: the Pod. The Pod is the most important and most basic concept in K8S.
Pod
- A Pod is the smallest unit that Kubernetes controls; a Pod represents one application process.
- A Pod can be regarded as the "logical host" of a containerized application. It can be understood as a container of containers and may hold multiple containers.
- The multiple container applications in one Pod are usually tightly coupled. Pods are created, started, and destroyed on a Node.
- Inside every Pod runs a special container called the Pause container; the others are called service containers. The service containers share the Pause container's network stack and mounted Volumes.
- The container network inside a Pod is interconnected, and each Pod has its own virtual IP.
- A Pod is a complete deployed application or module, and the containers in a Pod can communicate with each other simply via localhost.
- A Pod's lifecycle is managed by a Replication Controller; it is defined by a template and then assigned to run on some Node. The Pod ends when the containers it holds finish running.
To put it more vividly, we can think of a Pod as a pea pod: the containers are the peas inside, living together as a symbiont.
What exactly is inside a Pod?
- In some small companies, one Pod is a complete application with all kinds of containers installed in it; a Pod might contain Redis, MySQL, Tomcat, and so on. Deploying such a Pod is equivalent to deploying a complete application, and deploying multiple Pods forms a cluster. This is the first way to use a Pod.
- The other way is to serve only one container per Pod; for example, a Pod in which I deploy only Tomcat.
How containers are arranged in Pods is a choice to be made sensibly according to the characteristics of our project and the allocation of resources.
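As a sketch of the second usage pattern (a Pod serving only one container), a minimal Pod manifest might look like this; the names, image tag, and port are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-pod          # illustrative name
  labels:
    app: tomcat
spec:
  containers:
    - name: tomcat
      image: tomcat:9       # illustrative image tag
      ports:
        - containerPort: 8080
```

Applied with `kubectl create -f pod.yaml`, this creates exactly one Pod containing exactly one Tomcat container.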
Pause container:
Its full name is the Infrastructure Container (infra container), and every other container in the Pod is attached to it. The Pause container exists so that the application containers can share the same set of resources:
- PID namespace: different applications in the Pod can see each other's process IDs
- Network namespace: the containers in a Pod share the same IP and port range
- IPC namespace: the containers in a Pod can communicate using System V IPC or POSIX message queues
- UTS namespace: the containers in a Pod share one hostname
- Volumes (shared storage volumes): every container in the Pod can access Volumes defined at the Pod level
Without the Pause container, Nginx and Ghost containers would each need their own IP address and port to communicate with each other. With the Pause container, the Pod can be treated as a whole: Nginx and Ghost can reach each other directly via localhost, differing only in port. This may look simple, but underneath it relies on a lot of low-level networking; interested readers can dig into it on their own.
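The Nginx-and-Ghost scenario above can be sketched as a two-container Pod; the names and images here are illustrative, and Ghost's default port 2368 is assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog-pod            # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
    - name: ghost
      image: ghost:latest
      ports:
        - containerPort: 2368
# Because both containers join the Pause container's network namespace,
# nginx can reach ghost at localhost:2368; only the ports differ.
```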
Service
In Kubernetes, each Pod is assigned its own IP address, but Pods cannot communicate with each other directly. To communicate over the network, they must go through another component: the Service.
A Service is exactly what the name suggests. In K8S, the main job of a Service is to connect Pods across multiple hosts through the Service, so that Pods can communicate with each other normally.
We can think of a Service as a domain name, with the cluster of Pods behind the same Service being different IP addresses. A Service is defined by a Label Selector.
- A Service has a specified name, like a domain name, plus a virtual IP address and port number, and it can only be accessed from inside the cluster. If a Service needs to be accessed from, or provide service to, the outside world, it needs a public IP plus a NodePort, or an external load balancer.
With NodePort, external access is provided by opening a real host port on every Node, so that internal services can be reached from outside through a Node.
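A NodePort Service as described above might be sketched like this (the names and ports are illustrative; nodePort must fall in the cluster's NodePort range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service      # illustrative name
spec:
  type: NodePort
  selector:                 # the Label Selector that picks the backend Pods
    app: tomcat
  ports:
    - port: 8080            # virtual (cluster-internal) port of the Service
      targetPort: 8080      # port of the container inside the Pod
      nodePort: 30080       # real port opened on every Node for external access
```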
Label
Labels are usually attached to various objects in key/value form. A Label is a descriptive tag, and it plays a very important role: when operating on containers, we search and filter by Label for the Pods we want to act on, so that the K8S Master node can find the corresponding Pods to operate on.
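As a sketch, Labels are just key/value pairs in an object's metadata, which selectors elsewhere match against (the keys, values, and names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod             # illustrative name
  labels:                   # Labels attached in KV form
    app: web
    env: prod
spec:
  containers:
    - name: web
      image: nginx:latest
# A Service or Replication Controller then filters for this Pod with a
# Label Selector such as:
#   selector:
#     app: web
#     env: prod
```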
Replication Controller
- This component watches over the number of Pods. For example, suppose we need three Pods with the same properties in the nodes below, but only two exist; when the Replication Controller sees that there are only two, it automatically creates one more copy according to our rules and places it on one of our nodes.
- The Replication Controller can also monitor Pods in real time. If one of our Pods becomes unresponsive, it is removed, and a new one is created automatically when needed.
- Kubernetes selects Pod instances according to the Replicas defined in the RC and monitors their status and number in real time. If the number of instances is less than the defined number of Replicas, Kubernetes creates new Pods from the Pod template defined in the RC and schedules them to run on suitable Nodes, until the desired number of Pod instances is reached.
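The behavior described in these points can be sketched as a Replication Controller manifest; the names and images are illustrative:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-rc           # illustrative name
spec:
  replicas: 3               # desired number of Pod instances
  selector:                 # Pods are counted and managed via this Label
    app: tomcat
  template:                 # Pod template used when a new replica is needed
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9   # illustrative image tag
          ports:
            - containerPort: 8080
```

If one of the three Pods dies, the controller notices the count has dropped below `replicas: 3` and creates a replacement from the template.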
The overall architecture of K8S
- Kubernetes divides the machines in a cluster into a Master node and a group of working Nodes.
- The Master node runs a set of cluster-management processes: etcd, API Server, Controller Manager, and Scheduler. The last three components make up the overall control center of Kubernetes; these processes implement management functions for the whole cluster, such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction, all fully automatically.
- A Node runs three components, kubelet, kube-proxy, and Docker, which are what actually carry out the K8S technical solution. They are responsible for managing the lifecycle of the Pods on that Node and for implementing the service-proxy function.
A user submits a request to create a Replication Controller via kubectl, and the request is written into etcd through the API Server. The Controller Manager, listening through the API Server, hears about the creation event; after analysis it finds that there is no corresponding Pod instance in the current cluster, so it quickly creates a Pod object from the Replication Controller's template definition and writes it into etcd, again through the API Server.
When the Scheduler discovers this, it immediately runs a complex scheduling process to pick a Node for the new Pod to settle on, then writes the result into etcd through the API Server. The kubelet process running on that Node detects the new "baby" Pod through the API Server, starts the Pod according to its spec, and takes care of it for the rest of its life, until the end of the Pod's lifecycle.
Then we submit, via kubectl, a request to create a Service that maps to this Pod. The Controller Manager queries the associated Pod instances by Label and generates the Service's Endpoints, then writes them into etcd through the API Server. The kube-proxy processes running on all Nodes query and listen, via the API Server, for the Service object and its corresponding Endpoints, and build a software load balancer to forward traffic from the Service to the backend Pods.
kube-proxy: a proxy for multi-host communication. The cross-host, cross-container network communication of a Service is technically implemented through kube-proxy: the Service groups Pods logically, and the underlying communication is carried by kube-proxy.
kubelet: carries out K8S instructions on the Node. It is responsible for the lifecycle management (creation, modification, monitoring, deletion, and so on) of the Pods on the current Node, and it also regularly "reports" the Node's status information to the API Server.
etcd: used to persist all resource objects in the cluster. The API Server provides encapsulated interface APIs for operating on etcd; these APIs are basically interfaces for operating on resource objects and watching for resource changes.
API Server: the entry point for operating on resource objects. Other components must go through the API Server to operate on resource data; through "full queries" plus "change watching" of the relevant resource data, the related business functions can be completed in real time.
Scheduler: the scheduler, responsible for scheduling Pods onto cluster Nodes.
Controller Manager: the internal management and control center of the cluster, mainly implementing Kubernetes's automated fault detection and recovery. For example, Pod replication and removal, Endpoints object creation and updating, and Node discovery, management, and status monitoring are all done by the Controller Manager.
Conclusion
That wraps up our explanation of the basics of K8S. Compared with Docker alone, K8S has far more mature functionality. If you liked it, remember to like and follow.
What else would you like to know, or what questions do you have, about K8S? Feel free to leave a message and tell me.
I am Xiao Nong, an ordinary worker. If you found the content of this article helpful, remember to like, comment, and share; your support is Xiao Nong's biggest motivation.
Fear not the endlessness of truth; every inch of progress brings its own joy. Keep it up, everyone!