Docker is easy to use, but once you face large clusters running thousands of containers, it quickly shows its limits.

This is where our protagonist, Kubernetes, enters the stage. We will first go through the basic concepts of Kubernetes and then move on to practice, step by step from shallow to deep.

For the basic concepts of Kubernetes, we will focus on the following seven points:

1. Management pain points of Docker

If you want to apply Docker to large-scale business systems, difficult orchestration, management, and scheduling problems arise. We therefore urgently need a management system that can manage Docker and its containers in a more advanced and flexible way.

Kubernetes came into being! Kubernetes, a noun of Greek origin, means "helmsman" or "pilot". Google open-sourced the Kubernetes project in 2014, building on more than a decade of Google's experience running production workloads at scale and incorporating the best ideas and practices from the community.

K8s is short for Kubernetes: the 8 stands for the eight letters "ubernete". We will use this abbreviation below.

2. What is K8s?

K8s is a portable, extensible, open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. K8s has a large and rapidly growing ecosystem. K8s services, support, and tools are widely available.

With K8s we can:

- Deploy applications rapidly
- Scale applications rapidly
- Roll out new application features seamlessly
- Save resources and optimize the use of hardware resources

K8s has the following features:

- Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
- Extensible: modular, pluggable, mountable, composable
- Automated: automatic deployment, automatic restart, automatic replication, automatic scaling/expansion

3. Cloud architecture and cloud native

What is the relationship between the cloud and K8s?

A cloud is a network of service clusters built from containers; it consists of a large number of containers. K8s is used to manage the containers in the cloud.

There are several common types of cloud architecture:

- On-premises deployment: everything is deployed and run locally.
- IaaS (Infrastructure as a Service): users rent (or purchase, or are granted access to) cloud hosts and do not need to worry about networking, DNS, or the hardware environment; the operator provides the network, storage, DNS, and similar services, which are known as infrastructure services.
- PaaS (Platform as a Service): managed middleware such as MySQL/ES/MQ/…
- SaaS (Software as a Service): ready-to-use applications such as DingTalk or financial-management software.
- Serverless: no server to manage. From the user's point of view, you simply consume the cloud service; the underlying environment and software stack the service runs on are not your concern.

It is foreseeable that the future of service development is serverless, with enterprises either building their own private cloud environments or using public clouds.

Cloud native

Solutions that allow applications (projects, services) to run on the cloud are called cloud native.

Cloud native has the following characteristics:

- Containerization: all services must be deployed in containers
- Microservices: web services adopt a microservice architecture
- CI/CD: continuous integration and continuous delivery
- DevOps

4. K8s architecture principles

K8s architecture diagram

Broadly speaking, the K8s architecture is one Master managing a group of Nodes.

Let’s take a look at the Master and Node in the K8s architecture diagram one by one.

Master node structure

- apiserver: the gateway of K8s; all command requests must pass through the apiserver.
- Scheduler: uses scheduling algorithms to place requested resources onto a suitable Node.
- Controller: maintains the K8s resource objects.
- etcd: stores the resource objects.

Node structure

- kubelet: every Node runs a kubelet; resource operations on the Node are carried out by the kubelet.
- kube-proxy: a proxy service that handles load balancing between services.
- Pod: the basic (smallest) unit managed by K8s; containers live inside Pods. K8s does not manage containers directly, it manages Pods.
- Docker: the container engine providing the basic container runtime environment.
- Fluentd: the log collection service.

Introducing the K8s architecture brought in a lot of technical terms. Don't worry: get the overall picture first, then break it down. Read on patiently and I believe you will come away with something new.

5. K8s core components

K8s components

- K8s manages containers, but it does not operate on containers directly; its smallest unit of operation is the Pod (containers are managed indirectly).
- One Master manages a group of Nodes. The Master does not run containers; it is responsible for scheduling, network management, the controllers, and storing resource objects.
- Containers run on the Nodes; a Node hosts one or more containers.
- The kubelet is responsible for maintaining the local Pods.
- kube-proxy is responsible for load balancing across multiple Pods.

What is a Pod?

A Pod can itself be viewed as a container: it is built on top of Docker containers and is used to encapsulate containers, a kind of virtualized group. A Pod is equivalent to a standalone host and can encapsulate one or more containers. A Pod has its own IP address and hostname, and is equivalent to an independent sandbox environment.

What exactly is a Pod for?

Typically, when deploying services, a Pod is used to manage a group of related services: either a single service or a set of related services can be deployed in one Pod.

A set of related services means services that sit on the same call chain.
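As a rough illustration of this idea (the names, images, and ports below are assumptions for the sketch, not taken from this article), a Pod that bundles a web service with a closely related helper container might be declared like this:

```yaml
# pod-example.yaml -- hypothetical Pod bundling two related containers.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
  labels:
    app: web
spec:
  containers:
    - name: web               # main service container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: helper            # related helper on the same call chain
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]  # placeholder; a real helper would do useful work
```

Both containers share the Pod's IP address and can reach each other via localhost.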

How is web service clustering implemented?

To build a service cluster, you only need to run multiple replicas of the Pod; this is where K8s management shines. To scale out, you simply increase the number of Pods, and scaling in works on the same principle.

How do the Pod's underlying network and data storage work?

Before the service containers inside a Pod are created, a pause container is created first. Service containers inside the Pod access each other via localhost, which is just like accessing a local service and therefore offers very high performance.

ReplicaSet: the replica controller

The ReplicaSet controls the number of Pod replicas (the "service cluster") so that it always matches the desired count. When a Pod goes down, the replica controller immediately creates a new Pod, keeping the number of replicas at the configured value.

The replica controller uses a label selector to pick out the set of related Pods (its own services) that it should maintain, for example:

```yaml
selector:
  app: web
  release: stable
```

ReplicationController is the older replica controller and supports only simple (equality-based) label selectors, while ReplicaSet supports composite (set-based) selectors. In current versions, ReplicaSet is the recommended replica controller and ReplicationController is no longer used.
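As a sketch of such a replica controller (the name and image are assumptions for illustration), a ReplicaSet that keeps three replicas of the Pods selected above might look like this:

```yaml
# replicaset-example.yaml -- hypothetical ReplicaSet keeping 3 web Pods running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                  # desired number of Pod replicas
  selector:
    matchLabels:               # same labels as the selector shown above
      app: web
      release: stable
  template:                    # Pod template used to create replacement Pods
    metadata:
      labels:
        app: web
        release: stable
    spec:
      containers:
        - name: web
          image: nginx:1.25    # illustrative image
```

If one of the three Pods dies, the ReplicaSet immediately creates a replacement from the template.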

The Deployment object

A Deployment is the architecture model for deploying services and supports rolling updates. The ReplicaSet controller keeps the number of Pod replicas constant, but project requirements are constantly iterated and new versions are released continuously. When the version changes, how do we update the service?

Deployment model:

- ReplicaSet does not support rolling updates; the Deployment object does, and it is usually used together with ReplicaSet.
- The Deployment manages the ReplicaSet: on an update it creates a new ReplicaSet, which in turn creates the new Pods. A minimal Deployment sketch is shown below.
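A minimal Deployment sketch, assuming a simple web workload (the image, replica count, and surge settings are illustrative, not from the article):

```yaml
# deployment-example.yaml -- hypothetical Deployment with a rolling update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # replace Pods gradually when the version changes
    rollingUpdate:
      maxUnavailable: 1        # at most one Pod down during the update
      maxSurge: 1              # at most one extra Pod created during the update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # bumping this tag triggers a rolling update
```

Changing the image tag and re-applying the manifest causes the Deployment to create a new ReplicaSet and roll the Pods over to it.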

What are the problems with deploying MySQL in a container?

- A container has a lifecycle; once it goes down, its data is lost.
- Deployed as a Pod, the Pod also has a lifecycle, and the data is lost in the same way.
- So in K8s you cannot use a Deployment to deploy a stateful service.

Normally, Deployment is used to deploy stateless services; for stateful services, use StatefulSet instead.

What is a stateful service?

A stateful service needs to store real-time data. If an instance is taken offline and rejoins the cluster network after a period of time, its data may be out of sync or unavailable, and the cluster service is affected.

What is a stateless service?

A stateless service does not need to store real-time data. If an instance is removed from the cluster and rejoins the network after a period of time, the cluster service is not affected.

StatefulSet

StatefulSet exists to solve the problem of deploying stateful services in containers. It is the deployment model for stateful services: StatefulSet guarantees that a Pod's hostname does not change when the Pod is recreated, so the Pod can keep its data associated with that hostname.
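As a sketch (the headless Service name, image, credentials, and storage size are assumptions for illustration only), a StatefulSet for a MySQL-style stateful service might look like this:

```yaml
# statefulset-example.yaml -- hypothetical StatefulSet for a stateful service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless   # assumed headless Service; gives stable hostnames like mysql-0
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "example"  # illustrative only; a real setup would use a Secret
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:          # per-Pod persistent storage that survives Pod recreation
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Even if the Pod mysql-0 is recreated, it keeps the hostname mysql-0 and re-attaches the same volume, which is how the data stays associated with the hostname.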

6. Service registration and discovery of K8s

What is the structure of a Pod?

A Pod can be thought of as a container of containers: it has its own IP address and hostname, uses namespaces for resource isolation, and forms an independent sandbox environment. Inside the Pod are the containers; a Pod can encapsulate one or more containers (usually a set of related containers).

Pod networking

- A Pod has its own independent IP address.
- Containers inside a Pod access each other via localhost.
- Communication between Pods is remote access.

How does a Pod provide external service access?

A Pod is a virtual resource object (process) that does not have a corresponding entity (physical machine, physical network adapter) and cannot directly provide service access.

So how do we solve this problem?

If a Pod wants to provide external services, it must be bound to a physical machine port. This means opening a port on the physical machine and mapping the port to the Pod port so that packets can be forwarded through the physical machine.

In general, clients access the physical machine's IP + port, and the machine then forwards the packets to the Pod.
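One standard K8s mechanism for this kind of port binding (a sketch under assumed port numbers and labels, not something spelled out in this article) is a Service of type NodePort, which opens the same port on every Node and forwards traffic to the matching Pods:

```yaml
# nodeport-example.yaml -- hypothetical NodePort Service exposing the web Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web                # traffic is forwarded to Pods carrying this label
  ports:
    - port: 80              # cluster-internal Service port
      targetPort: 80        # container port inside the Pod
      nodePort: 30080       # port opened on the physical machine (Node)
```

A client can then reach the service at any Node's IP address on port 30080.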

How do we load-balance access across a set of related Pod replicas?

Let's start from the fact that a Pod is a process with a lifecycle: new Pods are created whenever there is an outage or a version update. When that happens, the IP address changes and the hostname changes, so a statically configured Nginx is not suitable for load balancing here.

So we need to rely on the capabilities of the Service.

How does a Service implement load balancing?

Briefly, a Service resource object consists of the following three parts:

- Pod IP: the IP address of a Pod
- Node IP: the IP address of the physical server
- Cluster IP: a virtual IP; the Service object abstracted by K8s is a VIP (virtual IP) resource object

Like a Pod, a Service is a virtual (process-level) object, so a Service by itself cannot serve the public Internet either. A Service and its Pods can communicate with each other directly; their communication is LAN communication. After a request is handed to the Service, the Service uses iptables or IPVS to distribute the packets.

How is a Service object associated with a Pod?

- Different businesses have different Services.
- A Service is associated with Pods through a label selector.
- For example, the selector app=x picks out a set of order-service Pods, and a Service is created for them.
- The Endpoints object stores the set of Pod IPs.
- In short, the Service selects a set of related replicas via the label selector, and then the Service is created (a minimal sketch follows).
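A minimal sketch of such a Service, reusing the app=x selector from the text (the Service name and ports are assumptions):

```yaml
# service-selector-example.yaml -- hypothetical ClusterIP Service for the order Pods.
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: ClusterIP           # virtual IP, reachable only inside the cluster
  selector:
    app: x                  # label selector picking the related Pod replicas
  ports:
    - port: 8080            # port exposed on the Service's cluster IP
      targetPort: 8080      # container port inside the Pod
```

K8s then maintains a matching Endpoints object that lists the IPs of the selected Pods.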

When a Pod goes down and a new version is released, how does the Service find out that the Pod has changed?

A kube-proxy runs on each Node and watches for these changes. When a Pod changes, the corresponding IP mapping (stored in etcd) is updated dynamically.

7. Key issues

What do enterprises mainly use K8s for?

As an automated operations platform: startups and small and medium-sized enterprises use K8s to build an automated operations platform that maintains the number of service instances automatically, keeping the actual state consistent with the expected state so that the service is always available. The immediate benefit is lower cost and higher efficiency.

To make full use of server resources: Internet companies own a large number of physical servers, and to use them fully they build a private cloud environment with K8s and run their projects in that cloud. This is especially important for large Internet companies.

Seamless migration of services: during development, product requirements keep iterating, which means projects keep releasing new versions, and K8s can move a project seamlessly from the development environment to production.

How is load balancing for a K8s Service implemented?

Containers in pods can fail and die for any number of reasons. Controllers such as Deployment keep the overall application robust by dynamically creating and destroying pods. In other words, the Pod is fragile, but the application is robust. Each Pod has its own IP address. When the controller replaces the failed Pod with a new Pod, the new Pod is assigned a new IP address.

This raises the question: If a set of PODS provides a service (such as HTTP) externally, and their IP is likely to change, how do clients find and access that service?

The solution offered by K8s is the Service. A Kubernetes Service logically represents a set of Pods, which are selected by labels.

A Service has its own IP and this IP is immutable. The client only needs to access the IP of the Service, and K8s is responsible for establishing and maintaining the mapping between the Service and the Pod. No matter how the backend Pod changes, it has no impact on the client because the Service does not change.

How are stateless services typically deployed?

Deployment provides a declarative way to define Pods and ReplicaSets and is typically used to deploy stateless services.

Main roles of Deployment:

- Define a Deployment to create Pods and a ReplicaSet
- Rolling upgrades and rollbacks of the application
- Scaling the application out and in
- Pausing and resuming a rollout

A Deployment can not only roll out updates; if the service is unavailable after upgrading to version V2, it can also roll back quickly to version V1.
