Kubernetes

1. Background

1. Changes in deployment mode

  • Traditional deployment era:

    • Run the application on a physical server
    • Resource boundaries cannot be defined for the application
    • This leads to resource allocation problems

For example, if you run multiple applications on a physical server, one application may consume most of the resources, causing the performance of the other applications to degrade. One solution is to run each application on a different physical server, but this does not scale: resources are underutilized, and maintaining many physical servers is expensive.

  • Virtualization Deployment era:

    • Virtualization was introduced as a solution
    • Virtualization technology allows you to run multiple virtual machines (VMs) on a single physical server
    • Virtualization isolates applications from one another between VMs and provides a degree of security
    • Information in one application cannot be arbitrarily accessed by another application
    • Virtualization makes better use of the resources of a physical server
    • Because applications can be easily added or updated, you get better scalability, lower hardware costs, and so on
    • Each VM is a complete computer that runs all its components, including its own operating system, on top of virtualized hardware

Disadvantage: the extra virtualization layer wastes resources and degrades performance.

  • Container deployment era:

    • Containers are similar to VMs, but they share the operating system (OS) among applications
    • Containers are therefore considered lightweight
    • Like a VM, a container has its own file system, CPU share, memory, process space, and so on
    • Because they are decoupled from the underlying infrastructure, they can be ported across clouds and OS distributions
    • See Docker's isolation principles: namespace isolation (resource isolation) and cgroups (resource limits)

Bare metal: A real physical server

Container advantages:

  • Agility: Agile application creation and deployment: Improved ease and efficiency of container image creation compared to using VM images.
  • Timeliness: Continuous development, integration, and deployment: Supports reliable and frequent container image builds and deployments with quick and easy rollback (due to image immutability).
  • Decoupling: Focus on the separation of development and operations: application container images are created at build/release time, not deployment time. Thus separating the application from the infrastructure.
  • Observability: Observability displays not only operating system-level information and metrics, but also application health and other metrics signals.
  • Cross-platform: Environmental consistency across development, test, and production: runs on portable computers as well as in the cloud.
  • Portability: Portability across cloud and operating system distributions: Runs on Ubuntu, RHEL, CoreOS, native, Google Kubernetes Engine, and anywhere else.
  • Simplicity: Application-centric management: Raising the level of abstraction from running the OS on virtual hardware to running applications on the OS using logical resources.
  • Large distribution: Loosely coupled, distributed, resilient, liberated microservices: Applications are broken down into smaller, independent parts that can be deployed and managed dynamically – rather than run as a whole on a single, large machine.
  • Isolation: Resource isolation: Predictable application performance.
  • Efficiency: Resource utilization: high efficiency and high density

Before K8s:

10 servers: 25+15 middleware

After K8S:

10 servers: Hundreds of applications.

K8s manages the 10+ servers and handles the resource planning.

2. Containerization

  • Resilient containerized application management
  • Strong failover capability
  • High-performance load balancing access mechanism
  • Convenient extensions
  • Automated resource monitoring

Docker Swarm: Docker's tool for large-scale container orchestration

Mesos: from Apache

Kubernetes: from Google

Among these rival products, Kubernetes won.

3. Why Kubernetes

Containers are a great way to package and run applications. In a production environment, you need to manage the containers in which your applications are running and make sure they don’t go down. For example, if one container fails, you need to start another container. Would it be easier if the system handled this behavior?

This is how Kubernetes solves these problems! Kubernetes provides you with a framework to run distributed systems resiliently: a service orchestration framework on top of Linux.

Kubernetes will take care of your scaling requirements, failover, deployment patterns, etc. For example, Kubernetes can easily manage the system’s Canary deployment.

Kubernetes offers you:

  • Service discovery and load balancing

    Kubernetes can expose containers using DNS names or their own IP addresses, and if there is a lot of traffic coming into the container, Kubernetes can load balance and distribute network traffic to make the deployment stable.

  • Storage orchestration

    Kubernetes allows you to automatically mount storage systems of your choice, such as local storage, public cloud providers, etc.

  • Automatic deployment and rollback

    You can use Kubernetes to describe the desired state of your deployed containers, and it changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.

  • Automatic bin packing

    Kubernetes allows you to specify the CPU and memory (RAM) required for each container. When the container specifies resource requests, Kubernetes can make better decisions to manage the container’s resources.

  • Self-healing

    Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to user-defined health checks, and does not advertise them to clients until they are ready to serve.

  • Secret and configuration management

    Kubernetes allows you to store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.

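Several of these capabilities meet in a single Deployment manifest. The following is a hedged sketch (every name, image, and value here is illustrative, not taken from the text): `replicas` gives load-balanced copies, `resources.requests` feeds the bin-packing scheduler, and the `livenessProbe` drives self-healing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # hypothetical name
spec:
  replicas: 3                     # desired state: three load-balanced Pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo
        image: tomcat:8           # any containerized application
        ports:
        - containerPort: 8080
        resources:
          requests:               # used by the scheduler for bin packing
            cpu: "250m"
            memory: "256Mi"
        livenessProbe:            # a failing check triggers a self-healing restart
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 10
```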

For orchestrating containerized applications at scale in production, such an automated framework is necessary.

4. Market share

1. Containerization

docker swarm

2. Service Orchestration

Google created Kubernetes and then launched the CNCF; numerous CNCF projects assist Kubernetes. Kubernetes + other CNCF software = an entire large cloud platform.

2. Introduction

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem. Kubernetes’ services, support and tools are widely available.

The name Kubernetes comes from Greek and means “helmsman” or “pilot”. Google open-sourced the Kubernetes project in 2014. Kubernetes builds on Google’s decades of experience running production workloads at scale, combined with the best ideas and practices from the community.

1. What Kubernetes is not

  • Kubernetes is not a traditional, all-encompassing PaaS (platform as a service) system.
  • Kubernetes runs at the container level, not the hardware level
  • It provides some of the universal features common to PaaS products, such as deployment, scaling, load balancing, logging, and monitoring.
  • However, Kubernetes is not a monolithic system and the default solutions are optional and pluggable. Kubernetes provides the foundation for building a developer platform, but retains user choice and flexibility where it is important.

Kubernetes:

  • There are no restrictions on the types of applications supported. Kubernetes is designed to support an extremely wide variety of workloads, including stateless, stateful, and data-processing workloads. If the application can run in a container, it should run just fine on Kubernetes.
  • Does not deploy source code and does not build applications. Continuous integration, delivery, and deployment (CI/CD) workflows depend on the culture and preferences of the organization as well as its technical requirements.
  • Does not provide application-level services as built-in services, such as middleware (e.g., messaging middleware), data-processing frameworks (e.g., Spark), databases (e.g., MySQL), caches, or clustered storage systems (e.g., Ceph). Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms (e.g., open service brokers).
  • Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as a proof of concept, plus mechanisms for collecting and exporting metrics.
  • Does not provide or require a configuration language/system (such as Jsonnet). It provides a declarative API that can be targeted by arbitrary forms of declarative specifications: a RESTful API, driven by the YAML files you write.
  • No comprehensive machine configuration, maintenance, management, or self-repair system is provided or adopted.
  • In addition, Kubernetes is more than just an orchestration system; it actually eliminates the need for orchestration. The technical definition of orchestration is executing a defined workflow: first A, then B, then C. In contrast, Kubernetes consists of a set of independent, composable control processes that continuously drive the current state toward the provided desired state. How you get from A to C does not matter, and no centralized control is required; this makes the system easier to use and more powerful, robust, resilient, and extensible.
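The control-loop idea in the last bullet can be sketched in a few lines of plain shell (a toy, not Kubernetes code): the loop does not care how the current state got where it is, it only keeps nudging it toward the desired state.

```shell
# Toy reconciliation loop: continuously drive current state toward desired state.
desired=3   # replicas we want
current=0   # replicas we currently have

while [ "$current" -ne "$desired" ]; do
  if [ "$current" -lt "$desired" ]; then
    current=$((current + 1))   # "start" a replica
    echo "scaled up to $current"
  else
    current=$((current - 1))   # "stop" a replica
    echo "scaled down to $current"
  fi
done

echo "reconciled: current=$current desired=$desired"
```

The same loop works no matter where `current` starts, which is exactly why declarative reconciliation is more robust than a fixed A-then-B-then-C workflow.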

Container steward:

A computer has many applications installed, so QQ Computer Housekeeper manages them (automatic cleanup, uninstalling useless things, and so on).

A machine has many containers running, so Kubernetes acts as the steward of the containers (container start and stop, failover, load balancing, and so on).

Kubernetes installation

1. Cluster principle

Clusters come in two flavors:

Master/slave:

  • master/slave synchronization/replication, e.g. a MySQL primary and a MySQL secondary
  • the master manages the slaves

Sharding (data cluster):

  • every node is equal
  • each node stores a part of the data

1. Master-node architecture

11000 machines

Landlord + laborers:

The land (the machines)

The laborers (who work the machines)

Master: the primary node (the landlord). There can be several (like a company with multiple stakeholders).

Node: a worker node. There are many of them, and they do the real application work.

How do the master and the workers interact?

What does the master decide for the workers?

The workers only talk to the master (through its API); each node does its own work.

A programmer can use a UI or CLI to operate the master of the K8s cluster and learn the status of the entire cluster.

2. Working principle

Master node (Control Plane): the master node controls the entire cluster.

Core components on the master node:

  • Controller Manager: the controller manager
  • etcd: a key-value database (comparable to Redis)
  • Scheduler: the scheduler
  • API Server: the API gateway (all control passes through api-server)

Node (worker node):

  • kubelet (the supervisor): a component that must be installed on every node
  • kube-proxy: the network proxy

Deploying an application:

Programmer: calls the CLI to tell the master that we now want to deploy a Tomcat application.

  • All programmer calls go first to the master node's gateway, api-server. It is the only entrance to the master (the C layer in the MVC pattern).
  • The received request first reaches the master's api-server, which hands it to the controller-manager for handling.
  • The controller-manager implements the application deployment.
  • The controller-manager generates a deployment record, e.g. tomcat --image:tomcat6 --port 8080; no application is deployed yet.
  • The deployment information is recorded in etcd.
  • The Scheduler takes the application to be deployed from etcd and starts scheduling, working out which node fits.
  • The Scheduler puts the computed scheduling information back into etcd.
  • The kubelet on each node watches the master at all times (sending requests to api-server to continuously obtain the latest data); every node's kubelet takes its orders from the master.
  • Suppose node2's kubelet finally receives the command to deploy.
  • That kubelet runs the application on its machine, reports the application's status to the master at all times, and an IP is assigned.
  • Nodes and the master are connected through the master's api-server.
  • The kube-proxy on each machine knows the entire network of the cluster. Whenever a node calls someone else, or someone calls a node, the kube-proxy network agent on that node automatically computes the route and forwards the traffic.


No matter which machine you visit, you can reach the real application (the Service).

3. Principle decomposition

1. Master node

Quick introduction:

  • The master also runs kubelet and kube-proxy
  • Front-end access (UI/CLI)
  • kube-apiserver
  • scheduler
  • controller-manager
  • etcd
  • Every node needs kubelet + kube-proxy + Docker (the container runtime)

2. Working Node

Quick introduction:

  • Pod:

    • docker run starts a container; the container is the basic unit of Docker, and one application is one container

    • kubelet runs and starts an application called a Pod; the Pod is the basic unit of k8s

      • A Pod is a repackaging of one or more containers
      • Analogy: a stable facade over a changing implementation, like slf4j (facade) over log4j (the implementation class)
      • Likewise: application == Pod == docker container(s)
      • A single container often does not represent a complete basic application, e.g. a blog (PHP + MySQL combined)
      • So k8s provides the Pod, which can contain multiple containers; a Pod represents one basic application
      • iPod (movies, music, games): one basic product, atomic
      • Pod (music container, movie container): one basic unit, atomic
  • kubelet: the supervisor. It interacts with the master's api-server and starts/stops the applications on the current machine. On the master it is the master's little assistant; on every machine the real work is done by this kubelet.

  • kube-proxy:

  • Other:
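The blog example above (PHP + MySQL working together as one basic application) can be sketched as a multi-container Pod manifest. This is a hedged illustration only; every name, image, and value is made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blog                     # hypothetical Pod name
spec:
  containers:                    # one Pod, two cooperating containers
  - name: php
    image: php:7.4-apache        # serves the blog pages
  - name: mysql
    image: mysql:5.7             # stores the blog data
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example             # illustration only; use a Secret in practice
```

Because both containers share the Pod's network namespace, the PHP container can reach MySQL on localhost.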

2. Component interaction principle

How do we get k8s to deploy a Tomcat?

The kubelet on every node, together with the scheduler and the controller-manager, is always watching the master's api-server for event changes.

1. The programmer uses the command-line tool kubectl: kubectl create deploy tomcat --image=tomcat8

2. kubectl sends the command to api-server, and api-server saves the creation information to etcd.

3. etcd reports an event to api-server: "someone just saved a record with me" (deploy Tomcat [deploy]).

4. The controller-manager, listening on api-server, picks up the (deploy Tomcat [deploy]) event.

5. The controller-manager handles the (deploy Tomcat [deploy]) event and generates the Pod deployment information.

6. The controller-manager delivers the Pod information to api-server, which saves it to etcd.

7. etcd reports the [Pod information] event to api-server.

8. The scheduler listens specifically for [Pod information], takes its content, and computes which node is suitable to deploy this Pod on.

9. The scheduler sends the post-scheduling information (node: node-02) to api-server, which saves it to etcd.

10. etcd reports the event [Pod scheduling information (node: node-02)] to api-server.

11. The kubelet on every node listens for [Pod scheduling information] events; all of them obtain the [Pod scheduling information (node: node-02)] event from api-server.

12. Each node's kubelet judges whether the event is its own business; node-02's kubelet finds that it is.

13. node-02's kubelet starts this Pod and reports all the startup information to the master.

3. Installation

1. Overview

Installation methods:

  • Binary mode (recommended for production environments)

  • MiniKube, etc.

  • kubeadm bootstrap mode (officially recommended)

    • GA (generally available)

Overall flow:

  • Prepare N servers with intranet connectivity

  • Install the Docker container environment [note: newer K8s versions discard dockershim]

  • Install Kubernetes

    • Install the core components on all three machines: kubeadm (the bootstrap tool that creates the cluster), kubelet, and kubectl (the command line for the operator)
    • kubelet can directly start the original core components (api-server, etc.) by running them as containers
    • Bootstrap the cluster with kubeadm

2. Execution

1. Prepare the machine

  • Three servers that can reach each other on the intranet, with public IP addresses configured; CentOS 7.8/7.9. For basic experiments, three 2C4G machines are sufficient.
  • Do not leave each machine's hostname as localhost; use k8s-01, k8s-02, k8s-03, etc.

2. Install the prerequisite environment (execute on all machines)

1. Base environment
```shell
#####################################################################
# Close the firewall. On a cloud server you instead need a security
# group policy that permits the required ports:
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld

# Change the hostname, then check it
hostnamectl set-hostname k8s-01
hostnamectl status
echo "127.0.0.1 $(hostname)" >> /etc/hosts

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to check bridged traffic
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
# Enable br_netfilter, then confirm it is loaded
sudo modprobe br_netfilter
lsmod | grep br_netfilter

## Modify the configuration: pass bridged IPv4 traffic to the iptables chain
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf

# If the keys are not present yet, append them instead
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf

# Apply the settings
sysctl -p
```
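As a quick, read-only sanity check (safe to run on any Linux box with procfs, no root needed), you can read back the keys the script above sets:

```shell
# Print the effective value of the IPv4 forwarding key; should be 1 after sysctl -p
cat /proc/sys/net/ipv4/ip_forward

# The bridge keys only exist once br_netfilter is loaded; check without failing:
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null || echo "br_netfilter not loaded"
```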
2. Docker environment
```shell
# Remove any old docker packages
sudo yum remove -y docker*
sudo yum install -y yum-utils

# Configure docker's yum source (Aliyun mirror)
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker 19.03.9
yum install -y docker-ce-3:19.03.9-3.el7.x86_64 docker-ce-cli-3:19.03.9-3.el7.x86_64 containerd.io

# Or equivalently:
yum install -y docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9 containerd.io

# Start docker and enable it on boot
systemctl enable docker --now

# Configure a registry mirror accelerator
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

3. Install the K8S core (execute on all machines)

```shell
# Configure the K8S yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove any old versions
yum remove -y kubelet kubeadm kubectl

# List the installable versions
yum list kubelet --showduplicates | sort -r

# Install kubelet, kubeadm, and kubectl at a pinned version
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0

# Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
```

4. Initialize the master node

```shell
############ Download the core images ############
kubeadm config images list   # check which images are needed

########## Save the following as images.sh ##########
#!/bin/bash
images=(
  kube-apiserver:v1.21.0
  kube-proxy:v1.21.0
  kube-controller-manager:v1.21.0
  kube-scheduler:v1.21.0
  coredns:v1.8.0
  etcd:3.4.13-0
  pause:3.4.1
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
########## end of script ##########
chmod +x images.sh && ./images.sh

### Note: the coredns image for k8s 1.21.0 is special; the Aliyun mirror
### requires an extra tag:
docker tag registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:v1.8.0 \
  registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns/coredns:v1.8.0

######## kubeadm init on ONE master ########
######## kubeadm join on the other workers ########
kubeadm init \
  --apiserver-advertise-address=10.170.11.8 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.21.0 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

# Note on pod-cidr and service-cidr:
# CIDR = Classless Inter-Domain Routing, a reachable network range.
# The pod subnet range, the service (load-balancing) subnet range, and the
# machines' own subnet range must not overlap.

###### Continue as prompted by kubeadm init ######
## To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

## Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf

### You should now deploy a pod network to the cluster, e.g. calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

### Check:
kubectl get pod -A
kubectl get nodes

### Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27Pyr4exy2wz4 \
  --discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
```

5. Initialize the worker node (worker execution)

```shell
kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27Pyr4exy2wz4 \
  --discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20

## The token expires after 24 hours; generate a new join command on the master:
kubeadm token create --print-join-command

## Or create a token that never expires:
kubeadm token create --ttl 0 --print-join-command
# Example output:
# kubeadm join --token y1eyw5.ylg568kvohfdsfco --discovery-token-ca-cert-hash sha256:6c35e4f73f72afd89bf1c8c303ee55677d2cdb1342d67bb23c852aba2efc7c73
```

6. Verify the cluster

```shell
# Get all cluster nodes (the Pod is the containerized application running on them)
kubectl get nodes

### Label a node as a worker
kubectl label node k8s-02 node-role.kubernetes.io/worker=''

### Remove the label
kubectl label node k8s-02 node-role.kubernetes.io/worker-

## If a machine in the k8s cluster restarts, it rejoins the cluster automatically;
## if the master restarts, the control plane recovers and the cluster reassembles automatically.
```

7. Set the IPVS mode

Service access in the whole k8s cluster goes through kube-proxy. The default mode is iptables, and kube-proxy keeps the iptables rules synchronized across the cluster; the ipvs mode scales better for many services.

```shell
# Edit the kube-proxy ConfigMap and change "mode" to "ipvs"
kubectl edit cm kube-proxy -n kube-system

#   ipvs:
#     excludeCIDRs: null
#     minSyncPeriod: 0s
#     scheduler: ""
#     strictARP: false
#     syncPeriod: 30s
#   kind: KubeProxyConfiguration
#   metricsBindAddress: 127.0.0.1:10249
#   mode: "ipvs"

### After changing the kube-proxy configuration, the old kube-proxy pods must be
### killed so that they restart with the new configuration
kubectl get pod -A | grep kube-proxy
kubectl delete pod kube-proxy-pqgnt -n kube-system
### Once the pod is recreated, kube-proxy takes effect in ipvs mode
```