K8s – Cluster setup

K8s introduction

What is k8s

K8s (Kubernetes) means "helmsman": it steers Docker, that is, it controls the containers that Docker runs.

It plays the same role as Docker Swarm, so it likewise has the concept of a cluster.

Why use k8s

Because when a Docker container fails, Docker itself will not restart it, and once the number of containers is large this becomes unmanageable by hand.

Advantages of Swarm

1. Simple architecture, low deployment and O&M cost

Swarm mode is natively integrated into Docker Engine, so the initial learning cost is low, and users of Docker Engine 1.12 and above can transition smoothly. Swarm services can scale the number of containers up and down dynamically and come with built-in load balancing, and multiple swarm managers can be configured for a solid disaster-recovery mechanism when errors occur.

2. Fast startup

Swarm clusters have only two layers of interaction, and containers start in milliseconds.

Disadvantages of Swarm

1. Cannot provide fine-grained management

Because the Swarm API is compatible with the Docker API, Swarm cannot provide more fine-grained management of the cluster.

2. Network problems

On the network side, Docker containers communicate with the host network through a bridge and NAT by default, which causes two problems. First, because of NAT, external hosts cannot actively reach a container (other than through port mapping). Second, the default bridge IP range is the same on every host, so containers on different hosts can end up with identical IPs and therefore cannot communicate with each other. On top of that, the bridge network delivers only about 70% of the performance of the host network. These problems can of course be solved with other tools, such as Flannel or OVS bridges.

3. Container reliability

In terms of container reliability, Swarm has no mechanism to keep containers running if a container or its host crashes, whereas Kubernetes Replication Controllers can monitor and maintain the life of containers.

Kubernetes advantages:

1. More complete and stable management

Kubernetes cluster management is more complete and stable, and a k8s pod is more powerful than a Swarm service.

2. Sound health-check mechanism

Replication Controllers can monitor containers and keep them alive.
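A quick way to see this self-healing in action (a sketch; the pod name is illustrative and assumes a running Deployment such as the nginx one created later in this article):

kubectl get pods                          # note one pod's name, e.g. nginx-deployment-xxxx
kubectl delete pod nginx-deployment-xxxx  # kill one replica
kubectl get pods -w                       # watch: a replacement pod is scheduled within seconds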

3. Easily copes with complex network environments

By default, Kubernetes uses Flannel as an overlay network.

Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes. Its purpose is to give every host in a Kubernetes cluster a complete subnet of its own.
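For example, once flannel is running (it is installed during the cluster setup below), each host's allocated slice of the pod network can be inspected; the subnet values here are illustrative:

cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16   (the cluster-wide pod network)
# FLANNEL_SUBNET=10.244.1.1/24    (this host's own /24 subnet)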

Kubernetes disadvantages:

1. Complicated configuration and setup, high learning cost

Because the configuration is complex, the learning cost is relatively high, and so is the O&M cost.

2. Slow startup

Kubernetes has five layers of interaction and starts containers in seconds, which is slower than Swarm.

Official website

kubernetes.io/docs/tut…

K8s cluster concept

Comparison of Swarm and k8s concepts (Swarm term / k8s term / analogy):

Node    / Cluster : the company (the workplace)
Manager / Master  : assigns tasks, load balancing
Worker  / Node    : the employee (does the work)

K8s Internal cluster concepts

Comparison of Swarm internals and k8s internals (Swarm term / k8s term / analogy):

Service / Pod        : the project manager
Stack   / Deployment : the project director

K8s Cluster operation concept

Kubeadm: the k8s cluster management component

Kubectl: the client for operating the k8s cluster

Kubelet: runs the containers on each node

Building the k8s cluster

Note

K8s has hardware requirements: at least 2 CPU cores and more than 2G of memory.
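A quick sanity check before starting, using standard Linux commands:

nproc     # must print at least 2 (CPU cores)
free -h   # total memory must be above 2G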

Prerequisites

Docker: used by k8s to run containers

Kubeadm: sets up the k8s cluster

Kubectl: the client for operating the k8s cluster

Kubelet: runs the containers on each node

Steps

Steps 1 to 8 (except step 4) are performed on all nodes.

1. Turn off the firewall and configure passwordless login; almost every tutorial starts with this.

systemctl stop firewalld   # otherwise ports that are not open will keep the k8s cluster from starting

2. Disable SELinux

setenforce 0

3. Disable swap

swapoff -a       # turn swap off immediately
vim /etc/fstab   # comment out the swap line to disable it permanently (k8s cannot start with swap enabled)

4. Add host-name-to-IP mappings and passwordless SSH (this step is performed only on the master). It prepares for transferring files between machines later.

vim /etc/hosts
192.168.235.145 k8s-master
192.168.235.146 k8s-node1

ssh-keygen
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
chmod 600 .ssh/authorized_keys
# generate the key on the master, then copy it to each node
scp -r .ssh root@192.168.235.146:/root

5. Pass bridged IPv4 traffic to the iptables chains

vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
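Writing the file alone is not enough; reload the kernel parameters so the settings take effect:

sysctl --system   # reloads all sysctl configuration, including k8s.conf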

6. Install Docker and synchronize the time

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl start docker
systemctl enable docker
# synchronize the time
yum -y install ntpdate
ntpdate cn.pool.ntp.org

7. Add the Aliyun yum software source

vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

8. Install kubeadm, kubelet, and kubectl

yum makecache fast
yum install -y kubectl-1.18.0 kubeadm-1.18.0 kubelet-1.18.0 --nogpgcheck
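Most guides also enable kubelet at this point so it starts on boot (it will keep restarting until kubeadm init or join runs, which is normal):

systemctl enable kubelet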

9. Deploy the Kubernetes master

Initialize the master (performed on the master).

# the first initialization is slow; be patient
kubeadm init --apiserver-advertise-address=192.168.235.145 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16   # to use the flannel network, the pod CIDR must be set to this value

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status: the first two coredns pods are Pending, i.e. not ready.

kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS   RESTARTS   AGE
kube-system   coredns-9d85f5447-fhdmx         0/1     Pending   0         100d
kube-system   coredns-9d85f5447-x5wfq         0/1     Pending   0         100d
kube-system   etcd-local1                     1/1     Running   0         100d
kube-system   kube-apiserver-local1           1/1     Running   0         100d
kube-system   kube-controller-manager-local1   1/1     Running   0         100d
kube-system   kube-proxy-2trv9                 1/1     Running   0         100d
kube-system   kube-scheduler-local1           1/1     Running   0         100d

Flannel needs to be installed

# install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# after flannel is installed, copy its configuration to the node(s);
# otherwise pods cannot be created because of network problems
scp -r /etc/cni root@192.168.235.146:/etc
scp -r /run/flannel/ root@192.168.235.146:/run

Reinitialize

kubeadm init ... parameters:
--kubernetes-version               the Kubernetes version
--apiserver-advertise-address      the address apiserver listens on
--pod-network-cidr 10.244.0.0/16   the flannel network CIDR
--apiserver-bind-port 6443         the api-server port
--ignore-preflight-errors all      skip the parts already installed (if a problem occurs, fix it, add this, and continue)

Check the cluster status; the master is normal.

[root@local1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@local1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
local1   Ready    master   2m16s   v1.17.3
[root@local1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-9s4mc          1/1     Running   0          16m
kube-system   coredns-9d85f5447-gt2nf          1/1     Running   0          16m
kube-system   etcd-local1                      1/1     Running   0          16m
kube-system   kube-apiserver-local1            1/1     Running   0          16m
kube-system   kube-controller-manager-local1   1/1     Running   0          16m
kube-system   kube-proxy-sdbl9                 1/1     Running   0          15m
kube-system   kube-proxy-v4vxg                 1/1     Running   0          16m
kube-system   kube-scheduler-local1            1/1     Running   0          16m

10. Join the worker nodes

On each worker node, perform steps 1 to 8 first; if step 5 is skipped, the node fails to join.

On the node, execute the join command generated during initialization above.

kubeadm join 192.168.235.145:6443 --token w5rify.gulw6l1yb63zsqsa \
    --discovery-token-ca-cert-hash sha256:4e7f3a03392a7f9277d9f0ea2210f77d6e67ce0367e824ed891f6fefc7dae3c8

# output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
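If the join command printed during initialization has been lost, a fresh one can be generated on the master:

kubeadm token create --print-join-command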

Check on the master

[root@local1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
local1   Ready    master   4m58s   v1.18.3
local2   Ready    <none>   3m36s   v1.18.3

Check on the node

[root@local3 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

# the master's admin.conf needs to be copied over; on the master execute:
scp /etc/kubernetes/admin.conf root@local3:/etc/kubernetes/
# then on the node execute:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
local1   Ready    master   6m36s   v1.18.0
local2   Ready    <none>   31s     v1.18.0
local3   Ready    <none>   5m43s   v1.18.0

11. If a node fails, remove it from the cluster

# execute on the master
kubectl delete node node-1

12. If the token has expired by the time a node is added, a new token can be generated

kubeadm token list   # a generated token is valid for one day by default

# generate a token that never expires
[root@k8s-master ~]# kubeadm token create --ttl 0
W0501 09:14:13.887115   38074 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0501 09:14:13.887344   38074 validation.go:28] Cannot validate kubelet config - no validator is available
vahjcu.rhm7864v6l400188

# compute the discovery hash from the CA certificate
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
4dc852fb46813f5b1840f06578ba01283c1a12748419ba8f25ce2788419ab1c2

# on the worker node, join with the new token and hash
kubeadm join 192.168.0.104:6443 --token vahjcu.rhm7864v6l400188 \
    --discovery-token-ca-cert-hash sha256:4dc852fb46813f5b1840f06578ba01283c1a12748419ba8f25ce2788419ab1c2

Running k8s

Basic commands

Get the nodes

kubectl get node

Get detailed node information

kubectl get node -o wide

Run an nginx pod (at this point the container is only usable inside the pod and cannot be accessed from outside)

kubectl run nginx-pod --image=nginx --port=80

View the nginx pod's information (the events show whether it started successfully or why it failed)

kubectl describe pod nginx-pod

Expose the nginx pod (so the outside world can access it)

kubectl expose pod nginx-pod --port=80 --target-port=80 --type=NodePort

View the exposed nginx pod's service

kubectl get service -o wide
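The PORT(S) column shows a mapping such as 80:3xxxx/TCP; the service is then reachable at that node port on any node's IP. The IP and port below are placeholders:

curl http://192.168.235.145:31000   # placeholder node IP and NodePort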

Replica commands (elastic scaling)

Create the replica set Deployment

Kubectl create command

Create the nginx replica deployment

kubectl create deployment nginx-deployment --image=nginx

View the nginx replica deployment

kubectl get deployment -o wide

Expose the nginx replica deployment

kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort

View the exposed nginx replica deployment's service

kubectl get service -o wide

Dynamically scale the nginx replica deployment

kubectl scale --replicas=3 deployment/nginx-deployment
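To verify the scale-out (the label selector assumes the default app=nginx-deployment label that kubectl create deployment assigns):

kubectl get pods -l app=nginx-deployment -o wide   # should list 3 replicas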

YAML file commands

Nginx replica set deployment

apiVersion: apps/v1                # k8s API version
kind: Deployment                   # resource type
metadata:
  name: nginx-deployment-tony5    # resource name
  labels:                          # resource labels
    app: nginx
spec:                              # resource specifics
  replicas: 3                      # number of replicas
  selector:                        # selector matching the pod template's labels
    matchLabels:
      app: nginx
  template:                        # pod template
    metadata:
      labels:
        app: nginx
    spec:
      containers:                  # the containers to run
      - name: nginx                # container name
        image: nginx               # container image
        ports:
        - containerPort: 80        # container port number
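Assuming the manifest above is saved as nginx-deployment.yaml (the file name is arbitrary), it can be applied and checked like this:

kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment-tony5   # waits until all 3 replicas are ready
kubectl get deployment -o wide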

Nginx exposed service

apiVersion: v1           # API version; the value must appear in kubectl api-versions
kind: Service            # the role/type of resource to create
metadata:
  name: demo             # resource name
  namespace: default     # the namespace it belongs to
  labels:                # labels for the resource
    app: demo
spec:
  ports:
  - port: 8080           # service port
    targetPort: 80       # container port
    protocol: TCP        # protocol
    name: http           # port name
  selector:              # selects the resources to expose: the pods of a deployment, etc.
    app: demo
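Likewise, assuming the service manifest is saved as demo-service.yaml (again an arbitrary file name; the service name demo follows the manifest above):

kubectl apply -f demo-service.yaml
kubectl get service demo -o wide   # shows the ClusterIP serving port 8080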

K8s deployment project

Remark:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.44.3:6443 --token ctc73p.8x7jxvsnv8qvo8kz \
    --discovery-token-ca-cert-hash sha256:b1af5f09a5f4820b73d6640da44d9905e1683c326ede2b672964d08732ad7dd5

kubeadm init --kubernetes-version v1.18.0 \
    --apiserver-advertise-address=123.57.164.54 \
    --pod-network-cidr 10.244.0.0/16 \
    --apiserver-bind-port 6443 \
    --ignore-preflight-errors all

kubeadm init --apiserver-advertise-address=192.168.44.3 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16

kubeadm init --apiserver-advertise-address=123.57.164.54 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
