1 Environment Preparation

1.1 Machine Environment

Each node must have at least 2 CPU cores and at least 2 GB of memory; otherwise Kubernetes will not start.

DNS: configure a DNS server that is reachable from the local network; otherwise name resolution fails and some images cannot be downloaded.

Linux kernel: the kernel must be at least version 4.x, so the stock kernel has to be upgraded (see 1.9).
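A quick way to verify these requirements on each node (standard Linux commands, nothing K8S-specific):

# Check CPU cores (should be >= 2), memory (should be >= 2 GB) and kernel version (should be 4.x or later)
nproc
free -h
uname -r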

hostname    role     IP
kmaster     master   192.168.8.121
knode1      node1    192.168.8.122
knode2      node2    192.168.8.123

1.2 Setting the Hostname

[root@base1 ~]# hostnamectl set-hostname kmaster --static
[root@base2 ~]# hostnamectl set-hostname knode1 --static
[root@base3 ~]# hostnamectl set-hostname knode2 --static

1.3 Network Settings

[root@base1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"       # change DHCP to static
ONBOOT="yes"             # bring the interface up on boot
IPADDR=192.168.8.121     # static IP (192.168.8.122 / 192.168.8.123 on the other nodes)
GATEWAY=192.168.8.2      # default gateway
NETMASK=255.255.255.0    # subnet mask
DNS1=114.114.114.114     # DNS
DNS2=8.8.8.8             # DNS

# reboot for the network settings to take effect
reboot

1.4 Viewing the Host Name

hostname

1.5 Configuring IP Host Mapping

vi /etc/hosts
192.168.8.121 kmaster
192.168.8.122 knode1
192.168.8.123 knode2

1.6 Installing Dependencies

Note: these dependencies must be installed on every machine.

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc

1.7 Installing iptables: start it, enable it at boot, flush the rules, and save the empty rule set as the default

# disable firewall
systemctl stop firewalld && systemctl disable firewalld
# empty iptables
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

1.8 Disabling swap and SELinux

# Turn off swap now and comment it out of /etc/fstab so it stays off permanently
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.9 Upgrading the Linux Kernel to 4.4

# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# Install the kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Set the new kernel (4.4.248-1.el7.elrepo.x86_64) as the default boot entry
grub2-set-default 'CentOS Linux (4.4.248-1.el7.elrepo.x86_64) 7 (Core)'
# Note: the new kernel only takes effect after the server is rebooted
reboot
# Query the running kernel
uname -r
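The menu entry title passed to grub2-set-default depends on the exact kernel version that yum installed; a quick way to list the available titles (a sketch, assuming a BIOS install with the GRUB configuration at /boot/grub2/grub.cfg) is:

# List the GRUB menu entry titles so the right one can be copied into grub2-set-default
grep "^menuentry" /boot/grub2/grub.cfg | cut -d "'" -f 2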

2 install k8s

2.1 Adjusting Kernel Parameters for K8S (kubernetes.conf)

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
Copy the tuned configuration file to /etc/sysctl.d/ so that it is applied at startup
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
# Manually refresh the optimized file to take effect immediately
sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory

Error resolution:

lsmod | grep conntrack
modprobe ip_conntrack
lsmod | grep conntrack
nf_conntrack_ipv4      20480  0
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          114688  1 nf_conntrack_ipv4
sysctl -p /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
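The modprobe above only loads the conntrack module for the current boot; to keep net.netfilter.nf_conntrack_max settable after a reboot, the module can be loaded automatically at startup. A minimal sketch, assuming systemd's /etc/modules-load.d mechanism:

# Load nf_conntrack at boot so sysctl can always find nf_conntrack_max
echo "nf_conntrack" > /etc/modules-load.d/nf_conntrack.conf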

2.2 Adjusting the System Time Zone

Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
Restart services that depend on system time
systemctl restart rsyslog
systemctl restart crond

2.3 Stopping Unnecessary Services

systemctl stop postfix && systemctl disable postfix

2.4 Setting the Log Saving mode

2.4.1 Creating a Directory for Saving Logs

mkdir /var/log/journal

2.4.2 Creating a Directory for Storing configuration Files

mkdir /etc/systemd/journald.conf.d

2.4.3 Creating a Configuration File

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF

2.4.4 Restarting systemd Journald Configuration

systemctl restart systemd-journald

2.4.5 Adjusting the Maximum Number of Open Files (optional)

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf

2.4.6 Prerequisites for Enabling IPVS in kube-proxy

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Use lsmod to check that these modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      20480  0
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          114688  2 ip_vs,nf_conntrack_ipv4
libcrc32c              16384  2 xfs,ip_vs

3 Docker Deployment

3.1 Installing Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker CE repository (/etc/yum.repos.d/docker-ce.repo)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Update yum and install Docker CE
yum update -y && yum install docker-ce

3.2 Setting the Docker Daemon File

Create a /etc/docker directory
mkdir /etc/docker
# Update daemon.json file
cat > /etc/docker/daemon.json <<EOF
{"exec-opts":["native.cgroupdriver=systemd"],"log-driver":"json-file","log-opts":{"max-size":"100m"}}
EOF
# If Docker fails to start, check its logs with: journalctl -amu docker
# Create a directory for systemd drop-in configuration for Docker
mkdir -p /etc/systemd/system/docker.service.d

3.3 Restarting the Docker Service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
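To confirm that Docker picked up the systemd cgroup driver after the restart (the exact output layout varies slightly by Docker version):

# Should report "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"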

4 kubeadm (One-Click K8S Install)

4.1 Yum Repository Mirrors

Domestic mirror (Aliyun):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Official repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

4.2 Installing kubeadm, kubelet, and kubectl (1.20.1)

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# start kubelet
systemctl enable kubelet && systemctl start kubelet
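The command above installs the latest packages available in the repository; to pin the exact 1.20.1 versions used throughout this guide (assuming the repository still carries them), a versioned install can be used instead:

# Install a specific version of the K8S tooling
yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1 --disableexcludes=kubernetes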

5 Preparing the K8S Images

5.1 Pulling Images Online

Generate the default kubeadm.conf file

kubeadm config print init-defaults > kubeadm.conf

Edit kubeadm.conf and change the kubernetesVersion field to v1.20.1.
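A one-line way to make that edit (a sketch, assuming the generated file contains a single kubernetesVersion line):

# Point the config at the 1.20.1 release
sed -i 's/^kubernetesVersion:.*/kubernetesVersion: v1.20.1/' kubeadm.conf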

Download the images

kubeadm config images pull --config kubeadm.conf
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.20.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.20.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.20.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.20.1
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0

docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.1    e3f6fcd87756   11 days ago     118MB
k8s.gcr.io/kube-apiserver            v1.20.1    75c7f7112080   11 days ago     122MB
k8s.gcr.io/kube-controller-manager   v1.20.1    2893d78e47dc   11 days ago     116MB
k8s.gcr.io/kube-scheduler            v1.20.1    4aa0b4397bbb   11 days ago     46.4MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   4 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   6 months ago    45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   10 months ago   683kB

Save the images

mkdir kubeadm-basic.images
cd kubeadm-basic.images
docker save k8s.gcr.io/kube-apiserver:v1.20.1 > apiserver.tar
docker save k8s.gcr.io/coredns:1.7.0 > coredns.tar
docker save k8s.gcr.io/etcd:3.4.13-0 > etcd.tar
docker save k8s.gcr.io/kube-controller-manager:v1.20.1 > kubec-con-man.tar
docker save k8s.gcr.io/pause:3.2 > pause.tar
docker save k8s.gcr.io/kube-proxy:v1.20.1 > proxy.tar
docker save k8s.gcr.io/kube-scheduler:v1.20.1 > scheduler.tar
cd ..
tar zcvf kubeadm-basic.images.tar.gz kubeadm-basic.images

5.2 Offline Images

Link: pan.baidu.com/s/1UAF_-_sG… Extraction code: 548Z

Upload the image package kubeadm-basic.images.tar.gz and import the images in the package to the local image repository

[root@kmaster ~]# ll
total 216676
-rw-------. 1 root root      1391 Dec 22 04:42 anaconda-ks.cfg
drwxr-xr-x  2 root root       142 Dec 30 07:55 kubeadm-basic.images
-rw-r--r--  1 root root 221857746 Dec 30 08:01 kubeadm-basic.images.tar.gz
-rw-r--r--  1 root root       827 Dec 30 07:34 kubeadm.conf
-rw-r--r--  1 root root        20 Dec 30 07:00 kube-images.tar.gz
-rw-r--r--  1 root root       364 Dec 30 03:40 kubernetes.conf
[root@kmaster ~]# ll kubeadm-basic.images
total 692188
-rw-r--r-- 1 root root 122923520 Dec 30 07:54 apiserver.tar
-rw-r--r-- 1 root root  45364736 Dec 30 07:54 coredns.tar
-rw-r--r-- 1 root root 254677504 Dec 30 07:54 etcd.tar
-rw-r--r-- 1 root root 117107200 Dec 30 07:54 kubec-con-man.tar
-rw-r--r-- 1 root root    691712 Dec 30 07:55 pause.tar
-rw-r--r-- 1 root root 120377856 Dec 30 07:55 proxy.tar
-rw-r--r-- 1 root root  47643136 Dec 30 07:55 scheduler.tar

Import the images into the local Docker image repository

When the K8S cluster is initialized, the corresponding images are normally downloaded from Google's registry (GCE), which is slow because the images are large.
#1 Create a shell script, image-load.sh, in any directory
#!/bin/bash
# Note: adjust the path below to the directory where the images were extracted
ls /root/kubeadm-basic.images > /tmp/images-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/images-list.txt)
do
	docker load -i $i
done
rm -rf /tmp/images-list.txt

#2 Make the script executable
chmod 755 image-load.sh

#3 Run the script to import the images
./image-load.sh

#4 Transfer files and images to other nodes
# copy to knode1
scp -r image-load.sh kubeadm-basic.images root@knode1:/root/
# copy to knode2
scp -r image-load.sh kubeadm-basic.images root@knode2:/root/

5.3 Importing the Image Files on the Worker Nodes

knode1 imports the images:

[root@knode1 ~]# ./image-load.sh
Loaded image: k8s.gcr.io/kube-apiserver:v1.20.1
Loaded image: k8s.gcr.io/coredns:1.7.0
Loaded image: k8s.gcr.io/etcd:3.4.13-0
Loaded image: k8s.gcr.io/kube-controller-manager:v1.20.1
Loaded image: k8s.gcr.io/pause:3.2
Loaded image: k8s.gcr.io/kube-proxy:v1.20.1
Loaded image: k8s.gcr.io/kube-scheduler:v1.20.1
[root@knode1 ~]# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.1    e3f6fcd87756   11 days ago     118MB
k8s.gcr.io/kube-apiserver            v1.20.1    75c7f7112080   11 days ago     122MB
k8s.gcr.io/kube-controller-manager   v1.20.1    2893d78e47dc   11 days ago     116MB
k8s.gcr.io/kube-scheduler            v1.20.1    4aa0b4397bbb   11 days ago     46.4MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   4 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   6 months ago    45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   10 months ago   683kB

knode2 imports the images:

[root@knode2 ~]# ./image-load.sh
Loaded image: k8s.gcr.io/kube-apiserver:v1.20.1
Loaded image: k8s.gcr.io/coredns:1.7.0
Loaded image: k8s.gcr.io/etcd:3.4.13-0
Loaded image: k8s.gcr.io/kube-controller-manager:v1.20.1
Loaded image: k8s.gcr.io/pause:3.2
Loaded image: k8s.gcr.io/kube-proxy:v1.20.1
Loaded image: k8s.gcr.io/kube-scheduler:v1.20.1
[root@knode2 ~]# docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.20.1    e3f6fcd87756   11 days ago     118MB
k8s.gcr.io/kube-apiserver            v1.20.1    75c7f7112080   11 days ago     122MB
k8s.gcr.io/kube-controller-manager   v1.20.1    2893d78e47dc   11 days ago     116MB
k8s.gcr.io/kube-scheduler            v1.20.1    4aa0b4397bbb   11 days ago     46.4MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   4 months ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   6 months ago    45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   10 months ago   683kB

6 K8S Deployment

Initializing the master node (this only needs to be performed on the master node)

#1 Generate the default YAML resource file
kubeadm config print init-defaults > kubeadm-config.yaml

#2 Modify the YAML resource file
localAPIEndpoint:
  advertiseAddress: 192.168.8.121   # change to the IP address of the master
kubernetesVersion: v1.20.1          # Note: the version must match the installed kubectl version
networking:
  dnsDomain: cluster.local
  # The pod subnet must match the flannel network
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
# Append the following so that kube-proxy uses IPVS for communication
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: kubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

#3 Initialize the primary node and start deployment
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Note: The number of CPU cores must be greater than 1, otherwise the command cannot be executed successfully
W1230 09:44:35.116411    1495 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"kubeProxyConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=kubeProxyConfiguration
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.121]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.8.121 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.8.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.503909 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7ecfa579dfa66c0ea9c87146aa5130c1692b85a4d16cfc860473064a75c113c5
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7459fa01464531734d3eee182461b77b043d31eff7df2233635654d7c199c947
[root@kmaster ~]#

Reference kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.8.121
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kmaster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: kubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Execute the following commands as instructed by K8S:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Before executing the command:

kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?

After executing the commands:

kubectl get node
NAME      STATUS     ROLES                  AGE     VERSION
kmaster   NotReady   control-plane,master   7m24s   v1.20.1

The node information can now be queried, but the node is in the NotReady state rather than Ready.

The cluster is configured to use IPVS + flannel for network communication, but the flannel plug-in has not been deployed yet, which is why the node status is NotReady.
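A quick way to see the kubelet's own reason for the NotReady state (a sketch; the exact message text varies by version, but it typically reports that the CNI network plug-in is not ready):

# Print the message attached to the node's Ready condition
kubectl get node kmaster -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'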

7 Flannel Plug-in

Deploy the Flannel plugin -- only on the primary node
#1 Download the Flannel plug-in
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# 2 deployment flannel
kubectl create -f kube-flannel.yml
# Alternatively, apply the manifest directly from the URL
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Validation

[root@kmaster ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-5n6zs           1/1     Running   0          15m
coredns-74ff55c5b-r9469           1/1     Running   0          15m
etcd-kmaster                      1/1     Running   0          15m
kube-apiserver-kmaster            1/1     Running   0          15m
kube-controller-manager-kmaster   1/1     Running   0          15m
kube-flannel-ds-n4sbp             1/1     Running   0          89s
kube-proxy-t7bvn                  1/1     Running   0          15m
kube-scheduler-kmaster            1/1     Running   0          15m

8 Adding Nodes

To join the worker nodes to the master, execute the join command recorded in the installation log.
# View log files
cat kubeadm-init.log
# Run the join command on each of the other nodes
kubeadm join 192.168.8.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7459fa01464531734d3eee182461b77b043d31eff7df2233635654d7c199c947
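If the bootstrap token from the log has already expired (the ttl in the configuration above is 24h), a fresh join command can be generated on the master:

# Prints a new "kubeadm join ..." command with a valid token and CA cert hash
kubeadm token create --print-join-command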

knode1

kubeadm join 192.168.8.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7459fa01464531734d3eee182461b77b043d31eff7df2233635654d7c199c947

knode2

kubeadm join 192.168.8.121:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7459fa01464531734d3eee182461b77b043d31eff7df2233635654d7c199c947

9 Verifying Status

[root@kmaster ~]# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
kmaster   Ready    control-plane,master   26m     v1.20.1
knode1    Ready    <none>                 5m37s   v1.20.1
knode2    Ready    <none>                 5m28s   v1.20.1
[root@kmaster ~]# kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
coredns-74ff55c5b-5n6zs           1/1     Running   0          27m     10.244.0.2      kmaster   <none>           <none>
coredns-74ff55c5b-r9469           1/1     Running   0          27m     10.244.0.3      kmaster   <none>           <none>
etcd-kmaster                      1/1     Running   0          27m     192.168.8.121   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          27m     192.168.8.121   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          27m     192.168.8.121   kmaster   <none>           <none>
kube-flannel-ds-...               1/1     Running   0          ...     192.168.8.122   knode1    <none>           <none>
kube-flannel-ds-n4sbp             1/1     Running   0          13m     192.168.8.121   kmaster   <none>           <none>
kube-flannel-ds-rvfbt             1/1     Running   0          7m3s    192.168.8.123   knode2    <none>           <none>
kube-proxy-knhtb                  1/1     Running   0          7m12s   192.168.8.122   knode1    <none>           <none>
kube-proxy-t7bvn                  1/1     Running   0          27m     192.168.8.121   kmaster   <none>           <none>
kube-proxy-vpxqm                  1/1     Running   0          7m3s    192.168.8.123   knode2    <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          27m     192.168.8.121   kmaster   <none>           <none>

10 Checking the Docker and K8S Versions

[root@kmaster ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        831ebea
 Built:             Tue Dec 15 04:37:17 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       f001486
  Built:            Tue Dec 15 04:35:42 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
[root@kmaster ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:00:47Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}


Note:

Docker is at version 20.10.1, built with Go 1.13.15.

K8S is at version 1.20.1, built with Go 1.15.5.