1. Environment preparation

1. Server environment

Each node must have at least 2 CPU cores; otherwise K8s cannot start its DNS component. Set the node's DNS to the DNS server used by the local network. The Linux kernel must be version 4 or later, so the kernel needs to be upgraded first. Prepare three virtual machine environments, or three Alibaba Cloud servers:

k8s-master01: the runtime environment for the k8s master
k8s-node01: the runtime environment for a k8s node
k8s-node02: the runtime environment for a k8s node

2. Dependency environment

1. Set hostname for each machine
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

# Host mapping: vi /etc/hosts
192.168.140.128 k8s-master01
192.168.140.140 k8s-node01
192.168.140.139 k8s-node02
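As a quick sanity check, the mapping format can be exercised on a throwaway file before editing the real /etc/hosts (the temp file and the grep pattern below are illustrative only): each line must be an IPv4 address, whitespace, then the hostname.

```shell
# Write the three entries to a temp file and count the well-formed lines
# (the real target is /etc/hosts; this copy is just for checking).
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.140.128 k8s-master01
192.168.140.140 k8s-node01
192.168.140.139 k8s-node02
EOF
grep -Ec '^([0-9]{1,3}\.){3}[0-9]{1,3}[[:space:]]+k8s-' "$hosts"   # prints 3
rm -f "$hosts"
```

A missing space between the IP and the hostname (as easily happens when pasting) would make the count come up short.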
2. Install the dependency packages. Note: every machine needs these dependencies
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc

3. Disable the firewall and SELinux
# Stop firewalld and use iptables instead
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
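The sed expression rewrites the SELINUX line in place; a minimal sketch of it against a temp copy (an illustrative path, not the real /etc/selinux/config):

```shell
# Apply the SELINUX edit to a temp copy first to confirm the expression
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
cat "$cfg"   # prints SELINUX=disabled
rm -f "$cfg"
```

The config change only takes full effect after a reboot; setenforce 0 is what switches SELinux to permissive mode for the current session.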
4. Upgrade the Linux kernel to version 4.4 or later
# Install the elrepo kernel repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# Install the lt (long-term support) kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
# View the kernel entries available at boot
cat /boot/grub2/grub.cfg | grep menuentry
# View the currently configured default boot kernel
grub2-editenv list
# Set the default boot kernel; the entry name must match one shown in grub.cfg
grub2-set-default 'CentOS Linux (5.7.7-1.el7.elrepo.x86_64) 7 (Core)'
# Reboot, then verify the running kernel version
uname -r
5. Adjust kernel parameters for K8S
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
# Copy the tuned kernel file to /etc/sysctl.d/ so it is applied on every boot
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
# Refresh manually so the optimized settings take effect immediately
sysctl -p /etc/sysctl.d/kubernetes.conf
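sysctl -p reports an error for any malformed line (for example a stray space inside a key name), so a quick format check before copying the file can help; a sketch on a temp file, where the key pattern is an assumption about what valid sysctl keys look like:

```shell
# Count lines that are NOT of the form key=value; should print 0
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.netfilter.nf_conntrack_max=2310720
EOF
grep -Evc '^[a-z0-9_.-]+=[^[:space:]]+$' "$conf"   # prints 0
rm -f "$conf"
```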
6. Adjust the system time zone (skip this step if the time zone is already set)
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
7. Stop unnecessary services
systemctl stop postfix && systemctl disable postfix
8. Set the log saving mode
# 1. Create the persistent log directory
mkdir /var/log/journal
# 2. Create the configuration file directory
mkdir /etc/systemd/journald.conf.d
# 3. Create the configuration file
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
# 4. Restart journald to apply the configuration
systemctl restart systemd-journald
9. Adjust the number of open files (optional; can be skipped)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
10. Prerequisites for kube-proxy to use IPVS
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Note: nf_conntrack_ipv4 applies to 4.x kernels; on newer kernels change that
# line to: modprobe -- nf_conntrack
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Use lsmod to confirm the modules are loaded

3. Docker deployment

1. Install Docker
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the docker-ce repository (saved as /etc/yum.repos.d/docker-ce.repo)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Update related packages and install Docker CE
yum update -y && yum install -y docker-ce
2. Set up the Docker daemon configuration file
# Create /etc/docker
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
# If docker fails to start, use journalctl -amu docker to find the error
# Create the directory for docker service drop-in configuration
mkdir -p /etc/systemd/system/docker.service.d
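A syntax error in daemon.json (a missing quote or trailing comma) prevents dockerd from starting, so it can be worth validating the JSON before restarting the service; a sketch using python3's stdlib json.tool on a temp copy (the temp path is illustrative):

```shell
# Validate the daemon.json syntax before restarting docker
j=$(mktemp)
cat > "$j" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
python3 -m json.tool < "$j" > /dev/null && echo "daemon.json OK"
rm -f "$j"
```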
3. Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

4. Install kubeadm

1. Installing Kubernetes requires the kubelet and kubeadm packages, but the official yum source, packages.cloud.google.com, is not reachable from mainland China; use the Alibaba Cloud yum mirror instead.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum update -y
2. Install kubeadm, kubelet and kubectl
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
3. Start Kubelet
systemctl enable kubelet && systemctl start kubelet

2. Cluster installation

1. Dependency images

Upload the image package kubeadm-basic.images.tar.gz and import the images it contains into the local image repository.

Baidu Netdisk backup link: pan.baidu.com/s/1SplTajkP… Extraction code: GRCD

When kubeadm initializes the k8s cluster, it pulls the corresponding images from Google's GCE cloud; the images are large and the download is slow, so we use pre-downloaded images instead.

Import the images into the local Docker image repository.

1. The image-import script (create a shell script file named image-load.sh in any directory):
#!/bin/bash
# Write the list of image tar files to a temp file
ls /root/kubeadm-basic.images > /tmp/images-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/images-list.txt)
do
    docker load -i $i
done
rm -rf /tmp/images-list.txt
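The same loop pattern can be dry-run against dummy files in a temp directory, with echo standing in for docker load -i (all names below are placeholders, so this runs without docker or the image package):

```shell
# Dry run of the image-load loop without docker
dir=$(mktemp -d)
touch "$dir/apiserver.tar" "$dir/etcd.tar"
ls "$dir" > /tmp/images-list.txt
cd "$dir"
for i in $(cat /tmp/images-list.txt)
do
    echo "docker load -i $i"
done
rm -f /tmp/images-list.txt
```

Note the word-splitting style (for i in $(cat …)) is fine here because docker image tar names contain no spaces.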
2. Modify the execution permission
chmod 755 image-load.sh 

3. Start importing the image
./image-load.sh

4. Transfer files and images to other nodes
scp -r image-load.sh kubeadm-basic.images root@k8s-node01:/root/
scp -r image-load.sh kubeadm-basic.images root@k8s-node02:/root/
# Run the script on each of the other nodes in turn to import the images

2. K8s deployment

Initialize the master node (performed only on the master node)

1. Pull the default YAML resource configuration file
kubeadm config print init-defaults > kubeadm-config.yaml

2. Modify the YAML resource file
localAPIEndpoint:
  advertiseAddress: 192.168.66.10   # note: change this to the master node's IP
kubernetesVersion: v1.15.1          # note: set this to the installed version
networking:
  podSubnet: 10.244.0.0/16          # pod subnet used for flannel communication
  serviceSubnet: 10.96.0.0/12
# Append the following so kube-proxy uses IPVS for network communication
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
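Assembled, the edited kubeadm-config.yaml might look roughly as follows; this is a sketch, assuming the v1beta2 section headers that kubeadm config print init-defaults emits for 1.15, with the tutorial's example addresses (substitute your master's real IP):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.10        # the master node's IP
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16               # flannel's default pod network
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```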
3. Initialize the master node and start the deployment
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# Note: at least 2 CPU cores are required, otherwise the command fails.
# If initialization fails, run kubeadm reset before retrying.

After the Kubernetes master node initializes successfully, kubeadm prints the follow-up commands to run (they are also saved in kubeadm-init.log).

Execute the following command as instructed by K8S:

4. Run the following command after the initialization
# Create the directory that holds the cluster connection configuration, cache and certificates
mkdir -p $HOME/.kube
# Copy the cluster admin configuration file
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Grant the current user ownership of the configuration file
chown $(id -u):$(id -g) $HOME/.kube/config

Run the following command to query node information:

kubectl get nodes

The node information can be queried successfully, but the node is in the NotReady state rather than Running: the cluster uses IPVS + flannel for network communication, and the flannel plug-in has not been deployed yet, which is why the node status is NotReady.

3. Flannel plug-in

Deploy the Flannel plugin (only on the master node)

1. Download the Flannel plug-in

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2. Deploy flannel
kubectl create -f kube-flannel.yml
# Or apply it directly from the URL:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: network connectivity problems are common when deploying the flannel network plug-in, since both the yml file and the flannel image are downloaded from foreign sites.

Join the other working nodes to the master node by running the kubeadm join command from the installation log on each worker:

kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8

Some nodes are in the NotReady state because their pod containers are still initializing; wait a while and check again.

Query the details of the pod containers in the kube-system namespace:

kubectl get pod -n kube-system