1. Environment description
cat /etc/hosts
192.168.10.11 node1 # master1
192.168.10.14 node4 # master2
192.168.10.15 node5 # master3
Note: Since this setup runs on my own VMs, only the master nodes are actually deployed; the operations that would run on worker nodes are still written out below.
2. Environment configuration (master and worker execution)
1. Set aliyun YUM source (optional)
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
rm -rf /var/cache/yum && yum makecache
2. Install dependency packages
yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
3. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
4. Close SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
5. Close the swap partition
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
6. Load the kernel module
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
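You can quickly confirm that the modules were actually loaded:
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4|br_netfilter'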
7. Set kernel parameters
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/k8s.conf
8. Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y docker-ce-18.09.6
systemctl start docker
systemctl enable docker
After the installation completes, adjust the startup command; otherwise Docker sets the default policy of the iptables FORWARD chain to DROP, which blocks forwarded pod traffic.
In addition, kubeadm recommends systemd as the cgroup driver, so daemon.json needs to be modified as well:
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
tee /etc/docker/daemon.json <<-'EOF'
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl daemon-reload
systemctl restart docker
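After the restart, it is worth verifying that Docker actually picked up the systemd cgroup driver:
docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd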
9. Install kubeadm and kubelet
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Set the kubelet cgroup driver to systemd as well:
KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd
systemctl daemon-reload
systemctl restart kubelet.service
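Note that the repository above installs the latest kubelet/kubeadm/kubectl packages. Since the cluster below is initialized with v1.16.3, you may prefer to pin the packages to the same version (a sketch, assuming these package versions are available in the mirror):
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3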
10. Pull the required images
kubeadm config images list | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' | sh -x
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's/registry.cn-hangzhou.aliyuncs.com\/google_containers/k8s.gcr.io/2' | sh -x
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker rmi "$1":"$2}' | sh -x
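The three pipelines above are compact but hard to read; the loop below is an equivalent sketch that pulls each required image from the Aliyun mirror, retags it as k8s.gcr.io, and removes the mirror tag:
for image in $(kubeadm config images list); do
  mirror="registry.cn-hangzhou.aliyuncs.com/google_containers/${image##*/}"
  docker pull "$mirror"           # pull from the Aliyun mirror
  docker tag "$mirror" "$image"   # retag as k8s.gcr.io/...
  docker rmi "$mirror"            # drop the mirror tag
done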
3. Install Keepalived and HAProxy (master execution)
Kubernetes high availability mainly means high availability of the control plane: there are multiple sets of master components and etcd members, and the worker nodes connect to the masters through a load balancer.
Here etcd is stacked, i.e. co-located with the master node components.
Stacked etcd mode: requires fewer machines and is easy to deploy and manage, but horizontal scaling is riskier, and when a host goes down, both a master and an etcd member are lost at once, which significantly reduces the cluster's redundancy.
3.1 Install on the masters
yum install -y keepalived haproxy
3.2 Modify the HAProxy configuration file (identical on all three nodes)
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
frontend kubernetes-apiserver
mode tcp
bind *:8443
option tcplog
default_backend kubernetes-apiserver
backend kubernetes-apiserver
balance roundrobin
mode tcp
server node1 192.168.10.11:6443 check inter 5000 fall 2 rise 2 weight 1
server node4 192.168.10.14:6443 check inter 5000 fall 2 rise 2 weight 1
server node5 192.168.10.15:6443 check inter 5000 fall 2 rise 2 weight 1
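Before starting HAProxy, the configuration can be syntax-checked (assuming the standard path /etc/haproxy/haproxy.cfg):
haproxy -c -f /etc/haproxy/haproxy.cfg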
3.3 Modifying keepalived Configuration Files
Node 1:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface ens33 # Host physical nic name
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.16 # VIP; must be in the same network segment as the host IP
}
track_script {
check_haproxy
}
}
Node 2:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"interval 3 weight -2 fall 10 rise 2 } vrrp_instance VI_1 { state BACKUP interface ens33 virtual_router_id 51 priority 80 Advert_int 1 authentication {auth_type PASS auth_pass 1111} virtual_ipaddress {192.168.10.16} track_script { check_haproxy } }Copy the code
Node 3:
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"interval 3 weight -2 fall 10 rise 2 } vrrp_instance VI_1 { state BACKUP interface ens33 virtual_router_id 51 priority 60 Advert_int 1 authentication {auth_type PASS auth_pass 1111} virtual_ipaddress {192.168.10.16} track_script { check_haproxy } }Copy the code
Execute on all three masters:
cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
systemctl status haproxy > /dev/null
if [[ \$? != 0 ]]; then
echo "haproxy is down,close the keepalived"
systemctl stop keepalived
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
systemctl status keepalived && systemctl status haproxy
If keepalived is not running, restart it:
systemctl restart keepalived
On the MASTER node, you can see that the VIP has been bound:
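For example (assuming the NIC is ens33, as configured above), the VIP should show up on exactly one of the masters:
ip addr show ens33 | grep 192.168.10.16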
At this point, Keepalived and HAProxy are ready.
4. Initialize the cluster
kubeadm init \
--kubernetes-version=v1.16.3 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.10.11 \
--control-plane-endpoint 192.168.10.16:8443 \
--upload-certs
If the output ends with the kubeadm join commands, initialization succeeded; save them, as they are needed in the steps below.
1. Configure kubectl for the users that need it
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
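To confirm that kubectl now reaches the API server through the VIP endpoint:
kubectl cluster-info
# should report the master running at https://192.168.10.16:8443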
2. Install Pod Network
Install the Canal network plug-in
wget docs.projectcalico.org/v3.1/gettin…
wget docs.projectcalico.org/v3.1/gettin…
Here you need to modify the canal.yaml file to match this cluster; in particular, the pod network CIDR in the manifest must agree with the --pod-network-cidr (10.244.0.0/16) used at kubeadm init above.
3. Then deploy:
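A minimal sketch of the deploy step, assuming the two downloaded manifests were saved in the current directory (canal.yaml as named above; rbac.yaml is an assumed name for the RBAC manifest):
kubectl apply -f rbac.yaml   # assumed file name
kubectl apply -f canal.yaml
kubectl get pods -n kube-system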
If all the pods are in the Running state, the deployment succeeded.
4. Add other master nodes
kubeadm join 192.168.10.16:8443 --token 4r7i1t.pu099ydf73ju2dq0 \
--discovery-token-ca-cert-hash sha256:65547a2b5633ea663cf9edbde3a65c3d1eb4d0f932ac2c6c6fcaf77dcd86a55f \
--control-plane --certificate-key e8aeb23b165bf87988b4b30a80635d35e45a14d958a10ec616190665c835dc6a
Execute on any node:
kubectl get node
5. Test master high availability:
Shut down master1.
Then check the cluster from one of the other nodes.
5. Join the worker nodes
kubeadm join 192.168.10.16:8443 --token 4r7i1t.pu099ydf73ju2dq0 \
--discovery-token-ca-cert-hash sha256:65547a2b5633ea663cf9edbde3a65c3d1eb4d0f932ac2c6c6fcaf77dcd86a55f
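Afterwards, verify from any master that the worker has registered and eventually reports Ready:
kubectl get nodes -o wide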