0. Preparation
1. In VirtualBox, create three Ubuntu Server 20.04 VMs and set each VM's network adapter to bridged mode
| Host | IP |
| --- | --- |
| k8s-master | 192.168.1.248 |
| k8s-node1 | 192.168.1.106 |
| k8s-node2 | 192.168.1.251 |
- Set a static IP on Ubuntu 20.04:
```bash
# Configure the master
sudo vim /etc/netplan/00-installer-config.yaml
```

```yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp0s3:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.248/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8]
  version: 2
```

```bash
sudo netplan apply
```
- Configure /etc/hosts on each machine so the hostnames resolve:
```
192.168.1.248 k8s-master
192.168.1.106 k8s-node1
192.168.1.251 k8s-node2
```
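Each VM's hostname should also match the name used above; a minimal sketch (assuming the hostnames were not already set during installation):

```bash
# On the master; use k8s-node1 / k8s-node2 on the other two VMs
sudo hostnamectl set-hostname k8s-master
```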
- Disable the swap partition:
```bash
sudo swapoff -a
# Check the swap status
sudo free -m
```
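Note that `swapoff -a` only lasts until the next reboot. To keep swap disabled permanently, comment out the swap entry in /etc/fstab; a sketch (review your fstab before running it):

```bash
# Comment out any line that mounts a swap device or file
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```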
2. Install Docker
Follow the official installation guide: docs.docker.com/engine/inst…
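The linked guide covers the repository-based install. As a minimal alternative sketch, Ubuntu's own package also works for a lab cluster (assuming the older docker.io build is acceptable):

```bash
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
docker --version   # verify the install
```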
3. Install kubelet / kubeadm / kubectl
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
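The modules-load.d entry only loads br_netfilter at the next boot; to load it immediately and confirm the sysctls took effect:

```bash
sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
```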
```bash
sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt update
sudo apt install kubelet kubeadm kubectl
sudo systemctl enable kubelet && sudo systemctl start kubelet
```
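Optionally, pin the three packages so a routine `apt upgrade` does not move the cluster components out from under kubeadm:

```bash
sudo apt-mark hold kubelet kubeadm kubectl
```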
4. Initialize the cluster master
```bash
sudo kubeadm init --pod-network-cidr 172.16.0.0/16 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```
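Optionally, pre-pull the control-plane images from the mirror before running init, so a slow download does not stall the initialization (a sketch; `kubeadm config images pull` accepts the same flag):

```bash
sudo kubeadm config images pull \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```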
Possible problems:
- detected “cgroupfs” as the Docker cgroup driver. The recommended driver is “systemd”.
```bash
sudo vim /etc/docker/daemon.json
```

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

```bash
sudo systemctl restart docker
```
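To confirm Docker picked up the new cgroup driver:

```bash
sudo docker info | grep -i 'cgroup driver'
# Expect: Cgroup Driver: systemd
```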
- the number of available CPUs 1 is less than the required 2
In VirtualBox, set the VM's CPU count to at least 2 (Settings → System → Processor).
- Error response from daemon: The manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 not found:
```bash
sudo docker pull coredns/coredns:1.8.0
sudo docker tag coredns/coredns:1.8.0 \
  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
```
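After fixing any of the problems above, the partially initialized control plane usually needs to be cleaned up before retrying:

```bash
sudo kubeadm reset
# then re-run the kubeadm init command from the start of this step
```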
Output like the following indicates that the master node initialized successfully:
```
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
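With admin.conf in place, kubectl on the master can reach the API server; the node will report NotReady until the network plug-in is installed in step 5:

```bash
kubectl get nodes
```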
5. Join the cluster nodes
On each node, run the join command printed at the end of the `kubeadm init` output on the master:

```bash
sudo kubeadm join 192.168.1.248:6443 --token ...
```
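The bootstrap token in that command expires after 24 hours; if it has expired, generate a fresh join command on the master:

```bash
kubeadm token create --print-join-command
```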
The nodes show a NotReady status because the network plug-in has not been installed yet:
```bash
# Install Calico
wget https://docs.projectcalico.org/manifests/calico.yaml
vim calico.yaml
# Change this value to match the --pod-network-cidr passed to kubeadm init:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.16.0.0/16"
kubectl apply -f calico.yaml
```
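Once the Calico pods come up, the nodes should move to Ready:

```bash
kubectl get pods -n kube-system
kubectl get nodes
```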
Possible problems:
- `kubectl get pods -n kube-system` shows coredns-57d4cbf879-22lwd 0/1 ErrImagePull. `kubectl describe pod` shows the image cannot be pulled: it only exists locally on the master (where it was manually tagged earlier), and the pull policy in the Deployment is imagePullPolicy: IfNotPresent.
Solution: the image does not exist on the nodes where the two coredns pods were scheduled, so it cannot be found there. Repeat the manual pull-and-tag steps from step 4 on each node, as shown below.
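The same pull-and-tag workaround from step 4, run on each node:

```bash
sudo docker pull coredns/coredns:1.8.0
sudo docker tag coredns/coredns:1.8.0 \
  registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
```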
6. Check components after deploying the cluster
- dial tcp 127.0.0.1:10251: connect: connection refused (typically reported by `kubectl get cs` for the scheduler and controller-manager)
Solution:

```bash
cd /etc/kubernetes/manifests
sudo vim kube-scheduler.yaml            # comment out the --port=0 line
sudo vim kube-controller-manager.yaml   # comment out the --port=0 line
sudo systemctl restart kubelet
```
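After the kubelet restarts the static pods, the component status should come back healthy:

```bash
kubectl get cs
```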