Setting up a K8S cluster on Ubuntu Server

1. Version description


Ubuntu: 20.04 LTS

docker-ce: 19.03.15~3-0

Kubeadm: 1.20.5-00

2. Prepare for installation


  • Prepare one master node and one or more worker nodes

  • Disable swap on the master and all worker nodes

    # swapoff only disables swap temporarily; swap comes back after a reboot
    sudo swapoff -a

    To disable swap permanently, edit /etc/fstab and delete or comment out the swap entry (see the sketch after this list)

  • Install SSH and git

    sudo apt install -y openssh-sftp-server git
  • Assign fixed IPv4 addresses to the master and worker nodes and make sure every machine can ping the others

  • After completing the above steps, restart each machine
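
A minimal sketch of disabling swap permanently (it assumes the swap entries in /etc/fstab are ordinary uncommented lines; review the file before letting sed touch it):

  # Turn swap off now and comment out every active swap line in /etc/fstab
  sudo swapoff -a
  sudo sed -ri 's/^([^#].*\sswap\s.*)$/# \1/' /etc/fstab
  # Verify: the Swap line in the output should read 0B
  free -h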

3. Install the software


  1. socat_1.7.3.3-2_amd64.deb
  2. ebtables_2.0.11-3build1_amd64.deb
  3. ethtool_5.4-1_amd64.deb
  4. conntrack_1.4.5-2_amd64.deb
  5. containerd.io_1.4.4-1_amd64.deb
  6. docker-ce-cli_19.03.15~3-0~ubuntu-focal_amd64.deb
  7. docker-ce_19.03.15~3-0~ubuntu-focal_amd64.deb
  8. cri-tools_1.13.0-01_amd64.deb
  9. kubernetes-cni_0.8.7-00_amd64.deb
  10. kubelet_1.20.5-00_amd64.deb
  11. kubectl_1.20.5-00_amd64.deb
  12. kubeadm_1.20.5-00_amd64.deb

Install the above packages in order, or write a script to install them in a batch (a batch-install sketch follows the example below)

# Install
sudo dpkg -i socat_1.7.3.3-2_amd64.deb
# Uninstall (by package name, not file name)
sudo dpkg -r socat
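
A minimal batch-install sketch, assuming all twelve .deb files sit in the current directory (dpkg sorts out the ordering when given the whole set at once):

  #!/usr/bin/env bash
  set -euo pipefail
  # Install every .deb in this directory in one dpkg invocation;
  # if anything is left unconfigured, let apt finish the dependency fix-up.
  sudo dpkg -i ./*.deb || sudo apt-get install -f -y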

4. Configure Docker


  • Change Docker's cgroup driver to systemd

    Create daemon.json in the /etc/docker directory with the following content (restart Docker afterwards; see the check after this list)

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
  • Add the current user to the Docker group

    sudo usermod -aG docker $USER
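
After changing daemon.json, Docker has to be restarted for the new cgroup driver to take effect; a quick check is shown below (the usermod group change only applies after you log out and back in, so docker info may still need sudo at this point):

  # Restart Docker and confirm the cgroup driver is now systemd
  sudo systemctl daemon-reload
  sudo systemctl restart docker
  docker info --format '{{.CgroupDriver}}'   # expected output: systemd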

5. Obtain the images


If the official images cannot be pulled, switch to a mirror registry

docker pull k8s.gcr.io/kube-apiserver:v1.20.5
docker pull k8s.gcr.io/kube-controller-manager:v1.20.5
docker pull k8s.gcr.io/kube-scheduler:v1.20.5
docker pull k8s.gcr.io/kube-proxy:v1.20.5
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.13-0
docker pull k8s.gcr.io/coredns:1.7.0
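
The required set can also be listed or pre-pulled with kubeadm itself, which is a convenient cross-check against the list above:

  # Print the images kubeadm needs for this version
  kubeadm config images list --kubernetes-version v1.20.5
  # Pull them all in one go
  sudo kubeadm config images pull --kubernetes-version v1.20.5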

If these images already exist on a local machine, you can save them to files and load them directly on the other nodes, which avoids slow image downloads

# Save an image to a file
docker save -o {file name} {image name or id}
# Load an image from a file
docker load -i {file name}
# If the loaded image has lost its name and tag, add a new tag with docker tag
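
For example (the image name is taken from the v1.20.5 list above; the node1 host and file paths are just placeholders):

  # On a machine that already has the image
  docker save -o kube-proxy-v1.20.5.tar k8s.gcr.io/kube-proxy:v1.20.5
  scp kube-proxy-v1.20.5.tar user@node1:/tmp/
  # On the target node
  docker load -i /tmp/kube-proxy-v1.20.5.tar
  # Only needed if the name/tag was lost (e.g. the image was saved by id)
  docker tag <image-id> k8s.gcr.io/kube-proxy:v1.20.5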

6. Initialize the cluster

Initialize the master
# Initialize with an explicit version; a plain kubeadm init pulls the latest images by default
sudo kubeadm init --kubernetes-version v1.20.5

After the initialization is successful, the following information is displayed

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Run the following three commands to create the kubectl configuration file

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

After the master node has been initialized successfully, switch to a regular (non-root) user for the remaining kubectl work
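
If you stay logged in as root instead, a working alternative is to point KUBECONFIG at the admin config directly:

  export KUBECONFIG=/etc/kubernetes/admin.conf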

Generate the join command for the worker nodes

 kubeadm token create --print-join-command
Join the worker nodes

On each worker node, run the join command that was generated on the master

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
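
Back on the master you can confirm that the node has registered; it will stay NotReady until the network add-on below is installed:

  kubectl get nodes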

7. Install the Calico network add-on on the master

#Install command
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
#Uninstall command
kubectl delete -f https://docs.projectcalico.org/manifests/calico.yaml

The calico.yaml manifest comes from docs.projectcalico.org/manifests/c… ; the version I used was v3.18.1

During the Calico installation a node may fail to bind an IPv4 address; you can add the following configuration to calico.yaml

- name: IP_AUTODETECTION_METHOD
  value: "interface=enp.*"
# If there is a Windows node, add the following two entries
# (CALICO_IPV4POOL_IPIP is changed from Always to Never):
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
- name: CALICO_AUTODETECTION_METHOD
  value: "interface=eth0"
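
For context, these entries belong in the env list of the calico-node container inside the DaemonSet defined in calico.yaml; a trimmed sketch of the surrounding structure (all other fields omitted):

  kind: DaemonSet
  metadata:
    name: calico-node
    namespace: kube-system
  spec:
    template:
      spec:
        containers:
          - name: calico-node
            env:
              - name: IP_AUTODETECTION_METHOD
                value: "interface=enp.*"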

Check whether the installation is successful

# -n selects a namespace
kubectl get pods -n kube-system
# View pods in all namespaces
kubectl get pods -A

On success the output looks similar to the following, with every pod showing READY 1/1:

NAME                              READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-mmjls          1/1     Running   0          6h26m
coredns-86c58d9df4-p7brk          1/1     Running   0          6h26m
etcd-promote                      1/1     Running   1          6h26m
kube-apiserver-promote            1/1     Running   1          6h26m
kube-controller-manager-promote   1/1     Running   1          6h25m
kube-proxy-6ml6w                  1/1     Running   1          6h26m
kube-scheduler-promote            1/1     Running   1          6h25m
calico-node-29gjr                 1/1     Running   1          21h
calico-node-7n8v6                 1/1     Running   5          21h
calico-node-8m6l2                 1/1     Running   1          21h
calico-node-h25hg                 1/1     Running   0          18h

8. Resolving common failures

  • If the calico-node pods never become ready, run kubectl delete -f calico.yaml to remove Calico and then reinstall it
  • If a node cannot join, delete Calico, run kubeadm reset on that node, and then join again (see the sketch after this list)
  • Note that Calico does not have to be applied on every node; applying it once on the master is enough
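
A sketch of the reset-and-rejoin flow on a problem node (the join parameters come from kubeadm token create --print-join-command on the master; the CNI cleanup path shown is the usual default location):

  # On the failing node: wipe the previous kubeadm state and leftover CNI config
  sudo kubeadm reset -f
  sudo rm -rf /etc/cni/net.d
  # Rejoin with the command printed on the master
  sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>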

9. Kuboard installation

Once all of the above steps have succeeded, you can install the Kuboard visualization dashboard

Install command

# Install
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
# Uninstall
kubectl delete -f https://kuboard.cn/install-script/kuboard.yaml

After Kuboard has been installed successfully, it is exposed on port 32567 of every node

For example, http://<node-ip>:32567 opens the console

To generate the login token, run the following on the master

kubectl get secrets -n kube-system
kubectl describe  secrets -n kube-system kuboard-user-token-{}
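
If you prefer a one-liner, something like the following should print the decoded token directly (it assumes the admin secret's name contains kuboard-user, as in the naming above):

  kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') \
    -o go-template='{{.data.token}}' | base64 -d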