Why deploy Kubernetes on ARM64, and what is the Kunpeng 920 architecture? That's a long story… 5,000 words omitted here.
This section describes the system environment:
• Architecture: Kunpeng 920 (aarch64)
• OS: openEuler 20.03 (LTS-SP1)
• CPU: 4 cores
• Memory: 16 GB
• Disk: several
Although the whole process follows posts on the Kunpeng forum [1], it still took a fair amount of trouble.
TL;DR
Note: Kubernetes and the network components are installed on an ARM64 platform, so you must use the arm64 versions of the images throughout.
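A quick way to confirm which image architecture you need (just a sanity check, not from the original post):

# should print "aarch64" on Kunpeng 920; if so, pull arm64 images throughout
uname -m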
Environment configuration
1. Disable SELinux
vim /etc/sysconfig/selinux
# set: SELINUX=disabled
2. Disable the swap partition
# temporarily disable swap
swapoff -a
# permanently disable: comment out the swap line
vim /etc/fstab
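To confirm swap is really off (a quick check, not in the original post):

free -h
# the Swap line should show 0B total and 0B used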
3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
4. Configure the network
Bridged traffic needs to be passed to iptables, so enable the kernel's bridge nf-call settings:
vim /etc/sysctl.d/k8s.conf
Add the following content:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
After the modification is complete, execute:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
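To verify that the settings took effect (a quick check, not in the original post):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1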
5. Add the Kubernetes repository
Append the following to /etc/yum.repos.d/openEuler.repo:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
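Then refresh the cache and confirm yum can see the new repo (an optional check, assuming a standard yum setup):

yum clean all && yum makecache
yum repolist | grep -i kubernetes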
Install and configure iSula
yum install -y iSulad
Then edit the iSulad configuration file /etc/isulad/daemon.json as follows:
{ "registry-mirrors": [ "docker.io" ], "insecure-registries": [ "rnd-dockerhub.huawei.com" ], "pod-sandbox-image": K8s.gcr. IO /pause:3.2", // network-plugin: "cni", "cni-bin-dir": "", "cni-conf-dir": "k8s.gcr. IO /pause:3.2", // Network-plugin: "cni", "cni-bin-dir": "", "cni-conf-dir": "", "hosts": [ "unix:///var/run/isulad.sock" ] }Copy the code
After the modification, restart isulad:
systemctl restart isulad
systemctl enable isulad
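To make sure the daemon came back up with the new configuration (a quick check, assuming the isula CLI is installed alongside the daemon):

systemctl is-active isulad
isula version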
Kubernetes deployment
1. Install kubelet, kubeadm, kubectl
yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
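The original post does not mention it, but the upstream kubeadm installation guide also enables kubelet at boot; if you want the same behavior:

systemctl enable --now kubelet
# kubelet will crash-loop until kubeadm init/join generates its config; that is expected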
2. Prepare the images
Pulling images from k8s.gcr.io fails due to "some unknown network problem", so you need to download them in advance.
Run kubeadm config images list --kubernetes-version 1.20.0 to list the images required for initialization. Note that --kubernetes-version pins the version; without it, kubeadm prints the images for the newest 1.20.x release (for example 1.20.4, if that is the latest).
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
The corresponding arm64 images are:
k8s.gcr.io/kube-apiserver-arm64:v1.20.0
k8s.gcr.io/kube-controller-manager-arm64:v1.20.0
k8s.gcr.io/kube-scheduler-arm64:v1.20.0
k8s.gcr.io/kube-proxy-arm64:v1.20.0
k8s.gcr.io/pause-arm64:3.2
k8s.gcr.io/etcd-arm64:3.4.2-0    # highest 3.4.x with an arm64 build
k8s.gcr.io/coredns:1.7.0         # no special arm64 version required
After downloading the images by "luck", use the isula tag command to rename them to what kubeadm expects:
isula tag k8s.gcr.io/kube-apiserver-arm64:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
isula tag k8s.gcr.io/kube-controller-manager-arm64:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
isula tag k8s.gcr.io/kube-scheduler-arm64:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
isula tag k8s.gcr.io/kube-proxy-arm64:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
isula tag k8s.gcr.io/pause-arm64:3.2 k8s.gcr.io/pause:3.2
isula tag k8s.gcr.io/etcd-arm64:3.4.2-0 k8s.gcr.io/etcd:3.4.13-0
# coredns:1.7.0 already has the expected name; no retag needed
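If you prefer to script the whole pull-and-retag step, here is a minimal sketch; it assumes you can actually reach k8s.gcr.io (or a mirror exposing the same paths), which is exactly the part that may take "luck":

#!/bin/bash
set -e
VERSION=v1.20.0

# control-plane components share the same version and naming scheme
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
    isula pull "k8s.gcr.io/${img}-arm64:${VERSION}"
    isula tag  "k8s.gcr.io/${img}-arm64:${VERSION}" "k8s.gcr.io/${img}:${VERSION}"
done

isula pull k8s.gcr.io/pause-arm64:3.2
isula tag  k8s.gcr.io/pause-arm64:3.2 k8s.gcr.io/pause:3.2

# there is no 3.4.13 arm64 build, so 3.4.2-0 is retagged to the name kubeadm expects
isula pull k8s.gcr.io/etcd-arm64:3.4.2-0
isula tag  k8s.gcr.io/etcd-arm64:3.4.2-0 k8s.gcr.io/etcd:3.4.13-0

isula pull k8s.gcr.io/coredns:1.7.0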
3. Initialize the master node
Note that the --cri-socket parameter must point at the iSulad socket.
kubeadm init --kubernetes-version v1.20.0 --cri-socket=/var/run/isulad.sock --pod-network-cidr=10.244.0.0/16
If initialization succeeds, you should see output like the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 12.0.0.3:6443 --token 0110xl.lqzlegbduz2qkdhr \
    --discovery-token-ca-cert-hash sha256:42b13f5924a01128aac0d6e7b2487af990bc82701f233c8a6a4790187ea064af
4. Configure the cluster environment
Then configure kubectl access as instructed by the output above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
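kubectl should now be able to reach the API server; a quick check (not in the original post):

kubectl cluster-info
kubectl get pods -n kube-system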
5. Add nodes to the cluster
Repeat the earlier steps on each new node: environment configuration, iSula installation and configuration, and steps 1 and 2 of the Kubernetes deployment.
Then use the join command printed above, plus the --cri-socket argument:
kubeadm join 12.0.0.9:6443 --token 0110xl.lqzlegbduz2qkdhr \
    --discovery-token-ca-cert-hash sha256:42b13f5924a01128aac0d6e7b2487af990bc82701f233c8a6a4790187ea064af \
    --cri-socket=/var/run/isulad.sock
Configuring the network plugin
After initializing the master node and configuring the cluster environment, you can execute kubectl commands.
$ kubectl get nodes
NAME            STATUS     ROLES                  AGE    VERSION
host-12-0-0-9   NotReady   control-plane,master   178m   v1.20.0
The node is in the NotReady state because no network plugin has been installed yet. Checking the kubelet log with journalctl -fu kubelet shows that the network plugin is not ready:
kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:iSulad: network plugin is not ready: cni config uninitialized
Remember the isulad configuration?
"Network - the plugin" : "the cni", "the cni - bin - dir" : "", / / use the default/opt/the cni/bin" the cni - conf - dir ":" ", / / use the default/etc/the cni/net. DCopy the code
Both directories are in fact empty; if they do not exist, create them first:
mkdir -p /opt/cni/bin
mkdir -p /etc/cni/net.d
We use Calico as the network plugin; download its manifest first.
wget https://docs.projectcalico.org/v3.14/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Since this is arm64 hardware, we again need the arm64 versions of the images. First check which images the manifest uses:
$ grep 'image:' calico.yaml | uniq
          image: calico/cni:v3.14.2
          image: calico/pod2daemon-flexvol:v3.14.2
          image: calico/node:v3.14.2
          image: calico/kube-controllers:v3.14.2
Pull the arm64 versions and retag them, following the same steps as before (see the sketch after the list):
calico/cni:v3.14.2-arm64
calico/pod2daemon-flexvol:v3.14.2-arm64
calico/node:v3.14.2-arm64
calico/kube-controllers:v3.14.2-arm64
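As with the Kubernetes images, this can be scripted; a minimal sketch, assuming the arm64 tags are pullable from Docker Hub:

#!/bin/bash
set -e
# retag the arm64 builds to the names calico.yaml references
for img in cni pod2daemon-flexvol node kube-controllers; do
    isula pull "calico/${img}:v3.14.2-arm64"
    isula tag  "calico/${img}:v3.14.2-arm64" "calico/${img}:v3.14.2"
done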
Once the images are in place, execute:
kubectl apply -f calico.yaml
You can then see that the node becomes Ready.
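To watch this happen (a quick check, not in the original post):

kubectl get pods -n kube-system    # wait for the calico pods to reach Running
kubectl get nodes                  # the node STATUS should change to Ready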
Test
Nginx did not have an arm64 image available to us, so we use the official hello-world image provided by Docker instead. That's right, it supports arm64.
Note: the process in the container prints a message and exits, so the Pod restarts continually, but this is sufficient for a test.
kubectl run hello-world --image hello-world:latest
kubectl logs hello-world --previous
You should see:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(arm64v8)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Conclusion
So far, we have completed the deployment of Kubernetes on Kunpeng platform based on openEuler + iSula.
Reference links
[1] Kunpeng forum post: bbs.huaweicloud.com/forum/threa…