
>>>> 😜😜😜 Github: 👉 github.com/black-ant CASE backup: 👉 gitee.com/antblack/ca…

1. Preface

This article walks through the main process of installing Kubernetes with kubeadm and troubleshooting the exceptions encountered along the way.

First of all, thanks to the pioneers who paved this road and saved a lot of time. I followed their documents while using the latest versions, ran into a number of problems, and have sorted them out here for reference.

The original was posted on Jianshu; since the article has been revised there, the Zhihu reprint address is attached. You can refer to the original, or follow mine.

2. Configure common modules

The common modules must be configured on every machine in the cluster: install Docker on each server and apply the basic Linux configuration.

2.1 Installing Docker

// Step 1: Install the tools required by Docker
yum install -y yum-utils device-mapper-persistent-data lvm2

// Step 2: Configure the Docker CE yum repo (I am on Tencent Cloud, so the official repo is used here rather than the Aliyun one)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

// Step 3: Install docker-ce, docker-ce-cli and containerd.io
yum install docker-ce docker-ce-cli containerd.io

// Step 4: Start Docker
systemctl enable docker && systemctl start docker

// Additional commands:
// - View version:            docker version
// - View help:               docker --help
// - View running containers: docker ps

2.2 Basic Configuration of Kubernetes

// Step 1: Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

-------------------

// Step 2: Disable SELinux (temporarily or permanently)
// - Disable temporarily
setenforce 0
// - Disable permanently by modifying the SELinux config files
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

-------------------

// Step 3: Disable the swap partition
// - Disable temporarily
swapoff -a
// - Disable permanently: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

-------------------

// Step 4: Modify kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
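Since `sed -i` edits /etc/fstab in place, the swap edit above can be rehearsed safely on a throwaway copy first. A minimal sketch (the sample fstab content is hypothetical; your real file differs):

```shell
# Sketch: rehearse the fstab swap-line edit on a temporary copy.
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/centos-root /        xfs  defaults 0 0
/dev/mapper/centos-swap swap     swap defaults 0 0
EOF

# Same sed as above: comment out every line mentioning swap
sed -i 's/.*swap.*/#&/' "$tmpfstab"

# The swap line is now commented out; the root line is untouched
grep '^#' "$tmpfstab"
```

Once the output matches expectations, run the same sed against the real /etc/fstab.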

3. Kubernetes Master configuration

After the basic configuration is done, you can move on to configuring the Master server.

3.1 Master Installation Process

// Step 1: Configure k8S Ali Cloud source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


// Step 2: Install kubeadm, kubectl, kubelet
yum install -y kubectl-1.21.2-0 kubeadm-1.21.2-0 kubelet-1.21.2-0

// Step 3: Start kubelet service
systemctl enable kubelet && systemctl start kubelet

// Step 4: Initialization. Note the admin configuration step (Step 5) and the token statement (for nodes joining the cluster) in the output -> see PS31014
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.21.2 --apiserver-advertise-address 11.22.33.111 --pod-network-cidr=10.244.0.0/16 --token-ttl 0

// Step 5: Perform admin configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

// Step 6: Check the node status; the node shows as NotReady until the network plugin (Section 5) is installed
kubectl get nodes


PS31014: Initialization Q&A

You can run docker images to view the Docker images used by the management node. If the repo source is configured correctly, processing completes in about 5 minutes, but many questions come up during this process.

There is a wait of about two minutes here, where it appears stuck at [preflight]. You can also pull the images in advance with "kubeadm config images pull".

  • --image-repository: specifies the image repository. If the download is slow or times out, choose a different one
  • --kubernetes-version: the version to install; you can query for the latest
  • --apiserver-advertise-address: the address of your API server, which the nodes will call
  • --pod-network-cidr: the IP address range for the pod network. The value depends on which network plugin you select in the next step
    • 10.244.0.0/16: Flannel
    • 192.168.0.0/16: Calico
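Because a malformed --pod-network-cidr value makes kubeadm init fail late, it can save a retry to sanity-check the value's shape up front. A rough sketch (this only checks that the string looks like a dotted-quad/prefix, not that the range avoids your host network):

```shell
# Sketch: rough shape check for a --pod-network-cidr value before init.
cidr="10.244.0.0/16"   # Flannel's default; use 192.168.0.0/16 for Calico

if echo "$cidr" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'; then
  result="ok"
else
  result="bad"
fi
echo "$cidr -> $result"
```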

3.2 Result of the init initialization

// After the installation is successful, the following information is displayed:

Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

// And a Token statement that allows nodes to join the cluster, as follows
kubeadm join 11.22.33.111:6443 --token 2onice.mrw3b6dxcsdm5huv \
	--discovery-token-ca-cert-hash sha256:0aafa06c71a936868sde3e1fbf82d9fbsadf233da24c774ca80asdc0ccd36d09 

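If this output gets lost, a fresh join command can be generated on the master with `kubeadm token create --print-join-command`; but if you saved the init log, the token and hash can also simply be grepped back out. A sketch, using the sample join line above saved to a file:

```shell
# Sketch: recover --token and --discovery-token-ca-cert-hash from a
# saved kubeadm init log (the sample reuses the join line shown above;
# substitute your own saved log file).
log=$(mktemp)
cat > "$log" <<'EOF'
kubeadm join 11.22.33.111:6443 --token 2onice.mrw3b6dxcsdm5huv \
	--discovery-token-ca-cert-hash sha256:0aafa06c71a936868sde3e1fbf82d9fbsadf233da24c774ca80asdc0ccd36d09
EOF

# "token <value>" only matches the --token argument (the hash flag has no space)
token=$(grep -o 'token [a-z0-9.]*' "$log" | head -1 | awk '{print $2}')
hash=$(grep -o 'sha256:[a-z0-9]*' "$log")
echo "token=$token hash=$hash"
```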

If you got this result in one go, congratulations, everything worked. If there was an exception, refer to the following problem records:

3.3 Detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

Cause: Docker's cgroup driver does not match the recommended one. Solution: modify the cgroup driver (reference: Hellxz's blog).

// Step 1: Confirm the problem - print the cgroup driver type
docker info | grep "Cgroup Driver"

// Step 2: Reset the kubeadm configuration
kubeadm reset
// or: echo y | kubeadm reset

// Step 3: Modify Docker
// 1. Open /etc/docker/daemon.json
// 2. Add "exec-opts": ["native.cgroupdriver=systemd"]
// PS: if the file does not exist, create it directly; the final result is as follows
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

// Step 4: Modify kubelet
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

// Step 5: Restart the services
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

// Step 6: Verify the result; the output should be systemd
docker info | grep "Cgroup Driver"

// Note: the kubelet startup flags file is /var/lib/kubelet/kubeadm-flags.env

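Note that /etc/docker/daemon.json must remain valid JSON: if the file already contains other keys, such as "registry-mirrors", add "exec-opts" to the same object rather than creating a second file or a second object. A sketch that validates such a merged file (written to a temp path; the mirror URL is a placeholder):

```shell
# Sketch: a merged daemon.json carrying both keys, checked as valid JSON.
# The mirror URL is a placeholder - substitute your own.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://example.mirror.aliyuncs.com"]
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json is valid JSON"
```

Run the same validation against the real /etc/docker/daemon.json before restarting Docker; a syntax error there prevents the daemon from starting at all.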

3.4 Error response from daemon: Head registry-1.docker.io/v2/coredns/…: connection reset by peer

Cause: the Docker registry-mirror configuration is incorrect

// Modify the mirror configuration in /etc/docker/daemon.json; you can apply for a mirror address on Aliyun
{
  "registry-mirrors": ["https://......mirror.aliyuncs.com"]
}

3.5 Failed to pull image ..../coredns:v1.8.0: output: Error response from daemon: manifest for ..../coredns:v1.8.0 not found: manifest unknown

Cause: the key word here is coredns - downloading the coredns image fails. The mirror registry.aliyuncs.com/google_containers reports manifest unknown for coredns:v1.8.0.


// Pull coreDNS with Docker and retag it for the mirror repository
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0

// -- Or modify init's --image-repository property, for example (see the Master section)
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers ....

3.6 Failed to watch *v1.Service: Failed to list *v1.Service: Get "h……/api/v1/services?limit=500&resourceVersion=0": dial tcp …..:6443: connect: connection refused

Cause: the API server has not started. This occurs after init, at runtime.


// Step 1: Check docker service, you can see the corresponding K8S service
docker ps -a | grep kube | grep -v pause

// Step 2: View the container log with Docker and fix accordingly
docker logs 70bc13ce697c

3.7 listen tcp 81.888.888.888:2380: bind: cannot assign requested address

See: 6.1

If you have read this far and your problem is still not solved, refer to Section 6 and try to identify and solve the problem yourself!

4. Kubernetes Nodes configuration

First, don't forget the common-module configuration from Section 2!!

4.1 Main Process for Node Creation

// Step 1: Configure Ali source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

// Step 2: Install kubeadm, kubectl, kubelet
yum install -y kubeadm-1.21.2-0 kubelet-1.21.2-0

// Step 3: Start kubelet service
systemctl enable kubelet && systemctl start kubelet

// Step 4: Join the cluster
kubeadm join 11.22.33.111:6443 --token 2onice.mrw3b6dxcsdm5huv --discovery-token-ca-cert-hash sha256:0aafa06c71a936868sde3e1fbf82d9fbsadf233da24c774ca80asdc0ccd36d09

// Step 5: Check. If everything is normal, you can obtain the following results in the Master
[root@VM-0-5-centos ~]# kubectl get nodes
NAME                    STATUS     ROLES                  AGE     VERSION
localhost.localdomain   NotReady   <none>                 5m24s   v1.21.2
vm-0-5-centos           NotReady   control-plane,master   37h     v1.21.2


If the installation fails, the following problems may occur:

4.2 configmaps “cluster-info” is forbidden: User “system:anonymous” cannot get resource “configmaps” in API group “” in the namespace “kube-public”

Cause: this is an anonymous access problem. In a test environment there is no need to be too elaborate; just grant the anonymous user access:

kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous

// Formal environment solution: TODO



4.3 Error execution phase preflight: Failed to validate the identity of the API Server: configmaps “cluster-info” not found

Cause: init failed when the Master was installed, so the API server parameters cannot be obtained. Reinstall the Master.

This kind of problem needs to be solved on the Master side; see the Master troubleshooting process in Section 6 for details.

4.5 Failed to load kubelet config file: err="failed to load kubelet config file /var/lib/kubelet/config.yaml"

Cause: for a node that is about to join a cluster, this configuration is empty before the join command is run; it is generated after the join command runs.

4.6 Failed to pull image k8s.gcr.io/kube-proxy:v1.21.2: output: Error response from daemon

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

4.7 err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs""

**See the cgroup driver fix in Section 3 (Detected "cgroupfs" as the Docker cgroup driver)**

5. Flannel installation

5.1 What is Flannel?

Flannel is a network planning service designed by the CoreOS team for Kubernetes. Simply put, its function is to give the Docker containers created by different node hosts in the cluster virtual IP addresses that are unique across the whole cluster.

// TODO: Flannel details


5.2 Installing a Flannel


// Step 1: Prepare kube-flannel.yml (see the appendix for details; the main change is the image download URL)

// Step 2: Apply it with kubectl
kubectl apply -f kube-flannel.yml

// After the configuration is complete, wait for a moment to see the node ready
[root@VM-0-5-centos flannel]# kubectl get nodes
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    <none>                 131m   v1.21.2
vm-0-5-centos           Ready    control-plane,master   39h    v1.21.2


5.3 “Unable to update cni config” err=”no networks found in /etc/cni/net.d”

Note that there are two scenarios:

Scenario 1: the fault occurs on the Master, and the flannel version may not match the K8S version (here K8S is 1.21 and flannel is 0.13).

Scenario 2: the fault occurs on a node, because the node lacks the cni configuration @ blog.csdn.net/u010264186/…

  1. Copy the master's cni files to the node: scp -r master:/etc/cni /etc/
  2. Restart: systemctl daemon-reload && systemctl restart kubelet

After the copy succeeds, a 10-flannel.conflist file will appear under /etc/cni/net.d.
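A quick way to check whether a node has the CNI config in place is to look for *.conflist files, since that is what kubelet reads from /etc/cni/net.d. A sketch (a temp directory stands in for the real path so this is safe to rehearse):

```shell
# Sketch: detect whether a CNI network config is present.
cni_dir=$(mktemp -d)
touch "$cni_dir/10-flannel.conflist"   # simulate the copied config

if ls "$cni_dir"/*.conflist >/dev/null 2>&1; then
  cni_state="configured"
else
  cni_state="missing"
fi
echo "CNI: $cni_state"
```

On a real node, point the check at /etc/cni/net.d instead of the temp directory.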

6. Problems and troubleshooting process

6.1 Troubleshooting procedure for Master Kubelet Faults (for example, bind: cannot assign requested address)

If the above solution doesn’t solve your problem, you may need to identify and define the problem yourself

Usually the problem is in the init section, and if it’s in the earlier part, it’s probably a mirror address problem, so adjust it

Fault details: init fails during initialization. Troubleshooting approach:

  • Determine the Docker running status
  • View the corresponding POD log
  • Solve the problem by log
// Step 1: Check Docker running status
docker ps -a | grep kube | grep -v pause

// Here you can see the abnormal containers; below is an etcd and api-server problem
"etcd --advertise-cl..."   40 seconds ago   Exited (1) 39 seconds ago
"kube-apiserver --ad..."   39 seconds ago   Exited (1) 18 seconds ago

---------------------

// Step 2: Check the Pod log and kubelet log
docker logs ac266e3b8189
journalctl -xeu kubelet

// The final problem details can be seen here
api-server : connection error: desc = "transport: Error while dialing tcp 127.0.0.1:2379: connect: connection refused". Reconnecting
// PS: note that :2379 / :2380 are the etcd ports (81.888.888.888 is my server IP)
etcd-server : listen tcp 81.888.888.888:2380: bind: cannot assign requested address

---------------------

// Step 3: The etcd problem is now obvious; solve it by modifying the etcd config file
// /etc/kubernetes/manifests/etcd.yaml - change the listen IPs to 0.0.0.0
- --listen-client-urls=https://0.0.0.0:2379 (modified)
- --listen-peer-urls=https://0.0.0.0:2380 (modified)


---------------------

// Step 4: Back up etcd.yaml from the previous step, then reset K8S
kubeadm reset

// PS: during the reset, the files under manifests are deleted, so remember to move the backup somewhere else first

---------------------

// Step 5: Replace the file and re-initialize the cluster
// - When /etc/kubernetes/manifests/etcd.yaml is created during init, quickly delete it
// - Move the etcd.yaml saved before the reset back into /etc/kubernetes/manifests/
// PS: at this point init is still downloading images; the subsequent installation succeeds


// Additional commands:
// - Restart kubelet:        systemctl restart kubelet.service
// - Check kubelet logs:     journalctl -xeu kubelet
// - Check kubelet status:   systemctl status kubelet
// - View Docker containers: docker ps -a | grep kube | grep -v pause
// - View container logs:    docker logs ac266e3b8189
// - Get all nodes:          kubectl get nodes

7. Operation command supplement

7.1 Completely Uninstalling Kubernetes

## Uninstall the service
kubeadm reset

## Remove the RPM packages
rpm -qa | grep kube* | xargs rpm --nodeps -e

## Remove containers and images
docker images -qa | xargs docker rmi -f

7.2 the API Server

https://yourKubernetesHost:6443/


// Common API endpoints:
// - Access nodes: /api/v1/nodes
// - Access pods:  /api/v1/pods
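With the admin kubeconfig in place, these endpoints can also be reached without hand-building TLS with curl, via kubectl's raw mode: `kubectl get --raw /api/v1/nodes`. As an offline sketch, here is how node names could be picked out of a saved response (the JSON below is a minimal hypothetical sample, not real API output):

```shell
# Sketch: parse node names out of a saved /api/v1/nodes response.
resp=$(mktemp)
cat > "$resp" <<'EOF'
{"items":[{"metadata":{"name":"vm-0-5-centos"}},{"metadata":{"name":"localhost.localdomain"}}]}
EOF

# crude extraction good enough for this flat sample (use jq on real output)
names=$(grep -o '"name":"[^"]*"' "$resp" | cut -d'"' -f4)
echo "$names"
```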

7.3 Master Common commands

// Display the token list
kubeadm token list
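If the join token has expired (the default TTL is 24h unless --token-ttl 0 was passed to init, as above), a fresh join command can be generated on the master with `kubeadm token create --print-join-command`. A sketch that picks the token column out of `kubeadm token list`-style output (the table below is a made-up sample; on a real master, pipe the command itself):

```shell
# Sketch: extract tokens from sample `kubeadm token list` output.
out=$(mktemp)
cat > "$out" <<'EOF'
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION   EXTRA GROUPS
2onice.mrw3b6dxcsdm5huv   <forever>   <never>   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
EOF

# skip the header row, take the first column
tokens=$(awk 'NR>1 {print $1}' "$out")
echo "$tokens"
```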

Conclusion

Although Kubernetes is deployed through the same process every time, various problems crop up every time...

So many bits and pieces recorded; more to be added. TODO

Appendix

kube-flannel.yml file

The original URL may have problems; the main change is the image download path, e.g. quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.13.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Reference and thanks

blog.csdn.net/haishan8899…

www.cnblogs.com/hellxz/p/ku…

my.oschina.net/u/4479011/b…

kubernetes.io/docs/tasks/…

docs.docker.com/engine/inst…

www.jianshu.com/p/25c01cae9…