[TOC]

Environment preparation

  • Host roles and IP configuration (an optional /etc/hosts note for these hostnames follows the system environment output below)

    | Role          | IP                                                                   | Configuration |
    | ------------- | -------------------------------------------------------------------- | ------------- |
    | k8s master    | 10.10.40.54 (application network) / 172.16.130.55 (cluster network)  | 4 cores, 8 GB |
    | k8s work node | 10.10.40.95 (application network) / 172.16.130.82 (cluster network)  | 4 cores, 8 GB |

  • System environment

[root@10-10-40-54 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
[root@10-10-40-54 ~]# uname -a
Linux 10-10-40-54 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@10-10-40-54 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        141M        7.4G        8.5M        153M        7.3G
Swap:          2.0G          0B        2.0G
[root@10-10-40-54 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 13
Model name: QEMU Virtual CPU version 2.5+
Stepping: 3
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
NUMA node0 CPU(s): 0-3
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology pni cx16 x2apic hypervisor lahf_lm
[root@10-10-40-54 ~]#
[root@10-10-40-54 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:7a:e9:1b:ab:00 brd ff:ff:ff:ff:ff:ff
    inet 10.10.40.54/24 brd 10.10.40.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f87a:e9ff:fe1b:ab00/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:39:4d:ef:80:01 brd ff:ff:ff:ff:ff:ff
    inet 172.16.130.55/24 brd 172.16.130.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:4dff:feef:8001/64 scope link
       valid_lft forever preferred_lft forever
[root@10-10-40-54 ~]#
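
The hostnames 10-10-40-54 and 10-10-40-95 are not resolvable by DNS in this environment, which is why kubeadm later prints "hostname could not be reached" warnings. A minimal, optional sketch that maps them in /etc/hosts on every node (the mapping is taken from the host table above and is an assumption about your environment):

cat >> /etc/hosts <<EOF
10.10.40.54 10-10-40-54
10.10.40.95 10-10-40-95
EOF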

Pre-installation check

  • Ensure that the MAC address and product_uuid of every node are unique (no conflicts)

Kubernetes uses this information to tell nodes apart; duplicate values can cause the deployment to fail (github.com/kubernetes/…)

Check the MAC

ip a

To view the product uuid

cat /sys/class/dmi/id/product_uuid
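
To compare both values across the two nodes in one go, a small loop such as the following can be run from either host (a sketch assuming root SSH access and that the first NIC is named eth0, as in this environment):

for host in 10.10.40.54 10.10.40.95; do
  echo "== $host =="
  # MAC address of eth0 followed by the machine's product_uuid
  ssh root@$host 'cat /sys/class/net/eth0/address; cat /sys/class/dmi/id/product_uuid'
done
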
  • Ensure that swap is disabled on all nodes

kubelet will not start if swap is enabled

# Disable swap temporarily
swapoff -a

# For a permanent change, comment out the swap line in /etc/fstab
[root@10-10-40-54 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Jun 13 11:42:55 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=2fb6a9ac-835a-49a5-9d0a-7d6c7a1ba349 /boot xfs defaults 0 0
# /dev/mapper/rhel-swap swap swap defaults 0 0
[root@10-10-40-54 ~]#

# Check with swapon -s; if there is no output, swap is fully disabled
[root@10-10-40-54 ~]# swapon -s
[root@10-10-40-54 ~]#
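
Instead of editing /etc/fstab by hand, the swap entry can also be commented out with sed (a hedged one-liner; back up the file first and review the result):

cp /etc/fstab /etc/fstab.bak
# Comment out any uncommented line that contains a "swap" field
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab
grep swap /etc/fstab
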
  • Disable SELinux

This allows containers to access the host file system

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check that SELinux is now permissive
[root@10-10-40-54 ~]# getenforce
Permissive
[root@10-10-40-54 ~]#
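
To confirm the change also survives a reboot, check the persistent setting written by the sed command above:

# Should print SELINUX=permissive
grep '^SELINUX=' /etc/selinux/config
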
  • Enable bridge-nf-call-iptables

This makes packets that cross a Linux bridge pass through the host's iptables rules first. Reference: news.ycombinator.com/item?id=164…

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
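
Once the br_netfilter module is loaded (next step), the new values can be verified with:

# Both keys should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
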
  • Load the br_netfilter kernel module
modprobe br_netfilter

# Check the load status; if the lines below appear, the module is loaded
[root@10-10-40-54 ~]# lsmod | grep br_netfilter
br_netfilter 22209 0
bridge 136173 1 br_netfilter
[root@10-10-40-54 ~]#
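
The module is not loaded again automatically after a reboot; on a systemd system such as RHEL 7 it can be made persistent with a modules-load.d entry:

# Load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf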

Installation and Deployment Process

Install the container runtime (Docker)

Run on all nodes

# Install Docker CE
## Set up the repository
### Install required packages.
yum install -y yum-utils device-mapper-persistent-data lvm2 git

### Add Docker repository.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum install -y docker-ce-18.06.2.ce

## Create the /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]."log-driver": "json-file"."log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"."storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker

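A quick sanity check that Docker came up with the drivers configured in daemon.json (the exact output format can vary between Docker versions):

docker info 2>/dev/null | grep -iE 'server version|storage driver|cgroup driver'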

Install kubeadm/kubelet/kubectl

  • Add Aliyun Kubernetes YUM source

The Google YUM repository is not reachable from mainland China, so the Aliyun mirror is used instead

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum -y install epel-release
yum clean all
yum makecache
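
With the repository in place, the versions it provides can be listed before choosing one (the exact output depends on the mirror):

yum list kubeadm kubelet kubectl --showduplicates | grep 1.13
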
  • Install kubeadm/kubelet/kubectl

Available versions can be listed with yum list kubeadm --showduplicates (as shown above); v1.13.5 is installed directly here

yum install -y kubelet-1.13.5-0.x86_64 kubectl-1.13.5-0.x86_64 kubeadm-1.13.5-0.x86_64

[root@10-10-40-54 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:24:33Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
[root@10-10-40-54 ~]#

  • Enable kubelet

At this point kubelet will fail to start and keep restarting; this is expected, because the cluster has not been initialized yet

systemctl enable --now kubelet
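
The restart loop can be observed (and safely ignored for now) with:

systemctl status kubelet --no-pager -l | head -n 5
journalctl -u kubelet --no-pager | tail -n 5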

Pull the Kubernetes component Docker images

Again these are pulled from a mirror, so no proxy is required

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
VERSION=v1.13.5

## Pull the images
docker pull ${REGISTRY}/kube-apiserver-amd64:${VERSION}
docker pull ${REGISTRY}/kube-controller-manager-amd64:${VERSION}
docker pull ${REGISTRY}/kube-scheduler-amd64:${VERSION}
docker pull ${REGISTRY}/kube-proxy-amd64:${VERSION}
docker pull ${REGISTRY}/etcd-amd64:3.2.18
docker pull ${REGISTRY}/pause-amd64:3.1
docker pull ${REGISTRY}/coredns:1.1.3
docker pull ${REGISTRY}/pause:3.1

## Add the tags
docker tag ${REGISTRY}/kube-apiserver-amd64:${VERSION} k8s.gcr.io/kube-apiserver-amd64:${VERSION}
docker tag ${REGISTRY}/kube-scheduler-amd64:${VERSION} k8s.gcr.io/kube-scheduler-amd64:${VERSION}
docker tag ${REGISTRY}/kube-controller-manager-amd64:${VERSION} k8s.gcr.io/kube-controller-manager-amd64:${VERSION}
docker tag ${REGISTRY}/kube-proxy-amd64:${VERSION} k8s.gcr.io/kube-proxy-amd64:${VERSION}
docker tag ${REGISTRY}/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag ${REGISTRY}/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag ${REGISTRY}/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker tag ${REGISTRY}/pause:3.1 k8s.gcr.io/pause:3.1

So how do you know which images are needed?

 kubeadm config images list
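
If you prefer not to hard-code the list, a sketch that derives it from kubeadm and pulls each image from the mirror instead (it assumes the mirror publishes every image under the same name as k8s.gcr.io, which matches the manual list above):

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in $(kubeadm config images list 2>/dev/null); do
  name=${img#k8s.gcr.io/}          # strip the k8s.gcr.io/ prefix
  docker pull ${REGISTRY}/${name}
  docker tag ${REGISTRY}/${name} ${img}
done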

Kubernetes Master initial configuration

  • Adjust the cluster's initial configuration and save the following as kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    advertise-address: 172.16.130.55  # If there are multiple NICs, specify the IP of the NIC to use; otherwise the NIC with the default gateway is used
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""  # Set this to the LB endpoint in a high-availability deployment
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-client-urls: https://172.16.130.55:2379
      advertise-client-urls: https://172.16.130.55:2379
      listen-peer-urls: https://172.16.130.55:2380
      initial-advertise-peer-urls: https://172.16.130.55:2380
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # k8s.gcr.io is used by default
kind: ClusterConfiguration
kubernetesVersion: v1.13.5  # The Kubernetes version to deploy
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"  # The Pod network segment must match the Flannel network segment
  serviceSubnet: 10.96.0.0/12
scheduler: {}

You can also export a copy of the default configuration and modify it yourself

kubeadm config print init-defaults > kubeadm-config.yaml

Kubernetes master init

Now for the real thing:

kubeadm init --config kubeadm-config.yaml

Sample output

[root@10-10-40-54 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.13.5
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-54" could not be reached
 [WARNING Hostname]: hostname "10-10-40-54": lookup 10-10-40-54: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [10-10-40-54 localhost] and IPs [10.10.40.54 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [10-10-40-54 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.40.54]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.002435 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-54" as an annotation
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 10-10-40-54 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xkdkxz.7om906dh5efmkujl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml"with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: Kubeadm join 10.10.40.54:6443 --token xkdkz.7om906dh5efmkujl -- discovery-tok-ca-cert-hash sha256:52335ece8b859d761e569e0d84a1801b503c018c6e1bd08a5bb7f39cd49ca056

[root@10-10-40-54 ~]#

Post-initialization configuration on the master

  • Add kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Checking the Deployment
[root@10-10-40-54 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE    IP            NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-89cc84847-cmrkw               0/1     Pending   0          108s   <none>        <none>        <none>           <none>
kube-system   coredns-89cc84847-k2nqs               0/1     Pending   0          108s   <none>        <none>        <none>           <none>
kube-system   etcd-10-10-40-54                      1/1     Running   0          45s    10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-apiserver-10-10-40-54            1/1     Running   0          54s    10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-controller-manager-10-10-40-54   1/1     Running   0          51s    10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-proxy-jbqkc                      1/1     Running   0          108s   10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-scheduler-10-10-40-54            1/1     Running   0          45s    10.10.40.54   10-10-40-54   <none>           <none>
[root@10-10-40-54 ~]#

At this point all Pods except CoreDNS should be Running; CoreDNS stays Pending because the network plugin has not been deployed yet

Add a worker node (kubeadm join)

Execute kubeadm join on the worker node

[root@10-10-40-95 ~]# kubeadm join 10.10.40.54:6443 --token hog5db.zh5p9z4xi5kvf1g7 --discovery-token-ca-cert-hash sha256:c9c8d056467c345651d1cb6d23fac08beb4ed72ea37e923cd826af12314b9ff0
[preflight] Running pre-flight checks
 [WARNING Hostname]: hostname "10-10-40-95" could not be reached
 [WARNING Hostname]: hostname "10-10-40-95": lookup 10-10-40-95: no such host
[discovery] Trying to connect to API Server "10.10.40.54:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.40.54:6443"
[discovery] Requesting info from "https://10.10.40.54:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.40.54:6443"
[discovery] Successfully established connection with API Server "10.10.40.54:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "10-10-40-95" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@10-10-40-95 ~]#
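
The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it has expired by the time a node is added, a fresh join command can be generated on the master:

kubeadm token create --print-join-command
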
  • View the added nodes
[root@10-10-40-54 ~]# kubectl get node -o wide
NAME          STATUS     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
10-10-40-54   NotReady   master   7h51m   v1.13.5   10.10.40.54   <none>        Red Hat Enterprise Linux Server 7.4 (Maipo)   3.10.0-693.el7.x86_64   docker://18.6.2
10-10-40-95   NotReady   <none>   7h48m   v1.13.5   10.10.40.95   <none>        Red Hat Enterprise Linux Server 7.4 (Maipo)   3.10.0-693.el7.x86_64   docker://18.6.2
[root@10-10-40-54 ~]#
  • Deploy the network plugin (Flannel): Flannel configuration

You can also simply wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml. If the nodes have multiple network interfaces, set the --iface argument to the interface that Flannel should bind to (see the comment in the DaemonSet below).

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0"."plugins": [{"type": "flannel"."delegate": {
            "hairpinMode": true."isDefaultGateway": true}}, {"type": "portmap"."capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16"."Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1  # Adjust to the network interface Flannel should use for inter-node communication
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg


Deploy the Flannel network plugin

[root@10-10-40-54 ~]# kubectl create -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
[root@10-10-40-54 ~]#
  • Check the Flannel Pod status; once all Pods are Running, the installation is complete
[root@10-10-40-54 ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-89cc84847-cmrkw               1/1     Running   0          8h      10.244.1.2    10-10-40-95   <none>           <none>
kube-system   coredns-89cc84847-k2nqs               1/1     Running   0          8h      10.244.1.4    10-10-40-95   <none>           <none>
kube-system   etcd-10-10-40-54                      1/1     Running   0          8h      10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-apiserver-10-10-40-54            1/1     Running   0          8h      10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-controller-manager-10-10-40-54   1/1     Running   0          8h      10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-flannel-ds-amd64-69fjw           1/1     Running   0          2m37s   10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-flannel-ds-amd64-8789j           1/1     Running   0          2m37s   10.10.40.95   10-10-40-95   <none>           <none>
kube-system   kube-proxy-jbqkc                      1/1     Running   0          8h      10.10.40.54   10-10-40-54   <none>           <none>
kube-system   kube-proxy-rv7hs                      1/1     Running   0          8h      10.10.40.95   10-10-40-95   <none>           <none>
kube-system   kube-scheduler-10-10-40-54            1/1     Running   0          8h      10.10.40.54   10-10-40-54   <none>           <none>
[root@10-10-40-54 ~]#
  • Create a BusyBox Pod to test cluster availability. Save the following spec as busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Create the busybox Pod

[root@10-10-40-54 ~]# kubectl create -f busybox.yaml
pod/busybox created
[root@10-10-40-54 ~]# kubectl get pod -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 0/1 ContainerCreating 0 17s <none> 10-10-40-95 <none> <none>
busybox 1/1 Running 0 17s 10.244.1.5 10-10-40-95 <none> <none>

The Pod is running properly, so the Kubernetes cluster is up and working
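
As a further check, in-cluster DNS can be verified from the busybox Pod (note that busybox images newer than 1.28 ship a broken nslookup, so results may vary):

kubectl exec -it busybox -- nslookup kubernetes.default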

Frequently asked Questions

How to re-run kubeadm init after a failed attempt

Run kubeadm reset first, then run kubeadm init again

[root@10-10-40-54 ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
[root@10-10-40-54 ~]#
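
If kubectl was already configured on this host, it can also be worth removing the stale admin config before re-running kubeadm init, otherwise kubectl keeps pointing at certificates from the old cluster (an optional cleanup, not part of kubeadm reset):

rm -rf $HOME/.kube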

Finally

To be continued; comments and corrections are welcome.