Environment preparation

Three machines:

  • Master: 192.168.0.1
  • Node1: 192.168.0.2
  • Node2: 192.168.0.3

Basic Environment Settings

  • [1] Set the hostname on all three machines so they can reach each other by hostname (via /etc/hosts). Set the hostname on each machine:

    sudo hostnamectl set-hostname k8s-master
    sudo hostnamectl set-hostname k8s-node1
    sudo hostnamectl set-hostname k8s-node2

    Then edit the corresponding hosts

    vim /etc/hosts

    Add the hostname mappings for the cluster machines:

    192.168.0.1 k8s-master
    192.168.0.2 k8s-node1
    192.168.0.3 k8s-node2
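    The /etc/hosts edits above can also be scripted. A minimal sketch, assuming the same IPs; it writes to a temporary copy here so it is safe to try, point HOSTS_FILE at /etc/hosts on the real machines:

    ```shell
    #!/bin/sh
    # Append the cluster hostname mappings if they are not already present.
    # HOSTS_FILE is a temporary copy for illustration; use /etc/hosts for real.
    HOSTS_FILE=/tmp/hosts.demo
    cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || : > "$HOSTS_FILE"
    grep -q 'k8s-master' "$HOSTS_FILE" || cat >> "$HOSTS_FILE" <<'EOF'
    192.168.0.1 k8s-master
    192.168.0.2 k8s-node1
    192.168.0.3 k8s-node2
    EOF
    grep 'k8s-' "$HOSTS_FILE"
    ```

    The grep guard keeps the script idempotent, so running it twice does not duplicate the entries.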
  • [2] Time synchronization

    The goal is to keep the clocks of the cluster machines consistent, e.g. with chrony or ntpd; configure whichever NTP setup suits your environment.

  • [3] Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
  • [4] Disable virtual memory swapping

    sudo swapoff -a
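    Note that swapoff -a only disables swap until the next reboot; to keep it off, the swap entry in /etc/fstab also has to be commented out. A sketch of that edit, run against a sample copy here (an assumption: a standard single-line swap entry; apply the same sed to the real /etc/fstab):

    ```shell
    # Sample fstab for illustration; the same sed works on the real /etc/fstab.
    cat > /tmp/fstab.demo <<'EOF'
    /dev/mapper/centos-root /    xfs  defaults 0 0
    /dev/mapper/centos-swap swap swap defaults 0 0
    EOF
    # Comment out any uncommented line whose filesystem type is swap.
    sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo
    grep swap /tmp/fstab.demo
    ```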
  • [5] Enable kernel parameters

    echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

    Note: if /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, load the bridge netfilter module first:

    modprobe br_netfilter
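    The echo above takes effect immediately but does not survive a reboot. A common way to persist it is a drop-in file under /etc/sysctl.d/ loaded with sysctl --system. A sketch (the file name is arbitrary, and it is written to /tmp here for illustration; net.ipv4.ip_forward is commonly set alongside it):

    ```shell
    # Write the bridge netfilter settings to a sysctl drop-in file.
    # /tmp is used for illustration; install it as /etc/sysctl.d/k8s.conf
    # and apply with: sysctl --system
    cat > /tmp/k8s-sysctl.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    cat /tmp/k8s-sysctl.conf
    ```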

Configure the yum source for installation

  • Kubernetes yum source
[1] vim /etc/yum.repos.d/kubernetes.repo
[2] Add the following repository definition:

    [kubernetes]
    name = kubernetes
    baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled = 1
    gpgcheck = 1
    gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  • Docker yum source
[1] cd /etc/yum.repos.d/
[2] wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  • Perform the installation
yum install kubelet kubeadm kubectl docker-ce -y

Initialize the K8s cluster

Execute on master machine

kubeadm init --kubernetes-version=v1.22.2 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap --ignore-preflight-errors=NumCPU --image-repository registry.aliyuncs.com/google_containers

Explanation:

  • --apiserver-advertise-address: the IP address of the master that the API server uses to communicate with the other nodes in the cluster.
  • --service-cidr: the service network range, i.e. the IP segment used for load-balancing VIPs.
  • --pod-network-cidr: the pod network range, i.e. the IP segment assigned to pods.
  • --image-repository: k8s.gcr.io is not reachable from China. Since version 1.13 this parameter (default k8s.gcr.io) can point at the Aliyun mirror: registry.aliyuncs.com/google_containers.
  • --kubernetes-version: the version number to install.
  • --ignore-preflight-errors: ignore specific preflight errors, e.g. [ERROR NumCPU] and [ERROR Swap] via --ignore-preflight-errors=NumCPU and --ignore-preflight-errors=Swap.

Normally the initialization succeeds. If something goes wrong, refer to the problem record at the end.

Execute the command according to the initialization success message

[1] Execute as prompted

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
 export KUBECONFIG=/etc/kubernetes/admin.conf

[2] Record this line of output; it is needed when the worker nodes join the cluster. (If the token expires later, a fresh join command can be generated on the master with kubeadm token create --print-join-command.)

kubeadm join 192.168.99.100:6443 --token k0mu9t.pkam9p0uk5qk1wez \
	--discovery-token-ca-cert-hash sha256:9e7165c8f91fe4d0448528f65e79193ef1a8f9aae24c92712e0045e623e87f97

The problem record

  • The Docker Service is not started
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'

systemctl enable docker.service

  • Reinitialization complains that related files already exist
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists


Delete the leftover files: rm -rf /etc/kubernetes/*

  • Reinitialization indicates that the port is occupied
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

kubeadm reset

  • Docker and Kubelet’s Cgroup driver are inconsistent
Sep 23 17:05:38 sa-service-istio-3 kubelet: E0923 17:05:38.928155 10560 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
  • Make the cgroup drivers consistent; here we change the Docker configuration

    [1] Check the cgroup configuration of docker

    docker info

    [2] Modify or create /etc/docker/daemon.json

    {"exec-opts": ["native.cgroupdriver=systemd"]}

    [3] Restart Docker: systemctl restart docker
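    The three steps above can be sketched together; the config is written to /tmp here for safety, while the real path is /etc/docker/daemon.json:

    ```shell
    # Write the Docker daemon config selecting the systemd cgroup driver.
    # Written to /tmp for illustration; the real file is /etc/docker/daemon.json,
    # followed by: systemctl daemon-reload && systemctl restart docker
    cat > /tmp/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    cat /tmp/daemon.json
    ```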

  • Initialization succeeded

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.100:6443 --token k0mu9t.pkam9p0uk5qk1wez \
	--discovery-token-ca-cert-hash sha256:9e7165c8f91fe4d0448528f65e79192ef1a8f9aae24c92712e0045e623e87f97
  • scheduler and controller-manager report unhealthy

    If kubectl get cs shows this, it is because /etc/kubernetes/manifests/kube-controller-manager.yaml and kube-scheduler.yaml set --port=0 by default.

    Edit the two files and comment out the line containing --port=0.
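    The edit can be scripted with sed; a sketch against a sample manifest fragment (an assumption about the layout; apply the same sed to the two files under /etc/kubernetes/manifests/, and kubelet will pick up the change):

    ```shell
    # Sample fragment of a static pod manifest's command arguments.
    cat > /tmp/kube-scheduler.demo.yaml <<'EOF'
        - --leader-elect=true
        - --port=0
    EOF
    # Comment out the --port=0 line; the same sed applies to
    # kube-controller-manager.yaml and kube-scheduler.yaml for real.
    sed -ri 's/^([[:space:]]*- --port=0)/#\1/' /tmp/kube-scheduler.demo.yaml
    cat /tmp/kube-scheduler.demo.yaml
    ```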

  • Restart kubelet

    After restarting kubelet, the components may take a few seconds to report healthy again, so wait a moment and check once more:

    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true","reason":""}