The environment
- CentOS 7.6
- K8s 1.13.4
- 3 machines, 1 master, 2 workers
Preparation
Disable swap
Run swapoff to turn swap off temporarily. Note that this only lasts until the next reboot; to disable the swap partition permanently, comment out its line in /etc/fstab.
For the reasoning behind disabling swap, see https://github.com/kubernetes/kubernetes/issues/53533; https://www.zhihu.com/question/374752553 also discusses the performance impact.
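A minimal sketch of both steps (the sed pattern simply comments out every /etc/fstab line that mentions swap, which is harmless for lines that are already comments):
# turn swap off until the next reboot
swapoff -a
# comment out the swap entry so it stays disabled after a reboot
sed -i '/swap/ s/^/#/' /etc/fstab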
Disable the firewall and SELinux
As described in the docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
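The commands above only cover SELinux; if you also want firewalld off, as the heading suggests for a test cluster, a minimal sketch is:
systemctl stop firewalld
systemctl disable firewalld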
Open ports
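If you would rather keep firewalld running than disable it, the ports listed in the kubeadm install guide linked above can be opened instead; a sketch for the master (workers need 10250 and the NodePort range 30000-32767):
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload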
Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
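The modules-load.d file above only loads br_netfilter at the next boot; to load it immediately and confirm the sysctls took effect, an optional check:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables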
Install Docker (all nodes)
Installation
# add the Aliyun docker-ce repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# check which docker versions are available
yum list docker-ce --showduplicates | sort -r
# install the chosen version
yum makecache fast && yum install -y docker-ce-18.09.8-3.el7 docker-ce-cli-18.09.8-3.el7 containerd.io-1.2.0-3.el7
# start docker and enable it at boot
systemctl start docker && systemctl enable docker.service
Modify the default Docker storage location
# stop docker first
systemctl stop docker    # or: service docker stop
# then move the whole /var/lib/docker directory to the target path and link it back
mv /var/lib/docker /home/data/docker
ln -s /home/data/docker /var/lib/docker
# reload the configuration
systemctl daemon-reload
# alternatively, change the storage path in the config file
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://7e61f7f9.m.daocloud.io"],
  "graph": "/new-path/docker"
}
Ali Cloud image acceleration
# visit https://cr.console.aliyun.com/cn-beijing/instances/mirrors for your accelerator address, for example:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://se35r65b.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Install kubeadm, kubelet and kubectl (master and workers)
Add a yum repo by creating /etc/yum.repos.d/kubernetes.repo with the following content
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Install kubelet, kubectl and kubeadm
yum install -y kubelet-1.13.4 kubeadm-1.13.4 kubectl-1.13.4 kubernetes-cni-0.6.0
systemctl enable --now kubelet
Manually pull images from Aliyun
Run kubeadm config images pull to check connectivity to gcr.io. If the pull succeeds, go to the next step. If it fails, gcr.io is not reachable and the images have to be pulled manually; the following script pulls them from Aliyun and re-tags them as k8s.gcr.io images:
#!/bin/bash
images=(
  kube-apiserver:v1.13.4
  kube-controller-manager:v1.13.4
  kube-scheduler:v1.13.4
  kube-proxy:v1.13.4
  etcd:3.2.24
  coredns:1.2.6
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
Initialize (master)
Remember to add --pod-network-cidr, because flannel is used as the network component below
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
Installation success message
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.22.9.162:6443 --token e225cp.14g848dy4vpoas75 --discovery-token-ca-cert-hash sha256:aaf9910fb2b94e8c2bc2aea0b2a08538796d8322331561ef1094bebe8a7a790f
Configuration commands to run before using the Kubernetes cluster for the first time
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands are needed because access to a Kubernetes cluster is encrypted and authenticated by default. They copy the security configuration file generated during deployment into the current user's .kube directory, and kubectl uses the credentials in that directory to access the cluster. Without this step, we would have to tell kubectl where the security configuration file is every time via the KUBECONFIG environment variable.
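For example, instead of copying admin.conf into ~/.kube, the same thing can be done per shell session (a sketch):
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes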
Generate the join command for additional nodes on the master
kubeadm token create --print-join-command
Deploy the Flannel network component
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Check the status
# use kubectl get to check the current state of the nodes
kubectl get nodes
# use kubectl describe to view the details, status and events (Event) of the Node object
kubectl describe node master
# re-check the Pod status with kubectl get
kubectl get pods -n kube-system
# check the kubelet logs if something looks wrong
journalctl -l -u kubelet
Master Node Configuration
- Remove the default taint on the master node
By default the master node carries a taint, so the cluster does not schedule Pods onto it. If you want Pods to be scheduled on the master as well, remove the taint as follows:
# check the taint
kubectl describe node master | grep -i taints
# remove the taint
kubectl taint nodes master node-role.kubernetes.io/master-
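If you later want the master back to its default behaviour, the taint can be re-applied (a sketch of the reverse operation):
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule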
Joining a cluster (worker)
Join the cluster using the join command printed when the master was initialized
kubeadm join 10.22.9.162:6443 --token 43t2na.80oiehldy76rw6lz --discovery-token-ca-cert-hash sha256:67fd28cb6fd03242eda63c7a395096aba1a6784f7234a9b6269ff0941e9070e3
Check the cluster status on the master
kubectl get nodes
Installing the Dashboard UI (Master)
Get the configuration file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Pull the image manually
docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
docker tag anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
Modifying the Configuration File (Ports)
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
Run and view the status
# run it
kubectl apply -f kubernetes-dashboard.yaml
# check the pod and service status
kubectl get pods,svc -n kube-system
The login
# create a service account and bind it to the cluster-admin role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# obtain the login token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
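With the Service changed to NodePort 30001 earlier, the dashboard login page should be reachable in a browser; <node-ip> below is a placeholder for any cluster node's IP, and the token printed above goes into the token field:
# the certificate is self-signed, so the browser will show a warning
https://<node-ip>:30001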
Clear or uninstall K8s completely
This is a quick gist for completely uninstalling Kubernetes. If the machine is a worker node, first drain it and delete it from the master:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then remove kubeadm completely
kubeadm reset
# on debian base
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
#on centos base
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*
# on debian base
sudo apt-get autoremove
#on centos base
sudo yum autoremove
sudo rm -rf ~/.kube