Deploy version 1.21.3 of Kubernetes using kubeadm
1 Overview
These notes cover building a Kubernetes v1.21.3 cluster on three CentOS 7.9 virtual machines used as test hosts. kubeadm, kubelet, and kubectl are installed via YUM, and flannel is used as the network component.
2 Environment Preparation
Unless otherwise noted, run all commands in this deployment as the root user.
2.1 Hardware Information
IP | hostname | mem | disk | role |
---|---|---|---|---|
192.168.4.120 | centos79-node1 | 4GB | 30GB | K8s control-plane node |
192.168.4.121 | centos79-node2 | 4GB | 30GB | K8s worker node 1 |
192.168.4.123 | centos79-node3 | 4GB | 30GB | K8s worker node 2 |
2.2 Software Information
software | version |
---|---|
CentOS | CentOS Linux release 7.9.2009 (Core) |
Kubernetes | 1.21.3 |
Docker | 20.10.8 |
Kernel | 5.4.138-1.el7.elrepo.x86_64 |
2.3 Ensure that the environment is correct
purpose | commands |
---|---|
Ensure that nodes in the cluster can communicate with each other | ping -c 3 <ip> |
Ensure that the MAC address is unique | ip link or ifconfig -a |
Ensure that the host names in the cluster are unique | query with hostnamectl status, modify with hostnamectl set-hostname <hostname> |
Ensure that the system product UUID is unique | dmidecode -s system-uuid or sudo cat /sys/class/dmi/id/product_uuid |
To change the MAC address, run the following command:
ifconfig eth0 down
ifconfig eth0 hw ether 00:0c:29:84:fd:a4
ifconfig eth0 up
If the product_uuid is not unique, reinstall CentOS.
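These uniqueness checks can be batched once SSH access is in place (section 2.5 sets it up). A minimal sketch, with the host list taken from the hardware table above; the loop itself is illustrative, not part of the original procedure:
# Print hostname, MAC addresses, and product UUID for each host in one pass
for ip in 192.168.4.120 192.168.4.121 192.168.4.123; do
  echo "== $ip =="
  ssh root@$ip 'hostname; ip link show | awk "/ether/ {print \$2}"; cat /sys/class/dmi/id/product_uuid'
done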
2.4 Ensure that the port is open properly
centos79-node1 node port check:
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 6443 | kube-apiserver |
TCP | Inbound | 2379-2380 | etcd API |
TCP | Inbound | 10250 | kubelet API |
TCP | Inbound | 10251 | kube-scheduler |
TCP | Inbound | 10252 | kube-controller-manager |
Check the ports on the centos79-node2 and centos79-node3 nodes:
Protocol | Direction | Port Range | Purpose |
---|---|---|---|
TCP | Inbound | 10250 | kubelet API |
TCP | Inbound | 30000-32767 | NodePort Services |
2.5 Configuring Host Trust
Configure hosts resolution:
cat >> /etc/hosts <<EOF
192.168.4.120 centos79-node1
192.168.4.121 centos79-node2
192.168.4.123 centos79-node3
EOF
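A quick way to confirm the new entries resolve (hostnames as added above):
for h in centos79-node1 centos79-node2 centos79-node3; do ping -c 1 $h; done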
Generate an SSH key on centos79-node1 and distribute it to each node:
# Generate SSH key and press enter
ssh-keygen -t rsa
# Copy the newly generated key to each node's trusted list, entering each host's password when prompted
ssh-copy-id root@centos79-node1
ssh-copy-id root@centos79-node2
ssh-copy-id root@centos79-node3
2.6 Disabling Swap
Swap uses disk blocks as extra memory only when RAM runs short, and disk I/O is far slower than memory, so disable swap to improve performance.
swapoff -a
cp /etc/fstab /etc/fstab.bak
cat /etc/fstab.bak | grep -v swap > /etc/fstab
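To confirm swap is fully off, the following should list no swap devices and show a zero swap total:
swapon -s
free -h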
2.7 Disabling SELinux
With SELinux enforcing, kubelet may report Permission denied when mounting directories, so set SELinux to permissive or disabled. Note that permissive mode still logs a warning for each violation:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
2.8 Setting the Time Zone and Time Synchronization
timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd
Check the synchronization status:
timedatectl status
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
2.9 Disabling the Firewall
systemctl stop firewalld
systemctl disable firewalld
2.10 Modifying Kernel Parameters
cp /etc/sysctl.conf{,.bak}
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.conf
modprobe br_netfilter
sysctl -p
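Confirm the parameters are active; each key should print its configured value (1 for the forwarding and bridge keys, 0 for swappiness):
sysctl -n net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness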
2.11 Enabling IPVS support
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
chmod 755 /etc/sysconfig/modules/ipvs.modules
sh /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
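Loading these modules by itself does not switch kube-proxy to IPVS; kubeadm defaults kube-proxy to iptables mode. If you want IPVS once the cluster is running (section 4), one common approach, sketched here rather than required by this guide, is to edit the kube-proxy ConfigMap and recreate its pods:
kubectl -n kube-system edit configmap kube-proxy    # set mode: "ipvs" in the embedded config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy    # pods are recreated with the new mode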
2.12 Upgrading the Kernel Version
Refer to the linked kernel-upgrade guide.
3 Deploying Docker
Docker needs to be installed on all nodes.
3.1 Adding the Docker YUM Source
# Install necessary dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# add aliyun docker-ce yum source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild yum cache
yum makecache fast
3.2 Installing Docker
# View available Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
Loading mirror speeds from cached hostfile
 * elrepo: mirrors.tuna.tsinghua.edu.cn
docker-ce.x86_64  3:20.10.8-3.el7          docker-ce-stable
docker-ce.x86_64  3:20.10.7-3.el7          docker-ce-stable
docker-ce.x86_64  3:20.10.6-3.el7          docker-ce-stable
docker-ce.x86_64  3:20.10.5-3.el7          docker-ce-stable
...
docker-ce.x86_64  18.06.0.ce-3.el7         docker-ce-stable
docker-ce.x86_64  17.03.1.ce-1.el7.centos  docker-ce-stable
# Install the specified version of Docker
yum install -y docker-ce-20.10.8-3.el7
This example installs version 20.10.8. Note that the version string passed to yum does not include the epoch (the 3: prefix shown in the listing).
3.3 Ensuring the Network Modules Load Automatically at Startup
lsmod | grep overlay
lsmod | grep br_netfilter
If the preceding command does not return any output or a message is displayed indicating that the file does not exist, run the following command:
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
3.4 Making Bridge Traffic Visible to iptables
Perform the following operations for each node:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Verify that the settings took effect; both commands should return 1:
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables
3.5 Configuring Docker
mkdir /etc/docker
Set the cgroup driver to systemd (the official Kubernetes recommendation), limit container log size, set the storage driver, and optionally relocate the Docker data directory:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://gp8745ui.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}
EOF
Modify line 13 of the service script:
vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit core=0:0
systemctl daemon-reload
Add startup, start immediately:
systemctl enable --now docker
3.6 Verifying That Docker Works Correctly
# Check docker information to determine whether it is consistent with the configuration
docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.8
 Storage Driver: overlay2
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.138-1.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.846GiB
 Name: centos79-node1
 ID: GFMO:BC7P:5L4S:JACH:EX5I:L6UM:AINU:A3SE:E6B6:ZLBQ:UBPG:QV7O
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
# hello-world test
docker run --rm hello-world
# Delete test image
docker rmi hello-world
3.7 Adding a User to a Docker group
This lets non-root users run docker commands without sudo.
# Add the user to the docker group
usermod -aG docker <USERNAME>
# Apply the new group membership to the current session
newgrp docker
4 Deploying the Kubernetes Cluster
Unless otherwise noted, perform the following steps on every node:
4.1 Adding the Kubernetes Source
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; enter y when prompted to accept the GPG keys
yum makecache fast
4.2 Installing Kubeadm, Kubelet, and Kubectl
- kubeadm and kubelet must be installed on every node.
- kubectl is required only on centos79-node1; worker nodes normally do not run kubectl, so it can be skipped there.
# List the available versions
yum list kubeadm.x86_64 --showduplicates | sort -r
version=1.21.3-0
yum install -y kubelet-${version} kubeadm-${version} kubectl-${version}
systemctl enable kubelet
4.3 Configuring Automatic Completion Commands
# Install the bash auto-completion package
yum install bash-completion -y
# Enable completion for kubectl and kubeadm; takes effect at next login
kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm
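To get completion in the current shell without logging out again, the generated files can simply be sourced (this assumes the bash-completion helpers are available, which the package above provides):
source /usr/share/bash-completion/bash_completion
source /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubeadm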
4.4 Setting a Proxy for Docker (skip this step; this guide uses Aliyun mirrors instead)
When kubeadm deploys a Kubernetes cluster, it pulls images such as k8s.gcr.io/kube-apiserver from Google's registry k8s.gcr.io by default, which is not reachable from mainland China. If necessary, set up a suitable proxy to obtain the images, or download them from Docker Hub and retag them locally.
Briefly, to set up the proxy: edit /lib/systemd/system/docker.service and add entries like the following to the [Service] section, replacing PROXY_SERVER_IP and PROXY_PORT with your actual values.
Environment="HTTP_PROXY=http://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="HTTPS_PROXY=https://$PROXY_SERVER_IP:$PROXY_PORT"
Environment="NO_PROXY = 192.168.4.0/24"
Reload Systemd and restart the Docker service:
systemctl daemon-reload
systemctl restart docker.service
Note in particular that on a kubeadm-deployed cluster, the core components (kube-apiserver, kube-controller-manager, kube-scheduler, and etcd) all run as static Pods, and their images come from the k8s.gcr.io registry by default. Since that registry is not directly reachable, there are two common workarounds; this guide uses the easier one:
- Use a proxy service that can reach k8s.gcr.io
- Use a domestic mirror registry such as gcr.azk8s.cn/google_containers or registry.aliyuncs.com/google_containers (note: testing shows these were disabled as of v1.22.0)
4.5 Viewing the Images Required for a Specific Kubernetes Version
kubeadm config images list --kubernetes-version v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
4.6 Pulling the Images
vim pullimages.sh
#!/bin/bash
# Pull the required images from the Aliyun mirror, retag them as k8s.gcr.io, then remove the mirror tags
ver=v1.21.3
registry=registry.cn-hangzhou.aliyuncs.com/google_containers
images=`kubeadm config images list --kubernetes-version=$ver | awk -F '/' '{print $2}'`

for image in $images
do
  if [ $image != coredns ]; then
    docker pull ${registry}/$image
    if [ $? -eq 0 ]; then
      docker tag ${registry}/$image k8s.gcr.io/$image
      docker rmi ${registry}/$image
    else
      echo "ERROR: failed to download image $image"
    fi
  else
    docker pull coredns/coredns:1.8.0
    docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
    docker rmi coredns/coredns:1.8.0
  fi
done
chmod +x pullimages.sh && ./pullimages.sh
When the pull completes, run docker images to view the images:
docker images
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver            v1.21.3    3d174f00aa39   3 weeks ago     126MB
k8s.gcr.io/kube-proxy                v1.21.3    adb2816ea823   3 weeks ago     131MB
k8s.gcr.io/kube-controller-manager   v1.21.3    bc2bb319a703   3 weeks ago     120MB
k8s.gcr.io/pause                     3.4.1      0f8457a4c2ec   6 months ago    683kB
k8s.gcr.io/coredns/coredns           v1.8.0     296a6d5035e2   9 months ago    42.5MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   11 months ago   253MB
Export the image and copy it to another node:
docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":"; ORS=" "}{print $1,$2}') -o k8s-images.tar
scp k8s-images.tar root@centos79-node2:~
scp k8s-images.tar root@centos79-node3:~
Import from another node:
docker load -i k8s-images.tar
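On each worker, confirm the images were imported:
docker images | grep k8s.gcr.io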
4.7 Modifying the Default CGroup Driver configuration in Kubelet
mkdir -p /var/lib/kubelet
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
4.8 Initializing a Master Node
Perform this step only on centos79-node1.
4.8.1 Generating the Kubeadm initial Configuration File
Optional; needed only if you want to customize the initial configuration.
kubeadm config print init-defaults > kubeadm-config.yaml
Modify the configuration file:
# Original:
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
nodeRegistration:
  name: node
# Replace with:
localAPIEndpoint:
  advertiseAddress: 192.168.4.120
nodeRegistration:
  name: centos79-node1
# Original:
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
# Replace with:
kubernetesVersion: 1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
4.8.2 Checking That the Environment Is Ready
kubeadm init phase preflight
I0810 13:46:36.581916   20512 version.go:257] remote version is much newer: v1.22.0; falling back to: stable-1.21
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
4.8.3 Initializing the Master
10.244.0.0/16 is flannel's default Pod network CIDR; the value to set depends on the requirements of your network component.
kubeadm init --config=kubeadm-config.yaml --ignore-preflight-errors=2 --upload-certs | tee kubeadm-init.log
The output is as follows:
W0810 14:55:25.741990 13062 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "name"
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "node" could not be reached
[WARNING Hostname]: hostname "node": lookup node on 223.5.5.5:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 192.168.4.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [192.168.4.120 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [192.168.4.120 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.503592 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
fceedfd1392b27957c5f6345661d62dc09359b61e07f76f444a9e3095022dab4
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.4.120:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
4.8.4 Granting kubectl Access to a Regular User
su - iuskye
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/admin.conf
sudo chown $(id -u):$(id -g) $HOME/.kube/admin.conf
echo "export KUBECONFIG=$HOME/.kube/admin.conf" >> ~/.bashrc
exit
4.8.5 Configuring Master Authentication
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
. /etc/profile
This resolves the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?". At this point the master node is initialized successfully, but because no network component is installed yet, it cannot communicate properly with the other nodes.
4.8.6 Installing Network Components
Take flannel as an example:
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
# Downloading the image can be very slow; pull it manually first, retrying a few times if necessary
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f kube-flannel.yml
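Flannel runs as a DaemonSet in the kube-system namespace (for this version of the manifest); confirm its pod reaches Running before proceeding:
kubectl get pods -n kube-system | grep flannel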
4.8.7 Checking the Status of centos79-node1
kubectl get nodes
NAME             STATUS     ROLES                  AGE     VERSION
centos79-node2   NotReady   <none>                 7m29s   v1.21.3
centos79-node3   NotReady   <none>                 7m15s   v1.21.3
node             Ready      control-plane,master   33m     v1.21.3
If STATUS shows NotReady, run kubectl describe node centos79-node2 to view details. Servers with lower performance take longer to reach the Ready state.
4.9 Initializing Nodes and Joining Them to the Cluster
4.9.1 Obtaining the Cluster Join Command
On centos79-node1, run the command to create a new token:
kubeadm token create --print-join-command
Run the following command to join the cluster:
kubeadm join 192.168.4.120:6443 --token 8dj8i5.6jua6ogqvve1ci5u --discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
You can also use the join command printed in the kubeadm init output on the master.
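Bootstrap tokens created this way expire after 24 hours by default; existing tokens can be inspected on the master:
kubeadm token list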
4.9.2 Running the Join Command on Each Node
kubeadm join 192.168.4.120:6443 --token 8dj8i5.6jua6ogqvve1ci5u --discovery-token-ca-cert-hash sha256:6ad6978a7e72cfae06c836886276634c87bedfa8ff02e44f574ffb96435b4c2b
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
4.10 Checking Cluster Node Status
kubectl get nodes
NAME             STATUS     ROLES                  AGE     VERSION
centos79-node2   NotReady   <none>                 7m29s   v1.21.3
centos79-node3   NotReady   <none>                 7m15s   v1.21.3
node             Ready      control-plane,master   33m     v1.21.3
Nodes show NotReady while they are still initializing; wait a moment and check again:
NAME             STATUS   ROLES                  AGE     VERSION
centos79-node2   Ready    <none>                 8m29s   v1.21.3
centos79-node3   Ready    <none>                 8m15s   v1.21.3
node             Ready    control-plane,master   34m     v1.21.3
4.11 Deploying the Dashboard
4.11.1 Deployment
curl -o recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
By default, the Dashboard can only be accessed from inside the cluster. Change the Service type to NodePort to expose it to external users.
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f recommended.yaml
# Downloading the images can be very slow; pull them manually first, retrying if necessary
docker pull kubernetesui/dashboard:v2.3.1
docker pull kubernetesui/metrics-scraper:v1.0.6
kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-856586f554-nb68k   0/1     ContainerCreating   0          52s
pod/kubernetes-dashboard-67484c44f6-shtz7        0/1     ContainerCreating   0          52s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.188.208   <none>        8000/TCP        52s
service/kubernetes-dashboard        NodePort    10.97.164.152   <none>        443:30001/TCP   53s
The containers are still being created; check again later:
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-856586f554-nb68k   1/1     Running   0          2m11s
pod/kubernetes-dashboard-67484c44f6-shtz7        1/1     Running   0          2m11s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.188.208   <none>        8000/TCP        2m11s
service/kubernetes-dashboard        NodePort    10.97.164.152   <none>        443:30001/TCP   2m12s
Visit https://<NodeIP>:30001. The Dashboard serves a self-signed certificate, so Firefox and Chrome may refuse to open the page unless you add a security exception.
Create a service account and bind it to the default cluster-admin administrator cluster role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-q2kjk
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: fa1e812e-4487-4288-a444-d4ba49711366

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4OWQ5ZUJ5MDlEMkdQSnBYeUtXZDg5M2ZjX090RkhPOUtQZ3JTc1B0Z0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTJramsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmExZTgxMmUtNDQ4Ny00Mjg4LWE0NDQtZDRiYTQ5NzExMzY2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.nCpdYK5SjhAI8wqDP6QEDx9dyD4n5yCrx8eZ3R5XkR99vo8diMFdL_6VHtiQekQpwVc7vCkQ0qYhpaGjD2Pzn4EpU44UhQFH5EpG4L5zYvQf6QHBgaZJ68dQe1nMUUMto2jbTq8lEBt3FsJT_If6TkfeHtwfR-X8D2Nm1M8E153hXUPycSbGZImPeE-JVqRC3IJuhv6xgYi-EE08va2d6kDd4MBm-XdCm7QweG5cZaCQAP1qqF8kPfNZzelAGDe6F8V2caxAUECpNE6e4ZW2-h0D7Hp4bZpM4hZZpVr6WCfxuKXwPd-2srorjLi8h_lqSdZCJKJ56TpsED6nkBRffg
Get the token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4OWQ5ZUJ5MDlEMkdQSnBYeUtXZDg5M2ZjX090RkhPOUtQZ3JTc1B0Z0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcTJramsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmExZTgxMmUtNDQ4Ny00Mjg4LWE0NDQtZDRiYTQ5NzExMzY2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.nCpdYK5SjhAI8wqDP6QEDx9dyD4n5yCrx8eZ3R5XkR99vo8diMFdL_6VHtiQekQpwVc7vCkQ0qYhpaGjD2Pzn4EpU44UhQFH5EpG4L5zYvQf6QHBgaZJ68dQe1nMUUMto2jbTq8lEBt3FsJT_If6TkfeHtwfR-X8D2Nm1M8E153hXUPycSbGZImPeE-JVqRC3IJuhv6xgYi-EE08va2d6kDd4MBm-XdCm7QweG5cZaCQAP1qqF8kPfNZzelAGDe6F8V2caxAUECpNE6e4ZW2-h0D7Hp4bZpM4hZZpVr6WCfxuKXwPd-2srorjLi8h_lqSdZCJKJ56TpsED6nkBRffg
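Alternatively, the token can be printed directly with a jsonpath query; a sketch using the service account created above:
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d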
Note that the copied token may contain line breaks; if so, join it back into a single line in a text editor before pasting.
Log in to Dashboard using the output token.
4.11.2 Login Page
4.11.3 Pods
4.11.4 Service
4.11.5 Config Maps
4.11.6 Secrets
4.11.7 Cluster Role Bindings
4.11.8 NameSpace
5 Resources Provided by the Author
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-apiserver:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-scheduler:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-proxy:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/kube-controller-manager:v1.21.3
docker pull registry.cn-beijing.aliyuncs.com/iuskye/coredns:v1.8.0
docker pull registry.cn-beijing.aliyuncs.com/iuskye/etcd:3.4.13-0
docker pull registry.cn-beijing.aliyuncs.com/iuskye/pause:3.4.1
docker pull registry.cn-beijing.aliyuncs.com/iuskye/dashboard:v2.3.1
docker pull registry.cn-beijing.aliyuncs.com/iuskye/metrics-scraper:v1.0.6
docker pull registry.cn-beijing.aliyuncs.com/iuskye/flannel:v0.14.0
Retag:
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag registry.cn-beijing.aliyuncs.com/iuskye/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker tag registry.cn-beijing.aliyuncs.com/iuskye/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-beijing.aliyuncs.com/iuskye/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.cn-beijing.aliyuncs.com/iuskye/dashboard:v2.3.1 kubernetesui/dashboard:v2.3.1
docker tag registry.cn-beijing.aliyuncs.com/iuskye/metrics-scraper:v1.0.6 kubernetesui/metrics-scraper:v1.0.6
docker tag registry.cn-beijing.aliyuncs.com/iuskye/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
6 References
- The Definitive Guide to Kubernetes, 4th Edition
- The official Kubernetes documentation