1. Pre-environment
⭐ Note: All commands in this section run on all machines
1.1 Three interconnected machines
- For virtual machine setup, see my previous article: Environment: Virtualbox+Vagrant Installing Centos7
- Refer to this article to configure the hosts file so that machines in the cluster can be reached by host name: Still manually uploading files to the cluster? Distribution methods: scp, rsync, xsync-plus
1.2 Checking the Linux Version
- Kubernetes requires CentOS version 7.5 or later:
cat /etc/redhat-release
1.3 Checking host names
- If the machines were created by Vagrant, the host names are already configured in the Vagrantfile, so there is no need to configure them again here
- To view the host name:
hostnamectl
- Change the host name:
hostnamectl set-hostname <hostname>
1.4 Synchronizing Machine Time
- Kubernetes requires that node clocks in the cluster be accurate and consistent. Here the chronyd service is used to synchronize time from the network directly; in an enterprise you are advised to run an internal time synchronization server
- Install chrony (already present on most CentOS 7 systems):
yum install -y chrony
- View the time: timedatectl
- Change the time zone to Shanghai: timedatectl set-timezone Asia/Shanghai
- Change the synchronization server to Aliyun's NTP server: sed -i.bak '3,6d' /etc/chrony.conf && sed -i '3cserver ntp1.aliyun.com iburst' /etc/chrony.conf
- Start chronyd and enable it at boot: systemctl start chronyd && systemctl enable chronyd
- View the synchronization sources: chronyc sources
- Check that chrony is in sync: chronyc tracking
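After the two sed commands above, the relevant part of /etc/chrony.conf should look roughly like this (a sketch; only the server line is changed, the rest is the stock CentOS default):

```conf
# /etc/chrony.conf (excerpt after the edits above)
# The stock pool/server entries on lines 3-6 were deleted and replaced
# by a single Aliyun NTP server; iburst speeds up the initial sync.
server ntp1.aliyun.com iburst

# Record the rate at which the system clock gains or loses time.
driftfile /var/lib/chrony/drift
```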
1.5 Disabling the Iptables and Firewalld services
- Kubernetes and Docker generate a large number of iptables rules at runtime. To keep them from getting mixed up with the system's own rules, turn the system firewall services off
- Disable the Firewalld service
systemctl stop firewalld
systemctl disable firewalld
- Disable the iptables service (on CentOS 7 and later the firewall is managed by firewalld, so the iptables service may not exist)
systemctl stop iptables
systemctl disable iptables
1.6 Disabling SELinux
- SELinux is a Linux security service that can cause all kinds of strange problems during cluster installation if it is not disabled
- Disable SELinux
- Temporarily effective:
sudo setenforce 0
- Permanently effective (choose this; takes effect after a reboot): edit /etc/selinux/config and change the value of SELINUX to disabled:
vi /etc/selinux/config
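For reference, after the permanent edit /etc/selinux/config should contain (the stock file with only the SELINUX line changed):

```conf
# /etc/selinux/config
# SELINUX= can be: enforcing, permissive, or disabled
SELINUX=disabled
# SELINUXTYPE= only applies when SELinux is enabled
SELINUXTYPE=targeted
```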
1.7 Disabling Swap Partitions
- A swap partition is a virtual memory partition that uses disk space as memory once physical memory is exhausted
- Having swap enabled can have a very negative impact on system performance, so Kubernetes requires that swap be disabled on every node
- If the swap partition cannot be disabled for some reason, you need to set specific parameters during cluster installation
- Disable swap
- Temporarily effective:
swapoff -a
- Permanently effective (choose this one):
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
- Also edit the partition configuration file and comment out the swap line (takes effect after a reboot):
vi /etc/fstab
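The line to comment out in /etc/fstab looks roughly like this (a sketch; device names vary from machine to machine):

```conf
# /etc/fstab (excerpt)
/dev/mapper/centos-root   /       xfs     defaults    0 0
# /dev/mapper/centos-swap swap    swap    defaults    0 0   <- commented out
```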
1.8 Configuring Linux Kernel Parameters
- Add bridge filtering and address forwarding: edit the /etc/sysctl.d/kubernetes.conf file and add the following configuration:
sudo vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
- Reload configuration, load bridge filter module:
modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf
- Check whether the bridge filtering module is loaded successfully.
lsmod | grep br_netfilter
1.9 Configuring IPVS
- There are two proxy modes for Services in Kubernetes:
- Based on ipvs (choose this one): IPVS performance is noticeably better, but to use it the IPVS kernel modules must be loaded manually
- 1. Install ipset and ipvsadm:
yum install -y ipset ipvsadm
- 2 Add the module to be loaded and write the script file:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
- 3 Add the execute permission to the script file:
chmod +x /etc/sysconfig/modules/ipvs.modules
- 4 Execute the script file:
/bin/bash /etc/sysconfig/modules/ipvs.modules
- 5 Check whether the corresponding module is successfully loaded.
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
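As an alternative to the /etc/sysconfig/modules script above, the same modules can be loaded at boot through systemd's modules-load mechanism (a sketch; either approach works on CentOS 7):

```conf
# /etc/modules-load.d/ipvs.conf
# systemd-modules-load reads this file at boot and modprobes each module listed
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
```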
- Based on iptables:
- Alternatively, configure iptables to see bridged IPv4 traffic:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
- Reload the configuration:
sysctl -p /etc/sysctl.d/k8s.conf
1.10 Installing ipvsadm
yum -y install ipset ipvsadm
1.11 Restarting the Server
- After completing the above steps, you need to restart the Linux system:
reboot
2. Install Docker
- Note: All commands in this section run on all machines
- K8s runs on the Docker container environment, so install Docker first; see: Application Container Engine Docker (1): Docker Installation
- Docker needs to be configured to start at boot
- The official K8s documentation recommends not mixing cgroup managers on a node: running both cgroupfs (Docker's default) and systemd can make the node unstable when server resources are under heavy pressure. Modify Docker so that it, like K8s, uses systemd:
sed -i.bak "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
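Instead of patching the systemd unit file with sed, the same cgroup driver can be set in Docker's /etc/docker/daemon.json (a common alternative; use one approach, not both, or Docker will fail to start because the option is set twice):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```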
- Restart Docker for the configuration to take effect
- systemctl daemon-reload
- systemctl restart docker
- systemctl status docker
3. Set up the Kubernetes repository and configure the Aliyun YUM source
- Note: All commands in this section run on all machines
- Configure the Aliyun yum source so yum knows where to download the K8s packages quickly:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Refresh cache: yum makecache fast
4. Install kubeadm, kubelet, kubectl
- The kubeadm component bootstraps the K8s master node and joins worker nodes to the cluster
- The kubelet component manages pods and containers on each node
- The kubectl component is a command-line client; it only needs to be installed on the master node.
4.1 Querying the Version
yum list | grep kube
or: yum list kubelet --showduplicates
- Format for installing a specific version:
yum install -y [package-name]-[version_number]
4.2 Installing the specified version
- Note: All commands in this section run on all machines
- Installation:
yum install -y kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4
- Startup:
systemctl enable kubelet && systemctl start kubelet
(kubelet will stay in an activating state until kubeadm init runs; that is normal at this point)
- View the version:
kubelet --version
5. K8s deployment
5.1 Cluster Planning
To check the hosts configuration, run cat /etc/hosts
Host name | IP | Role | Installed components |
---|---|---|---|
k8s1 | 10.0.2.9 | master | master components, etcd, kubelet, kubectl |
k8s2 | 10.0.2.8 | worker | worker components, kubelet |
k8s3 | 10.0.2.15 | worker | worker components, kubelet |
Note: Each machine needs at least 4 GB of memory
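The /etc/hosts entries matching the plan above would be appended on all three machines:

```conf
# /etc/hosts (appended on every machine)
10.0.2.9    k8s1
10.0.2.8    k8s2
10.0.2.15   k8s3
```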
5.2 Initializing the Master Node
5.2.1 Download the image required for kubeadm initialization
Note: Run the commands in this section on the master node
- Find the default IP address of the master's network adapter:
ip route show
or
ip addr
and note the IP address of the eth0 NIC
- The following prepares the images for kubeadm init
- Since the official registry address is blocked, first list the required images and their versions, then pull them from a domestic mirror:
kubeadm config images list --kubernetes-version=v1.22.4
- Write a script to download the images; take the versions from the command above:
mkdir -p /app/k8s
vi /app/k8s/master_images.sh
#!/bin/bash
## The image names below have the "k8s.gcr.io/" prefix removed; replace the versions with those obtained above
images=(kube-apiserver:v1.22.4 kube-proxy:v1.22.4 kube-controller-manager:v1.22.4 kube-scheduler:v1.22.4 coredns:v1.8.4 etcd:3.5.0-0 pause:3.5)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  # The tag below restores the default foreign source name (slow to download directly)
  # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
- To check whether the Aliyun registry has a given version, log in and search: Aliyun Container Registry (aliyun.com)
- Script permissions:
chmod +x /app/k8s/master_images.sh
- Download the image:
./master_images.sh
- View the downloaded image:
docker images
5.2.2 Initializing kubeadm
Since the required images were already downloaded from the Aliyun registry above, the following initialization command will not try to download images from abroad. Execute on the master node:
kubeadm init \
  --apiserver-advertise-address=10.0.2.9 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.22.4 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
- --apiserver-advertise-address: the IP address the API server advertises (change this to your own master's IP)
- --image-repository: because the default registry k8s.gcr.io cannot be reached from China, the Aliyun registry address is specified here so init uses the images already pulled by /app/k8s/master_images.sh. Setting it to registry.aliyuncs.com/google_containers and skipping step 5.2.1 also works, but then the init command may wait a long time while images download; keeping the two steps separate makes sure the images are already present. (Don't change)
- --kubernetes-version: specify the same version that was installed (change this to your own version)
- --service-cidr: the subnet for Service communication IPs among pods (don't change)
- --pod-network-cidr: the subnet for pod communication IPs (don't change)
- Classless Inter-Domain Routing (CIDR) is a method of allocating IP addresses and routing IP packets efficiently on the Internet.
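The same flags can also be expressed as a kubeadm configuration file and passed with kubeadm init --config kubeadm-config.yaml (a sketch using the v1beta3 API that ships with kubeadm 1.22; the file name is arbitrary):

```yaml
# kubeadm-config.yaml -- equivalent to the command-line flags above
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.2.9       # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.4         # --kubernetes-version
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/16      # --service-cidr
  podSubnet: 10.244.0.0/16         # --pod-network-cidr
```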
- Fault: if initialization fails with the errors below, bridge filtering (1.8) or swap disabling (1.7) was not configured:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[ERROR Swap]: running with swap on is not supported. Please disable swap
Running result (translated):
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are root, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.9:6443 --token gp34b0.p7258upebcqb7zy3 \
    --discovery-token-ca-cert-hash sha256:7435c7011942ad6a754c743218ee5f1cd4ea2842305c3aa3b75666008af567fb
As the output above instructs, execute the following commands:
- The current master node executes:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- To deploy a POD network to a cluster, select Flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Since the kube-flannel.yml file may be blocked by the firewall and fail to download, I have also put it on Gitee: download from gitee
- Install POD network plug-in on master node:
kubectl apply -f /app/k8s/kube-flannel.yml
- If you want to delete the plug-in, change the apply command to delete
5.3 Worker Node Joins a Cluster
- Master node gets namespace:
kubectl get ns
- The master node checks all pods in all namespaces:
kubectl get pods --all-namespaces
- Master node View cluster nodes:
kubectl get nodes
- Wait until the master node's status is Ready and the flannel pod's status is Running, then execute the join command on each worker node to add it to the cluster (the token in this command has a time limit and expires)
- Worker nodes execute:
kubeadm join 10.0.2.9:6443 --token gp34b0.p7258upebcqb7zy3 \
    --discovery-token-ca-cert-hash sha256:7435c7011942ad6a754c743218ee5f1cd4ea2842305c3aa3b75666008af567fb
- Check again if the cluster node has been added:
kubectl get nodes
- If a worker node's status is NotReady, watch the pod progress for 3-10 minutes and wait for the pods to become Running:
watch kubectl get pod -n kube-system -o wide
- If the network is faulty, bring the cni0 interface down and restart the VM, then test again:
ip link set cni0 down
- If all nodes report Ready, the K8s setup is complete
5.3.1 Join Command token Expired
- The master node prints a temporary token:
kubeadm token create --print-join-command
- The master node prints an unexpired token:
kubeadm token create --ttl 0 --print-join-command
5.4 Deploying Tomcat for Verification
Note: Execute commands on master
- Deploying tomcat
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
- Expose Tomcat for access
-
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
- --type=NodePort: assigns a random node port that routes to the service's port 80, which maps to Tomcat's port 8080
- View all information
kubectl get all -o wide
- Access
- Use the host-only network IP (192.168.56.101/103) plus the assigned port (e.g. 32285): 192.168.56.101:32285
- If the verification succeeds, you can delete the Tomcat deployment and service
kubectl delete deployment.apps/tomcat6
kubectl delete service/tomcat6
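For reference, the kubectl expose command used above generates a Service roughly equivalent to this manifest (a sketch; the actual nodePort is picked at random from 30000-32767 unless specified):

```yaml
# tomcat6-service.yaml -- what "kubectl expose ... --type=NodePort" creates
apiVersion: v1
kind: Service
metadata:
  name: tomcat6
spec:
  type: NodePort
  selector:
    app: tomcat6          # matches the pods of the tomcat6 deployment
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # tomcat's container port
      # nodePort: 32285   # assigned randomly if omitted
```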
6. One-click install of Kubernetes + KubeSphere
6.1 KubeKey Environment Installation
- Only section 1 (Pre-environment) and section 2 (Install Docker) are required; KubeKey then installs Kubernetes and KubeSphere in one click.
- Environmental installation
yum install -y ebtables socat ipset conntrack
- First execute the following command to ensure that you download KubeKey from the correct area.
export KKZONE=cn
- Execute the following command to download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
- Check the Kubernetes versions supported by KubeSphere
./kk version --show-supported-k8s
6.2 KubeSphere Installation
You need to specify the version to install here; even when installing the latest version you should still write the version number explicitly, otherwise there will be problems. Run the following command to generate the YAML file:
./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0
Change the configuration values in config-sample.yaml to your own:
Start installation:
./kk create cluster -f config-sample.yaml
If helm cannot be downloaded and the install appears stuck, check the size of the file being downloaded under the /app/k8s/kubekey/v1.21.5/amd64/ path; if the file size stops changing, it is stuck. Unplug the network cable at that point and an error will pop up; then download the file with a download manager according to the error.
The error is as follows:
Failed to download kube binaries: Failed to download helm binary:
curl -L -o /app/k8s/kubekey/v1.21.5/amd64/helm-v3.6.3-linux-amd64.tar.gz https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz && cd /app/k8s/kubekey/v1.21.5/amd64 && tar -zxf helm-v3.6.3-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64*
Put the downloaded file into the path from the prompt, /app/k8s/kubekey/v1.21.5/amd64, and execute the command from the prompt:
cd /app/k8s/kubekey/v1.21.5/amd64 && tar -zxf helm-v3.6.3-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64*
Re-running the install command will then skip the helm download. The same method works for other components that download slowly.
Here are some of the components I downloaded during this installation, shared on Gitee
During installation you can also view the logs: kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
6.3 Uninstalling Kubernetes + KubeSphere
Use the delete command with the deployment config file to uninstall:
./kk delete cluster [-f config-sample.yaml]
6.4 Enabling KubeSphere Pluggable Components
Enable pluggable components (kubesphere.com.cn)
Installation process:
- Log in to the console as admin, click Platform Management in the top left, and select Cluster Management.
- Click CRDs and enter clusterconfiguration in the search bar; click the result to view its details.
- Under Custom Resources, click the three dots on the right of ks-installer and choose Edit YAML.
- In the YAML file, search for openpitrix and change enabled from false to true. When you're done, click Update in the lower right corner to save the configuration.
View the installation process: kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
Port Requirements (kubesphere.io)
- 👍🏻: If this helped, a like is encouraging!
- ❤️: Bookmark the article for easy reference later!
- 💬: Comments and exchange help us all improve!