K8s offline installation and deployment tutorial

File name    Version     Architecture
docker       20.10.9     x86
k8s          v1.22.4     x86
kuboard      v3          x86

I. k8s (x86)

1. Docker environment installation

1.1 Download

Download docker-20.10.9-ce.tgz from the official download address, selecting the CentOS 7 x86_64 build.

Note: Please refer to the official documentation for installation

1.2 Upload

Upload docker-20.10.9-ce.tgz to /opt/tools.

1.3 Extract

tar -zxvf docker-20.10.9-ce.tgz
cp docker/* /usr/bin/

1.4 Create docker.service

vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Enable remote connection
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
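With the unit file in place, the daemon can be enabled and started; a minimal sketch (the restart in section 1.5 will later pick up daemon.json):

systemctl daemon-reload
systemctl enable --now docker
docker info    # should print server details if the daemon is running

Note that the ExecStart line above also exposes the Docker API on tcp://0.0.0.0:2375 without authentication; only keep that setting on a trusted network.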

1.5 Configure the Harbor registry

vi /etc/docker/daemon.json

{" insecure - registries: "[" 192.168 xx, xx"]}Copy the code

After modification, restart the Docker service

systemctl daemon-reload
service docker restart   # or: systemctl restart docker

After Docker restarts, log in to Harbor:

docker login <harbor-ip>   # enter the account and password when prompted

2. Prepare for k8s installation

Installation tutorial

  • Allow iptables to see bridged traffic (also covered in the script below)

Ensure that the br_netfilter module is loaded. You can check this with lsmod | grep br_netfilter. To load the module explicitly, run sudo modprobe br_netfilter.

For iptables on your Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
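To confirm the settings took effect, a quick check (assuming the commands above ran without error):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables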
  • Hostname, SELinux, swap, iptables
#################################################################
# Disable the firewall. If the server is a cloud server, configure
# a security group policy to permit the required ports instead.
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
systemctl stop firewalld
systemctl disable firewalld

# change the hostname
hostnamectl set-hostname k8s-01
# check the modification result
hostnamectl status
# set hostname resolution
echo "127.0.0.1   $(hostname)" >> /etc/hosts

# disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# allow iptables to see bridged traffic
# https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%85%81%E8%AE%B8-iptables-%E6%A3%80%E6%9F%A5%E6%A1%A5%E6%8E%A5%E6%B5%81%E9%87%8F
## load br_netfilter
## sudo modprobe br_netfilter
## confirm it is loaded
## lsmod | grep br_netfilter

## modify the configuration

##### use the configuration below, not the one from the course
# pass bridged IPv4 traffic to the iptables chains
# modify /etc/sysctl.conf
# if an entry already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf

# if an entry does not exist yet, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf

# apply the settings
sysctl -p

#################################################################
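A few optional sanity checks after running the script above (not part of the original steps):

systemctl is-active firewalld    # should print "inactive"
getenforce                       # Permissive now, Disabled after a reboot
swapon --show                    # no output means swap is off
hostnamectl status | grep -i hostname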
  • Firewall ports

Control plane node (Master)

Protocol   Direction   Port range    Purpose                    Used by
TCP        Inbound     6443          Kubernetes API server      All components
TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API                kubelet itself, control plane components
TCP        Inbound     10251         kube-scheduler             kube-scheduler itself
TCP        Inbound     10252         kube-controller-manager    kube-controller-manager itself

Worker node

Protocol   Direction   Port range    Purpose                    Used by
TCP        Inbound     10250         Kubelet API                kubelet itself, control plane components
TCP        Inbound     30000-32767   NodePort Services†         All components

Note: in a production environment, you are advised to open access to these ports rather than disabling the firewall entirely.
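If you prefer to keep firewalld running instead of disabling it, the control-plane ports above can be opened roughly like this (a sketch; worker nodes would open 10250 and 30000-32767 instead):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --reload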

  • Install the CNI plugins (required by most pod networks):
CNI_VERSION="v0.8.2"
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz

For offline deployment, download cni-plugins-linux-amd64-v0.8.2.tgz and upload it to /opt/tools/k8s.

Extract and install:

mkdir -p /opt/cni/bin
tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
  • Install crictl (required by kubeadm / the kubelet Container Runtime Interface (CRI)):
DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
CRICTL_VERSION="v1.17.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz

Offline deployment: Download crictl-v1.17.0-linux-amd64.tar.gz and upload it to /opt/tools/k8s

Extract and install:

tar -zxvf crictl-v1.17.0-linux-amd64.tar.gz -C /usr/local/bin
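A quick check that the binary landed on the PATH (the other crictl subcommands additionally need a container runtime endpoint):

crictl --version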

3. K8s service installation

Website tutorial

Installed version: v1.22.4

Install kubeadm, kubelet, and kubectl, and add the kubelet systemd service:

DOWNLOAD_DIR=/usr/local/bin
sudo mkdir -p $DOWNLOAD_DIR
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"

ARCH="amd64"

cd $DOWNLOAD_DIR

sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}

sudo chmod +x {kubeadm,kubelet,kubectl}

RELEASE_VERSION="v0.4.0"

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service

sudo mkdir -p /etc/systemd/system/kubelet.service.d

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Offline deployment:

Download kubeadm, kubelet, and kubectl, and upload them to /opt/tools/k8s:

chmod +x kubeadm kubectl kubelet
cp kube* /usr/local/bin/
kubeadm version
kubectl version --client
kubelet --version

Download kubelet.service and 10-kubeadm.conf, upload them to /opt/tools/k8s, then adjust and install them:

DOWNLOAD_DIR=/usr/local/bin
sed -i "s:/usr/bin:${DOWNLOAD_DIR}:g" kubelet.service  
cp kubelet.service /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
sed -i "s:/usr/bin:${DOWNLOAD_DIR}:g" 10-kubeadm.conf
cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d
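Since new unit files were just copied into place, it may be worth reloading systemd before enabling the service (a small extra step, not in the original commands):

systemctl daemon-reload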

Enable and start kubelet:

systemctl enable --now kubelet

The kubelet will now restart every few seconds, as it waits in a crash loop for kubeadm to tell it what to do.
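If you want to watch this crash loop (optional; purely for confirmation), inspect the kubelet logs:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20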

  • Configure the Cgroup driver

Note:

By default, Docker uses the cgroupfs cgroup driver while the kubelet uses systemd; the two must match, otherwise the kubelet cannot start.

Change Docker's cgroup driver to systemd:

# Edit (or create) the daemon.json file
vi /etc/docker/daemon.json

# Add the following configuration
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart the docker:

systemctl restart docker
systemctl status docker
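To verify that the new driver is active after the restart:

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd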

4. Create the cluster with kubeadm

4.1. Prepare required container images

This step is optional and only applies if you want kubeadm init and kubeadm join not to download the default container images hosted on k8s.gcr.io.

Run kubeadm offline

To run kubeadm without an Internet connection, you must pull the required control plane images in advance.

You can use the kubeadm config images subcommand to list and pull images:

kubeadm config images list
kubeadm config images pull

The required images are listed below; pull them on a server that can reach the Internet.

k8s.gcr.io/kube-apiserver:v1.22.4
k8s.gcr.io/kube-controller-manager:v1.22.4
k8s.gcr.io/kube-scheduler:v1.22.4
k8s.gcr.io/kube-proxy:v1.22.4
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

After downloading each image, export it with docker save -o xxx.tar <image>, then upload the tar files to /opt/tools/k8s/images on the offline host. There, use docker load -i xxx.tar to load the images into the local Docker environment.
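A scripted version of this save/load round trip might look like the sketch below; the image list should match the actual output of kubeadm config images list for your version, and the tar file names are arbitrary:

# on a machine with Internet access
IMAGES="kube-apiserver:v1.22.4 kube-controller-manager:v1.22.4 kube-scheduler:v1.22.4 kube-proxy:v1.22.4 pause:3.5 etcd:3.5.0-0 coredns/coredns:v1.8.4"
for img in ${IMAGES}; do
  docker pull "k8s.gcr.io/${img}"
  docker save -o "$(echo "${img}" | tr ':/' '__').tar" "k8s.gcr.io/${img}"
done

# on the offline host, after uploading the tar files to /opt/tools/k8s/images
for f in /opt/tools/k8s/images/*.tar; do
  docker load -i "${f}"
done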

Note one quirk: loaded this way, the coredns image may end up named k8s.gcr.io/coredns:v1.8.4, missing one coredns path segment. If so, re-tag it:

docker tag k8s.gcr.io/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4

4.2 Run kubeadm init

kubeadm init \
  --apiserver-advertise-address=192.168.4.45 \
  --kubernetes-version v1.22.4 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

apiserver-advertise-address: the advertise address of the master API server, i.e. the IP address of the master host.

kubernetes-version: the k8s version number.

service-cidr: the IP range used for Service (cluster load-balancing) addresses.

pod-network-cidr: the IP range used for pod addresses.

Note on pod-network-cidr and service-cidr:

CIDR (Classless Inter-Domain Routing) specifies a reachable network range; the pod subnet range, the Service subnet range, and the local host subnet range must not overlap.

4.3. Continue as prompted

After running kubeadm init as above, the log is as follows:

[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.503100 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jokbeq.logz5fixljdrna6r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.45:6443 --token jokbeq.logz5fixljdrna6r \
	--discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a 
  • Step 1: Copy the kubeconfig file
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Step 2: Export environment variables
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
  • Step 3: Deploy a POD network (here: Calico)
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Calico installation tutorial, version: v3.21.1

Install using Kubernetes API datastore – 50 or fewer nodes

Online deployment:

1. Download the Calico networking manifest for the Kubernetes API datastore.

curl https://docs.projectcalico.org/manifests/calico.yaml -O

2. If you are using the pod CIDR 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR, set an environment variable named POD_CIDR containing your pod CIDR and replace 192.168.0.0/16 in the manifest with it using the command below. (Since 192.168.0.0/16 is what we use here, this step is not required.)

POD_CIDR="<your-pod-cidr>" \ sed -i -e "s? 192.168.0.0/16? $POD_CIDR? g" calico.yamlCopy the code

3. Apply the manifest with the following command.

kubectl apply -f calico.yaml

Offline deployment:

calico.yaml references several images. If the cluster hosts cannot access the Internet, download the images on a server that can:

cat calico.yaml | grep image: | awk '{print $2}'

docker.io/calico/cni:v3.21.1
docker.io/calico/pod2daemon-flexvol:v3.21.1
docker.io/calico/node:v3.21.1
docker.io/calico/kube-controllers:v3.21.1

# pull all images
cat calico.yaml \
    | grep image: \
    | awk '{print "docker pull " $2}' \
    | sh

# you can also pull them one by one

# export the images as tarballs in the current directory
docker save -o calico-cni-v3.21.1.tar calico/cni:v3.21.1
docker save -o calico-pod2daemon-flexvol-v3.21.1.tar calico/pod2daemon-flexvol:v3.21.1
docker save -o calico-node-v3.21.1.tar calico/node:v3.21.1
docker save -o calico-kube-controllers-v3.21.1.tar calico/kube-controllers:v3.21.1

# load them into the offline Docker environment
docker load -i calico-cni-v3.21.1.tar
docker load -i calico-pod2daemon-flexvol-v3.21.1.tar
docker load -i calico-node-v3.21.1.tar
docker load -i calico-kube-controllers-v3.21.1.tar

# install calico
kubectl apply -f calico.yaml

# remove calico
kubectl delete -f calico.yaml

Installation is successful once the calico pods are up; coredns will also change to Running.
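A quick way to confirm (the calico and coredns pods should all reach Running, and the master node should become Ready):

kubectl get pods -n kube-system -o wide
kubectl get nodes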

  • Step 4: Join the worker nodes
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.45:6443 --token jokbeq.logz5fixljdrna6r \
	--discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a 
What if the token expires? Generate a new join command:

kubeadm token create --print-join-command
# prints, for example:
# kubeadm join 192.168.4.45:6443 --token l7smzu.ujy68m80prq526nh \
#     --discovery-token-ca-cert-hash sha256:16ba5133d2ca72714c4a7dd864a5906baa427dc53d8eb7cf6d890388300a052a

The k8s installation is now complete.

5. Verify the cluster

# Get all nodes
kubectl get nodes

# Label a node
## In k8s everything is an object. Node = machine, Pod = application container
### add a label
kubectl label node k8s-worker1 node-role.kubernetes.io/worker=''

### remove the label
kubectl label node k8s-worker1 node-role.kubernetes.io/worker-

# If a worker node is restarted it rejoins the cluster automatically;
# if the master is restarted it resumes its role as the cluster control plane automatically.
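To confirm the label was applied (or removed), a quick check:

kubectl get nodes --show-labels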

Next, tell Kubernetes to drain the node:

kubectl drain <node name>
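In practice, drain usually needs extra flags to get past DaemonSet pods and emptyDir volumes; a typical invocation (adjust to your workloads) is:

kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data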

Once it returns (without an error), you can take the node offline (or, equivalently, if on a cloud platform, delete the virtual machine backing it). If you leave the node in the cluster during the maintenance operation, you then need to run:

kubectl uncordon <node name>