1. Knowledge

1.1 Two Ways to Deploy a Kubernetes Cluster in Production

There are two main ways to deploy a Kubernetes cluster in production:

  • Kubeadm
    • Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for rapidly deploying Kubernetes clusters. Official address: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/.
  • Binary package
    • Download the release binaries from GitHub and deploy each component manually to form a Kubernetes cluster. Kubeadm lowers the deployment threshold but hides many details, which makes troubleshooting harder. For finer control, deploying the Kubernetes cluster from binary packages is recommended: a manual deployment is more work, but you learn a lot about how the components fit together, which helps with later maintenance.

1.2 Installation Requirements

Before you start, the machines used for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x (x86_64)
  • Hardware configuration: 2GB or more RAM, 2 or more cpus, 30GB or more hard disk
  • The servers need Internet access to pull images; if they cannot access the Internet, download the required images in advance and import them onto the nodes
  • Disable swap

1.3 Preparing the Environment

Software environment:

Software            Version
Operating system    CentOS 7.6_x64 (minimal)
Docker              19.03 (CE)
Kubernetes          1.18

Server planning:

Role          IP                             Components
k8s-master1   10.1.1.204                     kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2   10.1.1.230                     kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-worker1   10.1.1.151                     kubelet, kube-proxy, docker, etcd
k8s-worker2   10.1.1.186                     kubelet, kube-proxy, docker, etcd
k8s-lb1       10.1.1.220, 10.1.1.170 (VIP)   Nginx (L4)
k8s-lb2       10.1.1.169                     Nginx (L4)

Note: since some readers' machines cannot run this many VMs at once, this high-availability cluster is built in two stages: first deploy a single-Master architecture (10.1.1.204/151/186), then expand it to the multi-Master architecture planned above, which also walks through the Master scale-out process.

Single Master architecture diagram:

Planning a single Master server:

Role          IP           Components
k8s-master1   10.1.1.204   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-worker1   10.1.1.151   kubelet, kube-proxy, docker, etcd
k8s-worker2   10.1.1.186   kubelet, kube-proxy, docker, etcd

1.4 Initial Operating System Configuration

# Disable firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary

# Disable swap
swapoff -a                                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab                  # permanent

# Set the hostname as planned
hostnamectl set-hostname <hostname>

# Add hosts entries on all hosts
cat >> /etc/hosts << EOF
10.1.1.204 k8s-master1
10.1.1.230 k8s-master2
10.1.1.151 k8s-worker1
10.1.1.186 k8s-worker2
10.1.1.220 k8s-lb1
10.1.1.169 k8s-lb2
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # take effect

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

2. Deploy the Etcd cluster

Etcd is a distributed key-value storage system. Kubernetes uses Etcd to store all cluster data, so prepare the Etcd database first. To avoid a single point of failure, deploy Etcd as a cluster: a three-node cluster tolerates the failure of one machine (a five-node cluster would tolerate two).

Node name             IP
etcd1 (k8s-master1)   10.1.1.204
etcd2 (k8s-worker1)   10.1.1.151
etcd3 (k8s-worker2)   10.1.1.186


Note: to save machines, etcd is co-located on the K8s nodes here. It can also be deployed separately from the K8s cluster, as long as the apiserver can reach it.

2.1 Preparing the CFSSL Certificate Generation Tool

CFSSL is an open source certificate management tool that uses JSON files to generate certificates, making it easier to use than OpenSSL.

These commands can be run on any server; here we use the Master node.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.2 Generating an Etcd Certificate

1. Self-signed Certificate Authority (CA)

Create a working directory:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

Self-signed CA configuration:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

Generating a certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem  ca.pem

2. Use a self-signed CA to issue the Etcd HTTPS certificate

Create a certificate application file:

cat > server-csr.json << EOF
{
    "CN": "etcd".    "hosts": [
    "10.1.1.204". "10.1.1.151". "10.1.1.186" ]. "key": {  "algo": "rsa". "size": 2048  },  "names": [  {  "C": "CN". "L": "BeiJing". "ST": "BeiJing"  }  ] } EOF Copy the code


Note: the IPs in the hosts field above are the internal communication addresses of all etcd nodes in the cluster. To make future expansion easier, you can also include a few reserved IP addresses.

Generating a certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem
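If you want to confirm which IPs ended up in the certificate's SAN list, the cfssl-certinfo tool installed earlier (or openssl) can print them; a quick check:

cfssl-certinfo -cert server.pem
# or
openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"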

2.3 Downloading binaries from GitHub

Download address: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

2.4 Deploying the Etcd cluster

The following operations are performed on node 1. To simplify the operation, all files generated on node 1 will be copied to nodes 2 and 3.

1. Create a working directory and decompress the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create an ETCD configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.1.1.204:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.1.1.204:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.1.1.204:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.1.1.204:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.1.1.204:2380,etcd-2=https://10.1.1.151:2380,etcd-3=https://10.1.1.186:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • ETCD_NAME: indicates the node name, which is unique in the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: cluster communication listening address
  • ETCD_LISTEN_CLIENT_URLS: client access listening address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: cluster advertise address
  • ETCD_ADVERTISE_CLIENT_URLS: client advertisement address
  • ETCD_INITIAL_CLUSTER: cluster node address
  • ETCD_INITIAL_CLUSTER_TOKEN: Cluster Token
  • ETCD_INITIAL_CLUSTER_STATE: state of the cluster being joined; "new" for a brand-new cluster, "existing" when joining an existing cluster

3. Systemd Manages the ETCD cluster

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Copy the newly generated certificate

Copy the generated certificates to the path referenced in the configuration:

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5. Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

6. Copy all files generated on node 1 to nodes 2 and 3

scp -r /opt/etcd/ root@10.1.1.151:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.1.1.151:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@10.1.1.186:/opt/
scp /usr/lib/systemd/system/etcd.service root@10.1.1.186:/usr/lib/systemd/system/

Then change the node name and current server IP address in the etcd.conf configuration file of node 2 and node 3 respectively:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.1.1.204:2380"   # change to the current server IP address
ETCD_LISTEN_CLIENT_URLS="https://10.1.1.204:2379"   # change to the current server IP address

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.1.1.204:2380"   # change to the current server IP address
ETCD_ADVERTISE_CLIENT_URLS="https://10.1.1.204:2379"   # change to the current server IP address
ETCD_INITIAL_CLUSTER="etcd-1=https://10.1.1.204:2380,etcd-2=https://10.1.1.151:2380,etcd-3=https://10.1.1.186:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and enable it at boot on nodes 2 and 3, as above.

7. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.1.1.204:2379,https://10.1.1.151:2379,https://10.1.1.186:2379" endpoint health

https://10.1.1.204:2379 is healthy: successfully committed proposal: took = 12.415974ms
https://10.1.1.151:2379 is healthy: successfully committed proposal: took = 14.409434ms
https://10.1.1.186:2379 is healthy: successfully committed proposal: took = 14.447958ms

If the preceding information is displayed, the cluster deployment succeeded. If there is a problem, the first step is to check the logs: /var/log/messages or journalctl -u etcd.
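For example, two quick checks when something looks wrong (same TLS flags as the health check above):

# Tail recent etcd logs on the current node
journalctl -u etcd --no-pager | tail -n 50

# List the cluster members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.1.1.204:2379,https://10.1.1.151:2379,https://10.1.1.186:2379" member list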

3. Install Docker

Download address: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz.

The following operations are performed on all nodes. A binary installation is used here; installing with yum works just as well.

3.1 Decompressing the Binary Package

tar zxf docker-19.03.9.tgz
mv docker/* /usr/bin

3.2 Systemd manages Docker

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

3.3 Creating a Configuration File

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
  • registry-mirrors: Alibaba Cloud registry mirror (image pull accelerator)

3.4 Starting Docker and Enabling It at Boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
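As a quick sanity check, confirm the daemon is up and that the registry mirror from daemon.json is active:

docker version
docker info | grep -A 1 "Registry Mirrors"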

4. Deploy the Master Node

4.1 Generating a Kube-Apiserver Certificate

1. Self-signed Certificate Authority (CA)

cd ~/TLS/k8s/

cat > ca-config.json << EOF
{
  "signing": {
 "default": {  "expiry": "87600h"  },  "profiles": {  "kubernetes": {  "expiry": "87600h". "usages": [  "signing". "key encipherment". "server auth". "client auth"  ]  }  }  } } EOF  cat > ca-csr.json << EOF {  "CN": "kubernetes". "key": {  "algo": "rsa". "size": 2048  },  "names": [  {  "C": "CN". "L": "Beijing". "ST": "Beijing". "O": "k8s". "OU": "System"  }  ] } EOF Copy the code

Generating a certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem  ca.pem

2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create a certificate application file:

cat > server-csr.json << EOF
{
    "CN": "kubernetes".    "hosts": [
      "10.0.0.1". "127.0.0.1". "10.1.1.204". "10.1.1.230". "10.1.1.151". "10.1.1.186". "10.1.1.220". "10.1.1.169". "10.1.1.170". "kubernetes". "kubernetes.default". "kubernetes.default.svc". "kubernetes.default.svc.cluster". "kubernetes.default.svc.cluster.local" ]. "key": {  "algo": "rsa". "size": 2048  },  "names": [  {  "C": "CN". "L": "BeiJing". "ST": "BeiJing". "O": "k8s". "OU": "System"  }  ] } EOF Copy the code


Note: the IP addresses in the hosts field above must include every Master, LB, and VIP address; none can be missing. To make future expansion easier, you can also include a few reserved IP addresses.

Generating a certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare  server
ls server*pem
server-key.pem  server.pem

4.2 Downloading binaries from GitHub

Download address: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183.


Note: Open the link and you’ll find many packages. Just download the Server package, which includes the Master and Worker Node binaries.

For example: https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz.

4.3 Decompressing the Binary Package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4.4 Deploying kube-apiserver

1. Create a configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.1.1.204:2379,https://10.1.1.151:2379,https://10.1.1.186:2379 \\
--bind-address=10.1.1.204 \\
--secure-port=6443 \\
--advertise-address=10.1.1.204 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF


Note: in the block above, the first backslash escapes the second one; the double backslash is needed so that a literal line-continuation backslash (followed by the newline) is preserved in the file when it is written with cat << EOF.

  • --logtostderr: enable logging to files instead of stderr
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listening address
  • --secure-port: HTTPS secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP address range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: enable RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range allocated to NodePort Services
  • --kubelet-client-xxx: client certificate used by apiserver to access kubelet
  • --tls-xxx-file: apiserver HTTPS certificate
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings

2. Copy the newly generated certificate

Copy the generated certificates to the path referenced in the configuration:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

3. Enable TLS Bootstrapping

TLS bootstrapping: once TLS authentication is enabled on the Master apiserver, kubelet and kube-proxy must use valid CA-issued certificates to communicate with kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and complicates cluster scaling. To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: kubelet applies to the apiserver as a low-privilege user, and the kubelet certificate is signed dynamically by the apiserver. This approach is strongly recommended on Nodes; it is currently used mainly for kubelet, while kube-proxy still gets a statically issued certificate.

TLS bootstrapping workflow (roughly): kubelet authenticates to the apiserver with the shared bootstrap token and submits a CSR; once the CSR is approved, the cluster CA signs a client certificate, and kubelet writes out kubelet.kubeconfig for all subsequent requests.

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
8054b7219e601b121e8d2b4f73d255ad,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: Token, user name, UID, user group

You can also generate your own token and use it instead:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
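If you do generate a new token, keep token.csv and the TOKEN value used later for bootstrap.kubeconfig (section 5.2) in sync; a minimal sketch:

# Generate a fresh token and rewrite token.csv in the same format as above
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo ${TOKEN}   # reuse this value when generating bootstrap.kubeconfig later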

4. Systemd manages apiserver

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5. Start kube-apiserver and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

6. Authorize the kubelet-bootstrap user to request a certificate

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
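A quick optional check that the binding was created:

kubectl describe clusterrolebinding kubelet-bootstrap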

4.5 Deploying kube-controller-manager

1. Create a configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
  • –master: connects to apiserver through non-secure local port 8080
  • --leader-elect: enable automatic leader election (HA) when multiple instances of this component run
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign certificates for kubelet; must be the same CA used by apiserver

2. Systemd manages controller-manager

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start kube-controller-manager and enable it at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

4.6 Deploying kube-scheduler

1. Create a configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF
  • –master: connects to apiserver through non-secure local port 8080
  • --leader-elect: enable automatic leader election (HA) when multiple instances of this component run

2. Systemd manages scheduler

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

3. Start kube-scheduler and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. Check the cluster status

All components have now been started. Use the kubectl tool to check the status of the cluster components:

kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

The preceding information indicates that the Master node is running properly.
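As an extra sanity check, kubectl can also show the cluster endpoint and the client/server versions:

kubectl cluster-info
kubectl version --short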

5. Deploy Worker Node

The following operations are still performed on the Master node, which also acts as a Worker node.

5.1 Creating a Working Directory and Copying binary Files

Create working directories on all worker Nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node:

cd ~/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # Local copy

5.2 Deploying kubelet

1. Create a configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
  • --hostname-override: display name of the node, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: an initially empty path; the file is generated automatically after the certificate is issued and is used to connect to the apiserver
  • --bootstrap-kubeconfig: used to request a certificate from the apiserver on first start
  • --config: configuration parameter file
  • --cert-dir: directory where the kubelet certificates are generated
  • --pod-infra-container-image: image for the Pod infrastructure (pause) container

2. Configure the parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

3. Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://10.1.1.204:6443" # apiserver IP:PORT
TOKEN="8054b7219e601b121e8d2b4f73d255ad" # The same as in token.csv

# Generate kubelet bootstrap Kubeconfig configuration file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Copy it to the configuration directory:

cp bootstrap.kubeconfig /opt/kubernetes/cfg

4. Systemd manages Kubelet

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start kubelet and enable it at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

5.3 Approving the kubelet Certificate Request and Joining the Cluster

# View kubelet certificate requests
kubectl get csr

NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-ngC06Pj7kmQUvmzBnbfLRxUVG1J90dlT2lWMkCNbnBA   26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Approve the request (use the NAME shown by kubectl get csr)
kubectl certificate approve node-csr-ngC06Pj7kmQUvmzBnbfLRxUVG1J90dlT2lWMkCNbnBA

# View nodes
kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   6s    v1.18.3


Note: the node will stay NotReady because the network plugin has not been deployed yet.
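To see the exact reason, the node's Ready condition spells it out; a quick check (the message typically mentions that the CNI config is not initialized):

kubectl get node k8s-master1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'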

5.4 Deploying kube-proxy

1. Create a configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

2. Configure the parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24
EOF

3. Generate the kube-proxy.kubeconfig file

Generate kube-proxy certificate:

# Switch working directory
cd ~/TLS/k8s

# Create the certificate request file
cat > kube-proxy-csr.json << EOF
{  "CN": "system:kube-proxy". "hosts": []. "key": {  "algo": "rsa". "size": 2048  },  "names": [  {  "C": "CN". "L": "BeiJing". "ST": "BeiJing". "O": "k8s". "OU": "System"  }  ] } EOF  # Generate certificate cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy  ls kube-proxy*pem kube-proxy-key.pem kube-proxy.pem Copy the code

Generate kubeconfig file:

KUBE_APISERVER="https://10.1.1.204:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the configuration file to the specified path:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

4. Systemd manages kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5. Start kube-proxy and enable it at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
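A quick sanity check that kube-proxy came up (the log file name assumes klog's default naming under the configured --log-dir):

systemctl status kube-proxy --no-pager
tail -n 20 /opt/kubernetes/logs/kube-proxy.INFO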

5.5 Deploying the CNI Network

Prepare the CNI binaries:

Download address: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz.

Decompress the binary package and move it to the default working directory:

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

CNI network deployment:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "S # quay. IO/coreos/flannel:. * - amd64 # lizhenliang/flannel: v0.12.0 - amd64 # g" kube-flannel.yml
A mirror address is required here, and foreign ones cannot be accessed
Copy the code

The default image address cannot be accessed. Change it to docker Hub image repository.

kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME                          READY   STATUS     RESTARTS   AGE
kube-flannel-ds-amd64-jj98k   0/1     Init:0/1   0          14s
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   14m   v1.18.3

The network plug-in is deployed and Node is ready.

5.6 Authorizing Apiserver to Access Kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml

5.7 Adding New Worker Nodes

1. Copy the deployed Node files to the new nodes (performed on k8s-master1)

Copy the Worker Node files from the Master node to 10.1.1.151 and 10.1.1.186:

scp -r /opt/kubernetes root@10.1.1.151:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.1.1.151:/usr/lib/systemd/system
scp -r /opt/cni/ root@10.1.1.151:/opt/
scp -r /opt/kubernetes/ssl/ca.pem root@10.1.1.151:/opt/kubernetes/ssl

scp -r /opt/kubernetes root@10.1.1.186:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.1.1.186:/usr/lib/systemd/system
scp -r /opt/cni/ root@10.1.1.186:/opt/
scp -r /opt/kubernetes/ssl/ca.pem root@10.1.1.186:/opt/kubernetes/ssl

2. Delete the kubelet certificate and kubeconfig (on k8s-worker1/2)

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*


Note: these files are generated automatically once the certificate request is approved; they are unique per Node, so the copied ones must be deleted and regenerated.

3. Change the hostnames (on k8s-worker1/2)

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-worker1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-worker1
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-worker2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-worker2

4. Start the services and enable them at boot (on k8s-worker1/2)

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

5. Approve the new Nodes' kubelet certificate requests on the Master

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-mZ1Shcds5I90zfqQupi-GaI_4MeKl7PizK5dgOF2wC8   14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-x-lWfrl2pFrMUblecU5NYgpn3zI6j_iiTmcKZ1JRefY   20s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-mZ1Shcds5I90zfqQupi-GaI_4MeKl7PizK5dgOF2wC8
kubectl certificate approve node-csr-x-lWfrl2pFrMUblecU5NYgpn3zI6j_iiTmcKZ1JRefY

6. Check the Node status

kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master1   Ready    <none>   65m   v1.18.3
k8s-worker1   Ready    <none>   12m   v1.18.3
k8s-worker2   Ready    <none>   81s   v1.18.3

6. Deploy Dashboard and CoreDNS

6.1 Deploying Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default, the Dashboard can only be accessed from inside the cluster. Change the Service type to NodePort to expose it to external users.

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f recommended.yaml    # It may take a while for the pods' STATUS to become Running
kubectl get pods,svc -n kubernetes-dashboard

NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-dhlrq   1/1     Running   0          27s
pod/kubernetes-dashboard-9774cc786-7d6qn         1/1     Running   0          27s
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.36    <none>        8000/TCP        27s
service/kubernetes-dashboard        NodePort    10.0.0.154   <none>        443:30001/TCP   27s

Visit https://NodeIP:30001

Create a service account and bind it to the default cluster-admin administrator cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Get the token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IlVZczd4TDN3VHRpLWdNUnRlc1BOY2E0T3Z1Q3UzQmlwUEZpaEZHclJ0MWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNWRod2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTMzZmVkODItM2Q3OC00ODk5LWJiMzYtY2YyZjU4NzgzZTRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.gTuj33hMUmvgdPU499ifrn4V8AjinlRhzkw2ItdNRVdUmU6YLbSjZIp8WlPOkdtmDM2c3LCkLmnBqIabkIiIwb1HkNOtmglnFghzV8Td5GIGnoRqAmKPXO0aY4Y97w_X0sVC3DB34EY6soPlcI_v6_-nSqRC1rPOSJ9xd8yU774YHYucYqiU-ViiIgqHELo7BT6CVD1iLw4K_C5wfWs4o6htplhGpJ0edLPsZpwnTrW-4qn4d-EdKXcVJhpTUxxLoKL7HrerTwxeUMM1SL0vGnD5z2io9gybylubbgV7xRsSQQCwuXWqlzI6A_YWzl3-wmhwty4x9cWjQHMGw87YhA

Log in to Dashboard using the output token.

6.2 Deploying CoreDNS

CoreDNS is used to resolve Service names within a cluster.

curl -o ./coredns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/e176e477195f92919724bd02f81e2b18d4477af7/cluster/addons/dns/coredns/coredns.yaml.sed
# Edit the downloaded template before applying; the cluster DNS IP must match clusterDNS in kubelet-config.yml (10.0.0.2), see the sketch below
kubectl apply -f coredns.yaml

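A minimal sketch of that edit, assuming the placeholders used by the upstream coredns.yaml.sed template ($DNS_DOMAIN, $DNS_SERVER_IP, $DNS_MEMORY_LIMIT); check your downloaded copy and adjust the names if they differ:

# Run this between the curl and the kubectl apply above; placeholder names are assumptions
sed -i 's/\$DNS_DOMAIN/cluster.local/g; s/\$DNS_SERVER_IP/10.0.0.2/g; s/\$DNS_MEMORY_LIMIT/170Mi/g' coredns.yaml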
kubectl get pods -n kube-system     # wait for the first STATUS to be Running
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s
kube-flannel-ds-amd64-2pc95   1/1     Running   0          38m
kube-flannel-ds-amd64-7qhdx   1/1     Running   0          15m
kube-flannel-ds-amd64-99cr8   1/1     Running   0          26m

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Name resolution is working.
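Optionally, you can also confirm that a newly created Service name resolves; a quick sketch using a throwaway nginx Deployment:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80
# then, from the busybox test pod above:
# / # nslookup web.default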

7. High Availability Architecture (Expanding to Multi-Master)

As a container cluster system, Kubernetes provides application-level high availability on its own: health checks plus restart policies self-heal failed Pods, the scheduler spreads Pods across nodes and keeps the desired number of replicas, and when a node fails its Pods are recreated on other nodes.

For Kubernetes clusters, high availability should also include two levels of consideration: high availability of the Etcd database and high availability of the Kubernetes Master component. For Etcd, we have used three nodes to build a cluster to achieve high availability. This section will explain and implement the high availability of the Master node.

The Master node plays the role of the Master control center and maintains the healthy working status of the whole cluster through continuous communication with Kubelet and Kube-Proxy on the work node. If the Master node fails, you will not be able to do any cluster management using kubectl tools or apis.

The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. Of these, kube-controller-manager and kube-scheduler already achieve high availability on their own through leader election, so Master high availability mainly concerns kube-apiserver. Since kube-apiserver exposes an HTTP API, it behaves much like a web server: put a load balancer in front of it to balance the traffic and scale it horizontally.
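As a side note, you can observe this leader election: depending on the version and the --leader-elect-resource-lock setting, the current leader is usually recorded as an annotation on an Endpoints or Lease object in kube-system (a hedged check):

kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity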

Multi-master architecture diagram:

7.1 Installing Docker

Same as before; not repeated here.

7.2 Deploying Master2 Node (10.1.1.230)

Master2 is configured the same way as Master1, so we only need to copy all the K8s files from Master1 and then change the server IP addresses and hostname before starting the services.

1. Create an ETCD certificate directory

Create etCD certificate directory in Master2:

mkdir -p /opt/etcd/ssl

2. Copy files (Master1 operation)

Copy all K8s files and ETCD certificates on Master1 to Master2:

scp -r /opt/kubernetes root@10.1.1.230:/opt
scp -r /opt/cni/ root@10.1.1.230:/opt
scp -r /opt/etcd/ssl root@10.1.1.230:/opt/etcd
scp /usr/lib/systemd/system/kube* root@10.1.1.230:/usr/lib/systemd/system
scp /usr/bin/kubectl root@10.1.1.230:/usr/bin

3. Delete the certificate file

Delete kubelet certificate and kubeconfig file:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

4. Change the IP address and host name of the configuration file

Change the apiserver, kubelet, and kube-proxy configuration files to local IP addresses:

vi /opt/kubernetes/cfg/kube-apiserver.conf 
...
--bind-address=10.1.1.230 \
--advertise-address=10.1.1.230 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

5. Start the services and enable them at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy

systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy

6. Check the cluster status

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

7. Approve the kubelet certificate request

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU
kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   34h   v1.18.3
k8s-master2   Ready    <none>   83m   v1.18.3
k8s-worker1   Ready    <none>   33h   v1.18.3
k8s-worker2   Ready    <none>   33h   v1.18.3

7.3 Deploying the Nginx Load Balancer

Kube-apiserver High Availability Architecture Diagram:

  • Nginx is a mainstream web server and reverse proxy; here its Layer-4 (stream) module is used to load balance the apiservers.
  • Keepalived is mainstream high-availability software that provides hot standby by binding a VIP to a server. In the topology above, Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the active Nginx node goes down, the VIP is automatically bound to the standby Nginx node, so the VIP stays reachable and Nginx itself becomes highly available.

1. Install software packages (active/standby)

yum install epel-release -y
yum install nginx keepalived -y

2. Nginx configuration file (active/standby)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer 4 load balancing for the two Master apiserver instances
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 10.1.1.204:6443;   # Master1 APISERVER IP:PORT
       server 10.1.1.230:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF

3. Keepalived Configuration file (Nginx Master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP route ID instance, unique per instance
    priority 100            # priority; set to 90 on the standby server
    advert_int 1            # interval between VRRP heartbeat advertisements, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        10.1.1.170/24
    }
    track_script {
        check_nginx
    }
}
EOF
  • vrrp_script: specifies the script that checks whether nginx is running (failover is decided based on nginx status)
  • virtual_ipaddress: the virtual IP address (VIP)

Check the nginx status script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. Keepalived Configuration file (Nginx Backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP route ID instance, unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.1.1.170/24
    }
    track_script {
        check_nginx
    }
}
EOF

Check the nginx running status script in the above configuration file:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh


Note: Keepalived determines failover based on the script return status code (0 for working properly, non-0 for abnormal).

5. Start the services and enable them at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

6. Check keepalived status

ip a


You should see the 10.1.1.170 virtual IP bound to the ens33 interface, which means Keepalived is working properly.
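A quick way to check from the shell (assuming the interface and VIP used above):

ip addr show ens33 | grep 10.1.1.170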

7. Nginx+Keepalived high availability testing

Disable the active node Nginx and test whether the VIP is migrated to the standby node.

# On the Nginx Master, stop nginx
pkill nginx
# On the Nginx Backup, run ip addr and confirm the VIP has been bound successfully
ip addr

8. Access the load balancer test

From any node in the K8s cluster, use curl to query the K8s version through the VIP:

curl -k https://10.1.1.170:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.3",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

If the K8s version information is returned, the load balancer is set up correctly. The request path is: curl -> VIP (nginx) -> apiserver.

You can also see the forwarding apiserver IP by looking at the Nginx log:

tail /var/log/nginx/k8s-access.log -f
10.1.1.170 10.1.1.204:6443 - [30/May/2020:11:15:10 +0800] 200 422
10.1.1.170 10.1.1.230:6443 - [30/May/2020:11:15:26 +0800] 200 422

But that’s not all. Here’s the most important step.

7.4 Modifying all Worker Nodes to connect to LB VIPs

Although we have added Master2 and a load balancer, we expanded from the single-Master architecture, so all Node components still connect to Master1. If they are not switched to the VIP on the load balancer, the Master remains a single point of failure.

So the next step is to change all Worker Node component configuration files from 10.1.1.204 to 10.1.1.170 (VIP) :

Role          IP
k8s-master1   10.1.1.204
k8s-master2   10.1.1.230
k8s-worker1   10.1.1.151
k8s-worker2   10.1.1.186

sed -i 's#10.1.1.204:6443#10.1.1.170:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
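A quick check that the kubeconfigs on each Node now point at the VIP (paths as used in this guide):

grep "server:" /opt/kubernetes/cfg/*.kubeconfig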

Check the node status: kubectl get node

NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   34h    v1.18.3
k8s-master2   Ready    <none>   101m   v1.18.3
k8s-worker1   Ready    <none>   33h    v1.18.3
k8s-worker2   Ready    <none>   33h    v1.18.3

At this point, a complete Kubernetes high availability cluster is deployed!