This is a walkthrough of a highly available (multi-master) Kubernetes deployment. It is based on another author's article and has been reworked after going through the steps by hand several times.

1. Environment information

System version: CentOS 7.3 (minimal installation)
Kernel: 3.10.0-514.el7.x86_64
Kubernetes: v1.13.3
Docker-CE: 18.06
Keepalived provides a floating virtual IP (VIP) for the apiserver
HAProxy load-balances traffic across the apiservers

VIP 192.168.1.65

Node 1 192.168.1.60

Node 2 192.168.1.61

Node 3 192.168.1.62

2. Prepare the environment

2.1 Disabling Selinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

2.2 Disabling Swap

swapoff -a
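
To keep swap disabled after a reboot, the swap entry in /etc/fstab also has to be disabled. This companion step is not in the original notes; a minimal sketch:

# comment out any swap entry in /etc/fstab so it is not re-enabled at boot
sed -ri 's/.*swap.*/#&/' /etc/fstab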

2.3 Adding host resolution records for each server

cat >>/etc/hosts<<EOF
192.168.1.60 host60
192.168.1.61 host61
192.168.1.62 host62
EOF
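
The hostname of each server should match these entries. If it has not been set yet, it can be configured with hostnamectl (an assumed step, not shown in the original):

# run on each node with its own name (host60, host61, host62)
hostnamectl set-hostname host60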

2.4 Configuring Kernel Parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

sysctl --system
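
If the net.bridge.* settings fail with "No such file or directory", the br_netfilter module is not loaded yet. Loading it first is a common extra step that the original omits:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # also load it on boot
sysctl --system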

2.5 Loading the IPVS Module

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2.6 Adding the YUM Source

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo 
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
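
All of the preparation steps in section 2 have to be carried out on every node. To avoid repeating the repo setup by hand, the files can be copied to the other hosts, for example (hostnames taken from /etc/hosts above, loop assumed):

for host in host61 host62; do
    scp /etc/yum.repos.d/{kubernetes,CentOS-Base,epel,docker-ce}.repo root@$host:/etc/yum.repos.d/
done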

3. Deploy Keepalived and HAProxy

3.1 Installing Keepalived and HAProxy

yum install -y keepalived haproxy

3.2 Configuring Keepalived

The keepalived priority differs on each of the three servers: 100, 90, and 80 (the node with the highest priority holds the VIP).

cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        *****@163.com
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.65/24
    }
}
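
On host61 and host62 the configuration is identical except for the state and priority, matching the 100/90/80 priorities mentioned above. A sketch of the lines that change on host61 (host62 would use priority 80):

vrrp_instance VI_1 {
    state BACKUP
    priority 90
    # everything else stays the same as on the MASTER node
}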

3.3 Configuring HAProxy

cat /etc/haproxy/haproxy.cfg 
global
        chroot  /var/lib/haproxy
        daemon
        group haproxy
        user haproxy
        log 127.0.0.1:514 local0 warning
        pidfile /var/lib/haproxy.pid
        maxconn 20000
        spread-checks 3
        nbproc 8

defaults
        log     global
        mode    tcp
        retries 3
        option redispatch

listen https-apiserver
        bind 192.168.1.65:8443
        mode tcp
        balance roundrobin
        timeout server 15s
        timeout connect 15s

        server apiserver01 192.168.1.60:6443 check port 6443 inter 5000 fall 5
        server apiserver02 192.168.1.61:6443 check port 6443 inter 5000 fall 5
        server apiserver03 192.168.1.62:6443 check port 6443 inter 5000 fall 5

3.4 Starting the Service

systemctl enable keepalived && systemctl start keepalived 
systemctl enable haproxy && systemctl start haproxy
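
A quick sanity check after starting the services (commands assumed, not part of the original notes): the MASTER node should hold the VIP, and HAProxy should be listening on the load-balanced port.

ip addr show eth0 | grep 192.168.1.65   # VIP should appear on the MASTER node
ss -lnt | grep 8443                     # haproxy listening on 8443
systemctl status keepalived haproxy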

4. Deploy Kubernetes

4.1 Installing software

yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 ipvsadm ipset docker-ce-18.06.1

# start docker
systemctl enable docker && systemctl start docker

# set kubelet to boot automatically
systemctl enable kubelet

4.2 Configuring the kubeadm Initialization File

[root@host60 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.60
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: host60
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.65:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
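As an optional check before moving on, kubeadm can list the images this configuration will require, which also confirms that the file parses correctly:

kubeadm config images list --config kubeadm-init.yaml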

4.3 Downloading the Images in Advance

[root@host60 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

4.4 Initializing a Cluster

[root@host60 ~]# kubeadm init --config kubeadm-init.yaml 
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host60 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host60 localhost] and IPs [192.168.1.60 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host60 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 192.168.1.60 192.168.1.65]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.510432 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "host60" as an annotation
[mark-control-plane] Marking the node host60 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node host60 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4

Kubeadm init does the following:

  • [init]: initializes the cluster at the specified version.

  • [preflight]: runs pre-flight checks and downloads the required Docker images.

  • [kubelet-start]: generates the kubelet configuration file "/var/lib/kubelet/config.yaml"; without this file the kubelet cannot start, so a kubelet started before initialization will actually fail.

  • [certificates]: generates the Kubernetes certificates and stores them in /etc/kubernetes/pki.

  • [kubeconfig]: generates the kubeconfig files and stores them in /etc/kubernetes; they are used for communication between components.

  • [control-plane]: installs the master components from the YAML manifests in /etc/kubernetes/manifests.

  • [etcd]: installs the etcd service using /etc/kubernetes/manifests/etcd.yaml.

  • [wait-control-plane]: waits for the master components deployed in the control-plane step to start.

  • [apiclient]: checks the health of the master components.

  • [uploadconfig]: uploads the configuration used for initialization to the "kubeadm-config" ConfigMap.

  • [kubelet]: configures the kubelet via a ConfigMap.

  • [patchnode]: uploads the CRI socket information to the Node object as an annotation.

  • [mark-control-plane]: labels the current node as a master and taints it as non-schedulable, so the master node does not run Pods by default.

  • [bootstrap-token]: generates the token that is used later when joining nodes to the cluster with kubeadm join.

  • [addons]: installs the CoreDNS and kube-proxy add-ons.

4.5 Preparing the kubeconfig File for kubectl

By default, kubectl looks for its config file in the .kube directory under the home directory of the user running it. Here we copy the admin.conf generated in the [kubeconfig] step of initialization to .kube/config.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
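Alternatively, kubectl can be pointed at the admin config directly without copying it, which is sometimes convenient on the masters:

export KUBECONFIG=/etc/kubernetes/admin.conf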

4.6 Viewing the Cluster Status

[root@host60 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

[root@host60 ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
host60   NotReady   master   16h   v1.13.3

4.7 Copying the Certificate to Other Nodes

USER=root
CONTROL_PLANE_IPS="host61 host62"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
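This loop assumes key-based root SSH access from host60 to the other masters. If that is not set up yet, something like the following can be used first (assumed helper step, not in the original):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in host61 host62; do
    ssh-copy-id root@$host
done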

4.8 Adding the Other Master Nodes to the Cluster

kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4 --experimental-control-plane
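
After each master joins, kubectl can be set up on it the same way as on host60 (admin.conf was already copied over in step 4.7):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config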

4.9 Checking cluster status again

Because the pod network has not been deployed yet, the nodes show NotReady.

[root@host60 ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
host60   NotReady   master   16h   v1.13.3
host61   NotReady   master   81s   v1.13.3
host62   NotReady   master   43s   v1.13.3

4.10 Configuring the Cluster Network

CoreDNS cannot start until the cluster network is configured, so the coredns pods stay Pending:

[root@host60 ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-89cc84847-lg9gr          0/1     Pending   0          16h
coredns-89cc84847-zvsn8          0/1     Pending   0          16h
etcd-host60                      1/1     Running   0          16h
etcd-host61                      1/1     Running   0          10m
etcd-host62                      1/1     Running   0          9m20s
kube-apiserver-host60            1/1     Running   0          16h
kube-apiserver-host61            1/1     Running   0          9m55s
kube-apiserver-host62            1/1     Running   0          9m12s
kube-controller-manager-host60   1/1     Running   1          16h
kube-controller-manager-host61   1/1     Running   0          9m55s
kube-controller-manager-host62   1/1     Running   0          9m9s
kube-proxy-64pwl                 1/1     Running   0          16h
kube-proxy-78bm9                 1/1     Running   0          10m
kube-proxy-xwghb                 1/1     Running   0          9m23s
kube-scheduler-host60            1/1     Running   1          16h
kube-scheduler-host61            1/1     Running   0          10m
kube-scheduler-host62            1/1     Running   0          9m23s
Copy the code

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

The kubever variable is not strictly needed; Weave can also be applied in a single command:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Wait for the network plugin rollout to complete; one way to watch it is shown below.
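
For example (assumed helper command, not in the original):

# watch the weave-net and coredns pods until they are all Running
kubectl get pod -n kube-system -w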

Checking the pod status again shows that the CoreDNS pods have now been scheduled.

One of the CoreDNS pods fails to start (stuck in ContainerCreating); this seems related to my network configuration and the cause has not been found yet, but the pod on the other node is running normally.

[root@host60 ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-89cc84847-9hpqm          1/1     Running             1          19m     10.32.0.4      host61   <none>           <none>
coredns-89cc84847-jfgmx          0/1     ContainerCreating   0          9m49s   <none>         host60   <none>           <none>
etcd-host60                      1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
etcd-host61                      1/1     Running             2          73m     192.168.1.61   host61   <none>           <none>
etcd-host62                      1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-apiserver-host60            1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
kube-apiserver-host61            1/1     Running             1          73m     192.168.1.61   host61   <none>           <none>
kube-apiserver-host62            1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-controller-manager-host60   1/1     Running             3          17h     192.168.1.60   host60   <none>           <none>
kube-controller-manager-host61   1/1     Running             3          73m     192.168.1.61   host61   <none>           <none>
kube-controller-manager-host62   1/1     Running             3          73m     192.168.1.62   host62   <none>           <none>
kube-proxy-64pwl                 1/1     Running             2          17h     192.168.1.60   host60   <none>           <none>
kube-proxy-78bm9                 1/1     Running             1          73m     192.168.1.61   host61   <none>           <none>
kube-proxy-xwghb                 1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
kube-scheduler-host60            1/1     Running             3          17h     192.168.1.60   host60   <none>           <none>
kube-scheduler-host61            1/1     Running             2          73m     192.168.1.61   host61   <none>           <none>
kube-scheduler-host62            1/1     Running             2          73m     192.168.1.62   host62   <none>           <none>
weave-net-57xhp                  2/2     Running             4          54m     192.168.1.60   host60   <none>           <none>
weave-net-d9l29                  2/2     Running             2          54m     192.168.1.61   host61   <none>           <none>
weave-net-h8lbk                  2/2     Running             4          54m     192.168.1.62   host62   <none>           <none>

The cluster status is normal

[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
host60   Ready    master   17h   v1.13.3
host61   Ready    master   76m   v1.13.3
host62   Ready    master   75m   v1.13.3

5. Add a node

5.1 Initializing the System

Please refer to the above steps

5.2 Installing Required Software

Please refer to the above steps

5.3 Joining a Cluster

kubeadm join 192.168.1.65:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e02b46c1f697709552018f706f96a03922b159ecc2c3d82140365e4a8d0a83d4
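
The bootstrap token was created with a 24-hour TTL, so if it has expired by the time a node is added, a fresh join command can be generated on any master:

kubeadm token create --print-join-command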

5.4 Checking Cluster Status

[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
host60   Ready    master   17h     v1.13.3
host61   Ready    master   95m     v1.13.3
host62   Ready    master   95m     v1.13.3
host63   Ready    <none>   2m51s   v1.13.3

PS: the failed CoreDNS pod was deleted; it has been rescheduled onto the newly added node and its status is now normal.

[root@host60 ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-89cc84847-9hpqm          1/1     Running   1          45m     10.32.0.4      host61   <none>           <none>
coredns-89cc84847-sglw7          1/1     Running   0          103s    10.37.0.1      host63   <none>           <none>
etcd-host60                      1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
etcd-host61                      1/1     Running   2          100m    192.168.1.61   host61   <none>           <none>
etcd-host62                      1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-apiserver-host60            1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
kube-apiserver-host61            1/1     Running   1          100m    192.168.1.61   host61   <none>           <none>
kube-apiserver-host62            1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-controller-manager-host60   1/1     Running   3          17h     192.168.1.60   host60   <none>           <none>
kube-controller-manager-host61   1/1     Running   3          100m    192.168.1.61   host61   <none>           <none>
kube-controller-manager-host62   1/1     Running   3          99m     192.168.1.62   host62   <none>           <none>
kube-proxy-64pwl                 1/1     Running   2          17h     192.168.1.60   host60   <none>           <none>
kube-proxy-78bm9                 1/1     Running   1          100m    192.168.1.61   host61   <none>           <none>
kube-proxy-v28fs                 1/1     Running   0          6m59s   192.168.1.63   host63   <none>           <none>
kube-proxy-xwghb                 1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
kube-scheduler-host60            1/1     Running   3          17h     192.168.1.60   host60   <none>           <none>
kube-scheduler-host61            1/1     Running   2          100m    192.168.1.61   host61   <none>           <none>
kube-scheduler-host62            1/1     Running   2          99m     192.168.1.62   host62   <none>           <none>
weave-net-57xhp                  2/2     Running   4          80m     192.168.1.60   host60   <none>           <none>
weave-net-d9l29                  2/2     Running   2          80m     192.168.1.61   host61   <none>           <none>
weave-net-h8lbk                  2/2     Running   4          80m     192.168.1.62   host62   <none>           <none>
weave-net-mhbpr                  2/2     Running   1          6m59s   192.168.1.63   host63   <none>           <none>

6. View the entire cluster

[root@host60 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@host60 ~]# kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
host60   Ready    master   18h    v1.13.3
host61   Ready    master   114m   v1.13.3
host62   Ready    master   113m   v1.13.3
host63   Ready    <none>   21m    v1.13.3
[root@host60 ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-67d4b848b4-qpmbz   1/1     Running   0          8m9s
nginx-deployment-67d4b848b4-zdn4f   1/1     Running   0          8m9s
nginx-deployment-67d4b848b4-zxd7l   1/1     Running   0          8m9s
[root@host60 ~]# kubectl get service
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.245.0.1      <none>        443/TCP   18h
nginx-server   ClusterIP   10.245.117.70   <none>        80/TCP    68s
[root@host60 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 192.168.1.60:6443            Masq    1      1          0
  -> 192.168.1.61:6443            Masq    1      0          0
  -> 192.168.1.62:6443            Masq    1      1          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.4:53                 Masq    1      0          0
  -> 10.37.0.1:53                 Masq    1      0          0
TCP  10.245.117.70:80 rr
  -> 10.37.0.2:80                 Masq    1      0          0
  -> 10.37.0.3:80                 Masq    1      0          1
  -> 10.37.0.4:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.4:53                 Masq    1      0          0
  -> 10.37.0.1:53                 Masq    1      0          0
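
The nginx pods and the nginx-server service shown above were created beforehand as a test workload. The exact manifests are not part of these notes, but something like the following would produce a similar result (names taken from the output above):

# create a test deployment, scale it to three replicas, and expose it as a ClusterIP service
kubectl create deployment nginx-deployment --image=nginx
kubectl scale deployment nginx-deployment --replicas=3
kubectl expose deployment nginx-deployment --name=nginx-server --port=80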