1 Host Configuration

Perform the host configuration on each deployment machine

1) Host name configuration

Because of Kubernetes naming rules, a host name may contain only two special characters, the dash (-) and the dot (.), and host names must be unique across the cluster.

2) Configure Hosts

On each host, add a `host_ip hostname` entry for every node to the /etc/hosts file.
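Generating the entries for all nodes can be scripted in one pass. A minimal sketch (the IPs and host names are placeholders, and the output goes to a temporary file here rather than the real /etc/hosts):

```shell
#!/bin/sh
# Placeholder node list: "ip hostname" pairs -- adjust to the real environment.
NODES="192.168.100.11 node01
192.168.100.12 node02
192.168.100.13 node03"

HOSTS_FILE=$(mktemp)   # on a real host, use /etc/hosts instead

# Append one "host_ip hostname" line per node, skipping entries already present.
echo "$NODES" | while read -r ip name; do
    grep -q " $name\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done

cat "$HOSTS_FILE"
```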

3) Disable SELinux

sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

4) Disable the firewall

systemctl stop firewalld.service && systemctl disable firewalld.service

5) Configure the host time, time zone, and system language

Check the current time zone:

date -R or timedatectl

Change the time zone:

sudo ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Change the system language environment:

echo 'LANG="en_US.UTF-8"' >> /etc/profile; source /etc/profile

Finally, configure NTP time synchronization for the hosts.

6) Kernel performance tuning

echo "
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv4.neigh.default.gc_thresh1=128
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.neigh.default.gc_interval=60
net.ipv4.neigh.default.gc_stale_time=120
kernel.perf_event_paranoid=-1
kernel.softlockup_panic=0
kernel.watchdog_thresh=30
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
fs.inotify.max_user_watches=524288
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
kernel.yama.ptrace_scope=0
vm.swappiness=0
kernel.core_uses_pid=1
net.ipv4.conf.default.promote_secondaries=1
net.ipv4.conf.all.promote_secondaries=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_max_syn_backlog=8096
kernel.sysrq=1
net.ipv4.tcp_tw_recycle=0
" >> /etc/sysctl.conf

Adjust the values to the actual environment, then run sysctl -p to apply them.
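A quick way to confirm that the values took effect is to read them back from /proc/sys, where each sysctl key maps to a path by replacing the dots with slashes. A minimal sketch:

```shell
#!/bin/sh
# Map a sysctl key to its /proc/sys path and read the current value if present.
check_sysctl() {
    key=$1
    path="/proc/sys/$(echo "$key" | tr . /)"
    if [ -r "$path" ]; then
        echo "$key = $(cat "$path")"
    else
        echo "$key: not present on this kernel"
    fi
}

check_sysctl net.ipv4.ip_forward
check_sysctl fs.file-max
check_sysctl vm.swappiness
```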

cat >> /etc/security/limits.conf <<EOF
* soft nofile 65535
* hard nofile 65536
EOF

7) Kernel module

The following kernel modules need to be loaded on each host:

	module_list='br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp'

	for module in $module_list; do
		if ! lsmod | grep -qw "$module"; then
			echo "module $module is not present, loading"
			modprobe "$module"
		fi
	done
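modprobe does not survive a reboot. To make the list persistent, the same loop can write a modules-load.d file that systemd-modules-load reads at boot. A minimal sketch with an abbreviated module list, writing to a temporary path (on a real host, the target would be something like /etc/modules-load.d/kubernetes.conf):

```shell
#!/bin/sh
# Abbreviated module list for illustration -- use the full list from above.
module_list='br_netfilter ip_set nf_conntrack vxlan xt_conntrack'

CONF=$(mktemp)   # real target: a file under /etc/modules-load.d/

# modules-load.d format: one module name per line.
for module in $module_list; do
    echo "$module" >> "$CONF"
done

cat "$CONF"
```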

2 Docker deployment

Perform the Docker installation on each deployment machine

1) Download and unzip docker-18.09.6.tgz

tar -zxvf docker-18.09.6.tgz

2) Copy the extracted binaries to /usr/bin

cp docker/* /usr/bin/

3) Configure the dockerd startup options

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
    "oom-score-adjust": -1000,
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "3"
    },
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "bip": "192.168.100.1/24",
    "registry-mirrors": ["https://harbor01.io"],
    "insecure-registries": ["harbor01.io"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "graph": "/export/docker",
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:8100"]
}
EOF
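Because dockerd refuses to start when daemon.json is malformed, it is worth validating the file before restarting the daemon. A minimal sketch using python3's json.tool, run here against a sample written to a temporary path rather than the real /etc/docker/daemon.json:

```shell
#!/bin/sh
DAEMON_JSON=$(mktemp)   # real file: /etc/docker/daemon.json

# Sample config for the check (abbreviated).
cat > "$DAEMON_JSON" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" },
  "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero on a syntax error.
RESULT=$(python3 -m json.tool "$DAEMON_JSON" > /dev/null 2>&1 && echo valid || echo invalid)
echo "daemon.json: $RESULT JSON"
```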

4) Configure the dockerd systemd unit

cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket

[Service]
OOMScoreAdjust=-1000
Type=notify
EnvironmentFile=-/run/flannel/docker
WorkingDirectory=/usr/bin
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
#TasksMax=infinity
TimeoutStartSec=0
# Set Delegate=yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# Kill only the docker process, not all processes in the cgroup
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
5) Start Docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker.service

3 Harbor deployment

Deploy the image registry on one of the machines.

1) Download and install Docker-compose

mv docker-compose /usr/bin/
chmod +x /usr/bin/docker-compose
2) Download and extract the Harbor installation package
tar -zxvf harbor-ha-offline-installer-v1.7.0.tgz
cd harbor
3) Configure Harbor
vi harbor/harbor.cfg
hostname = harbor01.io
customize_crt = false

Note: parameters such as ui_url_protocol, auth_mode, db_host, and db_password can be modified as required.

4) Mount GlusterFS for shared storage
mkdir -p /share_storage
mount -t glusterfs <glusterfs_node_ip>:<glusterfs_vol_name> /share_storage
mkdir -p /share_storage/harbor
cp -r -f common /share_storage/harbor/
cp -r -f harbor.cfg /share_storage/harbor/
5) Run the installer

sh install.sh

6) Verify the deployment

docker ps

After the installation completes, open the Harbor UI in a browser. The default port is 80.

The default username and password in harbor.cfg are admin and Harbor12345.

4 Rancher high availability deployment

4.1 Generating a Self-signed SSL Certificate

Download create_self-signed-cert.sh and execute

./create_self-signed-cert.sh --ssl-domain=rancher.test.com

4.2 Uploading a Rancher Image

Perform the Rancher deployment operation on one of the servers

1) Download image packages rancher-images.tar.gz, rancher-images.txt and script rancher-load-images.sh

2) Log in to the Harbor web UI and create a project named rancher with a public access level

3) Upload the image to Harbor

Run the docker login command

docker login harbor01.io -u admin

Note: The default admin password of the Harbor registry is Harbor12345. If it has been changed, enter the new password.

sh rancher-load-images.sh --image-list rancher-images.txt --registry harbor01.io

4.3 Install nginx

Edit /etc/nginx.conf and replace IP_NODE_1, IP_NODE_2, and IP_NODE_3 with real IP addresses

Nginx.conf is configured as follows:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
        server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
        server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
        server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
        server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
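Filling in the <IP_NODE_*> placeholders can be scripted with sed instead of editing by hand. A minimal sketch over an abbreviated copy of the template in a temporary file (the IPs are placeholders):

```shell
#!/bin/sh
CONF=$(mktemp)   # real file: /etc/nginx.conf

# Abbreviated template using the same placeholders as the full config.
cat > "$CONF" <<'EOF'
upstream rancher_servers_http {
    server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
    server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
    server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
}
EOF

# Substitute each placeholder with a real node IP (output to a new file).
sed -e 's/<IP_NODE_1>/192.168.100.11/' \
    -e 's/<IP_NODE_2>/192.168.100.12/' \
    -e 's/<IP_NODE_3>/192.168.100.13/' \
    "$CONF" > "$CONF.out"

cat "$CONF.out"
```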

Start the nginx service as a container:

docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /etc/nginx.conf:/etc/nginx/nginx.conf \
    harbor01.io/rancher/nginx:1.14

4.4 Install Kubernetes with RKE

  • Download the rke, helm, and kubectl binaries and put them in the /usr/bin directory
mv rke kubectl helm /usr/bin/
chmod 777 /usr/bin/rke /usr/bin/kubectl /usr/bin/helm
  • Set up SSH access

On each machine, create a docker user and set its password:

groupadd docker
useradd docker -g docker
passwd docker

On the machine where rke will run, execute:

su - docker
ssh-keygen
ssh-copy-id docker@<IP_NODE_1>
ssh-copy-id docker@<IP_NODE_2>
ssh-copy-id docker@<IP_NODE_3>
  • Add the /etc/hosts configuration on each machine
<IP_nginx_node> rancher.gly.com
  • Create the RKE configuration file rancher-cluster.yml and replace IP_NODE_1, IP_NODE_2, and IP_NODE_3 with real IP addresses
nodes:
- address: <IP_NODE_1>
  user: docker
  role: [ "controlplane", "etcd", "worker" ]
  ssh_key_path: /home/docker/.ssh/id_rsa
- address: <IP_NODE_2>
  user: docker
  role: [ "controlplane", "etcd", "worker" ]
  ssh_key_path: /home/docker/.ssh/id_rsa
- address: <IP_NODE_3>
  user: docker
  role: [ "controlplane", "etcd", "worker" ]
  ssh_key_path: /home/docker/.ssh/id_rsa
private_registries:
- url: harbor01.io # private registry url
  user: admin
  password: "Harbor12345"
  is_default: true
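Since the three node stanzas differ only in the IP address, the file can be generated from a node list. A minimal sketch writing to a temporary path (the IPs are placeholders, and on a real machine the output would go to ./rancher-cluster.yml):

```shell
#!/bin/sh
NODE_IPS="192.168.100.11 192.168.100.12 192.168.100.13"   # placeholder IPs
CLUSTER_YML=$(mktemp)   # real file: ./rancher-cluster.yml

# One node stanza per IP, all with the same user, roles, and key path.
echo "nodes:" > "$CLUSTER_YML"
for ip in $NODE_IPS; do
    cat >> "$CLUSTER_YML" <<EOF
- address: $ip
  user: docker
  role: [ "controlplane", "etcd", "worker" ]
  ssh_key_path: /home/docker/.ssh/id_rsa
EOF
done

# Private registry settings are the same for every node.
cat >> "$CLUSTER_YML" <<'EOF'
private_registries:
- url: harbor01.io
  user: admin
  password: "Harbor12345"
  is_default: true
EOF

cat "$CLUSTER_YML"
```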
  • Create the Kubernetes cluster

rke up --config ./rancher-cluster.yml

  • After the command is successfully executed, a kube_config_rancher-cluster.yml file is generated in the current directory

mkdir /root/.kube

mv kube_config_rancher-cluster.yml /root/.kube/config

  • Check the cluster status:

kubectl get nodes

4.5 Install Rancher

The tls.crt, tls.key, and cacerts.pem certificates are those generated in section 4.1.

kubectl create namespace cattle-system
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=./tls.crt --key=./tls.key   
kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem
helm install rancher ./rancher \
    --namespace cattle-system \
    --set hostname=rancher.gly.com \
    --set ingress.tls.source=secret \
    --set privateCA=true \
    --set useBundledSystemChart=true \
    --set rancherImage=harbor01.io/rancher/rancher

5 Kubernetes installation

1) Add cluster globally in Rancher interface

Visit rancher.test.com and log in to the Rancher screen

Click Add Cluster, select the option to create a Kubernetes cluster from existing hosts, enter the cluster name Ocean, and click Next

2) Add nodes to the cluster in the Rancher interface:

For the production environment, you are advised to set the cluster size to three masters and two workers.

For a PoC demonstration, a specification of 1 master and 1 worker is sufficient.

Master: select the etcd and Control Plane roles in the host options

Worker: select the Worker role in the host options

Run the registration command generated by Rancher on each host according to its node role.