Preface:
First of all, this guide comes from my hands-on experience; the setup described here has been used in my company's production environment.
The article walks through K8s cluster construction, the code repository, the image registry, continuous integration, log monitoring, and more.
It is not a basic K8s tutorial; the goal is to help you get up to speed quickly at work and move into production. The installation is therefore script-based, and the internal components are not explained one by one; I will fill in the detailed principles of those components when I have time.
Minimum Hardware Requirements
- Number of nodes: 3
- Memory per node: 8 GB or more
- CPU per node: at least 2 cores / 2 threads
- Total disk capacity: 100 GB or more
- OS version: CentOS 7.4 or later
The above is the configuration I used for the test setup; the production environment runs on Alibaba Cloud.
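To quickly confirm that a node meets these minimums, the usual commands are enough. This is a quick check of my own, not part of the original walkthrough:

nproc                     # CPU cores
free -h                   # memory
df -h                     # disk capacity
cat /etc/redhat-release   # CentOS version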
Test server information (IP address, user name, password):
192.168.10.111 root root
192.168.10.112 root root
192.168.10.113 root root
Network requirements
- Servers can communicate with each other over the intranet; an intranet bandwidth of at least 1 Gbps is recommended
- Each server can access the Internet (a quick connectivity check follows this list)
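A minimal sketch of such a check, run from any one of the nodes. It assumes the test IPs listed above; the external host is an arbitrary choice, and ICMP may be blocked in some environments, in which case curl to any external URL works just as well. This is my own addition, not part of the original guide:

for ip in 192.168.10.111 192.168.10.112 192.168.10.113; do
  ping -c 1 -W 1 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done
# confirm outbound Internet access
ping -c 1 -W 1 www.aliyun.com >/dev/null && echo "Internet OK" || echo "no Internet access"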
Technology stack
Kubernetes (v1.18.6) + Helm (v3.2.4) + NFS server + MinIO + MySQL + Redis + GitLab + GitLab Runner + Harbor + Nexus + Jenkins + SkyWalking + Loki + Prometheus
The microservice framework used is PigX, which is open source on Gitee.
Kubernetes cluster deployment
Quickly build a K8s cluster based on kubeadm
Environment preparation
cd /root
# install git
sudo yum install git -y
# clone the project
git clone https://github.com/choerodon/kubeadm-ha.git
# enter the project directory
cd kubeadm-ha
# install the ansible environment
sudo ./install-ansible.sh
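Before moving on, you can confirm that Ansible was installed correctly (a simple check of my own):

ansible --version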
Copy the /root/kubeadm-ha/example/hosts.m-master.ip.ini file to the project root directory and name it inventory.ini. Change the IP address, user name, and password of each server, and maintain the mapping between each server and its roles.
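For reference, the copy-and-rename step looks like this (a minimal sketch, assuming the project was cloned into /root as above):

cp /root/kubeadm-ha/example/hosts.m-master.ip.ini /root/kubeadm-ha/inventory.ini
vi /root/kubeadm-ha/inventory.ini   # fill in the IPs, user names, passwords and role groups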
An example inventory.ini file
; Fill in the information of all nodes here.
; The first field is the node's intranet IP; after deployment it becomes the nodeName of the Kubernetes node.
; The second field, ansible_port, is the sshd listening port of the node.
; The third field, ansible_user, is the remote login user name of the node.
; The fourth field, ansible_ssh_pass, is the remote login password of the node.
[all]
c7n-node1 ansible_host=192.168.10.111 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"
c7n-node2 ansible_host=192.168.10.112 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"
c7n-node3 ansible_host=192.168.10.113 ansible_port=22 ansible_user="root" ansible_ssh_pass="root"

; Private cloud:
;   VIP load mode:
;     Load balancer + keepalived, e.g. haproxy + keepalived.
;     The load balancer options in this script are nginx, openresty, haproxy and envoy; set lb_mode to switch between them.
;     Setting lb_kube_apiserver_ip enables keepalived. Negotiate with your server provider to reserve an IP address as lb_kube_apiserver_ip.
;     A group of two lb nodes is sufficient; the first node is the keepalived master and the rest are backup nodes.
;
;   Node local load mode:
;     Only the load balancer is started; keepalived is not enabled (i.e. lb_kube_apiserver_ip is not set).
;     In this case the apiserver address that kubelet connects to is 127.0.0.1:lb_kube_apiserver_port.
;     In this mode, leave the lb node group empty.
;
; Public cloud:
;   SLB mode is not recommended; the node local load mode is recommended instead.
;   If SLB mode must be used, deploy in node local load mode first and switch to SLB mode after the deployment succeeds:
;   change lb_mode to slb, set lb_kube_apiserver_ip to the intranet IP of the purchased SLB,
;   change lb_kube_apiserver_port to the SLB listening port, then run the cluster initialization script again.
[lb]

; Note that the etcd cluster must have 1, 3, 5, 7, ... (an odd number of) nodes.
[etcd]
c7n-node1
c7n-node2
c7n-node3

[kube-master]
c7n-node1
c7n-node2

[kube-worker]
c7n-node1
c7n-node2
c7n-node3

; Reserved group, used later to add master nodes.
[new-master]

; Reserved group, used later to add worker nodes.
[new-worker]

; Reserved group, used later to add etcd nodes.
[new-etcd]

; Reserved group, used later to remove the worker role.
[del-worker]

; Reserved group, used later to remove the master role.
[del-master]

; Reserved group, used later to remove the etcd role.
[del-etcd]

; Reserved group, used later to remove nodes.
[del-node]

;------------------------- Configure the variables below based on the information above -------------------------
[all:vars]
; Whether to skip physical resource verification on nodes; the requirement is at least 2C2G for master nodes and at least 2C4G for worker nodes.
skip_verify_node=true
; Kubernetes version
#kube_version="1.20.1"
kube_version="1.18.6"
; Container runtime type, containerd or docker; the default is containerd.
container_manager="docker"
; Load balancer: nginx, openresty, haproxy, envoy and slb are available; the default is nginx.
lb_mode="nginx"
; Cluster apiserver IP behind the load balancer; setting lb_kube_apiserver_ip enables load balancer + keepalived.
;lb_kube_apiserver_ip="192.168.56.15"
; Cluster apiserver port behind the load balancer.
lb_kube_apiserver_port="8443"

; Network segment selection: the pod and service segments must not overlap the server segment. For example:
;   server segment 10.0.0.1/8     -> pod segment 192.168.0.0/18, service segment 192.168.64.0/18
;   server segment 172.16.0.1/12  -> pod segment 10.244.0.0/18,  service segment 10.244.64.0/18
;   server segment 192.168.0.1/16 -> pod segment 10.244.0.0/18,  service segment 10.244.64.0/18
; Pod IP segment
kube_pod_subnet="10.244.0.0/18"
; Service IP segment
kube_service_subnet="10.244.64.0/18"
; The pod subnet mask assigned to each node defaults to 24, i.e. 256 IP addresses per node,
; so with these defaults 16384/256 = 64 nodes can be managed.
kube_network_node_prefix="24"
; Maximum number of pods on a node; the number of IP addresses allocated to the node must be greater than this value.
; https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
kube_max_pods="110"
; Network plugin: flannel or calico.
network_plugin="calico"

; If the server disks are split into a system disk and a data disk, change the following paths to custom directories on the data disk.
; Kubelet root directory
kubelet_root_dir="/data/lib/kubelet"
; Docker container storage directory
docker_storage_dir="/data/lib/docker"
; Containerd storage directory
containerd_storage_dir="/data/lib/containerd"
; Etcd data root directory
etcd_data_dir="/data/lib/etcd"
Make the changes needed to suit your own configuration.
All of my servers have a /data disk directory, so the storage paths above point there.
Other settings, such as the master nodes and the number of servers, can be adjusted as needed; pay particular attention to the container runtime type and the network plugin selection.
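Before running the playbook, it is worth verifying that Ansible can reach every node defined in inventory.ini. This is my own sanity check, not part of the original guide, and it assumes install-ansible.sh has set up Ansible (plus sshpass for password-based SSH):

cd /root/kubeadm-ha
ansible -i inventory.ini all -m ping

Each node should reply with "pong".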
Remember to modify the file /root/kubeadm-ha/roles/plugins/kubernetes-dashboard/templates/kubernetes-dashboard.yaml.j2 to set the external access port for the cluster dashboard:
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31443
Cluster deployment:
- Run the following command in the /root/kubeadm-ha directory
- ansible-playbook -i inventory.ini 90-init-cluster.yml
Wait until all pods reach the Running state (an optional extra check follows the commands below):
- Execute on any master node
- kubectl get po --all-namespaces -w
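Once the pods settle, you can also confirm that every node registered correctly and shows a Ready status (an extra check of my own):

kubectl get nodes -o wide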
If the deployment fails and you want to reset the cluster, run the following command:
- Execute in the project root directory
- ansible-playbook -i inventory.ini 99-reset-cluster.yml
Install the Kubernetes-Dashboard monitoring page
Create a Dashboard administrator (otherwise you will get an error when entering the page)
Create two files
dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
Place both files in /root/kubeadm-ha on the 192.168.10.111 server, then run:
# create the service account
kubectl create -f dashboard-admin.yaml
# assign permissions to the user
kubectl create -f dashboard-admin-bind-cluster-role.yaml
# view and copy the user's token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Save the token; it is what you use to log in to the Dashboard page.
Open https://192.168.10.111:31443/ and enter the token.
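If the page does not respond on port 31443, check which NodePort the dashboard Service actually exposes (a troubleshooting step of my own, not from the original guide):

kubectl -n kubernetes-dashboard get svc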
At this point the K8s cluster is up and running. I will keep updating this series as time allows; if you like it, feel free to follow me, and if you have any questions, please leave a comment below.