Kubernetes is an open source container orchestration engine from Google that supports automated deployment, large-scale scaling, and containerized application management.
I have been working with K8s for more than half a year and have built multiple services on it on Alibaba Cloud; the cluster is now running stably (if interested, see "K8s cluster mixed mode, may help you save more than 50% of the service cost" and "K8s cluster mixed mode landing share"). I had never studied it systematically before, though, so as with the earlier Docker series, this series of articles will record and share my notes, covering both theory and practice. Interested readers are welcome to follow along and explore today's popular container and service orchestration solutions together.
This article introduces how to build a K8s cluster locally, using Ansible to improve productivity (see the Ansible tutorial).
All of the configuration files covered in this article can be found on GitHub.
1. Prepare the server nodes
If no servers are available, you can create virtual machines by following the guide Setting up a KVM Virtual Machine Environment on Ubuntu 18.04.
Server node IPs (hostnames):
- 192.168.40.111 (kmaster)
- 192.168.40.112 (knode1)
- 192.168.40.113 (knode2)
- 192.168.40.114 (knode3)
Operating system version:
cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Kernel version (uname -a):
3.10.0-957.el7.x86_64
2. Configure Ansible
If you do not have an Ansible environment yet, refer to the Ansible introductory tutorial (mp.weixin.qq.com/s/JIZE1RvN7…) to set one up.
1. Add the K8s server node information to the /etc/hosts file on the Ansible server (see hosts).
192.168.40.111 kmaster
192.168.40.112 knode1
192.168.40.113 knode2
192.168.40.114 knode3
2. Add the K8s server nodes to the /etc/ansible/hosts file on the Ansible server (see ansible_hosts).
[k8s-all]
kmaster
knode1
knode2
knode3
[k8s-master]
kmaster
[k8s-nodes]
knode1
knode2
knode3
3. Modify /etc/hosts of each node in the K8s cluster (optional)
Modify the /etc/hosts files of all hosts, adding IP address/hostname mappings so that hosts can be reached over SSH by hostname.
1. Create the playbook file (see set_hosts_playbook.yml).
vim set_hosts_playbook.yml
---
- hosts: k8s-all
  remote_user: root
  tasks:
  - name: backup /etc/hosts
    shell: mv /etc/hosts /etc/hosts_bak
  - name: copy local hosts file to remote
    copy: src=/etc/hosts dest=/etc/ owner=root group=root mode=0644
2. Run ansible-playbook
ansible-playbook set_hosts_playbook.yml
4. Install Docker
Install Docker on all hosts
1. Create the playbook file (see install_docker_playbook.yml)
vim install_docker_playbook.yml
- hosts: k8s-all
  remote_user: root
  vars:
    docker_version: 18.09.2
  tasks:
  - name: install dependencies
    #shell: yum install -y yum-utils device-mapper-persistent-data lvm2
    yum: name={{item}} state=present
    with_items:
    - yum-utils
    - device-mapper-persistent-data
    - lvm2
  - name: config yum repo
    shell: yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
  - name: install docker
    yum: name=docker-ce-{{docker_version}} state=present
  - name: start docker
    shell: systemctl enable docker && systemctl start docker
2. Run ansible-playbook
ansible-playbook install_docker_playbook.yml
5. Deploy the K8s master
1. Before the deployment, some initialization is needed: stop the firewall, disable SELinux, disable swap, and configure the K8s Aliyun YUM repo. All of these operations are put in the script pre-setup.sh, which is executed via the script module of the playbook in step 2.
2. Create the playbook file deploy_master_playbook.yml. On the master it installs kubectl, kubeadm and kubelet, initializes the cluster with kubeadm, and installs the flannel network plugin (the image addresses in kube-flannel.yml have been changed to quay-mirror.qiniu.com to avoid pull timeouts; see kube-flannel.yml).
vim deploy_master_playbook.yml
- hosts: k8s-master
  remote_user: root
  vars:
    kube_version: 1.16.0-0
    k8s_version: v1.16.0
    k8s_master: 192.168.40.111
  tasks:
  - name: prepare env
    script: ./pre-setup.sh
  - name: install kubectl,kubeadm,kubelet
    yum: name={{item}} state=present
    with_items:
    - kubectl-{{kube_version}}
    - kubeadm-{{kube_version}}
    - kubelet-{{kube_version}}
  - name: init k8s
    shell: kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version {{k8s_version}} --apiserver-advertise-address {{k8s_master}} --pod-network-cidr=10.244.0.0/16 --token-ttl 0
  - name: config kube
    shell: mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
  - name: copy flannel yaml file
    copy: src=./kube-flannel.yml dest=/tmp/ owner=root group=root mode=0644
  - name: install flannel
    shell: kubectl apply -f /tmp/kube-flannel.yml
  - name: get join command
    shell: kubeadm token create --print-join-command
    register: join_command
  - name: show join command
    debug: var=join_command verbosity=0
3. Run ansible-playbook
ansible-playbook deploy_master_playbook.yml
4. The output shows the command for joining a node to the K8s cluster, as in the figure below. Note down this command for the later node deployment.
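The pre-setup.sh used by the "prepare env" task lives in the article's GitHub repo and is not listed in the post. The sketch below is an assumption of what such a kubeadm prep script typically contains (firewall, SELinux, swap, Aliyun YUM repo), not the author's exact script; it is written to a file here so the contents are visible without touching the host.

```shell
# Hypothetical reconstruction of pre-setup.sh (an assumption, not the
# author's script). Writing it to a file makes the steps reviewable.
cat > pre-setup.sh <<'EOF'
#!/bin/bash
# Stop the firewall
systemctl stop firewalld && systemctl disable firewalld
# Put SELinux into permissive mode now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Disable swap now and across reboots (kubelet refuses to run with swap on)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Configure the Aliyun Kubernetes YUM repo
cat > /etc/yum.repos.d/kubernetes.repo <<'REPO'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
REPO
EOF
chmod +x pre-setup.sh
```

Run as root on each node (which is what the playbook's script module does), these steps cover the initialization described in step 1.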
6. Deploy the K8s nodes
1. As with the master, some initialization is needed before deployment: stop the firewall, disable SELinux, disable swap, configure the K8s Aliyun YUM repo, etc. All of these operations are put in the script pre-setup.sh, which is executed via the script module of the playbook in step 2.
2. Create the playbook file deploy_nodes_playbook.yml, which installs kubeadm and kubelet on all cluster nodes except the master and joins each node to the K8s cluster using the join command recorded during the master deployment.
vim deploy_nodes_playbook.yml
- hosts: k8s-nodes
  remote_user: root
  vars:
    kube_version: 1.16.0-0
  tasks:
  - name: prepare env
    script: ./pre-setup.sh
  - name: install kubeadm,kubelet
    yum: name={{item}} state=present
    with_items:
    - kubeadm-{{kube_version}}
    - kubelet-{{kube_version}}
  - name: start kubelet
    shell: systemctl enable kubelet && systemctl start kubelet
  - name: join cluster
    shell: kubeadm join 192.168.40.111:6443 --token zgx3ov.zlq3jh12atw1zh8r --discovery-token-ca-cert-hash sha256:60b7c62687974ec5803e0b69cfc7ccc2c4a8236e59c8e8b8a67f726358863fa7
3. Run ansible-playbook
ansible-playbook deploy_nodes_playbook.yml
4. On the master node, run kubectl get nodes to check the nodes added to the cluster. Their status should be Ready, as shown below.
[root@kmaster ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   37m     v1.16.0
knode1    Ready    <none>   7m1s    v1.16.0
knode2    Ready    <none>   7m1s    v1.16.0
knode3    Ready    <none>   4m12s   v1.16.0
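Since this series leans on scripting for automation, it can be handy to turn this check into a one-liner; a minimal sketch, run here against a hard-coded sample of the output above rather than a live cluster:

```shell
# Count Ready nodes from `kubectl get nodes` output. The output is a
# hard-coded sample so the pipeline can be shown without a cluster; in
# practice you would pipe `kubectl get nodes` in directly.
sample='NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   37m     v1.16.0
knode1    Ready    <none>   7m1s    v1.16.0
knode2    Ready    <none>   7m1s    v1.16.0
knode3    Ready    <none>   4m12s   v1.16.0'
# Skip the header row, keep rows whose STATUS column is Ready, count them.
ready_count=$(printf '%s\n' "$sample" | awk 'NR>1 && $2=="Ready"{c++} END{print c}')
echo "$ready_count"
```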
The K8s cluster deployment is now complete. Next you can install the Ingress and the Dashboard.
7. Install the Ingress
Ingress provides external access to Services inside the cluster. There are both Nginx-based and Traefik-based implementations; here we use the familiar Nginx version. The Ingress installation is performed on the master node (because kubectl is installed and configured there; it can also be performed on any other node where kubectl is installed and configured).
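As a concrete, illustrative example of what this gives you: the resource below (the names demo-ingress, demo.example.com, and my-service are made up) would route external HTTP traffic through the nginx-ingress controller to a cluster-internal Service. The networking.k8s.io/v1beta1 apiVersion matches the K8s v1.16 deployed above.

```yaml
# Illustrative Ingress resource (hypothetical names), for K8s v1.16.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by the nginx controller
spec:
  rules:
  - host: demo.example.com               # external hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service        # cluster-internal Service
          servicePort: 80
```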
1. Download the yaml file (this directory already contains nginx-ingress.yaml with the image address changed, so you can go straight to step 3).
wget -O nginx-ingress.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
2. Replace quay.io with quay-mirror.qiniu.com to avoid image pull timeouts. Also, in the Deployment of nginx-ingress-controller, add hostNetwork: true and an nginx-ingress nodeSelector, to use the host network and control which nodes the ingress controller is deployed on.
vim nginx-ingress.yaml
:%s/quay.io/quay-mirror.qiniu.com/g

vim nginx-ingress.yaml
spec:
  hostNetwork: true
  nodeSelector:
    nginx-ingress: "true"
3. Deploy the Ingress
First, label knode1 with nginx-ingress=true so that the ingress controller is deployed on knode1 and its IP address stays fixed.
[root@kmaster k8s-deploy]# kubectl label node knode1 nginx-ingress=true
node/knode1 labeled
Then complete the deployment of nginx-ingress
kubectl apply -f nginx-ingress.yaml
4. After the deployment, wait for the Pod creation to finish. You can check the ingress Pod status with the following command.
[root@kmaster k8s-deploy]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-drpg5        0/1     Completed   0          79m   10.244.2.2   knode1   <none>           <none>
ingress-nginx-admission-patch-db2rt         0/1     Completed   1          79m   10.244.3.2   knode3   <none>           <none>
ingress-nginx-controller-575cffb49c-4xm55   1/1     Running     0          70m
8. Install the Kubernetes Dashboard
1. Download the yaml file (this directory already contains kubernetes-dashboard.yaml, so you can go straight to step 3)
wget -O kubernetes-dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
2. Modify kubernetes-dashboard.yaml
Change the Service type to NodePort so that the Dashboard can be accessed via node IP. Comment out the default Secret (it has limited permissions and does not show much data).
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
3. Deploy the Dashboard and create a ServiceAccount named admin-user bound to the cluster-admin role (see auth.yaml).
kubectl apply -f kubernetes-dashboard.yaml
kubectl apply -f kubernetes-dashboard-auth.yaml
4. Access the Dashboard
Open https://<IP of any cluster node>:30443 to reach the Dashboard login page, then run the following command to obtain the login token.
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Complete the login using the token, as shown in the figure.
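For clarity, here is what the grep/awk part of that command does, demonstrated on a hard-coded sample of `kubectl get secret` output (the secret names below are made up for illustration):

```shell
# The inner pipeline of the describe-secret command: find the row for the
# admin-user service-account token and print its first column (the secret
# name), which is then passed to `kubectl describe secret`.
sample='NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      5m
default-token-xyz99      kubernetes.io/service-account-token   3      10m'
secret_name=$(printf '%s\n' "$sample" | grep admin-user | awk '{print $1}')
echo "$secret_name"
```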
9. Fix the invalid certificate problem
After the installation, the default certificate may be invalid, and Chrome may refuse to open the Dashboard. Regenerating the certificate solves this.
1. Create a custom certificate
[root@kmaster ~]# cd /etc/kubernetes/pki/
# Generate a private key
[root@kmaster pki]# openssl genrsa -out dashboard.key 2048
# Generate a certificate signing request
[root@kmaster pki]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=JBST/CN=kubernetes-dashboard"
# Sign the certificate with the cluster CA
[root@kmaster pki]# openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 3650
# Inspect the self-created certificate
[root@kmaster pki]# openssl x509 -in dashboard.crt -noout -text
2. Comment out the default Secret in kubernetes-dashboard.yaml
#---
#
#apiVersion: v1
#kind: Secret
#metadata:
# labels:
# k8s-app: kubernetes-dashboard
# name: kubernetes-dashboard-certs
# namespace: kubernetes-dashboard
#type: Opaque
3. Redeploy the Dashboard and create the Secret from the custom certificate
[root@kmaster k8s-deploy]# kubectl delete -f kubernetes-dashboard.yaml
[root@kmaster k8s-deploy]# kubectl apply -f kubernetes-dashboard.yaml
[root@kmaster k8s-deploy]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.crt=/etc/kubernetes/pki/dashboard.crt --from-file=dashboard.key=/etc/kubernetes/pki/dashboard.key -n kubernetes-dashboard
10. Manage the K8s cluster locally (Windows 10)
1. Download the Windows version of kubectl: storage.googleapis.com/kubernetes-…
2. Add the kubectl.exe directory to the Path of the system environment variable
3. Copy the content of /etc/kubernetes/admin.conf on the master node into the .kube/config file in your local user directory, e.g. C:\Users\Administrator\.kube\config
4. Verify
C:\Users\Administrator>kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   4d19h   v1.16.0
knode1    Ready    <none>   4d19h   v1.16.0
knode2    Ready    <none>   4d19h   v1.16.0
knode3    Ready    <none>   4d19h   v1.16.0
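Instead of copying into the default location, kubectl can also be pointed at any kubeconfig file via the KUBECONFIG environment variable; the file path below is illustrative, not from the article.

```shell
# Point kubectl at an explicit kubeconfig rather than ~/.kube/config.
# Windows cmd equivalent:  set KUBECONFIG=C:\Users\Administrator\k8s-admin.conf
export KUBECONFIG="$HOME/k8s-admin.conf"
echo "$KUBECONFIG"
```

This is convenient when switching between several clusters without overwriting the default config file.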
All of the configuration files covered in this article can be found on GitHub.
Related reading:
- K8s cluster mixed mode, may help you save more than 50% of the service cost
- K8s cluster mixed mode landing share
- A brief Ansible tutorial
Welcome to follow the author's WeChat official account: Technical Space of Empty Mountain Xinyu