1 Servers
Three Linux CentOS servers
2 Install the Docker environment
2.1 Updating yum
yum update
2.2 Setting the Yum Source
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
You can list all Docker versions available in the repositories and pick a specific version to install:
yum list docker-ce --showduplicates | sort -r
2.3 Installation
sudo yum install docker-ce-18.06.0.ce
2.4 Starting Docker and Enabling It at Boot
sudo systemctl start docker
sudo systemctl enable docker
Verify that the installation succeeded (if both the Client and Server sections are shown, Docker is installed and running):
docker version
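As an optional further check, you can run a throwaway container (this assumes the server can reach Docker Hub):
sudo docker run --rm hello-world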
3 Install the K8S cluster
3.1 Disabling the Firewall
systemctl stop firewalld
systemctl disable firewalld
3.2 Disabling SELinux
setenforce 0                                                          # temporarily disable
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # permanently disable
3.3 Disabling Swap
swapoff -a                            # temporarily disable
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanently disable
3.4 Adding the Mapping between host names and IP addresses
$ vim /etc/hosts
Add the following:
192.168.190.128 k8s-master
192.168.190.129 k8s-node1
192.168.190.130 k8s-node2
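The host names above are assumed to match each machine's actual host name. If they have not been set yet, one way to do so is to run the matching command on each server:
hostnamectl set-hostname k8s-master   # on 192.168.190.128
hostnamectl set-hostname k8s-node1    # on 192.168.190.129
hostnamectl set-hostname k8s-node2    # on 192.168.190.130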
3.5 Passing the bridged IPV4 traffic to the Iptables chain
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
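To confirm the settings took effect, you can read the keys back. If sysctl reports that they do not exist, the br_netfilter kernel module may need to be loaded first (whether this is necessary depends on your kernel configuration):
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables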
3.6 Adding the Aliyun YUM Software Source
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
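Optionally, verify the new repository by listing the kubelet versions it provides (this assumes network access to the Aliyun mirror):
yum makecache
yum list kubelet --showduplicates | sort -r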
3.7 Installing Kubeadm, kubelet, and kubectl
kubelet   # runs on all nodes of the cluster and is responsible for starting Pods and containers
kubeadm   # initializes (bootstraps) the cluster
kubectl   # the Kubernetes command-line tool; use it to deploy and manage applications, view resources, and create, delete, and update components
When deploying Kubernetes, the master and worker nodes must run the same version; a version mismatch can cause strange problems. This section describes how to install a specific version of Kubernetes with yum on CentOS.
To install a specific version, use the following format (first the generic form, then the concrete command used here):
yum install -y kubelet-<version> kubeadm-<version> kubectl-<version>
yum install -y kubelet-1.19.6 kubeadm-1.19.6 kubectl-1.19.6
Output:
Installed:
kubeadm.x86_64 0:1.20.1-0 kubectl.x86_64 0:1.20.1-0
kubelet.x86_64 0:1.20.1-0
3.8 Enabling kubelet at Boot
At this point kubelet cannot start successfully because the cluster has not been configured yet, so for now just enable it to start at boot:
systemctl enable kubelet
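It is normal for kubelet to keep restarting until kubeadm init has run. To confirm the unit is at least enabled:
systemctl is-enabled kubelet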
3.9 Deploying Kubernetes Master
3.9.1 Initializing kubeadm
kubeadm init \
  --apiserver-advertise-address=192.168.190.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.6 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
# --image-repository: where to pull the control-plane images from (available since v1.13). The default is k8s.gcr.io; here we point it at the domestic mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version: the Kubernetes version to install. The default, stable-1, makes kubeadm download https://dl.k8s.io/release/stable-1.txt to resolve the latest version; specifying a fixed version (e.g. v1.15.1) skips that network request.
# --apiserver-advertise-address: the interface the master uses to communicate with the other nodes in the cluster. If the master has more than one interface, specify it explicitly; otherwise kubeadm picks the interface with the default gateway.
# --pod-network-cidr: the Pod network range. Kubernetes supports several network add-ons, and each has its own requirement for --pod-network-cidr. We will use flannel, so it must be set to 10.244.0.0/16.
Output:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.190.128:6443 --token 09eyru.hv3d4mkq2rxgculb \
--discovery-token-ca-cert-hash sha256:d0141a8504afef937cc77bcac5a67669f42ff32535356953b5ab217d767f85aa
Later, run the kubeadm join command above (as root) on the other two servers.
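If the token in the join command has expired by then (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
kubeadm token create --print-join-command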
3.9.2 Using Kubectl
Run the following commands directly on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl commands can now be run on the master.
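For example, to confirm the control-plane node has registered (it will report NotReady until the Pod network plug-in from the next step is installed):
kubectl get nodes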
3.9.3 Installing the Pod Network Plug-in (master)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Or:
kubectl apply -f kube-flannel.yml
Check the status of the system Pods:
kubectl get pods -n kube-system
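Once the flannel and coredns Pods are Running, the nodes should switch to Ready:
kubectl get nodes -o wide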
4 Configure NFS storage
4.1 Installing an NFS Client on All Cluster Nodes
yum install -y nfs-utils
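The provisioner deployed in the next step expects an NFS server at 192.168.190.81 exporting /GIS (see the manifest below). Setting up that server is outside the scope of this article, but as a rough sketch, on the NFS server itself the export could look like this (adjust the path and export options to your environment):
mkdir -p /GIS
echo '/GIS *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra
showmount -e 192.168.190.81   # run from any cluster node to verify the export is visible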
4.2 Installing the NFS Client Provisioner
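The manifests below deploy into the arcgis namespace. If it does not exist yet, create it first:
kubectl create namespace arcgis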
kubectl apply -f deploymentnfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: arcgis
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: mynfs
            - name: NFS_SERVER
              value: 192.168.190.81
            - name: NFS_PATH
              value: /GIS
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.190.81
            path: /GIS
kubectl apply -f rbacnfs.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: arcgis
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: arcgis
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: arcgis
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: arcgis
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: arcgis
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
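After both manifests are applied, the provisioner Pod should come up in the arcgis namespace:
kubectl get pods -n arcgis -l app=nfs-client-provisioner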
4.3 Adding a Storage Class
kubectl apply -f classnfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: storage-class-default
provisioner: mynfs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
archiveOnDelete: "false"
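To confirm dynamic provisioning works end to end, you can create a throwaway PersistentVolumeClaim against the new storage class (the name test-claim below is arbitrary) and check that the provisioner binds it:
kubectl get storageclass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: arcgis
spec:
  storageClassName: storage-class-default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc -n arcgis test-claim   # STATUS should become Bound
kubectl delete pvc -n arcgis test-claim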
5 Installing Ingress
kubectl apply -f mandatory-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
namespace: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: 192.168.190.126:5000/defaultbackend-amd64:1.5
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: ingress-nginx
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
spec:
ports:
- port: 80
targetPort: 8080
selector:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: 192.168.190.126:5000/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
---
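Because the controller Deployment uses hostNetwork: true, it listens on ports 80 and 443 of whichever node it is scheduled to. Verify that it is running and find that node:
kubectl get pods -n ingress-nginx -o wide
A request to that node's IP with no matching Ingress rule should be answered by the default backend with a 404 (replace <node-ip> accordingly):
curl -I http://<node-ip>/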