Foreword

I recently stepped into quite a few pits while setting up a DevOps environment on a K8S (Kubernetes) cluster, version 1.18.2. All of the pits encountered during the setup have now been filled, so I am recording the process here and sharing it with you!

The YAML files required by this article and the environment setup are collected at github.com/sunshinelyz… and gitee.com/binghe001/t… . If the files help you, don't forget to give them a Star!

Server Planning

IP                Hostname    Role         Operating System
192.168.175.101   binghe101   K8S Master   CentOS 8.0.1905
192.168.175.102   binghe102   K8S Worker   CentOS 8.0.1905
192.168.175.103   binghe103   K8S Worker   CentOS 8.0.1905

Installation Environment Version

Software         Version     Description
Docker           19.03.8     Provides the container environment
docker-compose   1.25.5      Defines and runs applications composed of multiple containers
K8S              1.18.2      Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful. Kubernetes provides mechanisms for application deployment, scheduling, updating, and maintenance.
GitLab           12.1.6      Code repository (only one of GitLab and SVN is needed)
Harbor           1.10.2      Private image registry
Jenkins          2.89.3      Continuous integration and delivery
SVN              1.10.2      Code repository (only one of GitLab and SVN is needed)
JDK              1.8.0_212   Java runtime environment
Maven            3.6.3       Project build tool

Passwordless Login Between Servers

Run the following command on each server.

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 

Copy the id_rsa.pub file from the binghe102 and binghe103 servers to the binghe101 server.

[root@binghe102 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/102
[root@binghe103 ~]# scp .ssh/id_rsa.pub binghe101:/root/.ssh/103

Run the following command on the binghe101 server.

cat ~/.ssh/102 >> ~/.ssh/authorized_keys
cat ~/.ssh/103 >> ~/.ssh/authorized_keys

Then copy the authorized_keys file to binghe102 and binghe103 servers, respectively.

[root@binghe101 ~]# scp .ssh/authorized_keys binghe102:/root/.ssh/authorized_keys
[root@binghe101 ~]# scp .ssh/authorized_keys binghe103:/root/.ssh/authorized_keys

Delete the files 102 and 103 under ~/.ssh on the binghe101 server.

rm ~/.ssh/102
rm ~/.ssh/103
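To confirm that passwordless login works, you can run a quick check from the binghe101 server (a minimal sketch; it assumes the hostnames binghe102 and binghe103 resolve, for example via /etc/hosts):

# Verify passwordless SSH from binghe101 to the other two nodes
for host in binghe102 binghe103; do
  ssh -o BatchMode=yes root@$host hostname && echo "passwordless login to $host OK"
done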

Install the JDK

A JDK environment needs to be installed on each server. Download the JDK from Oracle; the version used here is 1.8.0_212. Decompress the JDK and configure the system environment variables.

tar -zxvf jdk1.8.0_212.tar.gz
mv jdk1.8.0_212 /usr/local

Next, configure the system environment variables.

vim /etc/profile

The configuration items are as follows:

JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Run the following command to make the system environment variables take effect.

source /etc/profile
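Optionally, verify that the variables took effect (the exact output depends on the JDK build you downloaded):

# Confirm JAVA_HOME is set and the JDK is on the PATH
echo $JAVA_HOME
java -version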

Install Maven

Download Maven from Apache; the version used here is 3.6.3. Decompress the file and configure the system environment variables.

tar -zxvf apache-maven-3.6.3-bin.tar.gz
mv apache-maven-3.6.3-bin /usr/local

Next, configure the system environment variables.

vim /etc/profile

The configuration items are as follows:

JAVA_HOME=/usr/local/jdk1.8.0_212
MAVEN_HOME=/usr/local/apache-maven-3.6.3-bin
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH MAVEN_HOME PATH

Run the following command to make the system environment variables take effect.

source /etc/profile

Next, modify the Maven configuration file (conf/settings.xml under the Maven installation directory), as shown below.

<localRepository>/home/repository</localRepository>

This stores the JAR packages downloaded by Maven in the /home/repository directory.
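Optionally, verify the Maven installation; the output should report Maven 3.6.3 and the JDK configured above:

# Print the Maven version, Java home, and platform details
mvn -version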

Install the Docker environment

This document builds a Docker environment based on Docker 19.03.8.

Create the install_docker.sh script on all servers, as shown below.

# Use an Aliyun registry mirror to speed up image pulls
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
# Install yum tooling (CentOS 8 ships dnf)
dnf install -y yum*
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun docker-ce repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install containerd.io, required by docker-ce
dnf install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
systemctl enable docker.service
systemctl start docker.service
docker version

Grant executable permissions to the install_docker.sh script on each server and execute the script.
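For example, on each server:

chmod a+x install_docker.sh
./install_docker.sh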

Install docker-compose

Note: Install docker-compose on each server.

1. Download the docker-compose file

curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

2. Grant executable permissions to the docker-compose file

chmod a+x /usr/local/bin/docker-compose

3. Check the docker-compose version

[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Install the K8S cluster environment

This document describes how to build a K8S cluster based on K8S 1.18.2.

Install the K8S base environment

Create the install_k8s.sh script file on all servers, with the following content.

# Configure the Aliyun image accelerator
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

# Install nfs-utils
yum install -y nfs-utils
yum install -y wget

# start the NFS server
systemctl start nfs-server
systemctl enable nfs-server

# disable firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Modify /etc/sysctl.conf
# If a setting already exists, modify it in place
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g"  /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g"  /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g"  /etc/sysctl.conf
# If a setting does not exist yet, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
# Execute command to apply
sysctl -p

# Configure the K8S yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Uninstall old version K8S
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm and kubectl. Version 1.18.2 is installed here; you can also install version 1.17.2
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2

# Change the docker cgroup driver to systemd
# i.e. in the file /usr/lib/systemd/system/docker.service, change the line
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# If this is not done, you may hit the following error when adding a worker node:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

# Set a docker registry mirror to improve image download speed and stability
# If access to https://hub.docker.io is fast and stable, you can skip this step
# curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}

# restart Docker and start Kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

docker version

Grant executable permission to the install_k8s.sh script on each server and execute the script.
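Since passwordless SSH between the servers is already configured, one possible shortcut is to push the script to the workers and run it everywhere from binghe101 (a sketch; it assumes the script sits in /root on binghe101):

# Run locally first, then copy to the worker nodes and run it there
chmod +x /root/install_k8s.sh && /root/install_k8s.sh
for host in binghe102 binghe103; do
  scp /root/install_k8s.sh root@$host:/root/
  ssh root@$host "chmod +x /root/install_k8s.sh && /root/install_k8s.sh"
done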

Initialize the Master node

The following actions are performed only on the binghe101 server.

1. Initialize the network environment of the Master node

Note: The following commands need to be executed manually on the command line.

# Execute only on the master node
# The export command only takes effect in the current shell session. If you open a new shell window
# to continue the installation, re-execute the export commands here
export MASTER_IP=192.168.175.101
# Replace k8s.master with the dnsName you want to use
export APISERVER_NAME=k8s.master
# The network segment where the Kubernetes container groups (Pods) reside. This segment is created by
# Kubernetes after installation and does not exist in the physical network beforehand
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
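Because export only affects the current shell session, a quick sanity check before the next step can save a failed init:

# All three variables must print non-empty values
echo "MASTER_IP=${MASTER_IP} APISERVER_NAME=${APISERVER_NAME} POD_SUBNET=${POD_SUBNET}"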

2. Initialize the Master node

Create the init_master.sh script file on the binghe101 server, with the following content.

#!/bin/bash
# Abort the script if an error occurs
set -e

if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1m Make sure you have set the environment variables POD_SUBNET and APISERVER_NAME \033[0m"
  echo "Current POD_SUBNET=$POD_SUBNET"
  echo "Current APISERVER_NAME=$APISERVER_NAME"
  exit 1
fi


# see the full configuration options at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF

# kubeadm init
# Depending on the speed of the server, you will need to wait 3 to 10 minutes
kubeadm init --config=kubeadm-config.yaml --upload-certs

# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config

# Install the Calico network plugin
# Reference: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Install calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml

Grant the execution permission to the init_master.sh script file and execute the script.

3. View the initialization result of the Master node

(1) Ensure that all container groups are in the Running state

# Run the following command and wait 3 to 10 minutes until all container groups are in the Running state
watch kubectl get pod -n kube-system -o wide

The execution is shown below.

[root@binghe101 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide      binghe101: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66     binghe101   <none>           <none>
calico-node-fnv8g                          1/1     Running   0          118s   192.168.175.101   binghe101   <none>           <none>
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67     binghe101   <none>           <none>
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65     binghe101   <none>           <none>
etcd-binghe101                             1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>
kube-apiserver-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>
kube-controller-manager-binghe101          1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.175.101   binghe101   <none>           <none>
kube-scheduler-binghe101                   1/1     Running   0          2m7s   192.168.175.101   binghe101   <none>           <none>

(2) View the initialization result of the Master node

kubectl get nodes -o wide

The execution is shown below.

[root@binghe101 ~]# kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
binghe101   Ready    master   3m28s   v1.18.2   192.168.175.101   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8

Initialize the Worker node

1. Obtain parameters of the join command

Run the following command on the Master node (binghe101 server) to obtain the join command parameters.

kubeadm token create --print-join-command

The execution is shown below.

[root@binghe101 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

The output contains the following line.

kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 

This line is the join command that was obtained.

Note: The token in the join command is valid for 2 hours; within those 2 hours, you can use the token to initialize any number of worker nodes.
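If you need a longer window, kubeadm can mint a token with a custom TTL, for example:

# Create a join token valid for 24 hours and print the matching join command
kubeadm token create --ttl 24h --print-join-command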

2. Initialize the Worker node

Execute on all worker nodes; in this case, on the binghe102 and binghe103 servers.

Run the following commands respectively.

# execute only on worker nodes
# 192.168.175.101 is the internal IP address of the master node
export MASTER_IP=192.168.175.101
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

# Replace the following with the join command output by kubeadm token create on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d 
Copy the code

The execution is shown below.

[root@binghe102 ~]# export MASTER_IP=192.168.175.101
[root@binghe102 ~]# export APISERVER_NAME=k8s.master
[root@binghe102 ~]# echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
[root@binghe102 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Copy the code

The output shows that the Worker node has joined the K8S cluster.

The kubeadm join … line is the output of running kubeadm token create --print-join-command on the Master node.

3. View the initialization result

Run the following command on the Master node (binghe101 server) to view the initialization result.

kubectl get nodes -o wide

The execution is shown below.

[root@binghe101 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
binghe101   Ready    master   20m     v1.18.2
binghe102   Ready    <none>   2m46s   v1.18.2
binghe103   Ready    <none>   2m46s   v1.18.2

Note: Appending the -o wide parameter to the kubectl get nodes command outputs more information.

Restarting the K8S cluster

1. The Worker node fails to start

If the IP address of the Master node changes, the worker nodes will fail to start. You then need to reinstall the K8S cluster and ensure that all nodes have fixed internal IP addresses.

2. The Pod crashes or cannot be accessed

After restarting the server, run the following command to check the running status of the Pods.

kubectl get pods --all-namespaces

If many pods are not in the Running state, run the following command to delete abnormal pods.

kubectl delete pod <pod-name> -n <pod-namespace>

Note: If the Pod was created by a Deployment, StatefulSet, or other controller, K8S will create a new Pod as a replacement, and the restarted Pod usually works fine.
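If there are many such Pods, the cleanup can be scripted; the following is a minimal sketch that deletes every Pod whose phase is not Running (review the list before relying on it):

# List all pods not in the Running phase, then delete them one by one
kubectl get pods --all-namespaces --field-selector=status.phase!=Running \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers \
| while read ns name; do
    kubectl delete pod "$name" -n "$ns"
  done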

Install ingress-nginx on K8S

Note: Execute on the Master node (the binghe101 server).

1. Create the ingress-nginx namespace

Create the ingress-nginx-namespace.yaml file as follows.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

Run the following command to create the ingress-nginx namespace.

kubectl apply -f ingress-nginx-namespace.yaml
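Optionally, confirm that the namespace exists:

kubectl get namespace ingress-nginx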

2. Install the ingress controller

Create the ingress-nginx-mandatory.yaml file as follows.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

Run the following command to install the Ingress Controller.

kubectl apply -f ingress-nginx-mandatory.yaml

3. Install K8S SVC: ingress-nginx

This Service is mainly used to expose the nginx-ingress-controller Pod.

Create the service-nodeport.yaml file as follows.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Run the following command to install it.

kubectl apply -f service-nodeport.yaml

4. Access K8S SVC: ingress-nginx

Check the Pods in the ingress-nginx namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h

On the command line of the Master node, run the following command to view the port mapping of ingress-nginx.

kubectl get svc -n ingress-nginx 

The details are as follows.

[root@binghe101 k8s]# kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s

So, ingress-nginx can be accessed using the IP address of the Master node (binghe101 server) and port number 30080, as shown below.

[root@binghe101 k8s]# curl 192.168.175.101:30080       
default backend - 404

You can also open http://192.168.175.101:30080 in a browser to access ingress-nginx.