Preface

I wanted to try out Kubernetes, but my cloud servers cannot reach each other over their internal network. The installation took me a long time, so I am writing it down here.

Install Docker

yum update -y
## Install docker
yum install -y docker
## Configure a domestic registry mirror
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
# Enable docker to start on boot
systemctl enable docker
systemctl start docker
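A stray character in daemon.json (such as a `.` where a `,` belongs in the mirror list) will stop dockerd from starting at all, so it is worth validating the file before restarting. A small sketch of my own, assuming python3 is on the host; it works on a temp copy here, and on the real machine you would install the validated file:

```shell
# Write the mirror config to a temp file and validate it as JSON before
# installing it; python3 -m json.tool exits non-zero on malformed input.
TMP=$(mktemp)
cat <<'EOF' > "$TMP"
{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
if python3 -m json.tool "$TMP" > /dev/null; then
  echo "daemon.json OK"
  # cp "$TMP" /etc/docker/daemon.json && systemctl restart docker
fi
```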

Install kubeadm, kubelet, and kubectl

## Replace the Kubernetes yum source with a domestic mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# start kubelet
systemctl enable --now kubelet
# Load the br_netfilter module (use lsmod to confirm it is loaded)
yum install -y bridge-utils.x86_64
modprobe  br_netfilter 
# Let bridged traffic traverse iptables
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  
# disable firewall
systemctl disable firewalld 
# Kubernetes requires swap to be disabled
swapoff -a && sysctl -w vm.swappiness=0  
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  
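Before moving on, it is handy to confirm that the steps above actually took effect. A convenience sketch of my own (not from any official docs); each line prints OK or FAIL:

```shell
# Preflight check for the prep steps: swap off, br_netfilter loaded,
# bridge sysctl set, SELinux not enforcing, kubelet enabled.
check() {
  desc=$1; shift
  if "$@" > /dev/null 2>&1; then echo "OK   $desc"; else echo "FAIL $desc"; fi
}
check "swap is off"             sh -c '! swapon --summary 2>/dev/null | grep -q .'
check "br_netfilter loaded"     grep -qw br_netfilter /proc/modules
check "bridge-nf-call-iptables" grep -qx 1 /proc/sys/net/bridge/bridge-nf-call-iptables
check "SELinux not enforcing"   sh -c '[ "$(getenforce 2>/dev/null)" != Enforcing ]'
check "kubelet enabled"         systemctl is-enabled kubelet
```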

That completes the preparations.

Solving the internal network connectivity problem

(Here is how I approached the problem. Most of it came from searching Baidu; the actual installation steps are further below.)

The first thing I did was search Baidu, where I found this article:

K8S pits encountered by two cloud servers on different network segments and solutions

That article did not solve the Flannel networking problem, because the IP Flannel records for each node is its internal address.

[root@instance-l33lcu5d ~]#kubectl describe node instance-l33lcu5d
Name:               instance-l33lcu5d
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=instance-l33lcu5d
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"0a:20:4b:67:0c:1c"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.48.4
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock

From the output of the preceding command, you can see that Flannel sets the public-ip annotation to the node's internal address. Even though I set up forwarding, pods in the cluster still could not communicate.

Flannel’s documentation has the answer to exactly this question: how can a Kubernetes cluster be built from nodes behind different NAT devices?

flannel github

It took me a while to find out how to set an external IP for a node, but then I thought: why not just add the annotation myself?

kubectl annotate nodes instance-l33lcu5d flannel.alpha.coreos.com/public-ip-overwrite=180.76.155.95
kubectl annotate nodes instance-jak59008 flannel.alpha.coreos.com/public-ip-overwrite=106.12.206.160
kubectl annotate nodes instance-8x864u54 flannel.alpha.coreos.com/public-ip-overwrite=106.13.82.225

Sure enough, this annotation changed Flannel's public-ip. Let's try a deployment: kubectl apply -f ingress-nginx.yml

The ingress-nginx pod got IP 10.244.3.3 and was scheduled on instance-8x864u54. From the master, instance-l33lcu5d, test that the pod IP is reachable within the cluster.

Check the ingress service here; the exposed NodePort is 32626.

Go to the browser and test it

Perfect: a service exposed via NodePort reaches the pod correctly.

However, because the nodes' internal networks cannot reach each other, kubectl exec will have problems reaching the container unless NAT forwarding is added in iptables, as described in the first article above.
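For reference, the kind of iptables rule that article adds looks roughly like this. A sketch of my own that only prints the command (the internal/public IP pair shown is from my cluster; swap in your own), so you can review it before piping to sh:

```shell
# Emit a DNAT rule that rewrites traffic aimed at a peer's internal IP to
# its public IP, so cross-node traffic can cross the NAT boundary.
dnat_rule() {
  internal=$1 public=$2
  echo "iptables -t nat -A OUTPUT -d $internal -j DNAT --to-destination $public"
}
dnat_rule 192.168.48.4 180.76.155.95
# Apply by piping to sh once the output looks right:
#   dnat_rule 192.168.48.4 180.76.155.95 | sh
```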

But you can also log in to the node and run docker exec directly, so this is not a blocker.

Installation

The installation follows the official documentation.

Because the internal networks cannot reach each other, I chose the external IP for kubeadm init. But these are Baidu Cloud servers, so no network interface actually carries the external IP.

kubeadm init --kubernetes-version=v1.17.0 \
  --image-repository registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=[public IP] \
  --pod-network-cidr=10.244.0.0/16

Because no interface carries the external IP we advertised, initialization hits some problems. Kubernetes writes its configuration files as it installs, so at this point we edit the etcd manifest:

vim /etc/kubernetes/manifests/etcd.yml

Only two places need to change: etcd's two --listen-* URLs, which kubeadm generated with the public IP that the host cannot bind.
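Concretely, my assumption of the edit: --listen-client-urls and --listen-peer-urls carry the public IP, which etcd cannot bind, so point them at an address the host actually owns (0.0.0.0 here). A sketch against a stand-in temp copy; on the real host you would edit /etc/kubernetes/manifests/etcd.yml in place:

```shell
PUBLIC_IP=180.76.155.95   # the advertised public IP (mine; use your own)
MANIFEST=$(mktemp)
# Stand-in for the relevant lines of the generated etcd manifest:
cat <<EOF > "$MANIFEST"
    - --listen-client-urls=https://$PUBLIC_IP:2379
    - --listen-peer-urls=https://$PUBLIC_IP:2380
EOF
# Rebind both listen URLs to an address the host actually has.
sed -i "s#https://$PUBLIC_IP:#https://0.0.0.0:#g" "$MANIFEST"
cat "$MANIFEST"
```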

After initialization succeeds, take the kubeadm join command and token it prints and run it on each node to join the cluster.

View nodes after joining the cluster.

Add the public-ip-overwrite annotation to each node so that Flannel registers the subnets correctly:

kubectl annotate nodes instance-l33lcu5d flannel.alpha.coreos.com/public-ip-overwrite=180.76.155.95
kubectl annotate nodes instance-jak59008 flannel.alpha.coreos.com/public-ip-overwrite=106.12.206.160
kubectl annotate nodes instance-8x864u54 flannel.alpha.coreos.com/public-ip-overwrite=106.13.82.225
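The annotate commands above can be driven from one loop. A sketch using my node names and IPs (replace with yours); with DRY_RUN=1, the default here, it only prints the commands:

```shell
DRY_RUN=${DRY_RUN:-1}
# node -> public IP pairs; edit to match your cluster
while read -r node ip; do
  cmd="kubectl annotate nodes $node flannel.alpha.coreos.com/public-ip-overwrite=$ip --overwrite"
  if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd; fi
done <<'EOF'
instance-l33lcu5d 180.76.155.95
instance-jak59008 106.12.206.160
instance-8x864u54 106.13.82.225
EOF
```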

Install the Flannel plugin

 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install ingress-nginx

The ingress-nginx image is hard to pull from here, but we can fetch it through a mirror with docker first and retag it:

docker pull quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
docker tag quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2

Then install it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.2/deploy/static/provider/baremetal/service-nodeport.yaml

Install dashboard and configure HTTPS access

Install the dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc1/aio/deploy/recommended.yaml

Configure HTTPS and use ingress for forwarding

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sudooom-dashboard
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # mandatory, because the dashboard only serves HTTPS
spec:
  rules:
    - host: dashboard.sudooom.com
      http:
        paths:
        - path: /
          backend:
            serviceName: kubernetes-dashboard
            servicePort: 443

Configure DNS resolution; I added a record for dashboard.sudooom.com.
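If you don't want a real DNS record, a hosts entry on the machine you browse from works for testing (the IP here is my master's public address; use one of your own node IPs):

```
# /etc/hosts
180.76.155.95   dashboard.sudooom.com
```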

Access through a browser

Since the other accounts have no permissions, I configured a highly privileged account for the dashboard:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system


---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl apply -f dashboard_auth.yml

Query the token of the account

kubectl -n kube-system describe secret `kubectl describe sa admin-user -n kube-system|grep 'Mountable secrets'|awk '{print $3}'`
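The backquoted inner command just scrapes the secret name out of the `kubectl describe sa` output; that text-processing part can be checked against a sample line (the helper name is my own):

```shell
# Extract the mountable secret name from `kubectl describe sa` output.
sa_secret_name() {
  grep 'Mountable secrets' | awk '{print $3}'
}
# Live usage:
#   kubectl -n kube-system describe sa admin-user | sa_secret_name
```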

Enter the token to log in