I plan to move my own small project to K8S, so I'm learning it and taking notes along the way for easy reference later.

Environment preparation: CentOS 7.8, two machines with 2-core CPUs and 2 GB of memory each.
Node2: 192.168.157.137 serves as the master node.
Node3: 192.168.157.138 serves as the work node.
Make sure all machines in the cluster can ping each other, and that they can reach the external network (not strictly necessary if they can reach a local Docker registry).
Unless otherwise noted, the commands below are executed on all machines.

1. Disable swap:

vim /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0     <- comment out the swap line
echo vm.swappiness=0 >> /etc/sysctl.conf
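If you prefer not to edit /etc/fstab by hand, a minimal sketch of the same step (turn swap off now and keep it off after reboot) could look like this; the sed pattern assumes the default CentOS swap entry shown above:

swapoff -a                                             # turn swap off for the running system
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab       # comment out the swap entry so it stays off after reboot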

2. Disable the firewall and SELinux, configure the hosts file, and set up NTP time synchronization (a sketch of these commands follows this list).

3. Set bridge parameters:

vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system     # load the kernel (bridge) parameters

4. Install Docker:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo     # use the Aliyun Docker repo
yum -y install docker-ce-19.03.13

vim /etc/docker/daemon.json     # configure a registry mirror to speed up image downloads
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
# the USTC mirror also works: https://docker.mirrors.ustc.edu.cn/

systemctl enable docker
systemctl start docker

5. Add the Aliyun K8S yum source:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# I use Aliyun for the other yum sources as well:
cd /etc/yum.repos.d
wget http://mirrors.aliyun.com/repo/epel-7.repo
wget http://mirrors.aliyun.com/repo/Centos-7.repo

6. Install kubeadm, kubelet, and kubectl (1.21.2 is the latest release on GitHub):
# kubeadm: the tool that administers the cluster
# kubelet: starts pods and containers on each node
# kubectl: the K8S command-line client

yum -y install kubelet-1.20.8 kubeadm-1.20.8 kubectl-1.20.8
systemctl enable kubelet     # do not start kubelet manually; kubeadm init will start it automatically

7. Initialize the master node (run only on the master):

kubeadm init \
  --kubernetes-version=v1.20.8 \                                             # K8S version
  --apiserver-advertise-address=192.168.157.137 \                            # master IP address
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \   # Aliyun image repository
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16                                           # flannel's default network segment
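The commands for step 2 are not spelled out above; a minimal sketch, assuming firewalld, SELinux, and chronyd on CentOS 7 and the two node IPs used in this post, might be:

systemctl stop firewalld && systemctl disable firewalld                 # disable the firewall
setenforce 0                                                            # disable SELinux for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # keep it disabled after reboot
cat >> /etc/hosts <<EOF
192.168.157.137 node2
192.168.157.138 node3
EOF
yum -y install chrony && systemctl enable --now chronyd                 # NTP time synchronization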

To add a work node to the cluster, run the kubeadm join command printed at the end of kubeadm init (the second red arrow in the screenshot) on the work node.

# Copy the kubeconfig; the kubeadm init output also prompts you to do this
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

# Alternatively, kubeadm init can be driven by a config file:
kubeadm config print init-defaults > k8s-init.yml
kubeadm init --config=k8s-init.yml

# Run this on the work node to add it to the cluster:
kubeadm join 192.168.157.137:6443 --token tlkqrd.z2lrv6tgwdwyj5w4 \
    --discovery-token-ca-cert-hash sha256:334f3f80b5e2a0ca6a63209bad8a110ebd8988db12cde92fe98618e69ce90728

The final result is as follows; node3 is the work node.
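If the original join command has been lost or its token has expired, kubeadm can print a fresh one. This is a standard kubeadm subcommand; run it on the master:

kubeadm token create --print-join-command     # prints a new kubeadm join command with a valid token and CA cert hash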

At this point kubectl get nodes shows the nodes as NotReady, because no network plugin has been installed yet. Install flannel:

kubectl apply -f kube-flannel.yml     # create the flannel pods from the file
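I keep a local copy of kube-flannel.yml (pasted below). If you want to fetch it yourself, it is published in the flannel GitHub repository; the raw URL below is an assumption and may have moved between releases (the project now lives under the flannel-io organization):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml     # flannel manifest (check the flannel repo if the URL has moved)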

kubectl get namespaces               # list the namespaces
kubectl get pods -n kube-system      # check that the flannel pods are running
I'm pasting kube-flannel.yml here, or it'll be hard to find next time:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

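After applying the manifest, a couple of standard kubectl checks confirm that flannel is running and the nodes have switched to Ready:

kubectl get pods -n kube-system -o wide | grep flannel     # one kube-flannel-ds pod per node, all Running
kubectl get nodes                                          # node2 and node3 should now report Ready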
About high availability for the master: the usual approach is haproxy + keepalived. keepalived maintains a VIP across the multiple masters, and haproxy forwards incoming requests to the different master apiservers. For production environments a binary installation is still recommended; with TLS Bootstrapping, node certificates can be issued automatically.
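A minimal haproxy snippet for that setup might look like the following. The 16443 frontend port, the backend names, and the second master IP (192.168.157.139) are assumptions for illustration; keepalived would hold the VIP that kubeconfigs point at.

frontend k8s-apiserver
    bind *:16443                                  # port exposed on the VIP for kubeconfigs (assumed)
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 192.168.157.137:6443 check     # the master from this post
    server master2 192.168.157.139:6443 check     # hypothetical second master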

Reference: kubernetes.io/docs/ref…