Initializing the system
Upgrade the kernel: www.chenmx.net/?p=208
```shell
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0                                        # temporary

# Disable swap
swapoff -a                                          # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab                 # permanent

# Set the hostname according to plan
hostnamectl set-hostname <hostname>

# Add hosts
cat >> /etc/hosts << EOF
82.156.215.56 k8s-master    # k8s-master -> hostname
124.71.156.166 k8s-node
EOF

# Load the ipvs kernel modules
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack   # if the kernel version is less than 4.19, use nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

# Set the system time zone (cloud servers in China generally do not need this)
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
```
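The fstab step above disables swap permanently with sed; since the expression is easy to misread, here is a sketch of what `sed -ri 's/.*swap.*/#&/'` does, run against a scratch copy rather than the real /etc/fstab (the sample entries are made up):

```shell
# Demonstration on a scratch copy (not the real /etc/fstab): the pattern
# matches any line containing "swap", and `#&` rewrites it as "#" followed
# by the whole matched line, i.e. it comments the entry out.
tmpfstab=$(mktemp)
cat > "$tmpfstab" << 'EOF'
/dev/vda1 / ext4 defaults 0 1
/dev/vda2 swap swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$tmpfstab"
grep '^#' "$tmpfstab"
# → #/dev/vda2 swap swap defaults 0 0
rm -f "$tmpfstab"
```

Only the swap line gets commented; the root filesystem entry is left untouched.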
Add the Aliyun YUM software source
```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install `kubeadm`, `kubelet` and `kubectl`
```shell
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
systemctl enable kubelet

# Verify the installed versions
kubeadm version
kubelet --version
```
Install Docker and prepare the images
- Images required by each node
```shell
#!/bin/bash
# pullimages.sh: pull the v1.21.0 images from the Aliyun mirror and retag them as k8s.gcr.io
ver=v1.21.0
registry=registry.cn-hangzhou.aliyuncs.com/google_containers
images=`kubeadm config images list --kubernetes-version=$ver | awk -F '/' '{print $2}'`

for image in $images
do
  if [ $image != coredns ]; then
    docker pull ${registry}/$image
    if [ $? -eq 0 ]; then
      docker tag ${registry}/$image k8s.gcr.io/$image
      docker rmi ${registry}/$image
    else
      echo "ERROR: error downloading image $image"
    fi
  else
    docker pull coredns/coredns:1.8.0
    docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
    docker rmi coredns/coredns:1.8.0
  fi
done
```

```shell
chmod +x pullimages.sh && ./pullimages.sh
```
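A note on why the script special-cases coredns: `kubeadm config images list` prints full image paths, and the `awk -F '/' '{print $2}'` filter keeps only the second `/`-separated field, which for the nested `coredns/coredns` path is the bare repository name with no tag. A runnable sketch of just that filter (the sample lines mimic the real output, so kubeadm itself is not needed here):

```shell
# Feed the awk filter a sample of what `kubeadm config images list` prints,
# to show what ends up in $images.
printf '%s\n' \
  'k8s.gcr.io/kube-apiserver:v1.21.0' \
  'k8s.gcr.io/kube-proxy:v1.21.0' \
  'k8s.gcr.io/coredns/coredns:v1.8.0' \
| awk -F '/' '{print $2}'
# → kube-apiserver:v1.21.0
# → kube-proxy:v1.21.0
# → coredns        (no tag, hence the special case in the loop)
```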
- Install and configure Docker
https://www.chenmx.net/?p=31
Create a virtual network card (all nodes)
```shell
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 << EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=82.156.215.56   # your public IP address
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF

# Restart the network
systemctl restart network
```
Modify the `kubelet` startup parameters
```shell
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
```

```
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=82.156.215.56
```

After saving, reload systemd so the change takes effect: `systemctl daemon-reload && systemctl restart kubelet`.
Use `kubeadm` to initialize the master node
```shell
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
apiServer:
  certSANs:
  - k8s-master        # please replace with your hostname
  - 82.156.215.56     # please replace with your public IP
  - 10.96.0.1         # do not replace; this IP is the API cluster address
controlPlaneEndpoint: 82.156.215.56:6443    # public IP
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
--- # enable ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF

# init: if the machine has only 1 core or 1 GB of memory, append --ignore-preflight-errors=all
kubeadm init --config=kubeadm-config.yaml

# After init succeeds, perform the following operations as instructed by its output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Worker nodes join the cluster with:
kubeadm join 82.156.215.56:6443 --token kias9b.0sngusi94r8bh6f6 \
    --discovery-token-ca-cert-hash sha256:09868a449efed1bb017d3a7b6e7fc3386feac6fcbc076350e5868ff1fc5be3f5
```
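Why the 10.96.0.1 entry in certSANs "must not be replaced": the first usable address of the serviceSubnet (10.96.0.0/12) is reserved for the in-cluster `kubernetes` Service that fronts the API server, so the certificate has to be valid for it. A minimal sketch of the derivation, in pure shell arithmetic and assuming the default subnet:

```shell
cidr=10.96.0.0/12
base=${cidr%/*}                            # network address: 10.96.0.0
first="${base%.*}.$(( ${base##*.} + 1 ))"  # first usable host: +1 on the last octet
echo "$first"
# → 10.96.0.1
```

Change this entry only if you also change serviceSubnet.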
If you hit an IP forwarding error here, running `sysctl -w net.ipv4.ip_forward=1` solves it.
Modify the `kube-apiserver` parameters (master node)
```shell
vim /etc/kubernetes/manifests/kube-apiserver.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 82.156.215.56:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=82.156.218.219
    - --bind-address=0.0.0.0
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.21.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 82.156.215.56
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 82.156.215.56
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 82.156.215.56
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
```
Adding a Node
```shell
# Run on each worker node
kubeadm join 82.156.215.56:6443 --token kias9b.0sngusi94r8bh6f6 \
    --discovery-token-ca-cert-hash sha256:09868a449efed1bb017d3a7b6e7fc3386feac6fcbc076350e5868ff1fc5be3f5

# On the master node, check that the node has joined
kubectl get nodes -o wide

# The default validity period of the token is 24 hours. After it expires,
# the token is unavailable; generate a fresh join command with:
kubeadm token create --print-join-command
```
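If the join command has been lost but the token is still valid, the `--discovery-token-ca-cert-hash` value can also be recomputed by hashing the public key of the cluster CA. The pipeline below is the standard openssl recipe; here it runs against a throwaway self-signed certificate so it is runnable anywhere, whereas on a real master you would point it at `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a throwaway CA (stand-in for /etc/kubernetes/pki/ca.crt)
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# SHA-256 of the DER-encoded public key: the value after "sha256:" in the join command
openssl x509 -pubkey -in "$crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

rm -f "$key" "$crt"
```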
Modify the `flannel` configuration file and install it (master node)
```shell
# Download the manifest:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Two changes in total: one in args (add --public-ip=$(PUBLIC_IP) and --iface=eth0),
# the other in env (add PUBLIC_IP, taken from status.podIP).
```

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --public-ip=$(PUBLIC_IP)
        - --iface=eth0
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: PUBLIC_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```

```shell
# Install and check
kubectl apply -f kube-flannel.yml
kubectl get pods -o wide --all-namespaces
```
Manually edit the configuration to enable `ipvs` forwarding mode (master node)
```shell
kubectl edit configmaps -n kube-system kube-proxy
```

```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    ...
```
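After saving the edit, the running kube-proxy pods are still using the old configuration; deleting them lets the DaemonSet recreate them with ipvs mode active. Below is a runnable sketch of just the name-extraction step against canned `kubectl get pod` output (the pod name suffixes are made up), since the full pipeline only works on the master:

```shell
# Extract the first column (pod names) from sample kube-proxy lines,
# as `awk '{print $1}'` would do on real `kubectl get pod` output.
printf '%s\n' \
  'kube-proxy-abcde   1/1   Running   0   5m' \
  'kube-proxy-fghij   1/1   Running   0   5m' \
| awk '{print $1}'
# → kube-proxy-abcde
# → kube-proxy-fghij
```

On the master the whole pipeline would be: `kubectl get pod -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl -n kube-system delete pod`.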
If you are willing to spend money on an SLB, or if you are working in a local intranet environment, all of this becomes very simple. But alas, being poor…