Kubeadm is the tool officially provided by Kubernetes for quickly installing a Kubernetes cluster. It is released in step with each new Kubernetes version, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the official best practices for cluster configuration.
Kubeadm is currently in beta and is expected to reach GA in 2018; it is getting ever closer to being usable in a production environment.
Our production Kubernetes clusters are highly available clusters deployed from binaries with Ansible. Here we try kubeadm on Kubernetes 1.12 to follow the official best practices for cluster initialization and configuration, and to further refine our Ansible deployment scripts.
1. Prepare
1.1 System Configuration
Before the installation, make the following preparations. The two CentOS 7.4 hosts are as follows:
cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2
If the firewall is enabled on the hosts, the ports required by the Kubernetes components must be opened; see the "Check Required Ports" section of Installing kubeadm (a port-opening sketch follows the commands below). The simplest approach here is to disable the firewall on every node:
systemctl stop firewalld
systemctl disable firewalld
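Alternatively, if the firewall has to stay enabled, you can open the required ports instead of disabling firewalld. A sketch for the master node, with the port list taken from the "Check Required Ports" documentation referenced above (worker nodes only need 10250/tcp and the NodePort range 30000-32767/tcp):
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, kube-scheduler, kube-controller-manager
firewall-cmd --reload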
Disable SELINUX:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the /etc/sysctl.d/k8s.conf file and add the following information:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following command to make the modification take effect.
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
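The modprobe above does not survive a reboot. If you want br_netfilter loaded automatically at boot, one option is to register it with systemd-modules-load; a minimal sketch (the file name is our own choice):
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF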
1.2 Installing Docker
Kubernetes has supported the Container Runtime Interface (CRI) since 1.6. The default container runtime is still Docker, used through the dockershim CRI implementation built into kubelet.
Add Docker's yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Check the Docker versions available in the repository:
yum list docker-ce.x86_64 --showduplicates |sort -r
docker-ce.x86_64     18.06.1.ce-3.el7            docker-ce-stable
docker-ce.x86_64     18.06.0.ce-3.el7            docker-ce-stable
docker-ce.x86_64     18.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     18.03.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.12.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.12.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.09.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.09.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.06.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.06.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.06.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.03.3.ce-1.el7            docker-ce-stable
docker-ce.x86_64     17.03.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64     17.03.0.ce-1.el7.centos     docker-ce-stable
Kubernetes 1.12 has been verified against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06, etc. Note that Kubernetes 1.12 supports a minimum Docker version of 1.11.1. Here we install docker version 18.06.1 on each node.
yum makecache fast
yum install -y --setopt=obsoletes=0 \
docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Since version 1.13, Docker has changed its default firewall rules and sets the FORWARD chain of the iptables filter table to DROP, which breaks pod communication across nodes in a Kubernetes cluster. However, with Docker 18.06 installed here, the default policy is back to ACCEPT. It is not clear in which version this was changed back; the 17.06 running in our online environment still needs the policy adjusted manually.
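For older Docker versions that still reset the policy to DROP, one common workaround is to force it back to ACCEPT after Docker starts; a sketch using a systemd drop-in (the file name is our own choice, not from the Docker or Kubernetes docs):
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/forward-accept.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload && systemctl restart docker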
2. Deploy Kubernetes using kubeadm
2.1 Installing kubeadm and kubelet
Install kubeadm and kubelet on each node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Test whether the address https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or some other way around the network restrictions:
curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
kubeadm.x86_64 0:1.12.0-0      kubectl.x86_64 0:1.12.0-0      kubelet.x86_64 0:1.12.0-0
Dependency Installed:
cri-tools.x86_64 0:1.11.1-0    kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7
Three dependencies were installed as well: cri-tools, kubernetes-cni and socat.
The CNI plugin dependency kubernetes-cni was bumped to 0.6.0 back in Kubernetes 1.9 and is still at that version in 1.12.
socat is a dependency of kubelet.
cri-tools provides the command-line tools for working with the Container Runtime Interface (CRI).
Running kubelet --help shows that most of kubelet's command-line flags have been DEPRECATED, for example:
...
--address 0.0.0.0 The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
...
The recommended approach is to specify a configuration file with --config and move the settings of these flags into that file; see the official documents Set Kubelet parameters via a config file and Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format, as shown here.
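For reference, a minimal sketch of such a file (field names follow the kubelet.config.k8s.io/v1beta1 KubeletConfiguration type; the values are only illustrative):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0          # replaces the deprecated --address flag
failSwapOn: false         # corresponds to the --fail-swap-on flag discussed below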
Kubernetes 1.8 requires that the system Swap be disabled. If it is not disabled, kubelet will not start in default configuration.
You can disable the Swap function as follows:
swapoff -a
Edit /etc/fstab and comment out the swap mount line, then run free -m to confirm that swap is off. Also add the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness = 0
and run sysctl -p /etc/sysctl.d/k8s.conf to make the change take effect.
Because other services are also running on these two hosts for testing, disabling swap could affect them, so instead we modify kubelet's configuration to remove this restriction. In earlier Kubernetes versions we used kubelet's --fail-swap-on=false startup flag for this. As analyzed above, Kubernetes now recommends the configuration file over startup flags, so let's try to move this setting into the configuration file.
Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, we see the following:
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
The drop-in above shows that the kubelet deployed by kubeadm is started with --config=/var/lib/kubelet/config.yaml, but checking /var/lib/kubelet reveals that this config.yaml has not been created yet. Presumably it is generated when kubeadm initializes the cluster, which means the first cluster initialization would fail if we did not disable swap.
So for now we go back to kubelet's --fail-swap-on=false flag to remove the requirement that swap be disabled, by editing /etc/sysconfig/kubelet on each node:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
2.2 Using kubeadm init to Initialize a cluster
Enable the kubelet service to start on boot on each node:
systemctl enable kubelet.service
Next use kubeadm to initialize the cluster, select node1 as the Master Node, and execute the following command on node1:
kubeadm init \
  --kubernetes-version=v1.12.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11
Because we chose flannel as the pod network plug-in, the command above specifies --pod-network-cidr=10.244.0.0/16. The execution reported the following error:
Using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=... `
The error message says that running with swap on is not supported and asks us to disable swap. Since we have decided to leave swap on (failSwapOn: false), we add the --ignore-preflight-errors=Swap argument to ignore this error and run the command again:
kubeadm init \
  --kubernetes-version=v1.12.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11 \
--ignore-preflight-errors=Swap
Using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.503672 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: zalj3i.q831ehufqb98d1ic
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa
The completed initialization output is recorded above and basically shows the key steps required to manually initialize and install a Kubernetes cluster.
Among them are the following key elements:
- [kubelet] generated the kubelet configuration file /var/lib/kubelet/config.yaml
- [certificates] generated the various certificates and keys
- [kubeconfig] generated the kubeconfig files
- the commands for letting a regular user access the cluster with kubectl:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
- the command for joining additional nodes to the cluster:
  kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa
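Note that the bootstrap token printed above is only valid for 24 hours by default. If it expires before all nodes have joined, a fresh join command can be generated on the master; a sketch using kubeadm's token subcommand:
kubeadm token create --print-join-command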
Check the cluster status:
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
Verify that all components are in a healthy state.
If you encounter problems with cluster initialization, you can use the following command to clean up the problems:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
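kubeadm reset does not clean up iptables or IPVS rules; if those need to be cleared as well, kubeadm's own hint suggests something along these lines (use with care on hosts that run other services):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear   # only relevant if kube-proxy was running in IPVS mode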
2.3 Installing a Pod Network
Next install flannel Network Add-on:
mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Note that the kube-flannel.yml used here is the one shipped with flannel 0.10.0 and uses the image quay.io/coreos/flannel:v0.10.0-amd64.
If a node has more than one network interface, use the --iface parameter in kube-flannel.yml to specify the interface used for the cluster's internal network; otherwise DNS resolution inside the cluster may fail. In that case, download kube-flannel.yml locally and add --iface=<interface-name> to the flanneld startup arguments:
...
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.10.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1
...
Check the flannel DaemonSets:
kubectl get ds -l app=flannel -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds-amd64 0 0 0 0 0 beta.kubernetes.io/arch=amd64 17s
kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 17s
kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 17s
kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 17s
kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 17s
Looking at kube-flannel.yml together with this output: flannel's official deployment manifest creates five DaemonSets in the cluster, one per platform, and uses the node label beta.kubernetes.io/arch to start the flannel container on nodes of the matching architecture. The current node1 is labeled beta.kubernetes.io/arch=amd64, so the DESIRED count of the kube-flannel-ds-amd64 DaemonSet should be 1, yet the output above shows 0. Let's look at the definition of kube-flannel-ds-amd64 in kube-flannel.yml:
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
The nodeSelector and tolerations that govern scheduling of kube-flannel-ds-amd64 are configured correctly in kube-flannel.yml: its pods are scheduled to nodes carrying the label beta.kubernetes.io/arch: amd64 and they tolerate the node-role.kubernetes.io/master:NoSchedule taint. Based on previous deployment experience, the current master node node1 should satisfy both conditions, but does it? Let's look at the basic information about node1:
kubectl describe node node1
Name:               node1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 03 Oct 2018 09:03:04 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
It turns out node1 carries a second taint, node.kubernetes.io/not-ready:NoSchedule: a node that is not ready does not accept scheduling, and it will not become ready until a network plug-in has been deployed. To break this circle, edit kube-flannel.yml and add a toleration for node.kubernetes.io/not-ready:NoSchedule:
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoSchedule
Then apply the modified file again with kubectl apply -f kube-flannel.yml.
Use kubectl get pod --all-namespaces -o wide to confirm that all pods are in the Running state.
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m   10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m   10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m    192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m   192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m   192.168.61.11   node1   <none>
Presumably flannel's official kube-flannel.yml will add the node.kubernetes.io/not-ready:NoSchedule toleration before long; see https://github.com/coreos/flannel/issues/1044.
2.4 Master Node participating in workload
For a cluster initialized with kubeadm, pods are not scheduled to the master node for security reasons; in other words, the master node does not take part in the workload. This is because the master node node1 carries the node-role.kubernetes.io/master:NoSchedule taint:
kubectl describe node node1 | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
Since this is just a test environment, we remove this taint so that node1 also takes part in the workload:
kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted
2.5 Testing DNS
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$
Run nslookup kubernetes.default to check that the resolution is normal.
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.6 Adding Nodes to the Kubernetes Cluster
Next we add node2 to the Kubernetes cluster. Because we also removed the kubelet restriction that swap must be disabled on node2, we again need --ignore-preflight-errors=Swap. Execute on node2:
kubeadm join 192.168.61.11:6443 --token zalj3i.q831ehufqb98d1ic --discovery-token-ca-cert-hash sha256:6ee48b19ba61a2dda77f6b60687c5fd11072ab898cfdfef32a68821d1dbe8efa \
--ignore-preflight-errors=Swap
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
[WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Node2 is successfully added to the cluster. Run the following command on the master node to view the nodes in the cluster:
kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   26m   v1.12.0
node2   Ready    <none>   2m    v1.12.0
To remove node2 from the cluster, run the following command:
Execute on the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
Execute on node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
Execute on node1:
kubectl delete node node2
3. Deploying Common Components in Kubernetes
More and more companies and teams are using Helm as a package manager for Kubernetes, and we will also use Helm to install common components of Kubernetes.
3.1 Installing the Helm
Helm consists of the helm client command-line tool and the server-side component tiller. Installing the client is simple: download it and copy the binary to /usr/local/bin on the master node node1:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server-side tiller, the machine also needs kubectl and a kubeconfig file, so that kubectl can reach the apiserver and work properly there; node1 is already set up with kubectl.
Because the Kubernetes apiserver has RBAC access control enabled, we need to create a ServiceAccount named tiller for tiller and bind it to an appropriate role; see Role-based Access Control in the Helm documentation for details. For simplicity, we bind it directly to the cluster's built-in ClusterRole cluster-admin. Create the file rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Next deploy tiller using helm:
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
Tiller is deployed under the namespace kube-system in the K8S cluster by default:
kubectl get pod -n kube-system -l app=helm
NAME READY STATUS RESTARTS AGE
tiller-deploy-6f6fd74b68-kk2z9 1/1 Running 0 3m17s
helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If that is not available, you can point helm init at a tiller image in a private registry, e.g. helm init --service-account tiller --tiller-image <private-registry>/tiller:v2.11.0 --skip-refresh.
3.2 Deploying Nginx Ingress Using Helm
To make it easier to expose services in the cluster to the outside and access them from outside the cluster, we next use Helm to deploy Nginx Ingress on Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge nodes; for details on making the edge nodes highly available, see the earlier article on highly available Kubernetes Ingress edge nodes on bare metal. For simplicity, only one edge node is used here.
We will use node1(192.168.61.11) as the edge node and Label it:
kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled
kubectl get node
NAME    STATUS   ROLES         AGE   VERSION
node1   Ready    edge,master   46m   v1.12.0
node2   Ready    <none>        22m   v1.12.0
Write the values file ingress-nginx.yaml for the stable/nginx-ingress chart:
controller:
  service:
    externalIPs:
    - 192.168.61.11
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
helm repo update
helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx \
-f ingress-nginx.yaml
kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-ingress-controller-7577b57874-m4zkv        1/1     Running   0          9m13s   10.244.0.10   node1   <none>
nginx-ingress-default-backend-684f76869d-9jgtl   1/1     Running   0          9m13s   10.244.0.9    node1   <none>
If accessing http://192.168.61.11 returns the default backend's 404 response, the deployment is working:
curl http://192.168.61.11/
default backend - 404
3.3 Configuring a TLS Certificate in Kubernetes
HTTPS certificates are required when using Ingress to expose HTTPS services outside the cluster. Here, the certificate and key of *.frognew.com are configured into Kubernetes.
This certificate will be used later by the dashboard deployed in the kube-system namespace, so first create the secret for it in kube-system:
kubectl create secret tls frognew-com-tls-secret --cert=fullchain.pem --key=privkey.pem -n kube-system
secret/frognew-com-tls-secret created
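If you do not have a real wildcard certificate at hand, a self-signed one can be generated just for testing; a sketch with openssl (*.example.com is a placeholder, and browsers will of course warn about the certificate):
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=*.example.com"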
3.4 Deploying the Dashboard Using Helm
Write the values file kubernetes-dashboard.yaml:
ingress:
  enabled: true
  hosts:
  - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  tls:
  - secretName: frognew-com-tls-secret
    hosts:
    - k8s.frognew.com
rbac:
  clusterAdminRole: true
helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-tjj25 kubernetes.io/service-account-token 3 37s
kubectl describe -n kube-system secret/kubernetes-dashboard-token-tjj25
Name: kubernetes-dashboard-token-tjj25
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=kubernetes-dashboard
kubernetes.io/service-account.uid=d19029f0-9cac-11e8-8d94-080027db403a
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9 uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1 0b2tlbi10amoyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt 1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQxOTAyOWYwLTljYWMtMTFlOC04ZDk0LTA4MDAyN2RiNDAzYSIsInN 1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.w1HZrtBOhANdqSRLNs22z8dQWd5IOCpEl9VyWQ 6DUwhHfgpAlgdhEjTqH8TT0f4ftu_eSPnnUXWbsqTNDobnlxet6zVvZv1K-YmIO-o87yn2PGIrcRYWkb-ADWD6xUWzb0xOxu2834BFVC6T5p5_cKlyo5dwer dXGEMoz9OW0kYvRpKnx7E61lQmmacEeizq7hlIk9edP-ot5tCuIO_gxpf3ZaEHnspulceIRO_ltjxb8SvqnMglLfq6Bt54RpkUOFD1EKkgWuhlXJ8c9wJt_b iHdglJWpu57tvOasXtNWaIzTfBaTiJ3AJdMB_n0bQt5CKAUnKBhK09NP3R0Qtqog
Use the preceding token to log in to the dashboard in the login window.
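To avoid looking up the secret name by hand, the two commands above can be combined into a one-liner (this assumes the secret name pattern shown above):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')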
3.5 Deploying metrics-server Using Helm
As can be seen on Heapster's GitHub page, https://github.com/kubernetes/heapster, Heapster has been DEPRECATED. According to Heapster's deprecation timeline, it is retired from the various Kubernetes installation scripts starting with Kubernetes 1.12.
Kubernetes now recommends metrics-server (https://github.com/kubernetes-incubator/metrics-server) instead, so here we use Helm to deploy metrics-server as well.
metrics-server.yaml:
args:
- --logtostderr
- --kubelet-insecure-tls
helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml
After the deployment, the metrics-server log shows the following error:
E1003 05:46:13.757009       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:node1: unable to fetch metrics from Kubelet node1 (node1): Get https://node1:10250/stats/summary/: dial tcp: lookup node1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:node2: unable to fetch metrics from Kubelet node2 (node2): Get https://node2:10250/stats/summary/: dial tcp: lookup node2 on 10.96.0.10:53: read udp 10.244.1.6:45288->10.96.0.10:53: i/o timeout]
node1 and node2 are a stand-alone demo environment: their host names are defined only in the two nodes' /etc/hosts files and there is no DNS server on the internal network, so metrics-server cannot resolve the names node1 and node2. A simple workaround is to add the node host names to CoreDNS's Corefile with the hosts plugin, so that every pod in the cluster can resolve the node names through CoreDNS:
kubectl edit configmap coredns -n kube-system

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        hosts {
           192.168.61.11 node1
           192.168.61.12 node2
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
After modifying the configuration, restart coredns and metrics-server in the cluster and confirm that the error no longer appears in the metrics-server log. You can then run the following command to retrieve basic metrics for the cluster nodes:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
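The restart mentioned above can be done by deleting the coredns and metrics-server pods and letting their Deployments recreate them; a sketch (the label selectors are assumed from the kubeadm defaults and the stable/metrics-server chart and may need adjusting):
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system delete pod -l app=metrics-server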
Unfortunately, the Kubernetes dashboard does not yet support metrics-server, so if metrics-server is used instead of Heapster, the dashboard cannot graph pod CPU and memory usage. (This is not that important to us, since we already monitor the pods in our Kubernetes clusters with Prometheus and Grafana.) There is a lot of discussion about this on the dashboard's GitHub, e.g. https://github.com/kubernetes/dashboard/issues/3217 and https://github.com/kubernetes/dashboard/issues/3270, and the dashboard is expected to support metrics-server at some point in the future. Since metrics-server and the metrics pipeline are clearly the future direction of Kubernetes monitoring, we have switched to metrics-server in all of our environments.
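On the command line, the data served by metrics-server is available through kubectl top, for example:
kubectl top nodes
kubectl top pods --all-namespaces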
4. Summary
Docker images involved in this installation:
# kubernetes
k8s.gcr.io/kube-apiserver:v1.12.0
k8s.gcr.io/kube-controller-manager:v1.12.0
k8s.gcr.io/kube-scheduler:v1.12.0
k8s.gcr.io/kube-proxy:v1.12.0
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/pause:3.1

# network and dns
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/coredns:1.2.2

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.11.0

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
k8s.gcr.io/defaultbackend:1.4

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
gcr.io/google_containers/metrics-server-amd64:v0.3.0
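If the nodes cannot reach k8s.gcr.io directly, the control-plane images in the list above can be pre-pulled (or pulled from a mirror and re-tagged) before running kubeadm init. kubeadm provides a helper for this, as mentioned in its preflight output earlier:
kubeadm config images list --kubernetes-version v1.12.0
kubeadm config images pull --kubernetes-version v1.12.0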
References

- Installing kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/
- Using kubeadm to Create a Cluster: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
- Get Docker CE for CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/
Original post

- Author: Frog White
- https://blog.frognew.com/2018/10/kubeadm-install-kubernetes-1.12.html