In this series of articles, we will build a K8s environment on Ubuntu Server 18.04. To get native Ubuntu Server 18.04 machines to work with, we will use multipass to create several Ubuntu Server 18.04 virtual machines. That is to say, if you want to follow this series completely on your own computer, multipass must be installed and running normally. If you want to learn the basic operation of multipass, you can refer to another blog post I wrote: blog.jkdev.cn/index.php/a… .

This article demonstrates the steps to build a K8s cluster and does not cover K8s fundamentals. Some of the technical terms in the article may be unfamiliar, but that does not matter; we will introduce K8s concepts step by step in the articles that follow.

This time we will deploy a cluster of one master node (master1) and two worker nodes (worker1 and worker2). To save computer resources, each of the master1, worker1, and worker2 nodes is allocated 2 CPUs, 2 GB of memory, and a 10 GB hard disk. This is the minimum configuration required by K8s, but it is more than enough for study purposes. All related operations are performed as the root user.

1. Prepare the environment

1. Create an Ubuntu Server 18.04 VM

Create three VMs named master1, worker1, and worker2, each with 2 CPU cores, a 10 GB hard disk, and 2 GB of memory

multipass launch -c 2 -d 10g -m 2g -n master1 18.04
multipass launch -c 2 -d 10g -m 2g -n worker1 18.04
multipass launch -c 2 -d 10g -m 2g -n worker2 18.04

Run the multipass list command to query the created VM list

pan@pandeMacBook-Pro ~ % multipass list
Name                    State             IPv4             Image
master1                 Running           192.168.64.8     Ubuntu 18.04 LTS
worker1                 Running           192.168.64.11    Ubuntu 18.04 LTS
worker2                 Running           192.168.64.12    Ubuntu 18.04 LTS

After the VMs are created, perform the following initialization steps on all three hosts.

2. Change the password of user root

To make it convenient to operate as root, we change the root password on each VM. Taking master1 as an example:

# Enter the host
multipass shell master1
# Change the root password to 123456
sudo passwd root
# After changing the password, run su to switch to the root user
su

3. Disable the firewall and iptables

According to the official documentation, the firewall and iptables rules may interfere with the K8s cluster, so we need to disable them

# Disable the firewall
ufw disable
# Reset the iptables policies to ACCEPT and flush all rules
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
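
If you want to double-check the result (optional), ufw should report that it is inactive, and the iptables chains should be empty with ACCEPT policies:

# Optional sanity check
ufw status
iptables -L -n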

2. Install docker and kubeadm

For most of the following installation steps, we use Ali Cloud mirrors to speed up the downloads.

1. Install and configure docker

Install docker

# Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Write the software source information
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Update the package index
apt-get -y update
# Install a fixed docker-ce version (available versions: apt-cache madison docker-ce)
apt-get -y install docker-ce=5:19.03.15~3-0~ubuntu-bionic
# Pin the version
apt-mark hold docker-ce

Set up the Ali Cloud accelerated registry mirror for docker

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://g6ogy192.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"] 
}
EOF
systemctl daemon-reload
systemctl restart docker
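
To confirm that docker picked up the new configuration (optional), the cgroup driver it reports should now be systemd:

# Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"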

2. Install kubeadm, kubelet, and kubectl

# Download the GPG key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# Add the K8s image source
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# Update the package index
apt-get update
# Install the packages
apt-get install -y kubelet=1.18.0-00 kubeadm=1.18.0-00 kubectl=1.18.0-00
# Pin the versions
apt-mark hold kubelet kubeadm kubectl
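
As a quick optional check that the pinned versions were installed, all three commands below should report v1.18.0:

# Verify the installed versions
kubeadm version -o short
kubelet --version
kubectl version --client --short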

3. Prepare the cluster images

Run the kubeadm config images list command to view the images required by the current cluster. The image versions are determined by the kubeadm version

k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

We pull the images with docker, but k8s.gcr.io is not reliably reachable from mainland China, so we pull the equivalents from the Ali Cloud mirror registry.aliyuncs.com/google_containers instead. Execute the following commands

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.20
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.20
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.20
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.18.20
docker pull registry.aliyuncs.com/google_containers/pause:3.2
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:1.6.7

Next, retag the images so their names match the image names kubeadm expects

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.20 k8s.gcr.io/kube-apiserver:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.20 k8s.gcr.io/kube-controller-manager:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.20 k8s.gcr.io/kube-scheduler:v1.18.20
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.20 k8s.gcr.io/kube-proxy:v1.18.20
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

Then delete the images tagged with the Ali Cloud registry name (the k8s.gcr.io tags remain)

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.20
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.20
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.20
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.18.20
docker rmi registry.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi registry.aliyuncs.com/google_containers/coredns:1.6.7
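
If typing all of the above is tedious, here is a small shell loop sketch that performs the same pull, retag, and delete steps for every image (the names and versions are the ones from the kubeadm output above):

# Pull each image from the Ali Cloud mirror, retag it as k8s.gcr.io, then drop the mirror tag
MIRROR=registry.aliyuncs.com/google_containers
for IMG in kube-apiserver:v1.18.20 kube-controller-manager:v1.18.20 \
           kube-scheduler:v1.18.20 kube-proxy:v1.18.20 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  docker pull ${MIRROR}/${IMG}
  docker tag ${MIRROR}/${IMG} k8s.gcr.io/${IMG}
  docker rmi ${MIRROR}/${IMG}
done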

3. Initialize the K8s cluster

1. Initialize the master node

Execute the initialization command on the master node

kubeadm init --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap

Parameter descriptions

  • --service-cidr: the network segment used by Services (svc) in K8s
  • --pod-network-cidr: the network segment used by pods in K8s; 10.244.0.0/16 matches the default configuration of the flannel network plug-in we install below
  • --ignore-preflight-errors: ignore swap-related preflight errors

The initialization result is as follows

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.64.8:6443 --token mzolyd.fgbta1hw9s9yml55 \
    --discovery-token-ca-cert-hash sha256:21ffa3a184bb6ed36306b483723c37169753f9913e645dc4f88bb12afcebc9dd

2. Configure the cluster network

According to the initialization output, we need to install a network plug-in; we use flannel as the cluster network. Save the flannel configuration file from the Internet onto master1 (the file address is raw.githubusercontent.com/flannel-io/… ), name the file kube-flannel.yml, and run the following command on master1

kubectl apply -f kube-flannel.yml

If the following information is displayed, the network plug-in is successfully installed

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
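
It can take a moment for the flannel pods to start on every node. One way to watch them come up (assuming the app=flannel label used by the stock flannel manifest; adjust it if your copy differs):

# Watch the flannel daemonset pods until they are all Running (Ctrl-C to stop)
kubectl -n kube-system get pods -l app=flannel -w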

3. Enable a regular Linux user to operate the cluster

According to the initialization output, to let the regular Linux user on master1 operate the cluster, type exit and press Enter to switch back to the regular user, then run the following commands

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
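
If the kubeconfig was copied correctly, the regular user should now be able to query the cluster, for example:

$ kubectl cluster-info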

4. Initialize the cluster worker nodes

After initializing the cluster management node master1, we need to add the worker nodes worker1 and worker2 to the cluster. Copy the kubeadm join command from the initialization output and run it on worker1 and worker2 to join the worker nodes (worker1, worker2) to the management node (master1)

kubeadm join 192.168.64.8:6443 --token mzolyd.fgbta1hw9s9yml55 \
    --discovery-token-ca-cert-hash sha256:21ffa3a184bb6ed36306b483723c37169753f9913e645dc4f88bb12afcebc9dd
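
If you no longer have the join command at hand (the bootstrap token expires after 24 hours by default), you can generate a fresh one on master1:

# Prints a new, ready-to-run kubeadm join command
kubeadm token create --print-join-command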

After executing the join command on the worker nodes, run kubectl get pods --all-namespaces on the management node

root@master1:/home/ubuntu# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-pw9br          1/1     Running   0          13m
kube-system   coredns-66bff467f8-wsj45          1/1     Running   0          13m
kube-system   etcd-master1                      1/1     Running   0          14m
kube-system   kube-apiserver-master1            1/1     Running   0          14m
kube-system   kube-controller-manager-master1   1/1     Running   0          14m
kube-system   kube-flannel-ds-c4jnh             1/1     Running   0          3m39s
kube-system   kube-flannel-ds-rg58c             1/1     Running   0          3m14s
kube-system   kube-flannel-ds-sw85v             1/1     Running   0          3m15s
kube-system   kube-proxy-ddk88                  1/1     Running   0          3m15s
kube-system   kube-proxy-dt825                  1/1     Running   0          13m
kube-system   kube-proxy-jgm4h                  1/1     Running   0          3m14s
kube-system   kube-scheduler-master1            1/1     Running   0          14m
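
You can also confirm that all three nodes have registered with the cluster (a freshly joined node may take a minute or two to become Ready):

# All three nodes should eventually report STATUS "Ready"
kubectl get nodes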

Note that if some of the pods in your list are not yet in the Running state, just wait a while longer. Also, a stable network connection makes the cluster installation much smoother.

At this point, the cluster is set up. Starting with the next article, we will move on to the basics of K8s.

This article originally appeared on the WeChat subscription account: Geek developer up