There are many ways to set up Kubernetes. To summarize the common deployment methods on the Internet:

  • Binary package deployment: very difficult. Besides compiling each Kubernetes component into a binary, you are also responsible for writing configuration files for those binaries, setting up self-starting scripts, configuring authorization files for kube-apiserver, and much other operations work. The automation tools commonly used for this, such as SaltStack and Ansible, can cost more to learn than Kubernetes itself.

  • Minikube or kubeadm deployment: the officially recommended methods, which simplify deployment, operation, and maintenance. Kubeadm runs kubelet directly on the host and then deploys the other Kubernetes components in containers.

  • Other solutions: Rancher, KubeSphere, and KubeOperator. These greatly reduce the difficulty of deployment, operation, maintenance, and day-to-day use, and all have detailed official documentation.

I recommend trying kubeadm first and moving on to other tools afterwards. Because kubeadm runs kubelet directly on the host and deploys the other Kubernetes components in containers, you come away from the installation roughly familiar with each component.

Let’s talk about how Kubeadm works

When Kubernetes is deployed, each of its components is a separate binary that has to be executed, so it is easy to imagine copying these binaries onto every machine and writing control scripts to start and stop them. But with container technology in mind, could we instead deploy Kubernetes itself in containers?

We would just need to build a container image for each Kubernetes component, and then start these component containers on each host with the docker run command.
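
As a sketch of that idea (purely illustrative; the image tag and flags here are assumptions, not a working configuration):

$ docker run -d --name kube-apiserver \
    k8s.gcr.io/kube-apiserver:v1.11.3 \
    kube-apiserver --etcd-servers=http://127.0.0.1:2379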

The idea is appealing, but reality is harsher: there is one very troublesome problem, namely how to containerize kubelet.

Kubelet is the core component the Kubernetes project uses to operate container runtimes such as Docker. However, besides dealing with the container runtime, kubelet also has to operate directly on the host machine when configuring container networking and managing container data volumes. If kubelet itself runs inside a container, manipulating the host's file system across the container's Mount Namespace becomes considerably harder.

So kubeadm settles on a compromise:

Run Kubelet directly on the host, and then use the container to deploy other Kubernetes components.
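
You can see this compromise for yourself on a finished kubeadm cluster: kubelet appears as a systemd-managed process on the host, while the other components appear as containers.

$ systemctl status kubelet         # kubelet runs directly on the host
$ docker ps | grep kube-apiserver  # the other components run in containers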

Now let's try to set it up. Note: you may run into some problems along the way when deploying Kubernetes with kubeadm.

First, install Docker, selecting a specific version:

apt-cache madison docker-ce
sudo apt-get install docker-ce=17.03.0-ce

Then install kubelet, kubeadm, and kubectl:

apt install kubelet=1.11.3-00
apt install kubectl=1.11.3-00
apt install kubeadm=1.11.3-00
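
To keep apt from silently upgrading these components later, which can break the cluster, it is worth pinning the versions; you can also verify what was installed:

apt-mark hold kubelet kubeadm kubectl
kubeadm version
kubelet --version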

Deploy the Master node of Kubernetes

Write a YAML file for kubeadm (name it kubeadm.yaml):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.11"

In this configuration, I set the following flag for kube-controller-manager:

horizontal-pod-autoscaler-use-rest-clients: "true"

This means that the kube-controller-manager deployed here will be able to perform horizontal Pod autoscaling based on Custom Metrics.
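
Since kubeadm translates controllerManagerExtraArgs entries into command-line flags on the kube-controller-manager container, after the deployment you should be able to spot the flag in the generated static Pod manifest:

$ grep horizontal-pod-autoscaler /etc/kubernetes/manifests/kube-controller-manager.yaml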

Then we only need to execute a single command:

$ kubeadm init --config kubeadm.yaml

This kicks off the deployment of the Kubernetes Master. During the process, kubeadm performs a series of checks to determine whether the machine can be used to deploy Kubernetes; this step is called "Preflight Checks".

After the preflight checks pass, kubeadm generates the various certificates and directories that Kubernetes needs in order to serve requests securely, placing them under /etc/kubernetes/pki on the Master node. In this directory, the main certificate file is ca.crt, with its corresponding private key ca.key.

In addition, when a user runs kubectl to fetch container logs or perform other streaming operations, the request is forwarded by kube-apiserver to kubelet, and this connection must also be secure. Kubeadm generates the apiserver-kubelet-client.crt file for this purpose, with the corresponding private key apiserver-kubelet-client.key.
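
If you are curious about these certificates, openssl can print their subjects and validity periods:

$ openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates
$ openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -subject -dates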

Next, kubeadm generates Pod manifests for the Master components: kube-apiserver, kube-controller-manager, and kube-scheduler will all be deployed as Pods.

In Kubernetes, there is a special way of starting containers called a "Static Pod". It allows you to place Pod YAML files in a specified directory (/etc/kubernetes/manifests). When kubelet starts on the machine, it automatically checks this directory, loads all the Pod YAML files it finds, and starts them on this machine.
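
As a minimal illustration (a hypothetical manifest, not one generated by kubeadm), dropping a file like this into /etc/kubernetes/manifests would make kubelet start the Pod automatically:

apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx            # an ordinary container; kubelet manages it without the API server
    image: nginx:1.15
    ports:
    - containerPort: 80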

From this point of view, kubelet occupies a special position in the Kubernetes project: by design it is a completely independent component, while the other Master components are more like auxiliary system containers.

After the deployment finishes, kubeadm prints a join command:

kubeadm join 10.211.55.4:6443 --token bow08c.5z981pawlm0rdwof --discovery-token-ca-cert-hash sha256:a613e9d0639b2c9e9fe6e7b03fe5e01aaa80d9e4c4a2888b680431990e83a7b5

The kubeadm join command is used to add more Worker nodes to the Master. We will need it later when deploying Worker nodes, so record it somewhere.

In addition, kubeadm prompts us with the configuration commands needed to use the Kubernetes cluster for the first time:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands are needed because the Kubernetes cluster requires encrypted access by default. They save the security configuration file generated during deployment into the current user's .kube directory, and kubectl uses the authorization information in this directory to access the Kubernetes cluster.
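
Alternatively, if you are operating as root and just experimenting, you can point kubectl at the admin configuration directly via an environment variable:

$ export KUBECONFIG=/etc/kubernetes/admin.conf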

Now you can use kubectl get to check the state of the cluster's only node:

$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
ubuntu   NotReady   master   8d    v1.11.3

Kubectl describe can be used to view the details, status, and events of this Node object:

kubectl describe node ubuntu
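
To see exactly why the node is NotReady, you can pull the Ready condition out of the Node object with a JSONPath query:

$ kubectl get node ubuntu -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'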

The reason for NotReady is that we haven't deployed any network plugin yet. In keeping with the Kubernetes project's "everything in containers" design philosophy, deploying a network plugin is as simple as executing a kubectl apply command. Take Weave as an example:

$ kubectl apply -f https://git.io/weave-kube-1.6

Once deployed, we can check the Pod status via kubectl get:

$ kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-kr6hk         1/1     Running   1          8d
coredns-78fcdf6894-wm77c         1/1     Running   1          8d
etcd-ubuntu                      1/1     Running   1          8d
kube-apiserver-ubuntu            1/1     Running   3          8d
kube-controller-manager-ubuntu   1/1     Running   2          8d
kube-proxy-sjbjq                 1/1     Running   1          8d
kube-scheduler-ubuntu            1/1     Running   3          8d
weave-net-nbpd8                  2/2     Running   3          8d

As you can see, all system pods have started successfully!

At this point, the Kubernetes Master node is deployed. If all you need is a single-node Kubernetes, you can use it now.

A Kubernetes Worker node is almost identical to the Master node: both run a kubelet component. The only difference is that during kubeadm init, once kubelet starts, the Master node additionally runs kube-apiserver, kube-scheduler, and kube-controller-manager as system Pods.

Therefore, in comparison, deploying the Worker node is the easiest, requiring only two steps.

First, perform all the steps in the section “Installing Kubeadm and Docker” on all Worker nodes.

Second, execute the kubeadm join command that was generated when the Master node was deployed:

kubeadm join 10.211.55.4:6443 --token bow08c.5z981pawlm0rdwof --discovery-token-ca-cert-hash sha256:a613e9d0639b2c9e9fe6e7b03fe5e01aaa80d9e4c4a2888b680431990e83a7b5
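
Back on the Master, kubectl get nodes should now list the new Worker. If you have lost the join command, or the bootstrap token has expired (tokens are short-lived by default), you can generate a fresh one on the Master:

$ kubeadm token create --print-join-command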

By default, the Master node does not run user Pods, and Kubernetes enforces this through its Taint/Toleration mechanism.

The principle is simple: once a node is marked with a Taint, that is, "tainted", no Pod will run on it, because Kubernetes Pods all have a "cleanliness obsession".

Only when an individual Pod declares that it can "tolerate" the taint, by carrying a matching Toleration, can it run on that node.

The command to "taint" a node is:

kubectl taint nodes node1 foo=bar:NoSchedule

This adds a key-value Taint to the node: foo=bar:NoSchedule. The NoSchedule in the value means this Taint only affects the scheduling of new Pods; Pods already running on node1 are unaffected, even if they have no Toleration.
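
You can confirm which Taints a node carries with kubectl describe:

$ kubectl describe node node1 | grep -i taint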

To declare a Toleration, add a tolerations field in the spec section of the Pod's .yaml file:

apiVersion: v1
kind: Pod
...
spec:
  tolerations:
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"

This Toleration means the Pod can "tolerate" any Taint with the key-value pair foo=bar (operator: "Equal" specifies an exact-match comparison).
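
Conversely, if you want a single-node cluster where the Master also runs user Pods, you can delete the default master Taint by appending a minus sign to its key (the standard kubectl syntax for removing a Taint):

$ kubectl taint nodes --all node-role.kubernetes.io/master-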

At this point, a nearly complete Kubernetes cluster is deployed.

Deploy the Dashboard visualization plug-in

In the Kubernetes community, there is a popular Dashboard project that provides users with a visual Web interface to view various information about the current cluster.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml

Once the deployment is complete, we can check the status of the Dashboard’s corresponding Pod:

$ kubectl get pods -o wide --all-namespaces

kubernetes-dashboard   dashboard-metrics-scraper-878cb9dc4-cdmnb   1/1   Running   1   1h

Note that since the Dashboard is a web server, many people inadvertently expose its port on their public cloud machines, creating a security risk. By default, it can only be accessed locally through a proxy; see the Dashboard project's official documentation for details:

$ kubectl proxy

The Dashboard can then be accessed at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Deploy the container storage plug-in

When deploying the container storage plug-in, recall the principle of containers we covered earlier: in most cases, we need data volumes to mount directories or files on the external host into the container's Mount Namespace, so that the container and the host can share these directories or files, and applications in the container can create and write files in these data volumes.

However, if you start a container on one machine, you obviously cannot see the files that containers on other machines are writing to their own data volumes. This is one of the most distinctive features of containers: statelessness.

Statelessness means that what is written inside a container cannot be seen from outside it: in the eyes of other containers and of the host, the container does not change, so whether its contents change makes no difference to them. That is why it is called stateless.

Persistent storage for containers is the key means of preserving a container's storage state: the storage plugin mounts a remote data volume in the container, based on a network or other mechanism, so that files created in the container are actually saved on a remote storage server, or distributed across multiple nodes, with no binding to the current host. This way, no matter which host you start a new container on, you can request that the specified persistent storage volume be mounted and access the content stored in it. That is what "persistence" means.
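
In Kubernetes, an application asks for such a volume declaratively through a PersistentVolumeClaim. A minimal sketch (the storage class name is an assumption; it depends on which storage plugin you deploy):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: rook-ceph-block   # assumed name, created by your storage plugin
  accessModes:
  - ReadWriteOnce                     # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi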

The Rook project is a Ceph-based Kubernetes storage plugin (it is also adding support for more storage implementations). Rather than simply wrapping Ceph, Rook adds a host of enterprise-level capabilities on top of its implementation, such as horizontal scaling, migration, disaster backup, and monitoring, making it a complete, production-grade container storage plugin.

Thanks to containerization, Rook can deploy a complex Ceph storage back end with just a few instructions:

$ kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml

$ kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

$ kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
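
After applying these manifests, you can watch the Rook operator and the Ceph daemons come up (assuming the default namespace used by these example manifests):

$ kubectl get pods -n rook-ceph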

In fact, in many cases, what we call "cloud native" really means "Kubernetes native". Projects like Rook and Istio are examples of this approach. After we cover declarative APIs later, you will get a better feel for the design ideas behind these projects.