A Kubernetes cluster consists of a Master node and worker Nodes. These nodes run multiple Kubernetes services.

The Master node

The Master is the brain of the Kubernetes cluster. It runs daemon services such as kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and the Pod network (such as Flannel).

  1. API Server (kube-apiserver)

This service exposes the REST API, what we would call the front-end interface in application terms, through which client tools such as the CLI and UI manage cluster resources. It is also the piece you need to master if you plan to build your own tooling on top of Kubernetes.
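One quick way to see this REST interface directly is to open a local proxy to the API Server and hit it with curl; the paths below are standard API routes, and the same data is available through kubectl itself:

# open a local proxy to the API Server
kubectl proxy --port=8001 &

# list namespaces through the raw REST API
curl http://localhost:8001/api/v1/namespaces

# the same call, without the proxy, via kubectl
kubectl get --raw /api/v1/namespaces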

  2. Scheduler (kube-scheduler)

The scheduler decides which Node each Pod runs on. When scheduling, it takes into account the topology of the cluster, the current load on each Node, and the application's requirements for high availability and data affinity.
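You can inspect the scheduler's decisions after the fact: the NODE column shows where each Pod landed, and the Pod's events record the scheduling step (the pod name below is a placeholder):

# which Node did each Pod land on?
kubectl get pod -o wide

# the Events section at the bottom records the Scheduled decision
kubectl describe pod <pod-name>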

  3. Controller Manager (kube-controller-manager)

Responsible for managing cluster resources and keeping them in their desired state. If the actual state drifts from what you declared, the relevant controller reconciles it back; nothing just stands around watching. There are several types of controllers: the replication controller, the endpoints controller, the namespace controller, the service accounts controller, and so on.

Different controllers manage different resources. For example, the deployment, statefulset, and daemonset controllers manage the lifecycle of Deployments, StatefulSets, and DaemonSets, while the namespace controller manages Namespace resources.
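You can watch this reconciliation yourself: delete one Pod that belongs to a Deployment and its controller immediately creates a replacement. The example below uses the httpd-app Deployment created later in this article and assumes the kubectl run used there labels its Pods with run=httpd-app; the actual pod name will differ on your cluster.

# list the pods owned by the deployment (the label is an assumption, see above)
kubectl get pod -l run=httpd-app

# delete one of them; the controller notices and starts a new replica
kubectl delete pod <httpd-app-pod-name>
kubectl get pod -l run=httpd-app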

  4. etcd

etcd is a distributed key/value database written in Go. Kubernetes uses it to store the state and configuration information of cluster resources, and it can also be used for service discovery and for the cluster's own load balancing.
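On a kubeadm-style cluster, etcd itself usually runs as a static Pod on the master in the kube-system namespace, so you can spot it like this (how etcd is deployed varies between setups, so treat this as a sketch):

# etcd usually shows up as a pod on the master node
kubectl get pods -n kube-system -o wide | grep etcd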

  5. Pod network

This matters because Docker containers cannot communicate across hosts out of the box. To get cross-host communication you would have to set up port mappings by hand, which quickly becomes unmanageable once you have many containers. Flannel is the answer to that problem. There are other network solutions too, but Flannel has the larger community and the most documentation.
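Once the Pod network is up, every Pod gets a cluster-wide IP and can reach Pods on other nodes directly, no port mapping needed. A quick sanity check, assuming the pod's image ships a ping binary (the pod names and IP are placeholders):

# every pod has its own IP, visible in the IP column
kubectl get pod -o wide

# ping one pod's IP from inside another pod
kubectl exec <pod-a> -- ping -c 3 <pod-b-ip>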

The Node

Nodes are where Pods actually run. Kubernetes supports container runtimes such as Docker and rkt. The components running on a Node are kubelet, kube-proxy, and the Pod network.
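You can list the Nodes and see which runtime each one is using:

# node list, including internal IPs and the container runtime
kubectl get nodes -o wide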

Kubelet

Kubelet is the node agent. When the scheduler decides to run a Pod on a node, it sends the Pod's configuration information, such as the image and volumes, to the kubelet on that node; the kubelet then creates and runs the containers accordingly and reports their status back to the master.
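Because the kubelet runs as a systemd service rather than as a container (more on that below), its logs live in the journal. On a systemd-based distribution you can follow them like this:

# follow the kubelet logs on a node
journalctl -u kubelet -f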

kube-proxy

A Service logically represents a group of backend Pods, and it is through the Service that the outside world reaches them. When a request arrives at the Service, kube-proxy forwards it to one of the backing Pods; if the Pod has multiple replicas, this gives you load balancing automatically. In other words, with Kubernetes you no longer need to hand-roll load balancing with Nginx, something a lot of PHP-with-Docker tutorials still do.
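For example, exposing the httpd-app Deployment from later in this article as a Service lets kube-proxy spread requests across its two Pods (the ClusterIP you get back will differ on your cluster):

# put a Service in front of the deployment
kubectl expose deployment httpd-app --port=80

# the Service gets one virtual IP...
kubectl get service httpd-app

# ...backed by the IPs of both pods
kubectl get endpoints httpd-app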

Let’s take a look at the complete structure

This is because the master can also run applications. If you are curious, you can use Rancher to create Pods even with a single node.
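On a kubeadm-built cluster, the master is tainted so that ordinary Pods are not scheduled onto it by default. If you want the master to run workloads as well, you can remove that taint; note that the taint key below is the one used by older kubeadm releases, newer ones use node-role.kubernetes.io/control-plane instead:

# allow ordinary pods to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-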

To see the Pods already running on Kubernetes, use the following command; you will use it about as often as you use docker ps:

kubectl get pod --all-namespaces -o wide

The Kubernetes system components are placed in the kube-system namespace. Among them is the component that provides DNS services for the cluster (kube-dns, or CoreDNS in newer versions); it is installed as an add-on when the cluster is initialized.
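Narrowing the previous command to that namespace shows the system Pods, including the DNS add-on:

# the system components live in kube-system
kubectl get pods -n kube-system -o wide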

Kubelet is the only Kubernetes component that does not run as a container; on Ubuntu it runs as a systemd service. You can view its service information with:

systemctl status kubelet

Stringing them together with an example

All of the following commands are described in Chapter 1

kubectl run httpd-app --image=httpd --replicas=2

Kubernetes creates the Deployment httpd-app with two Pod replicas, which end up running on k8s-node1 and k8s-node2.
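You can verify the result (the node names follow this article's k8s-node1/k8s-node2 naming; yours will differ):

# the deployment and its two replicas
kubectl get deployment httpd-app

# which nodes the two pods landed on
kubectl get pod -o wide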

Here’s how they work

  1. kubectl sends the deployment request to the API Server.
  2. The API Server notifies the Controller Manager to create the Deployment and its Pods.
  3. The Scheduler performs the scheduling and assigns the two Pods to the two cluster nodes.
  4. The kubelet on each node creates and runs the Pods' containers. The configuration and current state of the application are stored in etcd; when kubectl get pod is executed, the API Server reads this data from etcd. Flannel assigns an IP address to each Pod.
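You can trace these steps on a running cluster: the Pod's events show the scheduler's Scheduled decision followed by the kubelet pulling the image and starting the container (the pod name below is a placeholder):

# events show Scheduled (scheduler) followed by Pulling/Created/Started (kubelet)
kubectl describe pod <httpd-app-pod-name>

# or look at recent events cluster-wide, in chronological order
kubectl get events --sort-by=.metadata.creationTimestamp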