Recently, I was reviewing for an exam, so I wrote this blog post to consolidate what I had learned.

At a high level, Kubernetes consists of the following:

  • One or more Master nodes
  • One or more Worker nodes
  • A distributed key-value store, for example etcd


Master Node

The Master node is the cluster manager, and every request we make goes to the API Server on a Master node.

A cluster can have multiple Master nodes for high availability (HA). When there are multiple Master nodes, only one of them acts as the leader and serves the cluster; the rest are followers.

The state of the cluster is stored in etcd, and all Master nodes are connected to it. etcd is a distributed key-value store, and it can run either on the Master nodes themselves or on external machines.

Components of a Master node

A Master node generally has the following components:

API Server

All operations go through the API Server. Users and operators send REST requests to the API Server, which validates them and then carries out the requested operations. After execution, the resulting state of the cluster is saved in etcd.
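The validate-then-persist flow can be sketched as follows. This is an illustrative toy, not real API Server code: the plain dict standing in for etcd and the `/registry/...` key layout are my own simplifications.

```python
# A minimal sketch (not real Kubernetes code) of the API Server's
# validate-then-persist flow, using a plain dict to stand in for etcd.

cluster_state = {}  # stand-in for etcd

def handle_request(resource: dict) -> str:
    """Validate a REST-style create request, then persist the result."""
    # Validation: every object needs a kind and a name.
    if "kind" not in resource or "name" not in resource:
        return "400 Bad Request"
    # Persist the new state under an etcd-style key.
    key = f"/registry/{resource['kind'].lower()}s/{resource['name']}"
    cluster_state[key] = resource
    return "201 Created"

print(handle_request({"kind": "Pod", "name": "web"}))  # 201 Created
print(handle_request({"kind": "Pod"}))                 # 400 Bad Request
```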

Scheduler

As the name implies, the Scheduler decides where work runs. It tracks the resource usage of all worker nodes and also knows the resource requirements set by users, such as a disk=ssd label. Before scheduling, it additionally considers service requirements, data locality, affinity, anti-affinity, and so on. The Scheduler is responsible for placing Pods onto nodes.

Controller Manager

Simply put, the Controller Manager is responsible for starting and shutting down Pods. Its job is to keep the cluster in the desired state: it knows what the state of each Pod should be, and it continually checks whether any Pod has drifted from that state, correcting it when it has.
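That check-and-correct cycle is often called a reconcile loop. A minimal sketch, using replica counts as the "state" (illustrative only, not the real controller code):

```python
# A minimal reconcile step in the spirit of the Controller Manager:
# compare desired replica counts against actual state and correct drift.

def reconcile(desired: dict, actual: dict) -> list[str]:
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(f"start {want - have} pod(s) for {name}")
        elif have > want:
            actions.append(f"stop {have - want} pod(s) for {name}")
    return actions  # empty list means the cluster already matches the desired state

desired = {"web": 3, "cache": 1}
actual = {"web": 2, "cache": 2}
print(reconcile(desired, actual))
# ['start 1 pod(s) for web', 'stop 1 pod(s) for cache']
```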

Worker Node

A Worker node is a machine controlled by the Master node, and Pods are usually scheduled onto Worker nodes. A Worker node has the tools needed to run and connect containers. A Pod is the Kubernetes scheduling unit: a logical collection of one or more containers that are always scheduled together.


Components of a Worker node

A Worker node typically has the following components:

Container Runtime

Needless to say, the default runtime for running containers is Docker.

kubelet

The kubelet runs on every worker node and communicates with the Master node. It receives Pod definitions from the Master, starts the containers inside them, and monitors whether those containers keep running properly.
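The kubelet's per-Pod work can be pictured as a small sync step: compare the Pod definition against the containers actually running and start anything missing. A rough, hypothetical sketch (the function and field names are mine, not the kubelet's):

```python
# A rough sketch of what the kubelet does for each assigned Pod:
# start containers from the Pod spec, then restart any that have died.

def sync_pod(pod_spec: dict, running: set[str]) -> list[str]:
    """Return the actions needed to make reality match the Pod spec."""
    actions = []
    for container in pod_spec["containers"]:
        if container not in running:
            actions.append(f"start container {container}")
    return actions

pod_spec = {"name": "web", "containers": ["nginx", "log-sidecar"]}
running = {"nginx"}  # log-sidecar has crashed
print(sync_pod(pod_spec, running))  # ['start container log-sidecar']
```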

kube-proxy

To put it simply, kube-proxy provides proxying for access from outside the cluster. In other words, without kube-proxy we would have to reach each application directly on its worker node, which is obviously impractical. We can also use kube-proxy for load balancing, and earlier versions of the Service implementation relied on kube-proxy as well.
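Conceptually, kube-proxy gives each Service a stable entry point and spreads connections across the backing Pods. A bare-bones round-robin stand-in (the IPs are made up, and real kube-proxy programs iptables/IPVS rules rather than running in-process like this):

```python
import itertools

# A toy stand-in for kube-proxy's load balancing: a Service's stable
# entry point fans out to its backend Pod IPs in round-robin order.

class ServiceProxy:
    def __init__(self, backends: list[str]):
        self._cycle = itertools.cycle(backends)

    def route(self) -> str:
        """Pick the backend Pod that receives the next connection."""
        return next(self._cycle)

proxy = ServiceProxy(["10.1.0.4", "10.1.1.7", "10.1.2.9"])
print([proxy.route() for _ in range(4)])
# ['10.1.0.4', '10.1.1.7', '10.1.2.9', '10.1.0.4']
```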

Using etcd to manage state

In Kubernetes, etcd is used to manage all state. Beyond the state of the cluster itself, it also stores information such as ConfigMaps and Secrets.
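Two features make etcd a good fit for this: hierarchical keys and watches, which let components react when state changes. A toy in-memory imitation (the key names mirror etcd's `/registry/...` convention, but everything else is simplified):

```python
# A toy key-value store with etcd-style hierarchical keys and a watch
# callback, illustrating how components react to state changes.

class TinyStore:
    def __init__(self):
        self._data = {}
        self._watchers = []

    def watch(self, prefix, callback):
        """Register a callback fired for every write under `prefix`."""
        self._watchers.append((prefix, callback))

    def put(self, key, value):
        self._data[key] = value
        for prefix, callback in self._watchers:
            if key.startswith(prefix):
                callback(key, value)

events = []
store = TinyStore()
store.watch("/registry/configmaps/", lambda k, v: events.append(k))
store.put("/registry/configmaps/app-config", {"LOG_LEVEL": "debug"})
store.put("/registry/secrets/db-password", "s3cret")  # no watcher on this prefix
print(events)  # ['/registry/configmaps/app-config']
```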

Network requirements

To run a fully functional Kubernetes cluster, we need to ensure the following:

  • Each Pod has a unique, independent IP
  • The containers inside a Pod can communicate with each other
  • Pods can communicate with each other
  • Applications inside Pods can be made accessible from outside

These issues need to be resolved before deployment.

Let’s look at them one by one:

Assign a separate IP to each Pod

In Kubernetes, each Pod has its own IP. There are two common container-network specifications:

  • Container Network Model (CNM)
  • Container Network Interface (CNI)

Kubernetes uses CNI to assign IPs to Pods.


In simple terms, the container runtime requests an IP address from CNI, which obtains one through the configured plugin and returns it to the container runtime.
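The IP-management half of that exchange (IPAM) can be sketched with Python's `ipaddress` module. The `10.244.1.0/24` subnet below is just an example of the kind of per-node Pod CIDR a plugin might hand out, not anything mandated by CNI:

```python
import ipaddress

# A sketch of what a CNI IPAM plugin does: hand out the next free IP
# from the node's Pod subnet. The subnet here is an arbitrary example.

class SimpleIPAM:
    def __init__(self, cidr: str):
        # hosts() yields the usable addresses of the subnet in order.
        self._hosts = ipaddress.ip_network(cidr).hosts()

    def allocate(self) -> str:
        """Return the next unused Pod IP from the subnet."""
        return str(next(self._hosts))

ipam = SimpleIPAM("10.244.1.0/24")
print(ipam.allocate())  # 10.244.1.1
print(ipam.allocate())  # 10.244.1.2
```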

Container to container communication

With help from the underlying operating system, each container gets its own separate, isolated network when it starts. On Linux, this isolated environment is called a network namespace, and network namespaces can be shared between containers.

Within a Pod, containers share one network namespace, so all containers in the same Pod can reach each other via localhost.
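We can imitate this with two threads standing in for two containers in one Pod: because they share the same loopback interface (just as Pod containers share a network namespace), the "sidecar" reaches the "app" over 127.0.0.1. The port is chosen by the OS; this is a simulation, not container code:

```python
import socket
import threading

# Containers in one Pod share a network namespace, so they talk over
# localhost. Two threads stand in for the two containers here.

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def app_container():
    # "Container A": accept one connection and answer it.
    conn, _ = server.accept()
    conn.sendall(b"hello from the app container")
    conn.close()

t = threading.Thread(target=app_container)
t.start()

# "Container B" (a sidecar) connects via localhost, as in a real Pod.
client = socket.create_connection(("127.0.0.1", port))
message = client.recv(1024).decode()
print(message)  # hello from the app container
client.close()
t.join()
server.close()
```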

Access between pods across nodes

In a clustered environment, where a Pod can be scheduled onto any node, Pods on different machines need to communicate with each other, and any node needs to be able to reach any Pod. Kubernetes requires that this communication happen without NAT. We can achieve this:

  • By making Pods and nodes routable on the underlying network, as platforms such as GCE do
  • Through software-defined networking, such as Flannel, Weave, or Calico

See the official Kubernetes documentation for more information.

Access between the outside world and the cluster

We can expose our Services via kube-proxy and then access the applications in our cluster from outside.