
Preface

When learning K8S, many people are confused by its concepts and end up intimidated by it, and many articles on the topic are too technical. This article sorts the concepts out without going deep; the goal is to help you understand where each concept comes from.

Architecture diagram

In the figure above, there are two types of Node: Master and Work.

A Work Node is, as the name suggests, a worker node: a machine that actually hosts services. If service A is deployed to K8S, its runtime environment is on a Work Node.

So what does the Master Node do? It assigns services to Work Nodes: it knows the running status of every Work Node and decides which Work Node each service should be allocated to.

A Work Node has two important components: a Pod and a Container.

A Pod is the smallest unit in K8S and can contain multiple containers. A Container is the environment in which a service/component runs.

Generally, a Pod has only one service Container; the other containers are required by the system (in practice, auxiliary process components, such as network components and Volume components). So it is generally fine to understand that our service runs inside the Pod.
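As a sketch of this idea (the names and images below are illustrative, not from the article), a Pod definition can list more than one container, with one service container plus a helper:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                  # illustrative name
spec:
  containers:
    - name: app                   # the service container
      image: example/app:1.0      # illustrative image
    - name: log-agent             # a helper/sidecar container
      image: example/log-agent:1.0
```

Both containers run inside the same Pod and share its network and storage context.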

The above is just a brief introduction to the basic architecture of K8S and its core ideas.

For basic usage, understanding this much is already enough.

So what components do the Master and Work Nodes contain, and how does a release flow between them? Keep reading.

The Master Node component

As shown above, users generally operate K8S with the kubectl command line or the dashboard console. All operations go through the API Server component, and anything that needs persistence is stored in etcd. The Scheduler and Controller Manager components continuously subscribe to changes from the API Server.

The whole process

If the user wants to create three Pods for service A, the overall process is:

1) Submit a request to create an RC (ReplicationController) via kubectl; the request is written to etcd via the API Server.

2) The Controller Manager, which monitors resource changes through the API Server's interface, receives the RC event. On analysis it finds there is no corresponding Pod instance in the current cluster, so it generates a Pod object according to the Pod template defined in the RC and writes it to etcd via the API Server.

3) Next, the Scheduler detects the event and immediately runs a complex scheduling process to select a resident Work Node for the new Pod, writing the result to etcd via the API Server.

4) The Kubelet process running on the target Work Node then detects the “nascent” Pod through API Server and starts the Pod according to its definition.

5) The user's requirement was 3 Pods. Were all 3 actually started? This is monitored and managed by the Controller Manager, which ensures that resources always meet the user's requirements.
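The request that kicks off the steps above can be sketched as an RC definition with 3 replicas (a minimal sketch; the `service-a` name and image are illustrative, not from the article):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-a-rc             # illustrative name
spec:
  replicas: 3                    # the user's requirement: 3 Pods
  selector:
    app: service-a
  template:                      # Pod template the Controller Manager uses
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: example/service-a:1.0   # illustrative image
```

The Controller Manager keeps comparing the actual number of matching Pods against `replicas: 3` and creates or removes Pods to close the gap.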

etcd

etcd persistently stores all resource objects in the cluster, such as Node, Service, Pod, RC, Namespace, etc. The API Server provides encapsulated APIs for operating on etcd. These APIs are basically interfaces for creating, deleting, updating, and querying the cluster's resource objects and for watching resource changes.

API Server

It provides the single entry point to resource objects; all other components must operate on resource data through the APIs it provides. Through "full query" plus "change watching" of the relevant resource data, these components complete their business functions "in real time".

Controller Manager

The management control center of the cluster. Its main purpose is to automate fault detection and recovery in the Kubernetes cluster. For example, it replicates or removes Pods according to an RC's definition, ensuring the number of Pod instances matches the RC's replica count, and it creates and updates a Service's Endpoints objects according to the management relationship between the Service and its Pods. Other tasks, such as Node discovery, management and status monitoring, reclaiming disk space occupied by dead containers, and cleaning up locally cached image files, are also performed by the Controller Manager.

Scheduler

The scheduler of the cluster, responsible for scheduling and allocating Pods across the cluster's nodes.

The Work Node component

The right side of the figure shows the Work Node components and the overall flow:

1) The kubelet watches for changes from the API Server; if a Pod is assigned to its Work Node and needs to be created, it notifies the Container Runtime component.

2) The Container Runtime is the component that manages the node's containers. When a Pod is started, if the required image is not available locally, it pulls the image from the registry (e.g., Docker Hub) and starts the Pod's containers.

3) The kubelet reports the relevant status information back to the API Server.

Kubelet

Responsible for the lifecycle management of the node's Pods: creation, modification, monitoring, deletion, and so on. The kubelet also regularly reports the node's status information to the API Server, while the Container Runtime component is responsible for managing the Pods' containers.

kube-proxy

Implements the proxy and software-mode load balancer for Services, which is needed because Pod network IPs are constantly changing. The networking details will be introduced in the next article.

Pod release

The overall architecture and flow of K8S have been introduced above. Now we will start from the Pod and introduce the other K8S concepts step by step.

Let's start by editing a YAML file that defines a Pod object:

```yaml
apiVersion: v1
kind: Pod                  # the role/type of resource to create
metadata:                  # the resource's metadata/attributes
  name: mc-user            # resource name
spec:                      # specification of the resource content
  containers:              # container definitions
    - name: mc-user
      image: rainbow/mc-user:1.0.RELEASE   # container image
```

We create this pod using kubectl

```shell
kubectl apply -f mc-user-pod.yaml
```

Our mc-user:1.0.RELEASE image is a web application listening on port 8080. But we found that after the Pod started, we could not access the web service through the Pod's IP address from outside the cluster.

So how do we access the Pod?

The reverse proxy

Before we solve the problem of accessing the Pod, let's take a look at how we used to deploy websites.

When the external network accesses our internal websites, we typically deploy an Nginx in the middle to reverse proxy the web services. Following this idea, K8S also has the concept of a reverse proxy.

NodePort Service

In K8S, we can use a Service of type NodePort to implement the reverse proxy.

K8S has several kinds of Service, among which the NodePort Service provides the reverse proxy implementation.

This allows the external network to access internal Pods. The process:

1) The Pod must be tagged with a Label.
2) External traffic requests arrive at the NodePort Service.
3) The NodePort Service uses its Selector to match the Label and forwards the request to a back-end Pod.

In the above process, the Service also acts as a load balancer: there can be multiple back-end Pods with the same Label, and the Service routes each request to one of them.

The Service type can also be LoadBalancer or ClusterIP.
LoadBalancer: used in cloud deployments (such as Alibaba Cloud); it also provides reverse proxy + load balancing for external access to K8S.
ClusterIP: used as a reverse proxy inside the K8S cluster.
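For comparison, a ClusterIP Service for the same Pods might look like this (a minimal sketch; the `mc-user-internal` name is illustrative, and `selector`/`port` follow the NodePort example later in the article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mc-user-internal         # illustrative name
spec:
  type: ClusterIP                # reachable only from inside the cluster
  ports:
    - port: 8080                 # cluster-internal port
  selector:
    app: mc-user                 # matches the Pods' label
```

Since there is no `nodePort`, such a Service is only reachable at its cluster IP from other Pods inside the cluster.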

Label and Selector

In the figure above, two Pods define the Label app: nginx, and one Pod defines app: apache.

Then a Service whose Selector filters on app: nginx will route only to the nginx Pods.
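The figure's setup can be sketched in YAML (the labels come from the figure; the Pod names, images, and Service name are illustrative). The two nginx Pods carry the label the Service selects, while the apache Pod does not match:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-1                  # illustrative name
  labels:
    app: nginx                   # matched by the Service selector below
spec:
  containers:
    - name: nginx
      image: nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: apache-1                 # illustrative name
  labels:
    app: apache                  # NOT matched by the selector below
spec:
  containers:
    - name: apache
      image: httpd:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc                # illustrative name
spec:
  selector:
    app: nginx                   # routes only to the nginx Pods
  ports:
    - port: 80
```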

The Service release

Let's write a NodePort Service definition file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mc-user
spec:
  type: NodePort           # the Service type
  ports:
    - name: http
      port: 8080           # the clusterIP port (clusterIP:8080), for internal access
      nodePort: 31001      # the port opened for external calls
  selector:
    app: mc-user           # the selector must match the container's label
```

The nodePort value must be in the range from 30000 to 32767.

Above is the YAML file for the NodePort Service. We also need to modify the previous Pod's YAML file to add the label:

```yaml
apiVersion: v1
kind: Pod                  # the role/type of resource to create
metadata:                  # the resource's metadata/attributes
  name: mc-user            # resource name, must be unique within the namespace
  labels:                  # label definitions
    app: mc-user           # label value
spec:                      # specification of the resource content
  containers:              # container definitions
    - name: mc-user
      image: rainbow/mc-user:1.0.RELEASE   # container image
```

We can use kubectl to apply the Pod and Service YAML files respectively. Then the service can be accessed directly from outside the cluster at http://localhost:31001, where 31001 is the port defined by nodePort.
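As a convenience (an assumption on my part, not something the article requires), the Pod and Service definitions can also be kept in one file separated by `---` and applied with a single `kubectl apply -f`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc-user
  labels:
    app: mc-user                 # selected by the Service below
spec:
  containers:
    - name: mc-user
      image: rainbow/mc-user:1.0.RELEASE
---
apiVersion: v1
kind: Service
metadata:
  name: mc-user
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080                 # cluster-internal port
      nodePort: 31001            # externally reachable node port
  selector:
    app: mc-user
```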

Conclusion

Today, I introduced the basic concepts and architecture flow of K8S: Pod, Service, Label, Selector. Where did they come from, and what problems do they solve? Following articles will continue to introduce other K8S concepts, hoping to help readers understand them and lower the difficulty of learning K8S. Thanks!
