Prerequisite: this article assumes a basic understanding of Docker, containers, and datastores. If not, see my earlier articles:

Understanding the Docker architecture from scratch

A brief analysis of how Docker networking works

Two ways to use Docker data volumes

Part 1: Basic components

On my servers I used kubeadm to build a K8s cluster (if you want to follow along, you should first build a K8s cluster yourself) with one master node and one slave (worker) node. All the Pods created by the system itself live in the namespace kube-system. The K8s cluster has the following main components:

The following figure shows which components run on the master and which on the slave node.

1. Main K8s components on the master

kube-apiserver: clients manage cluster resources through kube-apiserver, which exposes an HTTP/HTTPS RESTful API. kubectl, for example, is one such client.

kube-controller-manager: manages the cluster's resources. It consists of multiple controllers (Deployment controller, ReplicaSet controller, Namespace controller, etc.), and each controller manages a different kind of resource. For example, the Deployment controller manages the lifecycle of Deployments, while the Namespace controller manages Namespace resources.

kube-scheduler: decides which node each Pod runs on. When scheduling, it takes the cluster topology, the current load of each node, and the application's requirements for high availability, performance, and data affinity into account.

etcd: the database that stores the cluster's configuration and the state of its resources. For example, the information shown by kubectl get pod is read (via the API server) from etcd.

Weave Net: Pods need to communicate with each other, and Weave Net is one of the available Pod networking solutions.

2. Main K8s components on the slave node

kube-proxy

A Service logically represents a set of backend Pods that the outside world accesses. How does a request received by a Service get forwarded to a Pod? That is kube-proxy's job. Every node runs kube-proxy, which forwards TCP/UDP traffic addressed to a Service to the backend containers. If there are multiple replicas, kube-proxy also load-balances across them. As the figure shows, the master runs kube-proxy too, because the master can also act as a worker node.
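kube-proxy's load balancing can be pictured as picking backend Pod endpoints in turn. The sketch below is a toy round-robin selector, not the real iptables/IPVS implementation, and the endpoint addresses are made up for illustration:

```go
package main

import "fmt"

// Toy round-robin balancer. Real kube-proxy programs iptables/IPVS rules,
// but the observable effect is similar: each request to the Service is
// forwarded to one of the backend Pod endpoints in turn.
type Balancer struct {
	endpoints []string
	next      int
}

// Pick returns the next backend endpoint in round-robin order.
func (b *Balancer) Pick() string {
	ep := b.endpoints[b.next%len(b.endpoints)]
	b.next++
	return ep
}

func main() {
	// Hypothetical Pod endpoints for two replicas.
	b := &Balancer{endpoints: []string{"10.44.0.4:8080", "10.44.0.5:8080"}}
	for i := 0; i < 4; i++ {
		fmt.Println(b.Pick())
	}
}
```

In practice kube-proxy does not proxy in user space; it programs kernel-level rules so traffic to the Service's virtual IP is spread across the endpoint list.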

kubelet is the node's agent. When the scheduler decides to run a Pod on a node, it sends the Pod's configuration (image, volumes, and so on) to that node's kubelet, which creates and runs the containers and reports their status back to the master.

Part 2: Studying the K8s framework through an example

Let's first deploy an example to see how the K8s components interact.

Kubernetes manages the Pod lifecycle through various controllers. To cover different business scenarios, it provides Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, and other controllers. This article is not about the types of controllers and their differences, so we will use the most common one, the Deployment controller, for our case study.

1. Generate a case image
1) Write code

Write a simple server that listens on port 8080 and, when the URL is accessed, responds with the current time and the string "This is server".

// server.go
package main

import (
	"net/http"
	"time"
)

func server(rep http.ResponseWriter, req *http.Request) {
	rep.Write([]byte(time.Now().String()))
	rep.Write([]byte("This is server"))
}

func main() {
	http.HandleFunc("/util", server)
	http.ListenAndServe(":8080", nil)
}
2) Generate an image

The image is built locally, in this case from a Dockerfile. The Dockerfile fields are described in detail in my article [Understanding the Docker architecture from scratch] juejin.cn/post/691529…

# Dockerfile
FROM golang:latest AS build
WORKDIR /go/src/service
ENV GOPROXY https://goproxy.cn
ENV GO111MODULE off
COPY . .
RUN CGO_ENABLED=1 GOOS=linux go build -ldflags="-s -w" -a -installsuffix cgo -tags=jsoniter -o main server.go

#
# Production stage
#
FROM ubuntu:latest
WORKDIR /go/src/service
COPY --from=build /go/src/service/main .
# Keep consistent with the port used in the code
EXPOSE 8080
ENTRYPOINT ["./main"]


Build the image from the Dockerfile in the current directory:

docker build -t service_test:latest .

Check the generated image; it will be referenced later in deployment.yaml.

2. Create the Pod using the Deployment controller
1) Write namespce.yaml

Create a new namespace named k8s-test:

apiVersion: v1
kind: Namespace
metadata:
    name: k8s-test

Create the namespace:

kubectl create -f  namespce.yaml
2) Write service.yaml

Expose the service externally, mapping container port 8080 to node port 30000:

apiVersion: v1
kind: Service
metadata:
  # Keep it consistent with the created NS
  namespace: k8s-test
  name: service
spec:
  ports:
    - name: "service-port"
      targetPort: 8080
      port: 8080
      nodePort: 30000
  # The selector's app label must match the Deployment's Pod label
  selector:
    app: service-test
  type: NodePort

Create a service

kubectl create -f  service.yaml
3) Write deployment.yaml

apiVersion: apps/v1  # API version of this resource format
# kind can also be StatefulSet, DaemonSet, etc.
kind: Deployment
metadata:
  name: service-test
  namespace: k8s-test
# spec: description of the desired Pod state
spec:
  # number of replicas
  replicas: 2
  # Update strategy:
  # 1. Recreate: kill the running Pods, then create new ones
  # 2. RollingUpdate: gradually remove old Pods while adding new ones
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: service-test
  # Pod template
  template:
    metadata:
      labels:
        app: service-test
    # Pod spec
    spec:
      hostname: service-test
      # The container image is the locally built one
      containers:
        - image: service_test:latest
          name: service-test
          # imagePullPolicy options: Never (use local image only) / IfNotPresent (pull only if absent locally) / Always (always pull the latest image)
          imagePullPolicy: Never
          # The port must match the code
          ports:
            - containerPort: 8080
      # restartPolicy: Always (restart whenever it exits) / OnFailure / Never
      restartPolicy: Always

Create a pod

kubectl create -f  deployment.yaml

View the pod you created

Access port 30000 on the node to verify that the service responds. We have now deployed a Pod using a Deployment.

3. Analysis process

1. The main flow of creating a Pod

Let's go back to the diagram at the beginning of this article and use the example above to walk through the main function of each component.

① kubectl sends the deployment request to the master's kube-apiserver

② The apiserver notifies the controller manager to create the Deployment resource

③ Once the controller has created the resource, the scheduler is notified to schedule it

④ The scheduler decides which node the Pod will run on and sends the Pod configuration (image, volumes, replica count, and so on) to the kubelet on the slave node

⑤ kubelet creates and runs the Pod replicas on its own node

The application's configuration and state are stored in etcd; when kubectl get is executed, the information is read from etcd (via the apiserver).
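The creation flow above follows K8s's reconciliation model: a controller compares the desired state (stored in etcd) with the actual state and acts to close the gap. Here is a minimal toy version of that loop, illustrative only, not real controller-manager code:

```go
package main

import "fmt"

// Simplified picture of how a Deployment controller reconciles replicas:
// compare the desired replica count with the actual Pod count, then
// create or delete Pods until they match. Illustrative only.

type Deployment struct {
	Name     string
	Replicas int
}

type Cluster struct {
	Pods map[string]int // actual Pod count per deployment
}

// Reconcile drives the actual Pod count toward the desired replica count
// and returns a log of the actions taken.
func (c *Cluster) Reconcile(d Deployment) []string {
	var events []string
	for c.Pods[d.Name] < d.Replicas {
		c.Pods[d.Name]++
		events = append(events, fmt.Sprintf("create pod %s-%d", d.Name, c.Pods[d.Name]))
	}
	for c.Pods[d.Name] > d.Replicas {
		events = append(events, fmt.Sprintf("delete pod %s-%d", d.Name, c.Pods[d.Name]))
		c.Pods[d.Name]--
	}
	return events
}

func main() {
	c := &Cluster{Pods: map[string]int{}}
	for _, e := range c.Reconcile(Deployment{Name: "service-test", Replicas: 2}) {
		fmt.Println(e)
	}
}
```

Real controllers watch the apiserver for changes instead of being called directly, but the compare-and-converge idea is the same.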

2. Analyze the Deployment resource

Take a look at the following commands.

# Get the Deployment resource: 2 replicas (named service-test) are running normally
[centos@wunaichi ~]$ kubectl get deployment -n k8s-test  
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
service-test   2/2     2            2           18h

# kubectl describe deployment shows more details
[centos@wunaichi ~]$ kubectl describe deployment -n k8s-test
Name:               service-test
Namespace:          k8s-test
CreationTimestamp:  Thu, 04 Feb 2021 07:05:43 +0000
Labels:             <none>
Annotations:        deployment.kubernetes.io/revision: 1
Selector:           app=service-test
Replicas:           2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:  app=service-test
  Containers:
   service-test:
    Image:        service_test:latest
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   service-test-958ccb545 (2/2 replicas created)
Events:          <none>

# A ReplicaSet (service-test-958ccb545) was created
# The ReplicaSet is created by the Deployment, which manages Pods through it
[centos@wunaichi ~]$ kubectl get replicaset -n k8s-test 
NAME                     DESIRED   CURRENT   READY   AGE
service-test-958ccb545   2         2         2       18h

# Get the Pods you created
[centos@wunaichi ~]$ kubectl get pod -n k8s-test
NAME                           READY   STATUS    RESTARTS   AGE
service-test-958ccb545-b78xl   1/1     Running   0          18h
service-test-958ccb545-scz2c   1/1     Running   0          18h

# View the Pod details
[centos@wunaichi ~]$ kubectl describe pod service-test-958ccb545-b78xl -n k8s-test
Name:               service-test-958ccb545-b78xl
Namespace:          k8s-test
Priority:           0
PriorityClassName:  <none>
Node:               wunaichi.novalocal/10.0.0.173
Start Time:         Thu, 04 Feb 2021 07:05:43 +0000
Labels:             app=service-test
                    pod-template-hash=958ccb545
Annotations:        <none>
Status:             Running
IP:                 10.44.0.4
Controlled By:      ReplicaSet/service-test-958ccb545   # managed by the ReplicaSet
Containers:
  service-test:
    Container ID:   docker://b7f0319de468d1886616faf6896b01c9abbb593a6d383343514cc65e8d07c99b
    Image:          service_test:latest
    Image ID:       docker://sha256:8c6afd1695977500af6c5efebb4b4176f4bf80bf9b0f850ef374d70455daaff2
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 04 Feb 2021 07:05:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ps55 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-9ps55:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9ps55
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

To summarize the process:

(1) The user creates the Deployment through kubectl

(2) The Deployment creates a ReplicaSet

(3) The ReplicaSet creates the Pods

You can see that a Deployment manages multiple Pod replicas through a ReplicaSet.
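This three-level ownership can be sketched as data: each object records its owner, which is how kubectl describe pod can show "Controlled By: ReplicaSet/...". The structures below are illustrative only, not the real API types (which carry an ownerReferences list in metadata):

```go
package main

import "fmt"

// Minimal picture of the ownership chain Deployment -> ReplicaSet -> Pod.
type Object struct {
	Kind, Name string
	Owner      *Object // nil for the top-level Deployment
}

// Chain walks owners upward, mimicking how a Pod is traced back
// to its ReplicaSet and then to its Deployment.
func Chain(o *Object) []string {
	var out []string
	for cur := o; cur != nil; cur = cur.Owner {
		out = append(out, cur.Kind+"/"+cur.Name)
	}
	return out
}

func main() {
	dep := &Object{Kind: "Deployment", Name: "service-test"}
	rs := &Object{Kind: "ReplicaSet", Name: "service-test-958ccb545", Owner: dep}
	pod := &Object{Kind: "Pod", Name: "service-test-958ccb545-b78xl", Owner: rs}
	fmt.Println(Chain(pod))
}
```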

In this article we learned the roles of the various K8s components and created Pod resources using a Deployment.

Reference

Play with Kubernetes in 5 Minutes a Day