Abstract: A cloud is composed of many small water droplets: imagine each computer as a small droplet, and the droplets combine to form a cloud. Typically, the drops appear first, followed by the platforms that manage them (e.g. OpenStack, Kubernetes).
Cloud computing — independent universes
1. A cloud is made up of many small water droplets. Imagine each computer as a small water drop; united, they form a cloud. Traditionally, a drop is a VM; the appearance of Docker changed the granularity of the droplets.
2. Each water droplet can run independently, with a complete interior (such as a VM or a Docker container).
3. Generally, the water droplets appear first, and then the platforms for managing them (such as OpenStack and Kubernetes) appear.
Introduction to Kubernetes
1. Kubernetes is an open-source platform for managing containerized applications across multiple hosts in the cloud. Its goal is to make the deployment of containerized applications simple and efficient (and powerful); it provides mechanisms for application deployment, scheduling, updating, and maintenance.
2. One of the core features of Kubernetes is that it manages containers in the cloud platform autonomously, ensuring that they run according to the user's expectations. For example, if the user wants DLCatalog to keep running, the user does not need to care how: Kubernetes automatically monitors the container and restarts or recreates it as needed. In short, it keeps the DLCatalog service available at all times.
3. In Kubernetes, all containers run inside Pods, and a Pod can hold one or more related containers.
Kubernetes
1.Pod
In Kubernetes, the smallest unit of management is not an individual container, but a Pod. A Pod is a "logical host" in a container environment: it is composed of one or more related containers that share storage. Ports must not be duplicated between containers in the same Pod; otherwise the Pod will fail to start, or will restart indefinitely.
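As a sketch of the "logical host" idea, the hypothetical manifest below (names and images are illustrative, not from this article) defines one Pod with two containers that share a disk through a volume; note the two containers must listen on different ports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod        # hypothetical example Pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}               # scratch disk shared by both containers
  containers:
    - name: web
      image: nginx               # serves on port 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox             # must not also bind port 80
      command: ["sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
```

Here the sidecar writes a file into the shared volume and the web container serves it, which is the typical reason to co-locate containers in one Pod.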
2. Node
Node is the host on which Pods actually run; it can be a physical or a virtual machine. To manage Pods, each Node must run at least a container runtime (such as Docker), kubelet, and kube-proxy. Nodes are not created by Kubernetes itself; Kubernetes only manages the resources on Nodes. Although it is possible to create a Node object from a manifest (as shown in the JSON below), Kubernetes only checks whether such a Node really exists, and will not schedule Pods onto it if the check fails:
```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.63.90.18",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}
```
3. Service
Service is an abstract concept and the essence of K8s. Each App on K8s can apply for a "name" inside the cluster to represent itself; K8s assigns your App a Service with a virtual cluster IP on it, and any access to this IP from within the cluster is equivalent to accessing your App.
Suppose we have Pods, each with port 9083 open and each carrying the label app=MyApp. The manifest below creates a new Service object named my-dlcatalog-metastore-service that targets port 9083 on any Pod labeled app=MyApp. The Service is assigned an IP address handled by kube-proxy; accessing this IP from within the cluster is equivalent to accessing your App. Note that a Pod's actual IP inside K8s is generally not used directly.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-dlcatalog-metastore-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 20403
      targetPort: 9083
```
4. ConfigMap
ConfigMap is a key-value store for configuration data. It can hold individual properties or whole configuration files. ConfigMap is similar to Secret, but makes it easier to handle strings that do not contain sensitive information.
A ConfigMap can be mounted directly as a file or directory using a volume. The following mounts the created ConfigMap into the Pod's /etc/config directory:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-test-pod
spec:
  containers:
    - name: test-container
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
      command: [ "/bin/sh", "bin/start_server.sh" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
```
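The Pod above assumes a ConfigMap named special-config already exists in the same namespace. A minimal sketch of such a ConfigMap (the keys and values here are hypothetical placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config      # the name referenced by the Pod's volume
data:
  special.level: "very"     # hypothetical keys; when mounted as a volume,
  special.type: "charm"     # each key becomes a file under /etc/config
```

With this mounted, the container would see files /etc/config/special.level and /etc/config/special.type containing the corresponding values.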
Advanced Kubernetes resource scheduling
Scheduling to a specified Node
There are three ways to specify that a Pod should run only on certain Nodes.
Method 1:
nodeSelector: the Pod is scheduled only to Nodes that match the specified labels
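As an illustration of nodeSelector, the hypothetical manifest below (Pod name, label, and image are made up for the example) restricts a Pod to Nodes labeled disktype=ssd:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector   # hypothetical example Pod
spec:
  nodeSelector:
    disktype: ssd            # schedule only to Nodes labeled disktype=ssd
  containers:
    - name: app
      image: nginx
```

If no Node carries the disktype=ssd label, the Pod simply stays Pending; nodeSelector is a hard requirement.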
Method 2:
nodeAffinity: a more expressive Node selector that supports, for example, set operations
nodeAffinity currently supports two types, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, representing hard requirements and soft preferences respectively.
For example, the manifest below requires a Node whose label kubernetes.io/e2e-az-name has the value e2e-az1 or e2e-az2; among those, Nodes with the label another-node-label-key=another-node-label-value are preferred:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                  - e2e-az1
                  - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                  - another-node-label-value
  containers:
    - name: with-node-affinity
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
```
Method 3:
podAffinity: schedules a Pod onto a Node based on the Pods already running there
podAffinity selects Nodes based on the labels of Pods rather than of Nodes; both podAffinity and podAntiAffinity are supported.
This feature is rather convoluted, so the following two examples serve as explanations.
The first example shows:
The Pod may only be scheduled to a Node in a Zone that contains at least one running Pod with the label security=S1, and preferably not to a Node (hostname) that contains at least one running Pod with the label security=S2:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: security
                operator: In
                values:
                  - S1
          topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: security
                  operator: In
                  values:
                    - S2
            topologyKey: kubernetes.io/hostname
  containers:
    - name: with-node-affinity
      image: 10.63.30.148:20202/ei_cnnroth7a/jwsdlcatalog-x86_64:1.0.1.20200918144530
```
The second example shows:
If the Zone in which a Node is located contains at least one running Pod with the label appVersion=jwsdlcatalog-x86_64-1.0.1.20200918144530, it is preferred that the Pod not be scheduled to that Node; and it is never scheduled to a Node that contains at least one running Pod with the label app=jwsdlcatalog-x86_64:
```yaml
securityContext:
  runAsUser: 2000
  fsGroup: 2000
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: appVersion
                operator: In
                values:
                  - concat:
                      - get_input: IMAGE_NAME
                      - '-'
                      - get_input: IMAGE_VERSION
          # numOfMatchingPods: "2"  # do not add this field; it is Huawei's
          # own implementation, which the community did not accept
          topologyKey: "failure-domain.beta.kubernetes.io/zone"
        weight: 100
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - get_input: IMAGE_NAME
        numOfMatchingPods: "1"
        topologyKey: "kubernetes.io/hostname"
containers:
  - image:
      concat:
        - get_input: IMAGE_ADDR
        - "/"
        - get_input: IMAGE_ADDR
        - ":"
        - get_input: IMAGE_VERSION
    name: jwsDLcatalog
```
Note: this article is purely personal opinion; any resemblance in the pictures is pure coincidence.