Preface

After the hands-on examples in the previous articles, we should have a reasonable understanding of how applications are deployed on K8S. The deployment templates are not necessarily optimal, but they can solve the deployment needs of most applications. This article takes a closer look at the Kubernetes objects used in the previous articles.

Details on common objects

When you create a Kubernetes object, you must provide the object's spec, which describes the desired state of the object, along with some basic information about the object (such as its name). When an object is created through the Kubernetes API (either directly or via kubectl), the API request must carry the object information in JSON format in the request body. In most cases you provide this information to kubectl in a .yaml file, and kubectl converts it to JSON when it makes the API request.
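
A quick way to see this conversion is to let kubectl print the JSON it would send without creating anything (the file name nginx-cm.yaml is only an example; older kubectl versions use --dry-run instead of --dry-run=client):

kubectl apply -f nginx-cm.yaml --dry-run=client -o json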

Required fields

In the .yaml file for the Kubernetes object you want to create, you need to set the following fields:

  • apiVersion – The version of the Kubernetes API used to create the object
  • kind – The type of object you want to create, for example:
    • ConfigMap
    • PersistentVolume
    • PersistentVolumeClaim
    • Deployment
    • Service
    • Ingress
    • StorageClass (added in this article)
  • metadata – Data that helps uniquely identify the object, including a name string, a UID, and an optional namespace (a minimal skeleton follows this list)
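
Putting the required fields together, a minimal skeleton looks roughly like this (the name example is a placeholder; the spec describing the desired state is filled in per kind, as the examples below show):

apiVersion: apps/v1        # version of the Kubernetes API used to create the object
kind: Deployment           # type of object to create
metadata:
  name: example            # must be unique within the namespace
  namespace: mldong-test   # optional; defaults to "default"
# spec: ...                # desired state; its fields depend on the kind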

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cm
  namespace: mldong-test
data:
  a.conf: |-
    server {
      listen       80;
      server_name  a.mldong.com;
      location / {
        root   /usr/share/nginx/html/a;
        index  index.html index.htm;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }
  b.conf: |-
    server {
      listen       80;
      server_name  b.mldong.com;
      location / {
        root   /usr/share/nginx/html/b;
        index  index.html index.htm;
      }
      error_page   500 502 503 504  /50x.html;
      location = /50x.html {
        root   /usr/share/nginx/html;
      }
    }
    

In this case:

  • The ConfigMap will be created in the mldong-test namespace, as indicated by the .metadata.namespace field.
  • A ConfigMap named nginx-cm is created, as indicated by the .metadata.name field.
  • Two configuration files, a.conf and b.conf, are defined under the .data field as key-value pairs: the key is the file name and the value is the file content.
  • A ConfigMap defined this way can be referenced by a Deployment through .template.spec.volumes[].configMap.name (the commands below show how to create and verify it).
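
Assuming the manifest above is saved as nginx-cm.yaml (the file name is an assumption), it can be created and verified like this:

kubectl apply -f nginx-cm.yaml
kubectl get configmap nginx-cm -n mldong-test -o yaml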

PersistentVolume

A PersistentVolume (PV) provides an API that abstracts how storage is provided by administrators and consumed by users in a cluster. Like a Node, a PV is a cluster resource. A PV is a volume plug-in, but it has a life cycle that is independent of any individual Pod. A PersistentVolumeClaim (PVC) is a user's request for storage: just as a Pod consumes Node resources, a PVC consumes PV resources. A Pod can request specific resources (CPU and memory); similarly, a PVC can request a specific storage size and access modes. Because a PV is a cluster-level resource, it does not belong to any namespace.

The life cycle

  • Provisioning: a PV is created, either statically by an administrator or dynamically through a StorageClass
  • Binding: the PV is bound to a PVC
  • Using: a Pod uses the volume through the PVC
  • Releasing: the Pod stops using the volume and the PVC is deleted, releasing the PV
  • Reclaiming: the PV is reclaimed; it can be kept for future use or deleted from the underlying storage

A static PV example (Alibaba Cloud NAS via the CSI plug-in):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
  labels:
    alicloud-pvname: nginx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: nginx-pv
    volumeAttributes:
      server: "9fdd94bf87-wfq66.cn-zhangjiakou.nas.aliyuncs.com"
      path: "/"
      vers: "3"
  storageClassName: nas

In this case:

  • A PersistentVolume named nginx-pv is created, as indicated by the .metadata.name field.
  • The volume has a capacity of 5Gi, as indicated by the .spec.capacity.storage field.
  • The volume can be mounted read-write by many nodes, as indicated by the .spec.accessModes field:
    • ReadWriteOnce: can be mounted read-write by a single node. Abbreviated RWO.
    • ReadOnlyMany: can be mounted read-only by many nodes. Abbreviated ROX.
    • ReadWriteMany: can be mounted read-write by many nodes. Abbreviated RWX.
  • The storage class used is nas; the CSI plug-in is provided by Alibaba Cloud, see the Alibaba Cloud documentation for details (the commands after this list show how to check the PV's status).
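
After applying the manifest, the PV's STATUS column reflects the life cycle described earlier (Available before binding, Bound once a PVC claims it). Since a PV is cluster-scoped, no namespace is needed:

kubectl get pv nginx-pv
kubectl describe pv nginx-pv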

PersistentVolumeClaim

We will skip the details for now; storage volumes will be covered in a separate article.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pvc-protection
  name: nginx-pvc
  namespace: mldong-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      alicloud-pvname: nginx-pv
  storageClassName: nas
  volumeMode: Filesystem
  volumeName: nginx-pv
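
Unlike the PV, the PVC is namespaced. To confirm that nginx-pvc has bound to nginx-pv:

kubectl get pvc nginx-pvc -n mldong-test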

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: mldong-test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry-vpc.cn-zhangjiakou.aliyuncs.com/mldong/java/nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: port
              protocol: TCP
          volumeMounts:
            - name: nginx-pvc
              mountPath: "/usr/share/nginx/html"
            - name: nginx-cm
              mountPath: "/etc/nginx/conf.d"
      volumes:
        - name: nginx-pvc
          persistentVolumeClaim: 
            claimName: nginx-pvc
        - name: nginx-cm
          configMap:
            name: nginx-cm

In this case:

  • The Deployment will be created in the mldong-test namespace, as indicated by the .metadata.namespace field.
  • A Deployment named nginx is created, as indicated by the .metadata.name field.
  • The Deployment creates one replica Pod, as indicated by the replicas field.
  • The selector field defines how the Deployment finds the Pods it manages. In this case it simply selects a label defined in the Pod template (app: nginx). More sophisticated selection rules are possible, as long as the Pod template itself satisfies them.
  • The template field contains the following subfields:
    • The Pods are labeled app: nginx, using the labels field.
    • The Pod template spec (.template.spec) indicates that the Pod runs one container, nginx, using the latest image from the private image registry.
    • The container is named nginx via the name field.
    • The image pull policy is IfNotPresent, as indicated by the .template.spec.containers[].imagePullPolicy field:
      • IfNotPresent: pull only if the image is not present locally
      • Always: always pull
      • Never: only use local images
    • The port exposed by the container is 80, as indicated by the .template.spec.containers[].ports[].containerPort field.
    • The container mounts two directories, /usr/share/nginx/html and /etc/nginx/conf.d, as indicated by .template.spec.containers[].volumeMounts[]; the name there corresponds to a name in .template.spec.volumes, and the volumes themselves can be defined in several ways, such as configMap, persistentVolumeClaim, and so on.

Besides the fields above, other commonly used container fields include (see the sketch after this list):

  • Environment variables: .template.spec.containers[].env
  • Startup command: .template.spec.containers[].command
  • Liveness probe: .template.spec.containers[].livenessProbe
  • Resource requests and limits: .template.spec.containers[].resources
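
A minimal sketch of what these fields look like inside a container spec; the values below (command, time zone, probe path, resource numbers) are placeholders for illustration, not taken from the manifests in this article:

      containers:
        - name: nginx
          image: nginx:latest
          # startup command (overrides the image's default entrypoint)
          command: ["nginx", "-g", "daemon off;"]
          # environment variables
          env:
            - name: TZ
              value: Asia/Shanghai
          # liveness probe: restart the container if / stops responding on port 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          # resource requests and limits
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi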

Service

Service access modes

ClusterIP way

A ClusterIP Service provides a fixed virtual IP inside the cluster for accessing the Pods behind it. By default the IP address is allocated automatically; you can also specify a fixed one via the spec.clusterIP field.

  • The Service will be created in the mldong-test namespace, as indicated by the .metadata.namespace field.
  • A Service named nginx is created, as indicated by the .metadata.name field.
  • The Service type is ClusterIP, as indicated by the .spec.type field.
  • The container port it forwards to is 80, as indicated by the .spec.ports[].targetPort field.
  • The label selector selects Pods labeled app: nginx, as indicated by .spec.selector, corresponding to the Deployment's .template.metadata.labels.
  • Inside the cluster, the Service can be reached at http://clusterIP:port (or via its DNS name).
  • You can obtain the cluster IP by running kubectl get svc -n mldong-test.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: mldong-test
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
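
A ClusterIP Service is only reachable from inside the cluster. Assuming the default cluster DNS domain, it can be checked and accessed like this (the curl must be run from inside a Pod):

kubectl get svc nginx -n mldong-test    # shows the automatically allocated CLUSTER-IP
# from inside any Pod in the cluster:
#   curl http://nginx.mldong-test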

NodePort way

A NodePort Service opens a port on every node so that clients outside the cluster can access the Pods behind the Service.

The traffic flow is: Client ----> NodeIP:NodePort ----> ClusterIP:ServicePort ----> PodIP:ContainerPort

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
  namespace: mldong-test
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    # nodePort: 32180
  selector:
    app: nginx

In this case:

  • The Service will be created in the mldong-test namespace, as indicated by the .metadata.namespace field.
  • A Service named nginx-nodeport is created, as indicated by the .metadata.name field.
  • The Service type is NodePort, as indicated by the .spec.type field.
  • The container port it forwards to is 80, as indicated by the .spec.ports[].targetPort field.
  • The label selector selects Pods labeled app: nginx, as indicated by .spec.selector, corresponding to the Deployment's .template.metadata.labels.
  • The Service can be accessed from outside the cluster via http://nodeIP:nodePort.
  • You can obtain the node IPs by running kubectl get nodes -o wide.
  • You can obtain the nodePort by running kubectl get svc -n mldong-test (see the example after this list).
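
Putting it together (32180 is the nodePort commented out in the manifest above; if it is omitted, Kubernetes assigns one from the default 30000-32767 range, which kubectl get svc will show):

kubectl get nodes -o wide                         # node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
kubectl get svc nginx-nodeport -n mldong-test     # shows the assigned nodePort
curl http://<nodeIP>:<nodePort>                   # replace with real values from the commands above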

Ingress

Ingress is an API object that manages external access to services in a cluster, typically through HTTP.

Ingress can provide load balancing, SSL termination, and name-based virtual hosting

 internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: nginx-ingress
  namespace: mldong-test
spec:
  rules:
    - host: a.mldong.com
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /
    - host: b.mldong.com
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
            path: /

In this case:

  • The Ingress will be created in the mldong-test namespace, as indicated by the .metadata.namespace field.
  • An Ingress named nginx-ingress is created, as indicated by the .metadata.name field.

Each HTTP rule contains the following information:

  • An optional host. If no host is specified, the rule applies to all inbound HTTP traffic through the specified IP address. In this example hosts are provided (a.mldong.com and b.mldong.com), so each rule applies only to its host.
  • A list of paths (for example /), each of which has an associated backend defined by serviceName and servicePort. Both the host and the path must match the incoming request before the load balancer directs traffic to the referenced Service.
    • serviceName corresponds to the Service's .metadata.name
    • servicePort corresponds to the Service's .spec.ports[].port

Check the command

kubectl get ingress -n mldong-test

In the output, ADDRESS is the IP address that the domain names should resolve to (on Alibaba Cloud it is the IP address bound to the load balancer). To test from an intranet machine, add the following records to /etc/hosts, replacing IP with the ADDRESS value:

IP a.mldong.com
IP b.mldong.com

You can then access the services as follows:

curl a.mldong.com
curl b.mldong.com

Summary

Honestly, writing pure explanation like this is not my strong suit; I will keep practicing.

Related articles

Take you through K8S – cluster creation and Hello World

Take you through K8S – ConfigMap and persistent storage

Take you through K8S – completely releasing an externally accessible service

Take you through K8S – Docker advanced: Dockerfile and docker-compose

Take you through K8S – one-click deployment of a SpringBoot project

Take you through K8S – one-click deployment of a Vue project