
Preface

In the previous article, I introduced the K8S infrastructure workflow and its core components. This article continues with more K8S concepts.

The NodePort Service solves the problem of exposing internal K8S applications to external requests. Now let's look at how to build an application service cluster.

Application clusters

With traditional applications, we generally use an nginx reverse proxy: the domain name is configured to point at multiple IP addresses, forming an application cluster. If you need to add or remove application instances, you have to adjust the nginx configuration by hand, which is tedious.
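As a minimal sketch of this traditional approach (the upstream addresses and domain name are hypothetical, not from the article):

```nginx
# Hypothetical upstream pool; adding or removing an instance
# means editing this list and reloading nginx by hand.
upstream app-cluster {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    server_name app.example.com;   # hypothetical domain

    location / {
        proxy_pass http://app-cluster;   # requests are spread across the pool
    }
}
```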

How does K8S implement application clustering?

ReplicaSet

In the previous article, the NodePort Service used a Selector to match Pods by Label and route each request to one of the backend Pods.

The application cluster in the figure above consists of three Pods. How do we ensure the high availability of the Pod cluster? If one of the Pods crashes or is deleted, what will K8S do?

K8S has a ReplicaSet component, which, as the name suggests, manages a set of replicas. Its role is to ensure the high availability of Pods. If the number of replicas defined in the ReplicaSet is 3, it will maintain that number: even if a Pod dies, a new one is automatically started, always keeping the Pod count at 3.

Writing the YAML

apiVersion: apps/v1   # API version (apps/v1 replaces the deprecated extensions/v1beta1)
kind: ReplicaSet      # type of resource to create
metadata:
  name: mc-user
spec:
  replicas: 3         # desired number of replicas
  selector:           # required in apps/v1: which Pods this ReplicaSet manages
    matchLabels:
      app: mc-user
  template:           # Pod template
    metadata:         # metadata/attributes of the resource
      labels:         # label definition
        app: mc-user  # label value
    spec:             # resource spec
      containers:     # container definitions
        - name: mc-user          # container name
          image: rainbow/mc-user:1.0.RELEASE   # container image

The above defines a ReplicaSet for the mc-user Pod, keeping the replica count at 3. The Service YAML is the same as before; note that its Selector matches the app: mc-user label, and the external access port is 31001.

apiVersion: v1
kind: Service
metadata: 
  name: mc-user
spec: 
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 31001
  selector:
    app: mc-user
  type: NodePort

Run kubectl apply -f on both files to create the ReplicaSet and the Service.

We can list the three Pods that were started and pick one to delete.

# kubectl get all
# kubectl delete po mc-user-6adfw

Let’s look at the pod again

kubectl get all

There are still three Pods: even though we deleted one, the ReplicaSet started another Pod to replace it.

This is ReplicaSet's self-healing ability.
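A minimal sketch of observing the self-healing behavior (a running cluster is assumed, and the Pod name suffix is hypothetical — K8S generates random suffixes):

```shell
# In one terminal, watch the Pods matching our label
kubectl get pods -l app=mc-user -w

# In another terminal, delete one Pod (name is hypothetical)
kubectl delete pod mc-user-6adfw

# The watch output shows the old Pod terminating while the
# ReplicaSet creates a new one, keeping the count at 3
```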

Rolling Update

Let’s start by talking about what a rolling release is. A rolling release is an advanced release strategy in which an older release is replaced in batches, gradually upgrading to a new release. During the release process, applications are not interrupted and user experience is smooth.

Suppose the Pods are currently running version V1 and we want to upgrade to V2. What does the whole process look like?

  1. Delete one of the V1 Pods.
  2. Start a Pod running V2.
  3. Delete another V1 Pod.
  4. Start another V2 Pod.
  5. Delete the last V1 Pod.
  6. Start the last V2 Pod.

The upgrade is complete.

We can see the nature of rolling releases: the old and new versions coexist for a while. This release strategy therefore suits applications whose versions are compatible with each other. Rolling back is also supported. (A blue-green release, by contrast, switches all traffic between two complete environments at once.)

The rolling-release abstraction: Deployment

ReplicaSet is a wrapper around Pods, and Deployment is in turn a wrapper around ReplicaSet on top of that.

Note: ReplicaSet and Deployment are abstractions with no dedicated running component of their own; they are concepts (API objects) that make the model easier to understand.

The diagram above depicts the Deployment rolling-release architecture; the rolling release is transparent to users, who notice no interruption in requests or service.
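The pace of a rolling release can be tuned through the Deployment's update strategy; a minimal sketch (the field values here are illustrative, not from the article):

```yaml
spec:
  strategy:
    type: RollingUpdate    # default strategy for Deployments
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 1    # at most 1 Pod may be unavailable during the update
```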

The Deployment YAML

apiVersion: apps/v1   # must be an API version supported by the cluster (see kubectl api-versions)
kind: Deployment      # type of resource to create
metadata:
  name: mc-user
spec:
  selector:           # which Pods this Deployment manages for the rolling release
    matchLabels:      # must match the labels in the template below
      app: mc-user
  minReadySeconds: 10 # wait at least 10s, making the rolling release easy to observe
  replicas: 3         # desired number of replicas
  template:           # Pod template
    metadata:         # metadata/attributes of the resource
      labels:         # label definition
        app: mc-user  # label value
    spec:             # resource spec
      containers:     # container definitions
        - name: mc-user          # container name
          image: rainbow/mc-user:1.0.RELEASE   # container image

The YAML is similar to the ReplicaSet's; pay attention to the selector section:

selector:        # which Pods this Deployment manages for the rolling release
  matchLabels:   # must match the labels in the template below
    app: mc-user

It defines which labeled Pods this Deployment manages.

The Service YAML

The Service YAML is unchanged: it defines a selector that simply matches the label.

apiVersion: v1
kind: Service
metadata: 
  name: mc-user
spec: 
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 31000
  selector:
    app: mc-user
  type: NodePort

We use kubectl apply -f to create the Deployment and the Service.

If we run kubectl get all, we can see two kinds of resources:

deployment.apps/mc-user and replicaset.apps/mc-user-4345

To upgrade, just change the image name in the Deployment YAML and apply it again:

image: rainbow/mc-user:1.1.RELEASE   # container image

Running kubectl get all again, we can find two ReplicaSets: one for the old version and one for the new. The number of old-version Pods decreases gradually while the number of new-version Pods increases, until the new version reaches 3 and the old version reaches 0.
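A minimal sketch of watching the rollout progress (assumes the Deployment above is running in a cluster):

```shell
# Follow the rollout until it completes
kubectl rollout status deployment/mc-user

# List both ReplicaSets; the old one scales down to 0
# as the new one scales up to 3
kubectl get replicaset -l app=mc-user
```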

Rolling back a version

If we find problems with the version, we can roll back the version using the following command

kubectl rollout undo deployment/mc-user

This takes us back to the old 1.0.RELEASE version.
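Before undoing, you can also inspect the revision history and roll back to a specific revision; a minimal sketch (revision numbers depend on your cluster's state):

```shell
# List recorded revisions of the Deployment
kubectl rollout history deployment/mc-user

# Roll back to a specific revision instead of just the previous one
kubectl rollout undo deployment/mc-user --to-revision=1
```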

ConfigMap configuration

In daily business development we need to configure some parameters, for example one-off static configuration (database connection string, username, password) and configuration that changes at runtime (such as a purchase limit). How does a Pod in K8S obtain external configuration information?

As shown in the figure above, K8S provides ConfigMap, which the user configures externally; K8S then delivers the ConfigMap to the Pod's containers either as environment variables or as files on a mounted volume.

Shared configuration

Since many services often need the same configuration, a microservice deployment can share a single set of configuration information, as shown in the figure below.

One ConfigMap can be provided to multiple services, and its entries are injected as environment variables into each service.

The ConfigMap YAML

apiVersion: v1        # must be an API version supported by the cluster
kind: ConfigMap       # type of resource to create
metadata:
  name: mc-user-config
data:                 # configuration entries
  DATASOURCE_URL: jdbc:mysql://mysql/mc-user
  DATASOURCE_USERNAME: root
  DATASOURCE_PASSWORD: "123456"   # quoted: ConfigMap data values must be strings

Modify the Deployment configuration file

Add the envFrom attribute

apiVersion: apps/v1   # must be an API version supported by the cluster
kind: Deployment      # type of resource to create
metadata:
  name: mc-user
spec:
  selector:           # which Pods this Deployment manages for the rolling release
    matchLabels:      # must match the labels in the template below
      app: mc-user
  minReadySeconds: 10 # wait at least 10s, making the rolling release easy to observe
  replicas: 3         # desired number of replicas
  template:           # Pod template
    metadata:         # metadata/attributes of the resource
      labels:         # label definition
        app: mc-user  # label value
    spec:             # resource spec
      containers:     # container definitions
        - name: mc-user          # container name
          image: rainbow/mc-user:1.0.RELEASE   # container image
          envFrom:               # sources of environment variables
            - configMapRef:      # reference to a ConfigMap
                name: mc-user-config   # name of the ConfigMap

configMapRef in envFrom references the ConfigMap by name; this lets the Pod's containers pick up the ConfigMap entries as environment variables. We can verify with:

kubectl exec mc-user-34wrwq-3423 -- printenv | grep DATASOURCE

Get the environment variables in the POD container.
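The article mentioned that a ConfigMap can also be delivered as files on a volume; a minimal sketch of that alternative (the mount path is illustrative, and the image name is reused from the article's example), inside the Pod template:

```yaml
spec:                  # Pod template spec (fragment)
  containers:
    - name: mc-user
      image: rainbow/mc-user:1.0.RELEASE
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # each ConfigMap key becomes a file here
  volumes:
    - name: config-volume
      configMap:
        name: mc-user-config       # the ConfigMap defined above
```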

ConfigMap change

If the service is already running and we update the ConfigMap configuration information, will the container in the POD get the new configuration information immediately?

Unfortunately not. After the ConfigMap is updated and re-applied with kubectl apply -f, the existing Pod containers still see the old environment variables, not the latest configuration.

How do we get the Pod containers to use the latest ConfigMap values? We can delete the Pods: because the ReplicaSet keeps the Pod count constant and automatically restarts them, the new Pods will pick up the new configuration.
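A gentler alternative (assuming a reasonably recent kubectl; the ConfigMap filename is hypothetical) is to trigger a rolling restart of the Deployment, which replaces the Pods gradually instead of deleting them by hand:

```shell
# Re-apply the updated ConfigMap first
kubectl apply -f mc-user-config.yaml   # hypothetical filename

# Then restart the Deployment; the replacement Pods read the updated ConfigMap
kubectl rollout restart deployment/mc-user
```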

Conclusion

Today we introduced the K8S concepts of ReplicaSet, rolling release with Deployment, and ConfigMap. The next article will cover network-related models. Thanks!
