Source code: github.com/huayin-open…

Check the cluster

Before deploying applications, check the cluster environment: make sure the cluster is healthy, note the cluster version, and verify that the nodes in the cluster are running properly.

View context

> kubectl config get-contexts                  
CURRENT   NAME               CLUSTER            AUTHINFO                               NAMESPACE
          docker-desktop     docker-desktop     docker-desktop                         
*         minikube           minikube           minikube                               default

Context switching

> kubectl config use-context minikube
Switched to context "minikube".

View the K8S version information

> kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/arm64"}

Viewing Cluster Status

If the cluster is not in a running state, you can check the status of the kubelet, because kube-apiserver is managed by the kubelet.

> kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
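
If the control plane does not respond at all, a quick check on the node itself can help. A minimal sketch, assuming a systemd-managed kubelet (Docker Desktop and minikube run the kubelet inside their own VM, so this applies to regular Linux nodes):

> systemctl status kubelet                      # is the kubelet active (running)?
> journalctl -u kubelet --since "10 min ago"    # recent kubelet logs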

View the cluster nodes (there is only one node)

> kubectl get no -o wide
NAME             STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION     CONTAINER-RUNTIME
docker-desktop   Ready    control-plane,master   3d21h   v1.22.4   192.168.65.4   <none>        Docker Desktop   5.10.76-linuxkit   docker://20.10.11

View the node resource definition

> kubectl explain no
KIND:     Node
VERSION:  v1

DESCRIPTION:
     Node is a worker node in Kubernetes. Each node will have a unique
     identifier in the cache (i.e. in etcd).

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Spec defines the behavior of a node.
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Most recently observed status of the node. Populated by the system.
     Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

Deploy the back-end Redis

Create the Redis leader Deployment [create]

> kubectl create -f redis-leader-deployment.yaml
deployment.apps/redis-leader created
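
The YAML file itself is not reproduced in the original session. A minimal sketch of what redis-leader-deployment.yaml plausibly contains, reconstructed from the Pod output shown below (the image, labels, container port, and resource requests all appear there; the exact selector is an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:        # assumed; must match the Pod template labels
      app: redis
      role: leader
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: registry.cn-shenzhen.aliyuncs.com/kubeops/redis:6.0.5
        ports:
        - containerPort: 6379
        resources:
          requests:
            cpu: 100m
            memory: 100Mi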

View the Deployment status

> kubectl get deployments -l app=redis -l role=leader
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
redis-leader   1/1     1            1           63m

Check Pod status [YAML]

> kubectl get po -l app=redis -l role=leader -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-01-02T12:43:25Z"
  generateName: redis-leader-5d66d78fcb-
  labels:
    app: redis
    pod-template-hash: 5d66d78fcb
    role: leader
    tier: backend
  name: redis-leader-5d66d78fcb-l2drh
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: redis-leader-5d66d78fcb
    uid: 942d6581-6ab6-4f5a-b849-1f80ab2e7e21
  resourceVersion: "161467"
  uid: a5fc9aa2-9534-43af-b09b-7240457cda1a
spec:
  containers:
  - image: registry.cn-shenzhen.aliyuncs.com/kubeops/redis:6.0.5
    imagePullPolicy: IfNotPresent
    name: leader
    ports:
    - containerPort: 6379
      protocol: TCP
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-kjzqs
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: docker-desktop
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-kjzqs
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-01-02T12:43:25Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-01-02T12:43:26Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-01-02T12:43:26Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-01-02T12:43:25Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://30b948eff36f3bbc4489e4b338e7e95f1150a4c5281e4d33031d9f28027f2819
    image: redis:6.0.5
    imageID: docker-pullable://redis@sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
    lastState: {}
    name: leader
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-01-02T12:43:25Z"
  hostIP: 192.168.65.4
  phase: Running
  podIP: 10.1.0.51
  podIPs:
  - ip: 10.1.0.51
  qosClass: Burstable
  startTime: "2022-01-02T12:43:25Z"

Check for Pod startup failures

If the Pod STATUS does not change to Running for a long time, you can view the Pod details with kubectl describe po <pod-name>.

> kubectl describe po redis-leader-fb76b4755-nw4x4
Name:         redis-leader-fb76b4755-nw4x4
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Sun, 02 Jan 2022 20:10:26 +0800
Labels:       app=redis
              pod-template-hash=fb76b4755
              role=leader
              tier=backend
Annotations:  <none>
Status:       Running
IP:           10.1.0.46
IPs:
  IP:  10.1.0.46
Controlled By:  ReplicaSet/redis-leader-fb76b4755
Containers:
  leader:
    Container ID:   docker://8df17907fec2b38966ec46c9d71bfc89fe77519952df327449d54150c4e1384b
    Image:          docker.io/redis:6.0.5
    Image ID:       docker-pullable://redis@sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 02 Jan 2022 20:10:27 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ggts8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-ggts8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  4m9s  default-scheduler  Successfully assigned default/redis-leader-fb76b4755-nw4x4 to docker-desktop
  Normal  Pulled     4m8s  kubelet            Container image "docker.io/redis:6.0.5" already present on machine
  Normal  Created    4m8s  kubelet            Created container leader
  Normal  Started    4m8s  kubelet            Started container leader

Create the Redis leader Service

> kubectl create -f redis-leader-service.yaml
service/redis-leader created
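
The Service file is likewise not shown. A sketch of redis-leader-service.yaml reconstructed from the Service status below (name, port, and selector):

apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend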

Check the Redis leader Service status [wide]

> kubectl get svc -l app=redis -l role=leader -o wide
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-leader   ClusterIP   10.105.105.38   <none>        6379/TCP   23m   app=redis,role=leader,tier=backend

Create two Redis follower replicas

> kubectl create -f redis-follower-deployment.yaml
deployment.apps/redis-follower created
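
Again the file is not shown. A sketch reconstructed from the outputs in this section: two replicas, the follower labels, and a container named follower (the later kubectl set image call targets that name). The image is assumed from the upstream guestbook example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: follower
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2   # assumed; mirror to a reachable registry if needed
        ports:
        - containerPort: 6379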

Verify that the two Redis follower replicas are running

> kubectl get po -l app=redis -l role=follower
NAME                              READY   STATUS    RESTARTS   AGE
redis-follower-74cc7db576-fstd2   1/1     Running   0          106s
redis-follower-74cc7db576-kwbsc   1/1     Running   0          106s

Create the Redis follower Service

> kubectl create -f redis-follower-service.yaml
service/redis-follower created

Check the Redis follower Service status [json]

> kubectl get svc -l app=redis -l role=follower -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "creationTimestamp": "2022-01-02T12:46:02Z",
                "labels": {
                    "app": "redis",
                    "role": "follower",
                    "tier": "backend"
                },
                "name": "redis-follower",
                "namespace": "default",
                "resourceVersion": "161635",
                "uid": "92c2298a-cc31-4c3b-b1fb-bdbf25f80975"
            },
            "spec": {
                "clusterIP": "10.105.1.89",
                "clusterIPs": [
                    "10.105.1.89"
                ],
                "internalTrafficPolicy": "Cluster",
                "ipFamilies": [
                    "IPv4"
                ],
                "ipFamilyPolicy": "SingleStack",
                "ports": [
                    {
                        "port": 6379,
                        "protocol": "TCP",
                        "targetPort": 6379
                    }
                ],
                "selector": {
                    "app": "redis",
                    "role": "follower",
                    "tier": "backend"
                },
                "sessionAffinity": "None",
                "type": "ClusterIP"
            },
            "status": {
                "loadBalancer": {}
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}

Deploy the message board front end

Create a front-end Deployment [apply]

> kubectl apply -f frontend-deployment.yaml
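
frontend-deployment.yaml is not reproduced either. A sketch based on the outputs below (three replicas, the app=guestbook,tier=frontend labels) and the gb-frontend:v5 image pulled later in this article; the container name is assumed from the upstream guestbook example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis   # assumed container name
        image: gcr.io/google_samples/gb-frontend:v5
        ports:
        - containerPort: 80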

Verify that three front-end replicas are running

> kubectl get po -l app=guestbook -l tier=frontend -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP          NODE             NOMINATED NODE   READINESS GATES
frontend-57756596cb-m2jj7   1/1     Running   0          6m51s   10.1.0.54   docker-desktop   <none>           <none>
frontend-57756596cb-skz9h   1/1     Running   0          6m51s   10.1.0.53   docker-desktop   <none>           <none>
frontend-57756596cb-xqc6n   1/1     Running   0          6m51s   10.1.0.55   docker-desktop   <none>           <none>

Create the front-end Service

> kubectl apply -f frontend-service.yaml
service/frontend created

Verify that the front-end service is running

> kubectl get svc -l tier=frontend
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
frontend   ClusterIP   10.105.140.183   <none>        80/TCP    18m

View the front-end service through kubectl port-forward

Run the following command to forward port 8080 on the local machine to port 80 on the service.

> kubectl port-forward svc/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
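
With the forward in place, the guestbook should answer on the local port. A generic smoke test (its output was not captured in the original session), or simply open http://localhost:8080 in a browser:

> curl -s http://localhost:8080/ | head -n 5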

Scale the web front end

> kubectl scale deployment frontend --replicas=5

Verify the number of front-end pods running

> kubectl get po -l app=guestbook -l tier=frontend
NAME                        READY   STATUS    RESTARTS   AGE
frontend-57756596cb-g76s5   1/1     Running   0          46s
frontend-57756596cb-h8xhm   1/1     Running   0          46s
frontend-57756596cb-m2jj7   1/1     Running   0          38m
frontend-57756596cb-skz9h   1/1     Running   0          38m
frontend-57756596cb-xqc6n   1/1     Running   0          38m

View the Redis Pod log

> kubectl logs redis-leader-5d66d78fcb-pvd84
1:C 02 Jan 2022 12:44:53.370 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 02 Jan 2022 12:44:53.370 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 02 Jan 2022 12:44:53.370 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 02 Jan 2022 12:44:53.370 * Running mode=standalone, port=6379.
1:M 02 Jan 2022 12:44:53.370 # Server initialized
1:M 02 Jan 2022 12:44:53.370 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as  root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 02 Jan 2022 12:44:53.371 * Ready to accept connections
1:M 02 Jan 2022 12:44:53.975 * Replica 10.1.0.50:6379 asks for synchronization
1:M 02 Jan 2022 12:44:53.975 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '5f3f86fcc80e66d778048b0d0876f094154ebb65', my replication IDs are 'ef5856ce660b088d5e8f1a75a50d07d624d18326' and '0000000000000000000000000000000000000000')
1:M 02 Jan 2022 12:44:53.975 * Replication backlog created, my new replication IDs are '6e7879f5f854ed376103ea1eb10c5579d21f2d64' and '0000000000000000000000000000000000000000'
1:M 02 Jan 2022 12:44:53.975 * Starting BGSAVE for SYNC with target: disk
1:M 02 Jan 2022 12:44:53.975 * Background saving started by pid 21
1:M 02 Jan 2022 12:44:53.976 * Replica 10.1.0.49:6379 asks for synchronization
1:M 02 Jan 2022 12:44:53.977 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '5f3f86fcc80e66d778048b0d0876f094154ebb65', my replication IDs are '6e7879f5f854ed376103ea1eb10c5579d21f2d64' and '0000000000000000000000000000000000000000')
1:M 02 Jan 2022 12:44:53.977 * Waiting for end of BGSAVE for SYNC
21:C 02 Jan 2022 12:44:53.978 * DB saved on disk
21:C 02 Jan 2022 12:44:53.979 * RDB: 0 MB of memory used by copy-on-write
1:M 02 Jan 2022 12:44:54.073 * Background saving terminated with success
1:M 02 Jan 2022 12:44:54.073 * Synchronization with replica 10.1.0.50:6379 succeeded
1:M 02 Jan 2022 12:44:54.073 * Synchronization with replica 10.1.0.49:6379 succeeded

Enter the Redis container to view the data

> kubectl exec -it redis-leader-5d66d78fcb-pvd84 -- redis-cli
127.0.0.1:6379> KEYS *
1) "guestbook"
127.0.0.1:6379> GET guestbook
",test,hello"

Follow-up

Add a status label to all Redis Pods

> kubectl label pods -l app=redis status=healthy
pod/redis-follower-74cc7db576-fstd2 labeled
pod/redis-follower-74cc7db576-kwbsc labeled
pod/redis-leader-5d66d78fcb-pvd84 labeled

Check the Redis Pod labels [--show-labels]

> kubectl get po -l app=redis --show-labels                    
NAME                              READY   STATUS    RESTARTS   AGE   LABELS
redis-follower-74cc7db576-fstd2   1/1     Running   0          96m   app=redis,pod-template-hash=74cc7db576,role=follower,status=healthy,tier=backend
redis-follower-74cc7db576-kwbsc   1/1     Running   0          96m   app=redis,pod-template-hash=74cc7db576,role=follower,status=healthy,tier=backend
redis-leader-5d66d78fcb-pvd84     1/1     Running   0          85m   app=redis,pod-template-hash=5d66d78fcb,role=leader,status=healthy,tier=backend

Update the Redis leader Pod label

> kubectl label pods -l app=redis -l role=leader status=unhealthy --overwrite
pod/redis-leader-5d66d78fcb-pvd84 labeled

View Redis leader resource usage

> kubectl top po -l app=redis -l role=leader
NAME                            CPU(cores)   MEMORY(bytes)   
redis-leader-5d66d78fcb-9bm92   2m           3Mi

Update the number of Redis follower replicas using edit

> kubectl edit deploy/redis-follower
deployment.apps/redis-follower edited

# after changing the number of replicas from 2 to 3:
> kubectl get pods -l app=redis -l role=follower
NAME                              READY   STATUS    RESTARTS   AGE
redis-follower-74cc7db576-cvf2q   1/1     Running   0          9m29s
redis-follower-74cc7db576-mz6h8   1/1     Running   0          25s
redis-follower-74cc7db576-twq9x   1/1     Running   0          9m29s
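
kubectl edit opens the live Deployment manifest in your editor and applies whatever you save. The only field touched here is spec.replicas; a sketch of the relevant fragment:

# in the editor opened by "kubectl edit deploy/redis-follower":
spec:
  replicas: 3   # was 2; save and quit to apply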

Resolve image pull problems

The guestbook front-end image is hosted on Google's gcr.io, so if no proxy is available, synchronize the image from a Hong Kong server to the Alibaba Cloud container image service.

Pull the image

> docker pull gcr.io/google_samples/gb-frontend:v5
v5: Pulling from google_samples/gb-frontend
72a69066d2fe: Pull complete 
fbf13c1e88c3: Pull complete 
cddf91161400: Pull complete 
2c396aa97b98: Pull complete 
1d9707294ce1: Pull complete 
443be0efd1a3: Pull complete 
f40e54f5a6bb: Pull complete 
449e25c19260: Pull complete 
4116245e7948: Pull complete 
063a257bdaed: Pull complete 
ba6b06b0aa4f: Pull complete 
331cb0169fcf: Pull complete 
7889266700ad: Pull complete 
2648f8d3ecd7: Pull complete 
2b19c0592f6e: Pull complete 
c3b640245fb3: Pull complete 
4a8a6bc16a1b: Pull complete 
Digest: sha256:1ffc7816e028b2e2f2b592594383a0139b9f570ff5fcc5fdfd81806aa8d403bf
Status: Downloaded newer image for gcr.io/google_samples/gb-frontend:v5
gcr.io/google_samples/gb-frontend:v5

Re-tag the image

> docker tag gcr.io/google_samples/gb-frontend:v5 registry.cn-shenzhen.aliyuncs.com/kubeops/gb-frontend:v5

Log in to the Alibaba Cloud container image service

> docker login registry.cn-shenzhen.aliyuncs.com -ukubeops
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Push the image to the Alibaba Cloud container image service

> docker push registry.cn-shenzhen.aliyuncs.com/kubeops/gb-frontend:v5
The push refers to repository [registry.cn-shenzhen.aliyuncs.com/kubeops/gb-frontend]
6fd1212aa9ea: Pushed 
76f17e2309c9: Pushed 
4b6e88f9653e: Pushed 
2aecaf70d382: Pushed 
80cd58a1bab0: Pushed 
2e559a423c71: Pushed 
ede4b550d621: Pushed 
2dba6c06bdde: Pushed 
ecd6a695b6ea: Pushed 
b01bf213b941: Pushed 
0ccbc08ded6d: Pushed 
449cff66aba3: Pushed 
643f1d079de7: Pushed 
0104a3ee0257: Pushed 
fd3ee495df7c: Pushed 
00f117848faf: Pushed 
ad6b69b54919: Pushed 
v5: digest: sha256:1ffc7816e028b2e2f2b592594383a0139b9f570ff5fcc5fdfd81806aa8d403bf size: 3876

Update image

> kubectl set image deploy/redis-follower follower=registry.cn-shenzhen.aliyuncs.com/kubeops/gb-redis-follower:v2
deployment.apps/redis-follower image updated
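
A rolling update like this can be watched until it finishes. A generic check (its output was not captured in the original session):

> kubectl rollout status deploy/redis-follower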

Deploy using Kustomize

Add the kustomization.yaml configuration file

resources:
  - redis-leader-service.yaml
  - redis-leader-deployment.yaml
  - redis-follower-service.yaml
  - redis-follower-deployment.yaml
  - frontend-service.yaml
  - frontend-deployment.yaml

Delete the old deployment

At this point, all the resources created earlier can be deleted in one step.

> kubectl delete -k .
service "frontend" deleted
service "redis-follower" deleted
service "redis-leader" deleted
deployment.apps "frontend" deleted
deployment.apps "redis-follower" deleted
deployment.apps "redis-leader" deleted

One-click deployment

> kubectl apply -k .
service/frontend created
service/redis-follower created
service/redis-leader created
deployment.apps/frontend created
deployment.apps/redis-follower created
deployment.apps/redis-leader created

Rollback

Rollback is a very common operation; we use kubectl rollout undo to do it.

View the Deployment version

> kubectl rollout history deploy redis-follower
deployment.apps/redis-follower
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>

Roll back to the previous version

> kubectl rollout undo deploy redis-follower

Rollback to the specified version

> kubectl rollout undo deploy redis-follower --to-revision=1

Horizontal Pod autoscaling

This is more powerful than kubectl scale: the HPA automatically scales the workload based on current resource usage.

Create a Horizontal Pod Autoscaler

--cpu-percent=80 means that the HPA will increase or decrease the number of Pod replicas (through the Deployment) to keep the average CPU utilization of all Pods around 80%.

> kubectl autoscale deploy frontend --min=5 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/frontend autoscaled
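
The autoscale command is shorthand for creating an HPA object. A roughly equivalent manifest, as a sketch (autoscaling/v1 is the stable API on a v1.22 server; newer clusters also serve autoscaling/v2). Note that both kubectl top and the HPA depend on metrics-server being installed:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 5
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80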

View the status of Autoscaler

> kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
frontend   Deployment/frontend   1%/80%    5         10        5          3m32s