1. Getting started and preparing the environment

1.1 Introduction

  • Operator is a way to wrap, run, and manage K8S applications. It combines a CRD (CustomResourceDefinition) + AdmissionWebhook + Controller, and is deployed into K8S in the form of a Deployment.

    • CRD is used to define the declarative API (YAML) through which the program drives the minimum scheduling unit (Pod) toward the declared state.
    • AdmissionWebhook is used to intercept API requests and mutate/validate the declared fields in the submitted YAML.
    • Controller is the primary controller: it watches resource create/update/delete events and fires the Reconcile function in response. The whole adjustment process is called the Reconcile Loop, which keeps moving the Pods toward the state declared by the CRD;
  • The flow chart of the Operator

  • Kubebuilder is a scaffolding tool for developing Operators. It generates the CRD, Webhook, and Controller code and configuration, and provides the K8S Go client.

Core concepts (reference): juejin.cn/post/696287…

1.2 Environment Preparations

  • Go version v1.16+.
  • Docker version 18.03+.
  • kubectl version v1.20.3+.

1.2.0 Prior knowledge

  • Basic knowledge of K8S is required

1.2.1 K8S Environment (Server)

  • Prepare a usable K8S environment, standalone or clustered. Version 1.20 or above is recommended.
  • It can be on a server or local; a server is recommended.

1.2.2 KubeBuilder (Local)

Doc: book.kubebuilder.io/quick-start…

  • Download the kubebuilder binary for your platform, e.g. download the macOS binary if you develop on a Mac;
    • github.com/kubernetes-…
  • When the download is complete, add the kubebuilder binary to your PATH, for example:
    # mv kubebuilder_darwin_amd64 /usr/local/bin/kubebuilder
    # chmod a+x /usr/local/bin/kubebuilder
    # kubebuilder version
    Version: main.version{KubeBuilderVersion:"3.2.0", KubernetesVendor:"1.22.1", GitCommit:"b7a730c84495122a14a0faff95e9e9615fffbfc5", BuildDate:"2021-10-29T18:32:16Z", GoOs:"darwin", GoArch:"amd64"}

1.2.3 kubectl (Local)

  • kubectl is used to control the remote cluster. Make sure it is on your PATH and uses the default kubeconfig; otherwise you will need to modify the Makefile that kubebuilder generates later.

1.2.4 Docker (Local or Server)

  • Docker is used to build and package images when publishing the project; it is also used by the Makefile that KubeBuilder generates.

    • PS: Personally, since there is nowhere to install Docker on my local Mac, Docker is installed on the server. Note, however, that the controller-gen and kustomize binaries used on Linux and macOS are different: you need to regenerate a new project with KubeBuilder on Linux to overwrite the bin directory of the existing project.

2. The project

Reference address: github.com/Shadow-linu…

2.1 Project Creation

2.1.1 Creating a Directory

  • Chinese characters, spaces, special characters, and underscores (_) are not allowed in the name; only hyphens (-) are allowed.
    # mkdir -p /usr/local/k8s-operator
    # cd /usr/local/k8s-operator

2.1.2 Creating a Project

  • Initialize the project and set the domain.

    # kubebuilder init --domain shadow.com
  • Create the API, specifying the group, version, and kind.

    # kubebuilder create api --group myapp --version v1 --kind Redis
    • After the above creation, a resource of this CRD (e.g. test.yaml) will look like:
      apiVersion: myapp.shadow.com/v1
      kind: Redis
      ...
  • The project structure after creation

2.1.3 Creating the CRD

  • Optionally, set a resource name prefix in config/default/kustomization.yaml

    ...
    # You can change the prefix
    namePrefix: shadow-operator-
    ...
  • In k8s-operator/api/v1/redis_types.go, edit RedisSpec; here I add Name, Port, and Replicas fields (a sketch of the edited spec appears at the end of this list);

  • Install the CRD

    # make install
  • View the installed CRD (kubectl get crd, or kubectl describe crd redis.myapp.shadow.com for details)

    # kubectl get crd
    NAME                                       CREATED AT
    ...
    redis.myapp.shadow.com                     2021-11-16T09:35:42Z
  • If the CRD is modified, you need to reinstall it

    # make uninstall
    # make install
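  • As referenced above, a minimal sketch of what the edited RedisSpec might look like after adding the three fields (the same field names are used throughout this article; run "make" after editing to regenerate the generated code):

    // api/v1/redis_types.go (sketch)
    type RedisSpec struct {
       // Name of the Redis instance
       Name string `json:"name,omitempty"`

       // Port that the Redis pods listen on
       Port int `json:"port,omitempty"`

       // Replicas is the desired number of Redis pods
       Replicas int `json:"replicas,omitempty"`
    }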

2.1.4 Launching the Controller

  • Our main logic is implemented in the core Reconcile function, which is triggered repeatedly by events: k8s-operator/controllers/redis_controller.go

    ...
    func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        _ = log.FromContext(ctx)

        // TODO(user): your logic here

        redis := &myappv1.Redis{}
        if err := r.Get(ctx, req.NamespacedName, redis); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("Got the object", redis.Spec)
        }

        return ctrl.Result{}, nil
    }
    ...
  • Run the controller and pay attention to the console output

    # make run

2.1.5 Testing the YAML

  • Edit the sample YAML file: k8s-operator/config/samples/myapp_v1_redis.yaml

    apiVersion: myapp.shadow.com/v1
    kind: Redis
    metadata:
      name: shadow
      namespace: default
    spec:
      name: shadow
      port: 2378
      replicas: 3
  • apply

    # kubectl apply -f config/samples/myapp_v1_redis.yaml
  • View the console output of the controller

    ... {Name:shadow Port:2378 Replicas:3} ...

2.2 Simple CRD Field Validation

Doc: book.kubebuilder.io/reference/m…

2.2.1 Demo

  • In k8s-operator/api/v1/redis_types.go, add validation markers to restrict the value of Port. Refer to the documentation above for more markers.

    • +kubebuilder:validation:Minimum:=2000
    • +kubebuilder:validation:Maximum:=2380
    type RedisSpec struct {
       // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
       // Important: Run "make" to regenerate code after modifying this file
    
       // Foo is an example field of Redis. Edit redis_types.go to remove/update
       //Foo string `json:"foo,omitempty"`
       Name string `json:"name,omitempty"`
    
       // validation: https://book.kubebuilder.io/reference/markers/crd-validation.html
    
       //+kubebuilder:validation:Minimum:=2000
       //+kubebuilder:validation:Maximum:=2380
       Port int `json:"port,omitempty"`
    
       Replicas int `json:"replicas,omitempty"`
    }
  • Reinstall the CRD

    # make uninstall
    # make install
  • Verify the effect

    • config/samples/myapp_v1_redis.yaml
      apiVersion: myapp.shadow.com/v1
      kind: Redis
      metadata:
        name: shadow
        namespace: default
      spec:
        port: 2390
        replicas: 3
        name: shadow
    • After applying it, you can see the corresponding validation error
      # kubectl apply -f config/samples/myapp_v1_redis.yaml 
      The Redis "shadow" is invalid: spec.port: Invalid value: 2390: spec.port in body should be less than or equal to 2380

2.3 Webhook Creation (Mutation and Validation)

2.3.1 Prior knowledge

  • What is a webhook? A webhook is a stand-alone resource that can be developed separately.
    • kubernetes.io/docs/ref…
  • We usually use a MutatingWebhook and a ValidatingWebhook to intercept API requests; both are enabled by default in K8S (if not, please enable them manually). They operate on the YAML content of the submitted request.

2.3.2 Creating the Webhook

  • Execute the create command
    # kubebuilder create webhook --group myapp --version v1 --kind Redis --defaulting --programmatic-validation
  • After creation, api/v1/redis_webhook.go is generated
    ...
    func (r *Redis) Default() {
       redislog.Info("default", "name", r.Name)

       // TODO(user): fill in your defaulting logic.
    }

    // TODO(user): change verbs to "verbs=create;update;delete" if you want to enable deletion validation.
    //+kubebuilder:webhook:path=/validate-myapp-shadow-com-v1-redis,mutating=false,failurePolicy=fail,sideEffects=None,groups=myapp.shadow.com,resources=redis,verbs=create;update,versions=v1,name=vredis.kb.io,admissionReviewVersions=v1

    var _ webhook.Validator = &Redis{}

    // ValidateCreate implements webhook.Validator so a webhook will be registered for the type
    func (r *Redis) ValidateCreate() error {
       redislog.Info("validate create", "name", r.Name)

       // Added: if the resource name is "shadow", it cannot be created
       if r.Name == "shadow" {
          return fmt.Errorf("error name.")
       }

       // TODO(user): fill in your validation logic upon object creation.
       return nil
    }
    ...
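  • The generated Default() above is left as a TODO. A minimal sketch of a mutating default (assuming the Name/Port/Replicas spec fields used in this article, with defaults chosen to stay inside the validation range from section 2.2) could look like:

    // Default implements webhook.Defaulter (illustrative sketch, not the reference implementation).
    // Fill in sensible defaults when the user omits fields.
    func (r *Redis) Default() {
       redislog.Info("default", "name", r.Name)

       if r.Spec.Port == 0 {
          r.Spec.Port = 2379 // stays within the 2000-2380 validation range
       }
       if r.Spec.Replicas == 0 {
          r.Spec.Replicas = 1
       }
    }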

2.3.3 Deploying in the K8S Environment

  • Since the webhook will be deployed online, the certificate is usually verified through a CA, so here we also configure a webhook that can be used online

  • Install cert-manager, a K8S add-on.

    • Download and apply cert-manager.yaml

      # wget https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml

      # kubectl apply -f cert-manager.yaml
    • Check whether the pods are created successfully

      #kubectl get pods -A
      NAMESPACE                NAME                                                  READY   STATUS    RESTARTS   AGE
      cert-manager             cert-manager-55658cdf68-9559b                         1/1     Running   0          7d18h
      cert-manager             cert-manager-cainjector-967788869-hl472               1/1     Running   0          7d18h
      cert-manager             cert-manager-webhook-7b86bc6578-spdct
      ...
  • Open config/default/kustomization.yaml and uncomment the [WEBHOOK] and [CERTMANAGER] sections as shown below:

    ...
    bases:
    - ../crd
    - ../rbac
    - ../manager
    # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
    # crd/kustomization.yaml
    - ../webhook
    # [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
    - ../certmanager
    # [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
    #- ../prometheus
    
    patchesStrategicMerge:
    # Protect the /metrics endpoint by putting it behind auth.
    # If you want your controller-manager to expose the /metrics
    # endpoint w/o any authn/z, please comment the following line.
    - manager_auth_proxy_patch.yaml
    
    # Mount the controller config file for loading manager configurations
    # through a ComponentConfig type
    #- manager_config_patch.yaml
    
    # [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
    # crd/kustomization.yaml
    - manager_webhook_patch.yaml
    
    # [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
    # Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
    # 'CERTMANAGER' needs to be enabled to use ca injection
    - webhookcainjection_patch.yaml
    
    # the following config is for teaching kustomize how to do var substitution
    vars:
    # [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
    - name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
      objref:
        kind: Certificate
        group: cert-manager.io
        version: v1
        name: serving-cert # this name should match the one in certificate.yaml
      fieldref:
        fieldpath: metadata.namespace
    - name: CERTIFICATE_NAME
      objref:
        kind: Certificate
        group: cert-manager.io
        version: v1
        name: serving-cert # this name should match the one in certificate.yaml
    - name: SERVICE_NAMESPACE # namespace of the service
      objref:
        kind: Service
        version: v1
        name: webhook-service
      fieldref:
        fieldpath: metadata.namespace
    - name: SERVICE_NAME
      objref:
        kind: Service
        version: v1
        name: webhook-service
  • Deploy (this step is performed on the server because Docker is not installed locally). An image registry, e.g. Harbor or a Docker Registry, needs to be available in advance.

    # make install
    # make docker-build IMG=192.168.6.102:5000/shadow-redis:v1
    # make docker-push IMG=192.168.6.102:5000/shadow-redis:v1
    # make deploy IMG=192.168.6.102:5000/shadow-redis:v1
    • View the deployed Webhook
      #kubectl get mutatingwebhookconfigurations
      NAME                                             WEBHOOKS   AGE
      shadow-operator-mutating-webhook-configuration   1          82m
      ...
      
      #kubectl get validatingwebhookconfigurations
      NAME                                               WEBHOOKS   AGE
      shadow-operator-validating-webhook-configuration   1          47h
      ...

2.3.4 Testing

  • Since cert-manager has already been deployed for TLS, the webhook is not started locally: we disable it in the locally run controller and let the webhook deployed in the K8S environment do the verification.

    • In main.go, find SetupWebhookWithManager and comment it out:
    //if err = (&myappv1.Redis{}).SetupWebhookWithManager(mgr); err != nil {
    //   setupLog.Error(err, "unable to create webhook", "webhook", "Redis")
    //   os.Exit(1)
    //}
  • Start the controller

    # make run
  • Validate

    # kubectl apply -f config/samples/myapp_v1_redis.yaml
    Error from server (error name.): error when creating "config/samples/myapp_v1_redis.yaml": admission webhook "vredis.kb.io" denied the request: error name.

2.4 Controlling Pod Resources

  • The Controller needs to be restarted after each of the following steps
    # make run
  • Create a helper directory
    # mkdir -p k8s-operator/helper

2.4.1 Add/Delete/Change

  • Main functions implemented

    • Create resources;
    • Delete resources;
    • Replica scale-down;
    • POD is automatically rebuilt after being deleted;
    • Record events;
  • helper/redis_helper.go

    
    package helper
    
    import (
       "context"
       "fmt"
       corev1 "k8s.io/api/core/v1"
       "k8s.io/apimachinery/pkg/runtime"
       "k8s.io/apimachinery/pkg/types"
       v1 "shadow.com/v1/api/v1"
       "sigs.k8s.io/controller-runtime/pkg/client"
       "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    )
    // Generate POD name
    func GetRedisPodNames(redisConfig *v1.Redis) []string {
       podNames := make([]string, redisConfig.Spec.Replicas)
       fmt.Printf("%+v", redisConfig)
       for i := 0; i < redisConfig.Spec.Replicas; i++ {
          podNames[i] = fmt.Sprintf("%s-%d", redisConfig.Name, i)
       }
    
       fmt.Println("PodNames: ", podNames)
       return podNames
    }
    
    // Check whether redis pod can be obtained
    func IsExistPod(podName string, redis *v1.Redis, client client.Client) bool {
       err := client.Get(context.Background(), types.NamespacedName{
          Namespace: redis.Namespace,
          Name:      podName,
       },
          &corev1.Pod{},
       )
    
        if err != nil {
          return false
       }
       return true
    }
     // Check whether the pod name is recorded in the finalizers;
     // as long as finalizers are non-empty, deletion cannot complete until they are cleared;
     func IsExistInFinalizers(podName string, redis *v1.Redis) bool {
        for _, fPodName := range redis.Finalizers {
           if podName == fPodName {
              return true
           }
        }
        return false
     }
    
    func CreateRedis(client client.Client, redisConfig *v1.Redis, podName string, schema *runtime.Scheme) (string, error) {
       if IsExistPod(podName, redisConfig, client) {
          return "".nil
       }
       // Create a POD object
       newPod := &corev1.Pod{}
       newPod.Name = podName
       newPod.Namespace = redisConfig.Namespace
       newPod.Spec.Containers = []corev1.Container{
          {
             Name:            podName,
             Image:           "redis:5-alpine",
             ImagePullPolicy: corev1.PullIfNotPresent,
             Ports: []corev1.ContainerPort{
                {
                   ContainerPort: int32(redisConfig.Spec.Port),
                },
             },
          },
       }
    
       // Set owner reference, using ControllerManager to manage pods for us
        // similar to how a ReplicaSet manages its Pods
       err := controllerutil.SetControllerReference(redisConfig, newPod, schema)
        if err != nil {
          return "", err
       }
        // Create the POD
       err = client.Create(context.Background(), newPod)
       return podName, err
    }
  • controllers/redis_controller.go

     /* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */
    
    package controllers
    
    import (
            "context"
            "fmt"
            corev1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/runtime"
            "k8s.io/apimachinery/pkg/types"
            "k8s.io/client-go/tools/record"
            "k8s.io/client-go/util/workqueue"
            "shadow.com/v1/helper"
            ctrl "sigs.k8s.io/controller-runtime"
            "sigs.k8s.io/controller-runtime/pkg/client"
            "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
            "sigs.k8s.io/controller-runtime/pkg/event"
            "sigs.k8s.io/controller-runtime/pkg/handler"
            "sigs.k8s.io/controller-runtime/pkg/log"
            "sigs.k8s.io/controller-runtime/pkg/reconcile"
            "sigs.k8s.io/controller-runtime/pkg/source"
    
            myappv1 "shadow.com/v1/api/v1"
    )
    
    // RedisReconciler reconciles a Redis object
    type RedisReconciler struct {
            client.Client
            Scheme      *runtime.Scheme
            EventRecord record.EventRecorder
    }
    
     //+kubebuilder:rbac:groups=myapp.shadow.com,resources=redis,verbs=get;list;watch;create;update;patch;delete
     //+kubebuilder:rbac:groups=myapp.shadow.com,resources=redis/status,verbs=get;update;patch
    //+kubebuilder:rbac:groups=myapp.shadow.com,resources=redis/finalizers,verbs=update
    
    // Reconcile is part of the main kubernetes reconciliation loop which aims to
    // move the current state of the cluster closer to the desired state.
    // TODO(user): Modify the Reconcile function to compare the state specified by
    // the Redis object against the actual cluster state, and then
    // perform operations to make the cluster state reflect the state specified by
    // the user.
    //
    // For more details, check Reconcile and its Result here:
     // https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile
    func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
            _ = log.FromContext(ctx)
    
            // TODO(user): your logic here
    
            redis := &myappv1.Redis{}
             if err := r.Get(ctx, req.NamespacedName, redis); err != nil {
                    fmt.Println(err)
            } else {
                    // If DeletionTimestamp is not zero, the object is being deleted
                    if !redis.DeletionTimestamp.IsZero() {
                            return ctrl.Result{}, r.clearRedis(ctx, redis)
                    }
    
                    fmt.Printf("Get object %+v \n", redis.Spec)
                    podNames := helper.GetRedisPodNames(redis)
                    isEdit := false
                    for _, podName := range podNames {
                            podName, err := helper.CreateRedis(r.Client, redis, podName, r.Scheme)
    
                            if err != nil {
                                    return ctrl.Result{}, err
                            }
                            
                            if podName == "" {
                                    continue
                            }
                            // If finalizers already exist, skip
                            if controllerutil.ContainsFinalizer(redis, podName) {
                                    continue
                            }
    
                            redis.Finalizers = append(redis.Finalizers, podName)
                            isEdit = true
                    }
    
                    // Replica scale-down
                    if len(redis.Finalizers) > len(podNames) {
                            r.EventRecord.Event(redis, corev1.EventTypeNormal, "Upgrade", "Replica scale-down")
                            isEdit = true
                            err := r.rmIfSurplus(ctx, podNames, redis)
                            if err != nil {
                                    return ctrl.Result{}, err
                            }
                    }
    
                    if isEdit {
                            r.EventRecord.Event(redis, corev1.EventTypeNormal, "Updated"."Update shadow - redis")
                            err = r.Client.Update(ctx, redis)
                            iferr ! =nil {
                                    return ctrl.Result{}, err
                            }
                            // To track the number of existing pods via RedisNum, uncomment the following line (see section 2.4.2)
                            // redis.Status.RedisNum = len(redis.Finalizers)
                            err = r.Status().Update(ctx, redis)
                            return ctrl.Result{}, err
                    }
    
                    return ctrl.Result{}, nil
    
            }
    
            return ctrl.Result{}, nil
    }
    
     // Replica scale-down: e.g. finalizers ['redis0', 'redis1'] -> podNames ['redis0']
    func (r *RedisReconciler) rmIfSurplus(ctx context.Context, podNames []string, redis *myappv1.Redis) error {
            for i := 0; i < len(redis.Finalizers)-len(podNames); i++ {
                    err := r.Client.Delete(ctx, &corev1.Pod{
                            ObjectMeta: metav1.ObjectMeta{
                                    Name: redis.Finalizers[len(podNames)+i], Namespace: redis.Namespace,
                            },
                    })
    
                     if err != nil {
                            return err
                    }
            }
            redis.Finalizers = podNames
            return nil
    }
    
    func (r *RedisReconciler) clearRedis(ctx context.Context, redis *myappv1.Redis) error {
            podList := redis.Finalizers
            for _, podName := range podList {
                    err := r.Client.Delete(ctx, &corev1.Pod{
                            ObjectMeta: metav1.ObjectMeta{
                                    Name:      podName,
                                    Namespace: redis.Namespace,
                            },
                    })
    
                    if err != nil {
                            fmt.Println("Clear Pod exception:", err)
                    }
            }
    
            redis.Finalizers = []string{}
            return r.Client.Update(ctx, redis)
    }
    
    func (r *RedisReconciler) podDeleteHandler(event event.DeleteEvent, limitInterface workqueue.RateLimitingInterface) {
            fmt.Println("The name of the deleted object is", event.Object.GetName())
    
            for _, ref := range event.Object.GetOwnerReferences() {
                    // Every deleted pod reaches this handler, so check its owner reference
                    if ref.Kind == "Redis" && ref.APIVersion == "myapp.shadow.com/v1" {
                            // Push the request back onto the queue to re-trigger Reconcile
                            limitInterface.Add(reconcile.Request{
                                    NamespacedName: types.NamespacedName{
                                            Name:      ref.Name,
                                            Namespace: event.Object.GetNamespace(),
                                    },
                            })
                    }
            }
    }
    
    // SetupWithManager sets up the controller with the Manager.
    func (r *RedisReconciler) SetupWithManager(mgr ctrl.Manager) error {
            return ctrl.NewControllerManagedBy(mgr).
                    For(&myappv1.Redis{}).
                    // Watch Pod resources so that their delete events also trigger Reconcile
                    Watches(&source.Kind{Type: &corev1.Pod{}}, handler.Funcs{DeleteFunc: r.podDeleteHandler}).
                    Complete(r)
    }
    
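  • One wiring detail: the RedisReconciler above has an EventRecord field that is not part of the default scaffold, so it has to be filled in where the reconciler is registered in main.go. A minimal sketch of that registration (assuming the generated main.go; the recorder name "redis-operator" is an arbitrary choice):

    if err = (&controllers.RedisReconciler{
       Client: mgr.GetClient(),
       Scheme: mgr.GetScheme(),
       // Without a recorder, r.EventRecord.Event(...) in Reconcile would panic on a nil interface
       EventRecord: mgr.GetEventRecorderFor("redis-operator"),
    }).SetupWithManager(mgr); err != nil {
       setupLog.Error(err, "unable to create controller", "controller", "Redis")
       os.Exit(1)
    }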
  • Test it with kubectl (the requests pass through the mutate and validate webhooks);

    • myapp_v1_redis.yaml

      apiVersion: myapp.shadow.com/v1
      kind: Redis
      metadata:
        name: shadow
        namespace: default
      spec:
        port: 2379
        replicas: 3
        name: shadow
    • create

      # kubectl apply -f config/samples/myapp_v1_redis.yaml
      # kubectl get Redis
      # kubectl get pods
    • Scale the replicas down by modifying myapp_v1_redis.yaml

      apiVersion: myapp.shadow.com/v1
      kind: Redis
      metadata:
        name: shadow
        namespace: default
      spec:
        port: 2379
        replicas: 2
        name: shadow

      # kubectl apply -f config/samples/myapp_v1_redis.yaml
    • delete

      # kubectl delete Redis shadow
    • View recorded events

      # kubectl describe Redis shadow

2.4.2 Viewing the Status

  • kubectl get Redis displays only the NAME and AGE columns by default; here we add a RedisNum status field to show how many pods have been created

    • In api/v1/redis_types.go, add the status field (a sketch of where the markers attach follows this list)

       type RedisStatus struct {
          // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
          // Important: Run "make" to regenerate code after modifying this file
          RedisNum int `json:"redis_num"`
      
       }
      
        // The following two markers define the columns shown by kubectl get Redis
       //+kubebuilder:printcolumn:JSONPath=".status.redis_num",name=REDIS_NUM,type=integer
       //+kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name=AGE,type=date
    • In controllers/redis_controller.go, uncomment the following line

      // redis.Status.RedisNum = len(redis.Finalizers)
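    • As referenced above, the printcolumn markers attach to the Redis type itself. A sketch of how that part of api/v1/redis_types.go might look (the object:root and subresource:status markers are the ones kubebuilder already generates; the status subresource is what allows r.Status().Update to work):

      //+kubebuilder:object:root=true
      //+kubebuilder:subresource:status
      //+kubebuilder:printcolumn:JSONPath=".status.redis_num",name=REDIS_NUM,type=integer
      //+kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name=AGE,type=date

      // Redis is the Schema for the redis API
      type Redis struct {
         metav1.TypeMeta   `json:",inline"`
         metav1.ObjectMeta `json:"metadata,omitempty"`

         Spec   RedisSpec   `json:"spec,omitempty"`
         Status RedisStatus `json:"status,omitempty"`
      }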
  • Reinstall the CRD and restart the Controller

    # make install
    # make run
  • To view

    # kubectl get Redis
    NAME     REDIS_NUM   AGE
    shadow   3           21m

3. Closing remarks

  • After this hands-on project is finished, I will publish another hands-on Operator article together with the source code.
  • 👈 Give it a like before you go.