1 Background

For business applications, a series of operations and maintenance tasks often has to be performed on top of the built-in Kubernetes resources, so writing an operator for the business is essential. KubeBuilder is an official, standardized operator framework with broad community recognition; its conveniences for writing business operators make it a good way to extend the Kubernetes API.

1.1 What is KubeBuilder

Kubebuilder is an SDK for building Kubernetes APIs using CRDs.

  • Provides scaffolding tools to initialize a CRD project and automatically generate boilerplate code and configuration;
  • Provides a library that encapsulates the underlying Kubernetes go-client;

This makes it easy for users to develop CRDs, Controllers and Admission Webhooks from scratch to extend Kubernetes.

1.2 Features

The Kubernetes API can be extended with Custom Resource Definitions (CRDs), and mastering CRDs is an essential skill for advanced Kubernetes users. This article introduces the concepts of CRD and Controller and analyzes the CRD programming framework Kubebuilder in depth, so that you can truly understand it and quickly develop CRDs.

1.3 Basic Concepts

  • CRD (Custom Resource Definition): lets users define custom Kubernetes resources; it is a type;
  • CR (Custom Resource): a concrete instance of a CRD;
  • Webhook: essentially an HTTP callback registered with the API server. When a specific event occurs in the API server, the registered webhooks are queried and the corresponding message is forwarded.

Depending on how they process objects, webhooks generally fall into two categories: those that may modify the incoming object are called mutating webhooks, while those that only read the incoming object are called validating webhooks.

  • Work queue: the core component of a controller. It watches for resource changes in the cluster and stores the related objects as events in the queue, including their action and key, for example a Create action on a Pod;
  • Controller: loops over the work queue and, following its own logic, drives the cluster state towards the desired state. Different controllers handle different types; the ReplicaSet controller, for example, focuses on the number of replicas and handles Pod-related events;
  • Operator: a mechanism for describing, deploying, and managing Kubernetes applications. In terms of implementation, it can be understood as a CRD plus an optional Webhook and a Controller implementing the user's business logic: Operator = CRD + Webhook + Controller.

2 Architecture and Basic Concepts

2.1 Architecture Diagram

2.2 Basic Concepts

2.2.1 GVKs & GVRs

GVK = GroupVersionKind, GVR = GroupVersionResource.

  • API Group & Versions (GV)

An API Group is a collection of related API functionality, and each Group has one or more Versions to allow the API to evolve.

  • Kinds & Resources (GVR)

Each GV contains multiple API types, called Kinds. The same Kind may be defined differently across Versions. A Resource is the API representation of a Kind; generally, Kinds and Resources map 1:1. For example, the pods Resource corresponds to the Pod Kind. Sometimes the same Kind corresponds to multiple Resources; for example, the Scale Kind corresponds to many Resources such as deployments/scale and replicasets/scale. For CRDs, the relationship is always 1:1.

Each GVK is associated with a root Go type in a package. For example, apps/v1 Deployment is associated with the Deployment struct in the k8s.io/api/apps/v1 package of the Kubernetes source code. The YAML file of every resource definition we submit has to specify:

apiVersion: this is the GV. kind: this is the K.

With the GVK, Kubernetes knows exactly what type of resource you want to create. Once the resource is created according to the Spec you defined, it becomes a Resource, i.e. a GVR. GVK/GVR are the coordinates of Kubernetes resources and the basis for creating, deleting, modifying, and reading them.
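For illustration, in Go these coordinates are plain value types from the standard k8s.io/apimachinery package (a minimal sketch, not tied to any particular controller):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The Deployment Kind in the apps/v1 group/version (GVK).
	gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}

	// The corresponding resource (GVR), as used in REST paths such as
	// /apis/apps/v1/namespaces/default/deployments.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	fmt.Println(gvk.String()) // apps/v1, Kind=Deployment
	fmt.Println(gvr.String()) // apps/v1, Resource=deployments
}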

2.2.3 Scheme

Every set of Controllers needs a Scheme, which provides the mapping between Go types and GVKs: given a Go type you know its GVK, and given a GVK you know its Go type. For example, the CronJob{} Go type in the tutorial.kubebuilder.io/api/v1 package maps to the batch.tutorial.kubebuilder.io/v1 CronJob GVK, so when the API Server returns the following JSON:

{
    "kind": "CronJob",
    "apiVersion": "batch.tutorial.kubebuilder.io/v1",
    ...
}

Then the corresponding Go type can be constructed, and the GVR information can be correctly derived from that Go type. The Controller uses this Go type to obtain the desired state and other auxiliary information for its reconciliation logic.
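A minimal sketch of this mapping, assuming the standard apimachinery and client-go packages (a CRD's generated AddToScheme function registers custom types in exactly the same way):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func main() {
	scheme := runtime.NewScheme()

	// Register the built-in Kubernetes types (Deployments, Pods, ...).
	_ = clientgoscheme.AddToScheme(scheme)

	// Given a Go type, the Scheme reports the GVK(s) it is registered under.
	gvks, _, err := scheme.ObjectKinds(&appsv1.Deployment{})
	if err != nil {
		panic(err)
	}
	fmt.Println(gvks) // [apps/v1, Kind=Deployment]
}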

2.2.4 Manager

The core component of Kubebuilder, with three responsibilities:

  • Runs all the Controllers;
  • Initializes the shared Cache, including the list-and-watch machinery;
  • Initializes the Clients used to communicate with the API Server.
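A minimal sketch of how a Manager is created and started, assuming sigs.k8s.io/controller-runtime (in a kubebuilder project this code is scaffolded for you in main.go):

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)

	// The Manager wires together the Scheme, the shared Cache and the Client,
	// and runs every Controller registered with it.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		os.Exit(1)
	}

	// Controllers are registered here, e.g. (&GuestbookReconciler{...}).SetupWithManager(mgr)

	// Start blocks until the signal-handler context is cancelled (e.g. on SIGTERM).
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}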

2.2.5 Cache

The Cache is another core component of Kubebuilder. It is responsible for synchronizing, within the Controller process, all the GVKs the Controller cares about with the API Server according to the Scheme. Its core is a GVK -> Informer mapping: the Informer listens for create/delete/update operations on the GVRs corresponding to the GVK and triggers the Controller's Reconcile logic.

2.2.6 Controller

For the scaffolding files Kubebuilder generates for us, we only need to implement the Reconcile method.
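The typical shape of a Reconcile method looks roughly like this (a sketch assuming controller-runtime v0.7+; GuestbookReconciler and webappv1 follow the Guestbook example used later in this article):

// (imports assumed: "context", ctrl "sigs.k8s.io/controller-runtime",
//  "sigs.k8s.io/controller-runtime/pkg/client", webappv1 = your api/v1 package)
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the object that triggered this reconciliation from the Cache.
	var guestbook webappv1.Guestbook
	if err := r.Get(ctx, req.NamespacedName, &guestbook); err != nil {
		// The object no longer exists; nothing to do unless finalizers are involved.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// ... business logic: drive the cluster state towards guestbook.Spec ...

	return ctrl.Result{}, nil
}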

2.2.7 Client

Creating, deleting, and updating resources is unavoidable when implementing a Controller, and that is what Clients are for: queries are served from the local Cache, while writes go directly to the API Server.
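A short sketch of typical Client usage inside a reconciler (assuming the embedded controller-runtime client shown in the Reconcile sketch above): Get is served from the Cache, Update goes straight to the API Server.

// (imports assumed: appsv1 "k8s.io/api/apps/v1", "k8s.io/apimachinery/pkg/types",
//  "sigs.k8s.io/controller-runtime/pkg/client")
var deploy appsv1.Deployment
if err := r.Get(ctx, types.NamespacedName{Namespace: "default", Name: "demo"}, &deploy); err != nil {
	return ctrl.Result{}, client.IgnoreNotFound(err)
}

// Mutate the object locally, then write it back to the API Server.
if deploy.Labels == nil {
	deploy.Labels = map[string]string{}
}
deploy.Labels["managed-by"] = "guestbook-operator"
if err := r.Update(ctx, &deploy); err != nil {
	return ctrl.Result{}, err
}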

2.2.8 Index

Kubebuilder provides an Index utility for adding indexes to the Cache to improve query efficiency.
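A sketch of registering an index with the Manager's FieldIndexer and querying it later (assuming controller-runtime v0.7+; the field key ".spec.nodeName" is just an example name):

// (imports assumed: "context", "os", corev1 "k8s.io/api/core/v1",
//  "sigs.k8s.io/controller-runtime/pkg/client")

// At startup: index Pods in the Cache by the node they run on.
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &corev1.Pod{}, ".spec.nodeName",
	func(obj client.Object) []string {
		pod := obj.(*corev1.Pod)
		return []string{pod.Spec.NodeName}
	}); err != nil {
	os.Exit(1)
}

// Inside a reconciler: list only the Pods on a given node via the index.
var pods corev1.PodList
if err := r.List(ctx, &pods, client.MatchingFields{".spec.nodeName": "node-1"}); err != nil {
	return ctrl.Result{}, err
}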

2.2.9 finalizers

Normally, deleting a resource triggers a delete event, but at that point no information about the deleted object can be read from the Cache, so a lot of garbage-collection work cannot be done for lack of information. The Finalizer field in Kubernetes handles this situation: as long as the Finalizers list in an object's ObjectMeta is not empty, a delete operation is converted into an update operation (setting deletionTimestamp). This tells the Kubernetes GC: "once deletionTimestamp is set, delete the object as soon as its Finalizers list becomes empty".

So the usual pattern is to set Finalizers (any string) when creating the object and then, when DeletionTimestamp is not empty, handle the update as a delete: run all of the pre-delete hooks associated with the Finalizers (at which point the information about the object being deleted can still be read from the Cache), and finally clear the Finalizers.
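A sketch of this pattern inside Reconcile, using the controllerutil helpers from controller-runtime v0.6+ (the finalizer name is an arbitrary string, as noted above; guestbook and r follow the earlier Reconcile sketch):

// (imports assumed: "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil")
const myFinalizer = "webapp.example.com/finalizer" // any string works

if guestbook.ObjectMeta.DeletionTimestamp.IsZero() {
	// The object is alive: make sure our finalizer is present.
	if !controllerutil.ContainsFinalizer(&guestbook, myFinalizer) {
		controllerutil.AddFinalizer(&guestbook, myFinalizer)
		if err := r.Update(ctx, &guestbook); err != nil {
			return ctrl.Result{}, err
		}
	}
} else {
	// The object is being deleted: run the pre-delete hook, then release it.
	if controllerutil.ContainsFinalizer(&guestbook, myFinalizer) {
		// ... cleanup logic that still needs the object's data goes here ...
		controllerutil.RemoveFinalizer(&guestbook, myFinalizer)
		if err := r.Update(ctx, &guestbook); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}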

2.2.10 OwnerReference

When the Kubernetes GC deletes an object, it also erases every object whose ownerReference points to that object. In addition, Kubebuilder supports triggering the Reconcile method of the owner object's Controller when any of the owned objects change.
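A sketch of both halves of this behaviour, assuming controller-runtime (the child Deployment here is hypothetical, and the type names follow the Guestbook example):

// (imports assumed: ctrl "sigs.k8s.io/controller-runtime", appsv1 "k8s.io/api/apps/v1")

// Inside Reconcile: record the Guestbook as the owner of a child object
// (here a hypothetical Deployment) so the Kubernetes GC deletes the child with it.
if err := ctrl.SetControllerReference(&guestbook, &childDeployment, r.Scheme); err != nil {
	return ctrl.Result{}, err
}

// In SetupWithManager: Owns() makes changes to the owned Deployments trigger
// the owner Guestbook's Reconcile.
return ctrl.NewControllerManagedBy(mgr).
	For(&webappv1.Guestbook{}).
	Owns(&appsv1.Deployment{}).
	Complete(r)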

3 Kubebuilder in Practice

3.1 Environment Requirements

  • Go v1.15+
  • Docker 17.03+
  • kubectl v1.11.3+
  • Kustomize v3.1.0+
  • Kind, for local development (install Kind)

Access to a Kubernetes v1.11.3+ cluster.

  • Setting up the environment

Under $GOPATH/src, create the k8s.io and sigs.k8s.io directories: k8s.io contains the Kubernetes source packages, obtained by copying kubernetes/staging/src/k8s.io out of the Kubernetes source tree; sigs.k8s.io has to be created manually, and the kubernetes-sigs/controller-runtime project is cloned into it.

# Pull the k8s source under $GOPATH/src
$ pwd
/Users/xuel/workspace/goworkspace/src
$ ls
github.com  golang.org  google.golang.org  gopkg.in
# Copy k8s.io out of the kubernetes staging directory into the parent directory
$ cp -r kubernetes/staging/src/k8s.io k8s.io
# Create sigs.k8s.io under GOPATH and clone controller-runtime
$ mkdir /Users/xuel/workspace/goworkspace/src/sigs.k8s.io && cd sigs.k8s.io && git clone https://github.com/kubernetes-sigs/controller-runtime.git
$ ls
drwxr-xr-x  24 xuel  staff   768B Jan  3 18:46 github.com
drwxr-xr-x   3 xuel  staff    96B Mar 22  2020 golang.org
drwxr-xr-x   3 xuel  staff    96B May 21  2020 google.golang.org
drwxr-xr-x   3 xuel  staff    96B May 21  2020 gopkg.in
drwxr-xr-x  28 xuel  staff   896B Jan 28 19:53 k8s.io
drwxr-xr-x  41 xuel  staff   1.3K Jan 28 19:52 kubernetes
drwxr-xr-x   3 xuel  staff    96B Jan 28 19:57 sigs.k8s.io

3.2 Creating a Project

  • Create the project directory and initialize the project
$ kubebuilder init --domain imoc-operator

The directory structure is as follows:

3.3 Creating the API

Run the following command to create a new API (group/version) named webapp/v1 and a new Kind (CRD) named Guestbook within it:

kubebuilder create api --group webapp --version v1 --kind Guestbook


If you answer y to both Create Resource [y/n] and Create Controller [y/n], this creates the file api/v1/guestbook_types.go, which defines the API of this type, and generates the reconciliation business logic for this type (CRD) in controllers/guestbook_controller.go.
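An abridged sketch of what the scaffolded api/v1/guestbook_types.go looks like (the Foo field is the placeholder kubebuilder generates; anything beyond that is up to you):

// api/v1/guestbook_types.go (abridged)
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GuestbookSpec defines the desired state of Guestbook.
type GuestbookSpec struct {
	// Foo is an example field of Guestbook. Edit guestbook_types.go to add your own fields.
	Foo string `json:"foo,omitempty"`
}

// GuestbookStatus defines the observed state of Guestbook.
type GuestbookStatus struct {
}

// +kubebuilder:object:root=true

// Guestbook is the Schema for the guestbooks API.
type Guestbook struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   GuestbookSpec   `json:"spec,omitempty"`
	Status GuestbookStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// GuestbookList contains a list of Guestbook.
type GuestbookList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Guestbook `json:"items"`
}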

3.4 Compiling and running the Controller

The controller source that Kubebuilder generates automatically is at $GOPATH/src/helloworld/controllers/guestbook_controller.go.
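The scaffolded controller looks roughly like this (a sketch; the import path of the API package depends on your module name):

// controllers/guestbook_controller.go (abridged)
package controllers

import (
	"context"

	"github.com/go-logr/logr"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	webappv1 "helloworld/api/v1" // depends on your module name
)

// GuestbookReconciler reconciles a Guestbook object.
type GuestbookReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

// Reconcile is called for every watched event that concerns a Guestbook.
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = r.Log.WithValues("guestbook", req.NamespacedName)

	// your business logic goes here

	return ctrl.Result{}, nil
}

// SetupWithManager registers this controller with the Manager and tells the
// Cache which type to watch.
func (r *GuestbookReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&webappv1.Guestbook{}).
		Complete(r)
}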

3.5 Installing a CRD on a Cluster

  • Install the CRD into the cluster
make install

  • Run the controller (it runs in the foreground; switch to a new terminal if you want to keep it running):
make run

Note that the Controller is running on the machine where KubeBuilder is installed; interrupting the console with Ctrl+C stops the Controller:

3.6 Creating an Instance of Guestbook Resources

  • Now that the Guestbook CRD has been deployed to Kubernetes and the controller is running, you can create a Guestbook instance.
  • Kubebuilder has automatically created a sample deployment file at $GOPATH/src/helloworld/config/samples/webapp_v1_guestbook.yaml. Its content, shown below, is very simple; let's use this file to create a Guestbook instance:
apiVersion: webapp.com.bolingcavalry/v1
kind: Guestbook
metadata:
  name: guestbook-sample
spec:
  # Add fields here
  foo: bar

3.6.2 Applying the Sample

$ kubectl apply -f config/samples/
$ kubectl get Guestbook

3.7 Building the Controller into a Docker Image

  1. So far we have experienced KubeBuilder's basic functionality, but in a real production environment the controller normally runs inside Kubernetes; running it outside the cluster like this is not appropriate. Let's build it into a Docker image and run it in Kubernetes;
  2. This requires an image registry that Kubernetes can access, such as a Harbor on the LAN or the public hub.docker.com. For convenience I chose hub.docker.com; to use it you need a registered account;
  3. On the machine where KubeBuilder is installed, open a console, run docker login, and enter your hub.docker.com account and password as prompted. After that you can run docker push from this console to push the image to hub.docker.com (the network connection can be poor, so several login attempts may be needed);
  4. Run the following command to build the Docker image and push it to hub.docker.com; the image is named bolingcavalry/guestbook:002:
make docker-build docker-push IMG=bolingcavalry/guestbook:002
  1. The network connection to hub.docker.com is usually poor, so Docker on the KubeBuilder machine should be configured with an image accelerator. If the command above fails with a timeout, retry a few times. The build also downloads many Go module dependencies, which takes patience and can likewise hit network problems that require retries, so it is best to use a Harbor service set up on the LAN;
  2. After the command succeeds, the output looks like this:
[root@kubebuilder helloworld]# make docker-build docker-push IMG=bolingcavalry/guestbook:002

The build connects to sites outside China, so a proxy may be needed. After the image is pushed, you can view the image information:

$ make docker-build docker-push IMG=127.0.0.1:5000/guesstbook:v1
/Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/bin/controller-gen "crd:trivialVersions=true,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
mkdir -p /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin
test -f /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin/setup-envtest.sh || curl -sSLo /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.7.0/hack/setup-envtest.sh
source /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin/setup-envtest.sh; fetch_envtest_tools /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin; setup_envtest_env /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin; go test ./... -coverprofile cover.out
fetching [email protected] (into '/Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/testbin')
x bin/
x bin/etcd
x bin/kubectl
x bin/kube-apiserver
setting up env vars
?       github.com/kaliarch/imoc-operator               [no test files]
?       github.com/kaliarch/imoc-operator/api/v1        [no test files]
ok      github.com/kaliarch/imoc-operator/controllers   12.959s coverage: 0.0% of statements
docker build -t 127.0.0.1:5000/guesstbook:v1 .
Sending build context to Docker daemon  335.5MB
Step 1/14 : FROM golang:1.15 as builder
1.15: Pulling from library/golang
b9a857cbf04d: Pull complete
d557ee20540b: Pull complete
3b9ca4f00c2e: Pull complete
667fd949ed93: Pull complete
547cc43be03d: Pull complete
0977886e8147: Pull complete
cceccf7c7738: Pull complete
Digest: sha256:de97bab9325c4c3904f8f7fec8eb469169a1d247bdc97dcab38c2c75cf4b4c5d
Status: Downloaded newer image for golang:1.15
 ---> 5f46b413e8f5
Step 2/14 : WORKDIR /workspace
 ---> Running in 597efa584096
Removing intermediate container 597efa584096
 ---> a21979056316
Step 3/14 : COPY go.mod go.mod
 ---> b6c4b03d5126
Step 4/14 : COPY go.sum go.sum
 ---> f1af7c95cdc8
Step 5/14 : RUN go mod download
 ---> Running in baf57375b805
Removing intermediate container baf57375b805
 ---> 62e488ee06f5
Step 6/14 : COPY main.go main.go
 ---> 72c3d023e770
Step 7/14 : COPY api/ api/
 ---> b164eb864a85
Step 8/14 : COPY controllers/ controllers/
 ---> 843af6a782ec
Step 9/14 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
 ---> Running in af2881daee7c
Removing intermediate container af2881daee7c
 ---> cf6ef1542da6
Step 10/14 : FROM gcr.io/distroless/static:nonroot
nonroot: Pulling from distroless/static
9e4425256ce4: Pull complete 
Digest: sha256:b89b98ea1f5bc6e0b48c8be6803a155b2a3532ac6f1e9508a8bcbf99885a9152
Status: Downloaded newer image for gcr.io/distroless/static:nonroot
 ---> 88055b6758df
Step 11/14 : WORKDIR /
 ---> Running in 35900ca6d19f
Removing intermediate container 35900ca6d19f
 ---> 902a3991fa3b
Step 12/14 : COPY --from=builder /workspace/manager .
 ---> 5af066bf1214
Step 13/14 : USER 65532:65532
 ---> Running in b44fbfb3c52b
Removing intermediate container b44fbfb3c52b
 ---> 6ca11554d8fa
Step 14/14 : ENTRYPOINT ["/manager"]
 ---> Running in 716538bf799a
Removing intermediate container 716538bf799a
 ---> a98e090c1e68
Successfully built a98e090c1e68
Successfully tagged 127.0.0.1:5000/guesstbook:v1
docker push 127.0.0.1:5000/guesstbook:v1
The push refers to repository [127.0.0.1:5000/guesstbook]
babc932481e7: Pushed
8651333b21e7: Pushed
v1: digest: sha256:5dbb4e549c0dff1a4edba19ea5a35f9e21deeabe2fcefbc6b6358fb849dd61e2 size: 739

$ curl -XGET http://127.0.0.1:5000/v2/_catalog
{"repositories":["guesstbook"]}

  1. Deploy a Controller image to the cluster
$ make deploy IMG=127.0.0.1:5000/guesstbook:v1
/Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/bin/controller-gen "crd:trivialVersions=true,preserveUnknownFields=false" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/bin/kustomize edit set image controller=127.0.0.1:5000/guesstbook:v1
/Users/xuel/workspace/goworkspace/src/github.com/kaliarch/imoc-operator/bin/kustomize build config/default | kubectl apply -f -
namespace/imoc-operator-system created
customresourcedefinition.apiextensions.k8s.io/guestbooks.webapp.com.bolingcavalry configured
role.rbac.authorization.k8s.io/imoc-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/imoc-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/imoc-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/imoc-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/imoc-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/imoc-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/imoc-operator-proxy-rolebinding created
configmap/imoc-operator-manager-config created
service/imoc-operator-controller-manager-metrics-service created
deployment.apps/imoc-operator-controller-manager created
  • View the controllers deployed in the cluster

Check the image pull error:

  • Use the Aliyun image registry instead
$ sudo docker login [email protected] registry.cn-shanghai.aliyuncs.com
$ sudo docker pull registry.cn-shanghai.aliyuncs.com/kaliarch/slate:[image version]

$ sudo docker login [email protected] registry.cn-shanghai.aliyuncs.com
$ sudo docker tag [ImageId] registry.cn-shanghai.aliyuncs.com/kaliarch/slate:[image version]
$ sudo docker push registry.cn-shanghai.aliyuncs.com/kaliarch/slate:[image version]

1. The login password for the Aliyun image registry is your Aliyun console account password.
2. Re-tag the image: docker tag 127.0.0.1:5000/guesstbook:v1 registry.cn-shanghai.aliyuncs.com/kaliarch/guesstbook:v1
3. Push the image: docker push registry.cn-shanghai.aliyuncs.com/kaliarch/guesstbook:v1

Redeploy:

make deploy IMG=registry.cn-shanghai.aliyuncs.com/kaliarch/guesstbook:v1

The command executes successfully.

  1. View details

The kubectl describe command shows that the Pod actually contains two containers: kube-rbac-proxy and manager.

  1. View the logs
kubectl logs -f imoc-operator-controller-manager-648b4877c6-4bpp9 -n imoc-operator-system  -c manager


4 Cleanup

4.1 Cleaning up Kubebuilder

make uninstall

4.2 Cleaning up Kind

kind delete cluster

5 Other

From this walkthrough we can see that the features Kubebuilder provides are a great help for quickly writing CRDs and Controllers. Well-known cloud-native open source projects such as Istio and Knative, as well as all kinds of custom operators, make extensive use of CRDs and abstract their components into CRDs. On this basis, you can write the CRDs that fit your business, making the business more cloud-based and better able to grow on the cloud.
