Using Kubebuilder to Build a Kubernetes CRD Controller (first published at: blog.ihypo.net/15645917310…)

The previous article (“How to Use CRD to Extend Kubernetes Clusters”) explained what a CRD is and what it can provide, with a Demo. This article continues to build on that Demo (github.com/Coderhypo/K…) to show you how to build a CRD Controller.

CRD Controller

It is not unfair to think of a CRD (CustomResourceDefinition) itself as nothing more than an OpenAPI schema, because that is really all it is capable of. When people say “implement feature X with a CRD”, the component actually responsible for implementing the feature is the CRD Controller.

Kubernetes itself ships with a bunch of controllers. The Controller Manager, one of the three core components on the Master node, is actually a collection of controllers.

The many controllers inside the Controller Manager do essentially the same thing as the CRD Controller we are going to implement: manage specific resources.

The way different controllers in Kubernetes cooperate is also very interesting. Take creating a Pod via a Deployment as an example:

The user creates a Deployment through kubectl. The APIServer authenticates the request, checks authorization and admission, and stores the Deployment resource in etcd. Because Kubernetes implements a list-watch mechanism on top of etcd, the Deployment Controller, which is interested in Deployment events, receives and handles the resource ADD event, that is, it creates an RS (ReplicaSet) for the Deployment.

After the APIServer receives the RS create request, an RS ADD event is published in turn, so the ReplicaSet Controller receives that event and proceeds to create the Pods.

So you can see that both the Deployment Controller and the ReplicaSet Controller participate in creating the Pods managed by a Deployment, yet thanks to Kubernetes’ event-based design there is no direct communication between the two controllers.

Because of this event-based design, we can write custom controllers to handle the events we care about, including but not limited to the creation and modification of CRs.
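To make the event-driven model concrete, here is a minimal client-go sketch (not part of the Demo) that subscribes to Deployment ADD events through the same list-watch mechanism the built-in controllers rely on; the kubeconfig handling is simplified for illustration:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (e.g. ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// A shared informer factory list-watches resources and caches them locally.
	factory := informers.NewSharedInformerFactory(clientset, 0)
	informer := factory.Apps().V1().Deployments().Informer()

	// React to ADD events, just as the Deployment Controller reacts to
	// newly stored Deployment resources.
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			d := obj.(*appsv1.Deployment)
			fmt.Printf("Deployment ADD event: %s/%s\n", d.Namespace, d.Name)
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	<-stopCh
}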

Kubebuilder and Operator-SDK

There are several mainstream tools for building a CRD Controller. One is Operator-SDK (github.com/operator-fr…), open-sourced by CoreOS; the other is Kubebuilder (github.com/kubernetes-…), maintained by a Kubernetes SIG.

Operator-SDK is part of the Operator Framework. The Operator community is mature and active, and even has its own hub (OperatorHub.io) for exploring and sharing interesting Operators.

Kubebuilder is more of a code generator than an SDK: it generates a working Controller scaffold with well-formed comments and data structures. After building the Demo with it, my strongest impression is that the surrounding infrastructure is not yet complete, documentation in particular; while writing this Demo I still had to dig through the source code to work out how to handle many situations.

Kubebuilder quick start

It’s easy to create a Controller from zero to one using Kubebuilder, probably thanks to its bias towards being a code generator, and there are already plenty of articles and talks along the lines of “create a CRD Controller in X minutes”.

Official quick start document: book.kubebuilder.io/quick-start…

Create a project

First, initialize the project with the kubebuilder init command; the --domain flag specifies the API group domain.

kubebuilder init --domain o0w0o.cn --owner "Hypo"

When the project is created, it asks whether to download the dependencies, and afterwards you will find that more than half of the Kubernetes codebase is already in your GOPATH ┑( ̄  ̄)┍.

Create an API

Once the project is created, you can create the APIs:

kubebuilder create api --group app --version v1 --kind App
kubebuilder create api --group app --version v1 --kind MicroService
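Combined with the domain supplied at init time, these two commands put both kinds in the app.o0w0o.cn/v1 API group/version.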

While creating the APIs you’ll find that Kubebuilder creates some directories and source files for you:

  1. pkg/apis contains the default data structures for the App and MicroService resources
  2. pkg/controller contains the two default controllers for App and MicroService

Resource types

Kubebuilder has already created the default structures for you:

// MicroService is the Schema for the microservices API
// +k8s:openapi-gen=true
type MicroService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MicroServiceSpec   `json:"spec,omitempty"`
	Status MicroServiceStatus `json:"status,omitempty"`
}

All you need to do is extend them. Take MicroService as an example:

type Canary struct {
	// +kubebuilder:validation:Maximum=100
	// +kubebuilder:validation:Minimum=1
	Weight int `json:"weight"`

	// +optional
	CanaryIngressName string `json:"canaryIngressName,omitempty"`

	// +optional
	Header string `json:"header,omitempty"`

	// +optional
	HeaderValue string `json:"headerValue,omitempty"`

	// +optional
	Cookie string `json:"cookie,omitempty"`
}

type DeployVersion struct {
	Name     string                `json:"name"`
	Template appsv1.DeploymentSpec `json:"template"`

	// +optional
	ServiceName string `json:"serviceName,omitempty"`

	// +optional
	Canary *Canary `json:"canary,omitempty"`
}

type ServiceLoadBalance struct {
	Name string             `json:"name"`
	Spec corev1.ServiceSpec `json:"spec"`
}

type IngressLoadBalance struct {
	Name string                        `json:"name"`
	Spec extensionsv1beta1.IngressSpec `json:"spec"`
}

type LoadBalance struct {
	// +optional
	Service *ServiceLoadBalance `json:"service,omitempty"`
	// +optional
	Ingress *IngressLoadBalance `json:"ingress,omitempty"`
}

// MicroServiceSpec defines the desired state of MicroService
type MicroServiceSpec struct {
	// +optional
	LoadBalance        *LoadBalance    `json:"loadBalance,omitempty"`
	Versions           []DeployVersion `json:"versions"`
	CurrentVersionName string          `json:"currentVersionName"`
}

The complete code is at: github.com/Coderhypo/K…
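For completeness: the Status half referenced in the MicroService struct above records the observed state, mirroring the Spec's desired state. The following is only a hypothetical sketch of what such a struct might look like; see the repository for the real definition:

// MicroServiceStatus defines the observed state of MicroService.
// Hypothetical sketch: the field names here are illustrative, not the repo's.
type MicroServiceStatus struct {
	// AvailableVersions lists the versions whose Deployments are ready.
	// +optional
	AvailableVersions []string `json:"availableVersions,omitempty"`

	// CurrentVersionAvailable reports whether the version named by
	// Spec.CurrentVersionName is serving traffic.
	// +optional
	CurrentVersionAvailable bool `json:"currentVersionAvailable,omitempty"`
}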

The controller logic

If you have never read the code of a Kubernetes controller before, the default generated controller may look strange: the MicroService Controller is named ReconcileMicroService, and it has only one main method:

func (r *ReconcileMicroService) Reconcile(request reconcile.Request) (reconcile.Result, error)

Before the Controller can work, you need to register the events it cares about in the add method of pkg/controller/microservice/microservice_controller.go. Whenever an event of interest occurs, Reconcile is called. Like syncHandler in the Deployment Controller, this function’s job is to compare the current state of the resource with the desired state when an event arrives and, if they differ, to correct it.

For example, since MicroService manages its versions through Deployments, Reconcile checks whether the Deployment for each version exists and matches expectations: if it does not exist, it is created; if it does not match, it is corrected.
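As an illustration of that pattern, here is a hedged sketch of what such a Reconcile loop can look like with controller-runtime. It is not the repo's actual code (the import path for the API types is hypothetical), and the drift check is deliberately naive:

package microservice

import (
	"context"
	"reflect"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	appv1 "example.com/project/pkg/apis/app/v1" // hypothetical path, adjust to your module
)

// ReconcileMicroService is the scaffolded reconciler; Kubebuilder embeds a
// client.Client so the method below can read and write cluster objects.
type ReconcileMicroService struct {
	client.Client
	scheme *runtime.Scheme
}

// Reconcile drives actual state toward Spec: for every declared version,
// create the backing Deployment if it is missing, or correct it if it drifted.
func (r *ReconcileMicroService) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	instance := &appv1.MicroService{}
	if err := r.Get(context.TODO(), request.NamespacedName, instance); err != nil {
		if errors.IsNotFound(err) {
			// The CR was deleted; nothing left to reconcile.
			return reconcile.Result{}, nil
		}
		return reconcile.Result{}, err
	}

	for _, version := range instance.Spec.Versions {
		found := &appsv1.Deployment{}
		err := r.Get(context.TODO(), types.NamespacedName{
			Namespace: instance.Namespace,
			Name:      version.Name,
		}, found)
		if errors.IsNotFound(err) {
			// Desired Deployment is missing: create it from the version template.
			deploy := &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name:      version.Name,
					Namespace: instance.Namespace,
				},
				Spec: version.Template,
			}
			if err := r.Create(context.TODO(), deploy); err != nil {
				return reconcile.Result{}, err
			}
			continue
		} else if err != nil {
			return reconcile.Result{}, err
		}

		// Deployment exists: correct drift. A real controller must compare
		// more carefully, since API defaulting mutates the stored spec.
		if !reflect.DeepEqual(found.Spec.Template, version.Template.Template) {
			found.Spec = version.Template
			if err := r.Update(context.TODO(), found); err != nil {
				return reconcile.Result{}, err
			}
		}
	}
	return reconcile.Result{}, nil
}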

The actual ReconcileMicroService code can be found at: github.com/Coderhypo/K…

Run

Once the structure of the CR is finalized and the Controller code is complete, you can give it a try. Kubebuilder can do a trial run against the cluster configured in your local kubeconfig (minikube is recommended for quickly spinning up a development cluster).

First, remember to register the schemes in the init method of main.go, so that the controller’s client knows how to serialize and deserialize these resource types:

func init() {
	_ = corev1.AddToScheme(scheme)
	_ = appsv1.AddToScheme(scheme)
	_ = extensionsv1beta1.AddToScheme(scheme)
	_ = apis.AddToScheme(scheme)
	// +kubebuilder:scaffold:scheme
}

Then have Kubebuilder regenerate the generated code and build the project:

make

Then apply the CRD YAML under config/crds to the current cluster:

make install
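If the installation succeeded, kubectl get crd should list the newly registered App and MicroService definitions.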

Run the CRD Controller locally (you can also just run the main function directly):

make run
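With the controller running, you can apply a sample CR (the scaffold typically places example manifests under config/samples) and watch the Reconcile logic kick in.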