Author: Su Houzhen, database development engineer at Qingyun Technology.
Currently working on RadonDB ClickHouse, with a keen interest in database kernel research.
An Operator makes ClickHouse clusters easy to manage, and Helm makes them easy to deploy.
Using RadonDB ClickHouse[1] as an example, this article compares how convenient it is to deploy a ClickHouse cluster on K8s with Kubectl versus Helm, given the same Operator. It also briefly shows how the Operator makes managing a ClickHouse cluster on K8s easy and fast.
| Deploying with Kubectl + Operator
Prerequisites
- A Kubernetes cluster has been installed.
Deployment steps
1. Deploy the RadonDB ClickHouse Operator
$ kubectl apply -f https://github.com/radondb/radondb-clickhouse-kubernetes/clickhouse-operator-install.yaml
Note: If the Operator needs to monitor all Kubernetes namespaces, it must be deployed in the kube-system namespace. Otherwise, it only monitors the namespace it is deployed into.
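Once the manifest is applied, it is worth confirming that the Operator Pod is actually running before going on. A minimal check (a sketch; adjust the namespace if you deployed the Operator somewhere other than kube-system):

# The Operator Pod should be in Running state
$ kubectl get pods -n kube-system | grep clickhouse-operator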
2. Write the CR deployment file
The following YAML file describes the configuration for using the RadonDB ClickHouse Operator to install a two-shard, two-replica ClickHouse cluster.
ApiVersion: "clickhouse.radondb.com/v1 kind" : "ClickHouseInstallation" # application Operator to create cluster metadata: name: "ClickHouse" spec: defaults: templates: dataVolumeClaimTemplate Name: "replicas" layout: shardsCount: 2 replicasCount: 2 templates: name: "replicas" layout: shardsCount: 2 replicasCount: 2 templates: VolumeClaimTemplates: # Diskinfo description - name: data reclaimPolicy: Retain Spec: accessModes: - ReadWriteOnce resources: requests: storage: 10GiCopy the code
3. Deploy with Kubectl
Take the test namespace as an example:
$ kubectl -n test apply -f hello-kubernetes.yaml
clickhouseinstallation.clickhouse.radondb.com/clickhouse created
Note: If the RadonDB ClickHouse Operator is not deployed in kube-system, the RadonDB ClickHouse cluster and the Operator must be deployed in the same namespace.
After a successful deployment, Kubernetes stores the CR in etcd, and the Operator watches etcd for changes. When the Operator sees the new CR, it creates the corresponding StatefulSets and Services from it.
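If you want to watch this reconciliation happen, one way (a sketch; chi is the short name of the ClickHouseInstallation resource, as used later in this article) is to describe the CR and list the StatefulSets the Operator generated from it:

# Show the CR status and the events recorded by the Operator
$ kubectl -n test describe chi clickhouse
# List the StatefulSets the Operator created
$ kubectl -n test get statefulset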
4. Check the running status of the cluster
You will see four running RadonDB ClickHouse Pods forming a two-shard, two-replica cluster, plus a LoadBalancer Service for external access.
# Check the Pod status
$ kubectl get pods -n test
NAME                                  READY   STATUS    RESTARTS   AGE
pod/chi-clickhouse-replicas-0-0-0     1/1     Running   0          3m13s
pod/chi-clickhouse-replicas-0-1-0     1/1     Running   0          2m51s
pod/chi-clickhouse-replicas-1-0-0     1/1     Running   0          2m34s
pod/chi-clickhouse-replicas-1-1-0     1/1     Running   0          2m17s

# Check the Service status
$ kubectl get service -n test
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/chi-clickhouse-replicas-0-0   ClusterIP      None            <none>        8123/TCP,9000/TCP,9009/TCP      2m53s
service/chi-clickhouse-replicas-0-1   ClusterIP      None            <none>        8123/TCP,9000/TCP,9009/TCP      2m36s
service/chi-clickhouse-replicas-1-0   ClusterIP      None            <none>        8123/TCP,9000/TCP,9009/TCP      2m19s
service/chi-clickhouse-replicas-1-1   ClusterIP      None            <none>        8123/TCP,9000/TCP,9009/TCP      117s
service/clickhouse-clickhouse         LoadBalancer   10.96.137.152   <pending>     8123:30563/TCP,9000:30615/TCP   3m14s
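To confirm the cluster actually answers queries, one option (a sketch; it assumes the image ships clickhouse-client and that 8123 is the HTTP port, as the Service list above suggests):

# Run a query from inside one of the replica Pods
$ kubectl -n test exec -it chi-clickhouse-replicas-0-0-0 -- clickhouse-client -q "SELECT version()"
# Or forward the cluster Service's HTTP port to your workstation and query it with curl
$ kubectl -n test port-forward service/clickhouse-clickhouse 8123:8123
$ curl "http://127.0.0.1:8123/?query=SELECT%201"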
That is how to deploy a RadonDB ClickHouse cluster with Kubectl + Operator. As you can see, the whole process still requires some K8s knowledge.
| Deploying with Helm + Operator
Prerequisites
- A Kubernetes cluster has been installed.
- The Helm package management tool has been installed.
Deployment steps
1. Add RadonDB ClickHouse’s Helm repository
$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
$ helm repo update
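Optionally, confirm that the charts are visible from the newly added repository before installing; the two charts used below should show up:

# List the charts available from the ck repository
$ helm search repo ck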
2. Deploy RadonDB ClickHouse Operator
$ helm install clickhouse-operator ck/clickhouse-operator
3. Deploy the RadonDB ClickHouse cluster
$ helm install clickhouse ck/clickhouse-cluster
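The chart's defaults can be inspected and overridden in the usual Helm way. A sketch (the exact value keys depend on the chart, so review the output of helm show values before changing anything):

# Dump the chart's default values for review
$ helm show values ck/clickhouse-cluster > values.yaml
# Edit values.yaml as needed, then install with the overrides
$ helm install clickhouse ck/clickhouse-cluster -f values.yaml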
4. Check the running status of the cluster
You will see six running RadonDB ClickHouse Pods and three ZooKeeper Pods forming a three-shard, two-replica cluster, exposed through a ClusterIP Service. If you need to access the cluster from outside, change the Service type to NodePort or LoadBalancer, for example with kubectl edit service/clickhouse-clickhouse, or with the patch sketched after the status output below.
# Check the Pod status
$ kubectl get pods -n test
NAME                                  READY   STATUS    RESTARTS   AGE
pod/chi-clickhouse-replicas-0-0-0     2/2     Running   0          3m13s
pod/chi-clickhouse-replicas-0-1-0     2/2     Running   0          2m51s
pod/chi-clickhouse-replicas-1-0-0     2/2     Running   0          2m34s
pod/chi-clickhouse-replicas-1-1-0     2/2     Running   0          2m17s
pod/chi-clickhouse-replicas-2-0-0     2/2     Running   0          115s
pod/chi-clickhouse-replicas-2-1-0     2/2     Running   0          48s
pod/zk-clickhouse-cluster-0           1/1     Running   0          3m13s
pod/zk-clickhouse-cluster-1           1/1     Running   0          3m13s
pod/zk-clickhouse-cluster-2           1/1     Running   0          3m13s

# Check the Service status
$ kubectl get service -n test
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/chi-clickhouse-replicas-0-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m53s
service/chi-clickhouse-replicas-0-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m36s
service/chi-clickhouse-replicas-1-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m19s
service/chi-clickhouse-replicas-1-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   117s
service/chi-clickhouse-replicas-2-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   50s
service/chi-clickhouse-replicas-2-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   13s
service/clickhouse-clickhouse          ClusterIP   10.96.137.152   <none>        8123/TCP,9000/TCP            3m14s
service/zk-client-clickhouse-cluster   ClusterIP   10.107.33.51    <none>        2181/TCP,7000/TCP            3m13s
service/zk-server-clickhouse-cluster   ClusterIP   None            <none>        2888/TCP,3888/TCP            3m13s
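As mentioned above, besides kubectl edit, a non-interactive way to expose the cluster externally (a sketch, using the Service name shown in the output above) is to patch the Service type:

# Switch the cluster Service to NodePort for access from outside the cluster
$ kubectl -n test patch service clickhouse-clickhouse -p '{"spec":{"type":"NodePort"}}'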
At this point, the RadonDB ClickHouse cluster has been deployed on Kubernetes with Helm. As you can see, the Helm approach is noticeably simpler: it removes the need to hand-write the CR deployment file, so you do not have to learn Kubernetes YAML syntax or the meaning of every parameter in the CR.
| Managing a RadonDB ClickHouse cluster with the Operator
The above demonstrates how to deploy a RadonDB ClickHouse cluster using the Operator. Now let's verify the Operator's ability to manage the cluster.
Adding a shard
What if you need to add an extra shard to ClickHouse? All we need to do is change the CR we deploy.
$ kubectl get chi -n test
NAME CLUSTERS HOSTS STATUS
clickhouse 1 6 Completed
$ kubectl edit chi/clickhouse -n test
spec:
  configuration:
    clusters:
      - name: "replicas"
        layout:
          shardsCount: 4
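If you prefer declarative changes over editing the live object, you can also dump the CR to a file, raise shardsCount there, and re-apply it. A sketch (chi.yaml is just a scratch file name):

# Export the current CR, edit shardsCount in the file, then re-apply it
$ kubectl -n test get chi clickhouse -o yaml > chi.yaml
$ kubectl -n test apply -f chi.yaml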
After the modification, Kubernetes stores the updated CR in etcd, and the Operator picks up the change and creates the additional StatefulSets and Services from it.
Check the state of the RadonDB ClickHouse cluster below: two new RadonDB ClickHouse Pods have been added, completing the shard increase.
$ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
pod/chi-clickhouse-replicas-0-0-0   1/1   Running   0   14m
pod/chi-clickhouse-replicas-0-1-0   1/1   Running   0   14m
pod/chi-clickhouse-replicas-1-0-0   1/1   Running   0   13m
pod/chi-clickhouse-replicas-1-1-0   1/1   Running   0   13m
pod/chi-clickhouse-replicas-2-0-0   1/1   Running   0   13m
pod/chi-clickhouse-replicas-2-1-0   1/1   Running   0   12m
pod/chi-clickhouse-replicas-3-0-0   1/1   Running   0   102s
pod/chi-clickhouse-replicas-3-1-0   1/1   Running   0   80s
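To confirm that ClickHouse itself sees the new shards, not just that the Pods exist, you can query system.clusters from any replica. A sketch, assuming the cluster is named replicas as in the CR:

# Each row is one replica; shard_num should now go up to 4
$ kubectl -n test exec -it chi-clickhouse-replicas-0-0-0 -- clickhouse-client -q "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'replicas'"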
Expanding disk capacity
Similarly, if you need to expand the storage of the ClickHouse Pods, you can simply modify the CR.
$ kubectl get chi -n test
NAME CLUSTERS HOSTS STATUS
clickhouse 1 8 Completed
$ kubectl edit chi/clickhouse -n test
For example, change the storage capacity to 20 Gi.
volumeClaimTemplates:
  - name: data
    reclaimPolicy: Retain
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
After the modification succeeds, the Operator automatically requests the expansion, rebuilds the StatefulSet, and mounts the resized disks.
Checking the cluster's PVCs shows that the disk capacity has been updated to 20Gi.
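Note that online PVC expansion generally only succeeds if the underlying StorageClass permits it. A quick check (a sketch; substitute the StorageClass name your cluster actually uses):

# allowVolumeExpansion must be true for the resize to go through
$ kubectl get storageclass
$ kubectl get storageclass <your-storageclass> -o jsonpath='{.allowVolumeExpansion}'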
$ kubectl get pvc -n clickhouse
NAME STATUS VOLUME CAPACITY ACCESS MODES
data-chi-clickhouse-cluster-all-nodes-0-0-0 Bound pv4 20Gi RWO
data-chi-clickhouse-cluster-all-nodes-0-1-0 Bound pv5 20Gi RWO
data-chi-clickhouse-cluster-all-nodes-1-0-0 Bound pv7 20Gi RWO
data-chi-clickhouse-cluster-all-nodes-1-1-0 Bound pv6 20Gi RWO
...
Conclusion
At this point, we have seen two ways to deploy a RadonDB ClickHouse cluster on the Kubernetes platform, as well as the basic operations the Operator provides for managing a ClickHouse cluster.
Next up
More details about the ClickHouse Operator project, including its rationale and code architecture, are on the way. Stay tuned...
Reference
[1]. RadonDB ClickHouse: github.com/radondb/rad…
About RadonDB
The RadonDB open source community is a cloud-oriented, containerized open source database community. It provides database technology enthusiasts with a platform for sharing knowledge about mainstream open source databases (MySQL, PostgreSQL, Redis, MongoDB, ClickHouse, etc.), and offers enterprise-grade RadonDB open source products and services.
The RadonDB open source database family is currently used by thousands of enterprises and community users, including Everbright Bank, Shanghai Pudong Development Silicon Valley Bank, Hami, Taikang Life Insurance, Taiping Insurance, AXA, Sunshine Life, Anji Logistics, Anchang Logistics, Blue Moon, Tiancai Shanglong, Luokejia China, Zhehui Run Sports Technology, Wuxi, Beijing Telecom, Jiangsu Traffic Holding, Sichuan Airlines, Kunming Airlines, and many others.
RadonDB can be delivered on cloud platforms and on the Kubernetes container platform. It provides database product solutions covering multiple scenarios, along with professional cluster management and automated operation and maintenance capabilities. Its main features include: high-availability primary/secondary switchover, strong data consistency, read/write separation, one-click installation and deployment, multi-dimensional metric monitoring and alerting, flexible scale-up and scale-down, free horizontal scaling, automatic backup and recovery, in-city multi-active deployment, and remote disaster recovery. With RadonDB, enterprises and community users only need to focus on business logic development, without worrying about complex issues such as cluster selection, management, operation and maintenance, greatly improving the efficiency of business development and value innovation.
GitHub:github.com/radondb
WeChat group: please search for and add the group assistant, WeChat ID RadonDB.