TiDB is an open-source distributed relational database independently developed by PingCAP. Designed for enterprises' critical workloads, it offers core features such as strongly consistent distributed transactions, online elastic scale-out, automatic failure recovery for high availability, and multi-data-center active-active deployment, helping enterprises maximize the value of their data and leaving room for business growth.

KubeSphere is an application-centric multi-tenant container platform built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operation and maintenance capabilities, and simplifies DevOps workflows for enterprises. KubeSphere provides an operations-friendly wizard interface to help enterprises quickly build a powerful and feature-rich container cloud platform.

This article describes how to deploy TiDB on KubeSphere.

Preparing for the Deployment Environment

KubeSphere is an open-source container platform from QingCloud that can be installed and deployed on any infrastructure. The QingCloud public cloud supports one-click deployment of KubeSphere (QKE).

The following uses the KubeSphere container platform on the QingCloud public cloud as an example to deploy the TiDB distributed database. At least three schedulable nodes are required. You can also install KubeSphere on any Kubernetes cluster or Linux host by following the KubeSphere documentation.

1. Log in to console.qingcloud.com, click Container Platform in the left navigation, select KubeSphere, click Create, and choose an appropriate cluster specification:

2. After the cluster is created, log in to the KubeSphere web console:

3. Click Web Kubectl (the cluster's client command-line tool) at the bottom to open a kubectl command-line session, then run the following command to install the TiDB Operator CRDs:

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.6/manifests/crd.yaml

4. The following information is displayed:
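On success, kubectl apply prints one created line per CRD, roughly as below; the exact set of CRDs depends on the operator version, so treat this listing as illustrative:

customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created

You can double-check that the CRDs are registered with kubectl get crd | grep pingcap.com.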

5. Click Platform in the upper left corner, select Access Control, and create a workspace named dev-workspace:

6. Enter the workspace, select App Repositories, and add a TiDB application repository:

7. Add PingCAP's official Helm repository to the KubeSphere container platform. The Helm repository address is as follows:

charts.pingcap.org

8. Fill in the repository name and URL as follows:
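For reference, the same repository can also be added from any machine with the Helm 3 client; the repository alias pingcap below is just a local name:

helm repo add pingcap https://charts.pingcap.org/
helm repo update
helm search repo pingcap/    # should list tidb-operator, tidb-cluster, and other charts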

Deploy TiDB Operator

1. Create a project (namespace) to run the TiDB cluster:
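Behind the scenes a KubeSphere project is a Kubernetes namespace, so the command-line equivalent would be something like the following (the name tidb-cluster is only an example):

kubectl create namespace tidb-cluster
kubectl get namespace tidb-cluster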

2. After the project is created, enter it, select Applications, and click Deploy New Application:

3. Select from an application template:

4. Select the PingCAP repository. It contains multiple Helm charts; in this article we deploy TiDB Operator and TiDB Cluster.

5. Click TiDB Operator to open the chart's details page. Click the Configuration Files tab to view or download the default values.yaml, select a version, and click Deploy:

6. Set the application name, select the application version, and confirm the deployment location:

7. Go to the next step, where you can edit values.yaml directly and customize the configuration:
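As a rough guide, the values most commonly customized in the tidb-operator chart look like the excerpt below; the exact keys vary by chart version, so always check the values.yaml you downloaded:

# Illustrative excerpt from the tidb-operator chart's values.yaml (keys may differ by version)
operatorImage: pingcap/tidb-operator:v1.1.6   # operator image and tag
scheduler:
  create: true                                # whether to deploy the tidb-scheduler extender
controllerManager:
  replicas: 1                                 # controller-manager replica count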

8. Click Deploy and wait until the application status becomes active:

9. Click Workloads (Deployments) to see that TiDB Operator has created two Deployment resources:
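The same can be verified from the web kubectl session; in a healthy installation you would typically see the two Deployments below (the namespace placeholder and ages are illustrative):

kubectl get deployment -n <project-namespace>
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
tidb-controller-manager   1/1     1            1           2m
tidb-scheduler            1/1     1            1           2m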

Deploy TiDB Cluster

1. After TiDB Operator is deployed, you can continue to deploy TiDB Cluster. As with TiDB Operator, select Applications on the left and click TiDB Cluster:

2. Switch to the Configuration Files tab, select a version, and download values.yaml to your local machine.

3. Some components in the TiDB cluster require persistent volumes. The QingCloud public cloud platform provides the following StorageClass types:

/ # kubectl get sc
NAME                       PROVISIONER     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-high-capacity-legacy   csi-qingcloud   Delete          Immediate           true                   101m
csi-high-perf              csi-qingcloud   Delete          Immediate           true                   101m
csi-ssd-enterprise         csi-qingcloud   Delete          Immediate           true                   101m
csi-standard (default)     csi-qingcloud   Delete          Immediate           true                   101m
csi-super-high-perf        csi-qingcloud   Delete          Immediate           true                   101m

4. Here we select csi-standard. storageClassName in values.yaml is set to local-storage by default, so replace every local-storage occurrence in the downloaded YAML file with csi-standard. In the final step, overwrite the contents of the application configuration text box with the modified values.yaml, or edit the occurrences in place one by one, as shown below:
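If you edit the file locally, a global substitution does the job; a minimal sketch, assuming the file was saved as values.yaml (on macOS use sed -i ''):

sed -i 's/local-storage/csi-standard/g' values.yaml
grep -n 'storageClassName' values.yaml   # verify every occurrence now reads csi-standard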

5. Here we only modify the storageClassName field, to use external persistent storage. If you need to schedule the TiDB, TiKV, or PD components onto dedicated nodes, refer to the nodeAffinity-related parameters (see the sketch after this paragraph). Click Deploy to deploy TiDB Cluster to the container platform. The following two applications now appear in the application list:
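For reference, a hedged sketch of pinning TiKV to dedicated nodes via values.yaml; the dedicated=tikv label is hypothetical, and the key layout follows the v1.1.x tidb-cluster chart, so check your chart version:

# Label and taint the target nodes first (node name is a placeholder)
kubectl label node <node-name> dedicated=tikv
kubectl taint node <node-name> dedicated=tikv:NoSchedule

# Then, in values.yaml:
tikv:
  nodeSelector:
    dedicated: tikv            # only schedule TiKV onto nodes labeled dedicated=tikv
  tolerations:
  - key: dedicated
    operator: Equal
    value: tikv
    effect: NoSchedule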

View TiDB cluster monitoring

1. TiDB takes some time to complete initialization after deployment. Select Workloads and view the Deployments (stateless applications):

2. Check the StatefulSets; TiDB, TiKV, and PD are all stateful applications:
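From the command line, a default-sized cluster shows StatefulSets roughly like the following (the tidb-cluster prefix is the application name chosen at deployment time; the namespace placeholder and ages are illustrative):

kubectl get statefulset -n <project-namespace>
NAME                READY   AGE
tidb-cluster-pd     3/3     10m
tidb-cluster-tidb   2/2     10m
tidb-cluster-tikv   3/3     10m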

3. Check the TiDB load on the KubeSphere monitoring panel; CPU, memory, and outbound network traffic all change noticeably:

4. Check the TiKV load on the KubeSphere monitoring panel:

5. The TiDB cluster contains 3 PD, 2 TiDB, and 3 TiKV instances:

6. Click Storage Management to view the storage volumes. TiKV and PD use persistent storage:

7. View the usage of a storage volume. Taking TiKV as an example, you can see monitoring data such as the volume's current capacity and remaining capacity.

8. View the TiDB cluster's resource usage ranking within the KubeSphere project:

Access the TiDB cluster

1. Click Services on the left to view the services created and exposed by the TiDB cluster:

2. The TiDB service exposes port 4000 through a NodePort-type service, so the database can be accessed directly from outside the cluster via any node IP. In this test we connect with the MySQL client.
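To look up the NodePort that was assigned, query the services; the listing below is a sketch (cluster IP, the 32682 mapping, and the service name prefix are illustrative):

kubectl get svc -n <project-namespace>
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                          AGE
tidb-cluster-tidb   NodePort   10.96.12.34   <none>        4000:32682/TCP,10080:31234/TCP   12m

Then connect to the database with the MySQL client: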

[root@k8s-master1 ~]# docker run -it --rm mysql bash
root@0d7cf9d2173e:/# mysql -h 192.168.1.102 -P 32682 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 201
Server version: 5.7.25-TiDB-v4.0.6 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.01 sec)

mysql>
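As a quick smoke test (purely illustrative, including the timings), create a table and write a row to confirm the cluster accepts SQL:

mysql> create database hello;
Query OK, 0 rows affected (0.12 sec)

mysql> create table hello.t (id int primary key, msg varchar(32));
Query OK, 0 rows affected (0.11 sec)

mysql> insert into hello.t values (1, 'hello tidb');
Query OK, 1 row affected (0.02 sec)

mysql> select * from hello.t;
+----+------------+
| id | msg        |
+----+------------+
|  1 | hello tidb |
+----+------------+
1 row in set (0.01 sec)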

View the Grafana monitoring panel

In addition, TiDB provides Prometheus and Grafana for database cluster performance monitoring. Port 3000 of the Grafana service is also exposed as a NodePort, so you can open the Grafana UI to view the metrics:
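To find the Grafana NodePort, filter the service list (the service name and port mapping below are illustrative), then open http://<node-ip>:<nodeport> in a browser and log in with the credentials configured in values.yaml:

kubectl get svc -n <project-namespace> | grep grafana
tidb-cluster-grafana   NodePort   10.96.56.78   <none>   3000:31286/TCP   12m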

Conclusion

The KubeSphere container platform is very friendly for cloud-native application deployment. Application developers who are not familiar with Kubernetes but want to deploy a TiDB cluster through simple configuration in a UI can follow the steps above to get started quickly. In the next installment, we'll share another approach: publishing the TiDB application to the KubeSphere App Store for true one-click deployment.

In addition, TiDB can be combined with KubeSphere's multi-cluster federation feature. When a TiDB application is deployed, replicas of the different TiDB components can be distributed with one click across multiple Kubernetes clusters in different infrastructure environments, achieving cross-cluster, cross-region high availability. If you are interested, we will share TiDB's hybrid-cloud deployment architecture with multi-cluster federation on KubeSphere in a future article.