About the author: Tianyi Wang, TiDB Community Architect. He previously worked at Fidelity Investments and SoftBank Investments, has extensive experience designing high-availability database architectures, and has studied the high-availability architecture and ecosystem of TiDB, Oracle, PostgreSQL, MySQL, and other databases in depth.
Nowadays, the topic of databases on the cloud attracts more and more attention. As a critical piece of basic software, what changes will the database face in the cloud-native era? What sparks will fly when the database meets cloud native? Recently, at an Amazon Web Services developer Meetup, TiDB Community Architect Tianyi Wang shared his experience with TiDB and cloud-native practice.
This article looks at TiDB Operator and cloud native from three angles:
- What a cloud-native database is;
- Why the cloud-native database TiDB should embrace Kubernetes;
- TiDB best practices on AWS.
What is a cloud-native database
What is cloud native
Any technological change begins with a change in thinking. I see cloud native as a methodology for building and running applications on the cloud.
We can split "cloud native" into "cloud" and "native". The "cloud" part means the application lives on the cloud rather than in a traditional data center. For example, files on a cloud disk are stored in the cloud rather than on a local hard drive. The "native" part, as I understand it, means applications are born on the cloud: they are designed from the beginning with the cloud in mind, so that they run optimally there.
In a word, cloud native means making full use of the elasticity and distributed advantages of the cloud platform; it is technology born in, grown on, and used in the cloud.
Cloud native features
Since the concept of cloud native was born in 2013, its definition has been continuously refined. Cloud native has the following four characteristics:
- Continuous Delivery: The concept of agile development has been around for about a decade. Its purpose is to respond to ever-changing user needs through frequent releases and fast delivery. In gray-release or canary-release scenarios, several different versions may even be available simultaneously.
- Containerization: Containerization keeps our development, testing, and production environments highly consistent. In the research stage, we can also use tools such as docker compose to spin up an experimental environment quickly, as shown in the sketch after this list.
- Microservices: Microservices are independently released application services that communicate with each other through REST APIs and can be independently deployed, updated, scaled, and restarted.
- DevOps: DevOps emphasizes efficient collaboration between development and operations, using automated deployment and CI tools to push applications into production quickly.
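As a tiny illustration of the containerization point above, a throwaway research environment can be declared in a single file. This is a hypothetical sketch, not an official setup; the image tag and port mapping are illustrative:

```yaml
# docker-compose.yml — single-node TiDB playground (hypothetical sketch)
services:
  tidb:
    image: pingcap/tidb:v6.5.0   # illustrative version tag
    ports:
      - "4000:4000"              # TiDB speaks the MySQL protocol on port 4000
```

Running `docker compose up -d` brings the environment up and `docker compose down` throws it away — exactly the disposable consistency that containerization promises.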
The essence of cloud native is to take full advantage of the technical dividends of cloud computing, such as resource pooling and platform scale, to create more business value.
What is a cloud-native database
A cloud-native database is a database service built, deployed, and delivered through a cloud platform. It is offered as a form of PaaS, often called DBaaS (Database as a Service). Cloud-native databases provide better accessibility and scalability than traditional databases.
Cloud-native database features
- A cloud-native database should have an automatic fault-tolerance mechanism that migrates workloads away from failed nodes and isolates faults automatically, ensuring high availability for applications. This is the most basic feature of a cloud-native database.
- A cloud-native database should have good elastic scaling, able to scale out and in within seconds according to CPU load and memory utilization.
- Elastic scaling enables flexible billing, such as pay-by-traffic, pay-by-resource, or bundled packages. A variety of pricing strategies helps users cut costs and improve efficiency.
- A cloud-native database should be easy to manage, providing good monitoring and alerting, keeping operations simple, and evolving from self-service operations toward automated operations.
- A cloud-native database should also have good security isolation mechanisms, covering both multi-tenant resource isolation and network isolation.
All of these features serve the ultimate user experience: using a cloud-native database with a low learning cost, low operations cost, and low price.
Why is TiDB a cloud-native database
Let's take a look at the basic architecture of TiDB:
- TiDB Server: accepts client connections, performs SQL parsing and optimization, and ultimately generates a distributed execution plan.
- TiKV Server and TiFlash Server: the storage layer, where TiKV is the row-based storage engine and TiFlash is the columnar storage engine.
- PD: stores the metadata of the entire TiDB cluster, is responsible for cluster scheduling, and acts as the brain of the cluster.
Storage nodes:
- TiKV: a distributed, transactional key-value row storage engine.
- TiFlash: a columnar storage engine, mainly used to accelerate analytical workloads.
A TiDB cluster itself is highly available and highly scalable, which fits the concept of cloud native very well.
Why Embrace Kubernetes
The history of Kubernetes
There are two topics that cannot be avoided when talking about cloud native: containerization and container orchestration. How, then, should TiDB go to the cloud? Here we have to mention Kubernetes. Looking back at the history, Docker solved the problem of containerization, while Kubernetes solved the problem of container orchestration:
- In 2013, the Docker project was released, bringing container (sandbox) technology to a mass audience;
- In 2014, the Kubernetes project was released, and the container design pattern was formally established;
- In 2017, Kubernetes established itself as the de facto standard for container orchestration;
- By 2018, Kubernetes had become the operating system of the cloud-native world.
Database × Kubernetes
In my previous company, I designed the following architecture: the underlying database used MariaDB Galera Cluster, a multi-master replication cluster made up of peer instances. The upper layer used stateless ProxySQL for SQL routing to the different MariaDB instances. The middle layer used ZooKeeper for service registration and discovery, controlling the ProxySQL layer above and the MariaDB layer below. Such a structure is a natural fit for Kubernetes.
At first, we were worried about running the database on Kubernetes, so we only put the relatively lightweight Proxy nodes on Kubernetes, while the database layer was still managed on physical machines. To guard against accidents, besides the proxies deployed on Kubernetes, we also kept proxy nodes on physical machines, so that even if the Kubernetes cluster failed completely, we could handle the situation easily.
After accumulating more than half a year of Kubernetes maintenance experience, we decided to move a relatively marginal database cluster onto Kubernetes as well. With that, both Proxy on Kubernetes and Database on Kubernetes were complete.
How should TiDB embrace Kubernetes
Is it appropriate to put the stateless TiDB nodes on Kubernetes? This is the same idea as putting the Proxy on Kubernetes. If you intend to run TiDB on Kubernetes, maintaining such a TiDB + Kubernetes cluster is the best way to get familiar with the environment; it is especially helpful for practicing disaster recovery and network configuration. Many companies ran TiDB this way before going all-in on Kubernetes.
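A rough sketch of that halfway pattern, assuming PD and TiKV still run outside Kubernetes: deploy only the stateless tidb-server layer as an ordinary Deployment. The image tag, PD address, and replica count below are placeholders:

```yaml
# Stateless tidb-server layer on Kubernetes (sketch; PD stays off-cluster)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tidb-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tidb-server
  template:
    metadata:
      labels:
        app: tidb-server
    spec:
      containers:
        - name: tidb
          image: pingcap/tidb:v6.5.0        # illustrative tag
          args:
            - --store=tikv
            - --path=pd0.example.com:2379   # placeholder PD endpoint outside Kubernetes
          ports:
            - containerPort: 4000           # MySQL protocol
```

Because tidb-server keeps no local state, a lost Pod can simply be recreated, which is why this layer is the safest first step.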
Those of you who know TiDB will know that PD is implemented on top of etcd. Is there a good way to run etcd on Kubernetes? Because etcd is a stateful service, a bare deployment on Kubernetes may not be suitable. Instead, we can learn from the well-known etcd-operator and develop a PD operator.
TiKV is a bit more complicated. For a database product, putting the storage layer on Kubernetes calls for great care. Before moving it there, let's look at how TiKV is implemented.
TiKV splits the data in the storage layer into small chunks, which are called Regions.
To ensure correctness and availability, each Region corresponds to a Raft group, and each Region keeps at least three replicas consistent through Raft log replication. Clearly, the TiKV layer is a stateful service: when a TiKV node fails, we cannot simply create a new Pod reusing the failed node's PV.
As we all know, Kubernetes decides whether a node has failed based on whether the kubelet can properly report the node's state. Imagine the kubelet fails while the containers on that node are still running; if you then attach the PV to another node, you end up with a double-write problem.
Therefore, using an operator to manage TiDB's state is essential.
TiDB best practices on AWS
Why choose TiDB Operator
What is TiDB Operator? Simply put, TiDB Operator is CRD + Controller. The CRD provides declarative management, and the Controller drives the actual state of the TiDB cluster toward the desired state. For example, suppose our cluster is defined as 3 TiDB nodes and 3 TiKV nodes, and for some reason one TiDB node goes down. The actual state is then 2 TiDB nodes while the desired state is 3, so the Controller brings up a new TiDB node.
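As a rough illustration, the desired state in that example would be declared in the TidbCluster resource along these lines (a simplified fragment; field names follow the tidb-operator examples):

```yaml
# Fragment of a TidbCluster spec (sketch)
spec:
  tidb:
    replicas: 3   # desired state: 3 TiDB nodes
  tikv:
    replicas: 3   # desired state: 3 TiKV nodes
# If the Controller observes only 2 running TiDB Pods,
# it creates one more to close the gap.
```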
The CRD defines types for the many components of a TiDB cluster and its ecosystem. For example, the TidbCluster type is defined for the TiDB cluster itself, the TidbMonitor type for monitoring, and the DMCluster type for the DM data-synchronization component.
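For instance, a monitoring stack can be declared with a TidbMonitor resource along these lines (an abridged sketch based on my reading of the tidb-operator examples; names and versions are placeholders):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic
spec:
  clusters:
    - name: basic            # the TidbCluster to monitor (placeholder)
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1         # illustrative version
  grafana:
    baseImage: grafana/grafana
    version: 7.5.11          # illustrative version
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v5.4.0          # illustrative version
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1          # illustrative version
```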
Let's look at how TiDB Operator automates the maintenance of a TiDB cluster.
TiDB Operator in practice
At deployment time, TiDB Operator selects the most suitable Kubernetes native object for each component. TiDB is a stateless service, so we don't need to agonize over the right deployment object for it. PD, being built on etcd, needs peer discovery. For the storage layer, the Operator spreads TiKV containers out and automatically adds store labels recording which availability zone and which rack each container sits in, helping PD build a highly available Region topology. With TiDB Operator, the vendor's operational experience is replicated and distributed as code: a single declaration in YAML format is enough to deploy a TiDB cluster on Kubernetes quickly, as the sketch below shows. Of course, creation alone is not enough; the Operator also provides best practices for day-to-day operations.
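Such a declaration might look like the following, a minimal sketch modeled on the tidb-operator examples; the name, version, replica counts, and storage sizes are all placeholders:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v6.5.0            # placeholder TiDB version
  timezone: UTC
  pd:
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: "10Gi"        # placeholder size
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: "100Gi"       # placeholder size
  tidb:
    baseImage: pingcap/tidb
    replicas: 3
    service:
      type: ClusterIP
```

A single `kubectl apply -f` of this file is enough for the Operator to create the Pods, wire up peer discovery for PD, and label the TiKV stores.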
During an upgrade, we send an upgrade request to TiKV: each TiKV node must be shut down and its image replaced with the specified version. Since TiKV is a stateful service, we need to do some work before shutting a node down, such as calling the PD interface to evict the Region leaders from that TiKV node so that it no longer accepts read and write requests. After that, the TiKV container can be rebuilt smoothly and upgraded to the specified version. Once the node is up again, we call the PD interface to move the Region leaders back, and repeat this node by node to complete the rolling upgrade.
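From the user's point of view, triggering such a rolling upgrade can be as simple as bumping the version field in the cluster declaration and re-applying it (the versions here are illustrative):

```yaml
# Rolling upgrade by editing the TidbCluster declaration (sketch)
spec:
  version: v6.5.1   # was v6.5.0; the Operator restarts TiKV Pods one by one,
                    # evicting Region leaders via PD before each restart
```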
Consider a working three-replica TiKV group. According to the Raft protocol, more than half of the replicas must survive for the group to keep serving, which means at most one failed node can be tolerated.
When TiKV 1 becomes abnormal, we can combine PD's store information with Kubernetes' container information and determine through probes whether TiKV has really failed.
Should we fail over the moment a fault is detected? Not necessarily. Failing over too quickly may cause frequent switching due to network or CPU jitter, while failing over too slowly reduces the cluster's availability and throughput.
This is where TiDB Operator's automated failover comes in: the Operator decides when failover is needed, how often it may occur, and how to carry it out.
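These decisions are tunable rather than hard-coded. For example, the TidbCluster spec exposes a cap on how many replacement Pods automatic failover may create; the snippet below reflects my reading of the tidb-operator docs, and the value is illustrative:

```yaml
spec:
  tikv:
    replicas: 3
    maxFailoverCount: 3   # auto-failover creates at most 3 extra TiKV Pods
```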
Of course, there is also plenty of opposition in the industry, so we need to "think twice" before deploying a database on Kubernetes:
- Think about the way back. We must consider not only how to move data from physical machines onto Kubernetes, but also how to retreat from Kubernetes if its operations prove too complex, or if for other reasons we temporarily cannot cover such a large technology stack. Tools such as Binlog and TiCDC can help here.
- Think about the risks. Adding Kubernetes and an operator to the stack raises the risk surface, which means more testing. If the Kubernetes master nodes go down, can we still keep the service running? The masters themselves can be made highly available with Keepalived + HAProxy. If a worker Node fails, we can move its Pods to another Node. What if the whole Kubernetes cluster dies, taking the locally stored PV files with it? Even if a truly catastrophic failure corrupts the PV files, we still have regular backups to recover from, as sketched after this list.
- Think about the gains. Kubernetes is itself a platform built for change: upgrades and scaling become much easier on Kubernetes.
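As a sketch of that last line of defense, tidb-operator also defines a Backup custom resource that can ship a full BR backup to S3. Field names follow my reading of the v1alpha1 CRD; the cluster, namespace, region, and bucket are placeholders:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: nightly-full-backup
spec:
  backupType: full
  br:
    cluster: basic              # TidbCluster to back up (placeholder)
    clusterNamespace: tidb      # placeholder namespace
  s3:
    provider: aws
    region: us-west-2           # placeholder region
    bucket: my-backup-bucket    # placeholder bucket
    prefix: tidb/basic          # placeholder key prefix
```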
In addition, please consider two more questions:
- Do you believe that Kubernetes is the future?
- Are you ready?
If you are not ready for the technology stack involved, or you want to experience the charm of TiDB on Kubernetes right away, we offer an AWS-based DBaaS: TiDB Cloud. With TiDB Cloud, users not only get a reliable database service but also professional operations support, while avoiding the steep learning curve.
On TiDB Cloud, users can freely choose the number of TiDB nodes for computing and of TiKV and TiFlash nodes for storage, truly achieving on-demand usage and on-demand payment, playing to the strengths of cloud native while cutting costs and improving efficiency.
In 2014, a Gartner report used the term Hybrid Transactional/Analytical Processing (HTAP) to describe a new application architecture that breaks down the barrier between OLTP and OLAP and can serve both transactional and analytical scenarios.
HTAP enables real-time business decisions. The advantages of this architecture are obvious: tedious and expensive ETL operations are avoided, and the freshest data can be analyzed faster. The ability to analyze data quickly will be one of the core competencies of the future.
Some users may only need to run analytical queries at regular intervals. In that case, we can scale out the TiFlash nodes the evening before and synchronize data from TiKV to TiFlash, ready for the operations staff to analyze the next day; a sketch follows. The elastic scaling the cloud provides is therefore a natural match for HTAP scenarios.
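A sketch of that elastic pattern with tidb-operator: scale the tiflash section of the TidbCluster out before the analytics window and back afterwards. Field names follow the tidb-operator examples; the replica count and storage size are placeholders:

```yaml
spec:
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 2                 # scaled out for the nightly analytics window
    storageClaims:
      - resources:
          requests:
            storage: "100Gi"    # placeholder size
```

Note that tables still need TiFlash replicas enabled (via ALTER TABLE ... SET TIFLASH REPLICA) before TiKV data is synchronized over.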
The above is our practical experience with TiDB Operator and cloud native on Amazon Web Services. I hope you find it helpful.