• MySQL primary/secondary (master/slave) replication: improves overall database performance and provides disaster recovery (DR) backup and data recovery
  • ShardingSphere read/write separation and database/table splitting: master/slave replication cannot solve the performance problems caused by very large single tables, so databases and tables need to be split

Common basic forms of clusters

Cluster goals

High availability

  • High availability: when one server stops providing service, there is no impact on the business or users. Causes may be unexpected, such as network adapters, routers, equipment-room outages, high CPU load, memory overflow, or natural disasters. A failure of this kind on a single machine is usually called a single point of failure

Break the data limit

  • One server cannot store a large amount of data, so multiple servers are required to share it. Ideally they also back each other up, so that even if one node fails, the data can still be found on other nodes

Data backup and disaster recovery (DR)

  • After a single node fails, the stored data can still be pulled up elsewhere

Load sharing

  • Because multiple servers each take on part of the work, single points of pressure are avoided as much as possible

Basic cluster forms

  • Master-slave form
  • Sharded form
  • Leader-election form

MySQL cluster

MySQL Cluster Principle

MySQL-MMM

  • Master-Master Replication Manager for MySQL (MMM) is an open-source project (Perl scripts, originally hosted on Google Code). MMM is an extended architecture built on MySQL Replication, used to monitor MySQL master/master replication and perform failover. Its principle is to map the real IPs (RIPs) of database nodes to a set of virtual IPs (VIPs). The mysql-mmm monitor provides multiple VIPs, including one writable VIP and several readable VIPs. These addresses are bound to available MySQL nodes under the monitor's management; when a node goes down, the monitor migrates its VIPs to another MySQL node. For the monitoring to work, the relevant authorized users must be added in MySQL so that the monitor host can maintain the nodes: the users mmm_monitor and mmm_agent, plus an mmm_tools user if you want to use the MMM backup tools
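
    For reference, the authorization statements for these users might look like the following (a hedged sketch: the host range and passwords are placeholders, and the privileges follow the mysql-mmm documentation):

    -- run on every MySQL node managed by MMM
    GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
    GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%' IDENTIFIED BY 'agent_password';
    -- an mmm_tools user is only needed if the MMM backup tools are used;
    -- see the MMM documentation for its exact privileges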

MHA(Master High Availability)

  • Just a quick overview

InnoDB Cluster

  • This solution supports automatic failover, strong consistency, read/write separation, high availability of read replicas, load balancing of read requests, and horizontal scaling. However, it is complex to deploy, and extra components are needed to solve the Router single-point problem; consider it when no better alternative exists. InnoDB Cluster is mainly composed of MySQL Shell, MySQL Router, and a cluster of MySQL servers; the three work together to provide a complete high-availability solution for MySQL

    MySQL Shell provides a management interface for administrators, making it very convenient to configure and manage the cluster

    MySQL Router can initialize itself automatically according to the cluster state. It is the instance that clients connect to. If a node goes down, the cluster automatically updates its configuration, and MySQL Router automatically detects this and connects clients to a surviving node
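
    As an illustration, creating a cluster with MySQL Shell's AdminAPI might look like this (a minimal sketch; the host names and cluster name are assumptions):

    # connect to the seed node with MySQL Shell (JS mode)
    mysqlsh root@node1:3306

    dba.createCluster('mallCluster')             // create the cluster on the connected instance
    var cluster = dba.getCluster('mallCluster')  // fetch a handle to the cluster
    cluster.addInstance('root@node2:3306')       // add further instances
    cluster.status()                             // check cluster health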

Common enterprise database solutions

  • Step 1: Database proxy DBProxy; Mycat, Cobar, ShardingSphere, etc

  • Step 2: Read/write separation, primary node write, secondary node read

  • Step 3: Distinguish by node role: slave1-3 serve public access and handle large volumes of concurrent requests; slave4 serves internal staff and the back-office management system; slave5 is dedicated to data backup

Deploying a MySQL cluster with Docker

Create a Master/Slave instance and start it

  • Create local directories

    mkdir -p /var/mall/mysql/master/data
    mkdir -p /var/mall/mysql/master/conf
    mkdir -p /var/mall/mysql/slave/data
    mkdir -p /var/mall/mysql/slave/conf

    # create my.cnf in each conf directory (master/slave)
    vim my.cnf
    # master
    docker run -p 33060:3306 --name mysql-master33060 \
      -v /var/mall/mysql/master/data:/var/lib/mysql \
      -v /var/mall/mysql/master/conf/my.cnf:/etc/mysql/my.cnf \
      -e MYSQL_ROOT_PASSWORD=root \
      -d mysql:5.7.31

    # slave
    docker run -p 33070:3306 --name mysql-slave33070 \
      -v /var/mall/mysql/slave/data:/var/lib/mysql \
      -v /var/mall/mysql/slave/conf/my.cnf:/etc/mysql/my.cnf \
      -e MYSQL_ROOT_PASSWORD=root \
      -d mysql:5.7

    # parameter notes
    # -p 33060:3306 : map container port 3306 to host port 33060
    # -v : mount the configuration file and data directory from the host
    # -e : set the initial password of the root user

Modify the configuration

Modify Master/Slave basic configuration
  • Basic configuration that must be added on both master and slave

    vim /var/mall/mysql/master/conf/my.cnf
    vim /var/mall/mysql/slave/conf/my.cnf

    # basic settings
    [client]
    default-character-set=utf8

    [mysql]
    default-character-set=utf8

    [mysqld]
    init_connect='SET collation_connection = utf8_unicode_ci'
    init_connect='SET NAMES utf8'
    character-set-server=utf8
    collation-server=utf8_unicode_ci
    skip-character-set-client-handshake
    skip-name-resolve
Add the master's replication configuration
  • vim /var/mall/mysql/master/conf/my.cnf

    server-id=1
    log-bin=mysql-bin
    read-only=0
    binlog-do-db=touch-air-mall-ums
    binlog-do-db=touch-air-mall-pms
    binlog-do-db=touch-air-mall-oms
    binlog-do-db=touch-air-mall-sms
    binlog-do-db=touch-air-mall-wms

    replicate-ignore-db=mysql
    replicate-ignore-db=sys
    replicate-ignore-db=information_schema
    replicate-ignore-db=performance_schema

    Restarting the Master Node

    docker restart mysql-master33060
Add the slave's replication configuration
  • vim /var/mall/mysql/slave/conf/my.cnf

    server-id=2
    log-bin=mysql-bin
    read-only=1
    binlog-do-db=touch-air-mall-ums
    binlog-do-db=touch-air-mall-pms
    binlog-do-db=touch-air-mall-oms
    binlog-do-db=touch-air-mall-sms
    binlog-do-db=touch-air-mall-wms

    replicate-ignore-db=mysql
    replicate-ignore-db=sys
    replicate-ignore-db=information_schema
    replicate-ignore-db=performance_schema

    Restart the Slave node

    docker restart mysql-slave33070
Authorize a user on the master to synchronize data
    # enter the master container and log in to MySQL
    docker exec -it mysql-master33060 /bin/bash
    mysql -uroot -proot

    GRANT REPLICATION SLAVE ON *.* TO 'backup'@'%' IDENTIFIED BY '123456';
    # refresh privileges
    FLUSH PRIVILEGES;
    # note the File and Position values for the CHANGE MASTER command below
    show master status\G;

Enter the slave and synchronize the master's data
    # stop and reset the slave first
    stop slave;
    reset slave;
  • Synchronization command
    # enter the slave container and log in to MySQL
    docker exec -it mysql-slave33070 /bin/bash
    mysql -uroot -proot

    CHANGE MASTER TO
      MASTER_HOST='192.168.83.133',
      MASTER_USER='backup',
      MASTER_PASSWORD='123456',
      MASTER_LOG_FILE='mysql-bin.000002',
      MASTER_LOG_POS=604,
      MASTER_PORT=33060;
  • Start the slave

    start slave;
    show slave status\G;
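
    If replication is configured correctly, the output of show slave status should contain these two lines (excerpt):

    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes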

Test the master/slave synchronization effect

  • In 33060, create the touch-air-mall-ums database and insert data, or directly import the project data
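
    For example (a sketch: the table and row are placeholders, the database name matches the binlog-do-db entries above):

    -- on the master (port 33060)
    CREATE DATABASE `touch-air-mall-ums` DEFAULT CHARACTER SET utf8;
    USE `touch-air-mall-ums`;
    CREATE TABLE `t_sync_check` (`id` INT PRIMARY KEY, `name` VARCHAR(50));
    INSERT INTO `t_sync_check` VALUES (1, 'hello-slave');

    -- on the slave (port 33070): the database, table, and row should appear automatically
    SELECT * FROM `touch-air-mall-ums`.`t_sync_check`;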

ShardingSphere

Introduction

  • Master/slave mode cannot solve the performance problems caused by a large amount of data in a single table, so it is necessary to split databases and tables, using middleware such as Mycat or ShardingSphere

    ShardingSphere official documentation

  • ShardingSphere is an ecosystem of open-source distributed database middleware solutions, composed of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar (planned). They all provide standardized data sharding, distributed transactions, and database governance, applicable to diverse scenarios such as Java homogeneous applications, heterogeneous languages, and cloud native

  • Sharding-JDBC adopts a decentralized architecture and suits high-performance, lightweight OLTP applications developed in Java; Sharding-Proxy provides a static entry point and heterogeneous-language support, and suits OLAP applications as well as the management and operation of sharded databases. ShardingSphere is an ecosystem with multiple access points: by mixing Sharding-JDBC and Sharding-Proxy, and configuring sharding strategies through the same registry center, application systems can be built flexibly for various scenarios, and architects can more freely adjust the system architecture best suited to the current business

                              Sharding-JDBC   Sharding-Proxy         Sharding-Sidecar
    Database                  Any             MySQL                  MySQL
    Connection consumption    High            Low                    High
    Heterogeneous languages   Java only       Any                    Any
    Performance               Low loss        Slightly higher loss   Low loss
    Decentralized             Yes             No                     Yes
    Static entry              None            Yes                    None

Feature list

Data sharding
  • Database & table splitting
  • Read/write separation
  • Customizable sharding strategies
  • Decentralized distributed primary keys
Distributed transactions
  • Standardized transaction interface
  • XA strongly-consistent transactions
  • Flexible (BASE) transactions
Database governance
  • Dynamic configuration
  • Orchestration & governance
  • Data masking
  • Visualized link tracing
  • Elastic scaling (planned)

Sharding-JDBC

  • Positioned as a lightweight Java framework that provides additional services at Java's JDBC layer. It connects directly to the database on the client side and provides its services as a JAR package, without extra deployment or dependencies. It can be understood as an enhanced version of the JDBC driver, fully compatible with JDBC and various ORM frameworks

    • Works with any JDBC-based ORM framework, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct JDBC
    • Supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, HikariCP, etc.
    • Supports any database that implements the JDBC specification; currently MySQL, Oracle, SQLServer, PostgreSQL, and any database conforming to the SQL92 standard

Sharding-Proxy

  • Positioned as a transparent database proxy, providing a server version that encapsulates the database binary protocol to support heterogeneous languages. Using any client compatible with MySQL/PostgreSQL (e.g., the MySQL command-line client, MySQL Workbench, Navicat, etc.), you can manipulate data; it is also more DBA-friendly

    • Fully transparent to applications and can be used directly as MySQL/PostgreSQL
    • Compatible with any client that works with MySQL/PostgreSQL

Sharding-Sidecar (TODO)

  • Positioned as Kubernetes' cloud-native database proxy, proxying all access to the database in the form of a sidecar. It provides the Database Mesh (also called data mesh): a decentralized, zero-intrusion solution for interacting with the database

    The Database Mesh focuses on how to organically connect distributed data-access applications with databases. It pays more attention to interaction, effectively sorting out the otherwise chaotic interaction between applications and databases. With a Database Mesh, applications and databases ultimately form a huge grid system; they only need to be registered in the grid, and are all governed by the mesh layer

Using Sharding-Proxy

  • Download Sharding-Proxy

    The configuration manual

  • Download the mysql driver

    Place the downloaded JAR package in the lib folder of Sharding-Proxy

Configuring Authentication Information

  • conf/server.yaml: uncomment the configuration
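
    For ShardingSphere-Proxy 4.x, the uncommented part of conf/server.yaml looks roughly like this (a sketch based on the bundled sample; the exact keys may differ between versions):

    authentication:
      users:
        root:
          password: root
        sharding:
          password: sharding
          authorizedSchemas: sharding_db

    props:
      executor.size: 16
      sql.show: true   # print the routed SQL, convenient for the tests below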

Configure database/table sharding and read/write separation

  • config-master-slave-sharding.yaml
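
    A sketch of what this file might contain, modeled on the 4.x sample configuration (the host 192.168.83.133 and ports 33060/33070 come from the Docker setup above; everything else should be checked against the configuration manual for your version):

    schemaName: sharding_db

    dataSources:
      ds_0:
        url: jdbc:mysql://192.168.83.133:33060/demo_ds_0?serverTimezone=UTC&useSSL=false
        username: root
        password: root
      ds_0_slave:
        url: jdbc:mysql://192.168.83.133:33070/demo_ds_0?serverTimezone=UTC&useSSL=false
        username: root
        password: root
      ds_1:
        url: jdbc:mysql://192.168.83.133:33060/demo_ds_1?serverTimezone=UTC&useSSL=false
        username: root
        password: root
      ds_1_slave:
        url: jdbc:mysql://192.168.83.133:33070/demo_ds_1?serverTimezone=UTC&useSSL=false
        username: root
        password: root

    shardingRule:
      masterSlaveRules:          # read/write separation: reads go to the slaves
        ms_ds_0:
          masterDataSourceName: ds_0
          slaveDataSourceNames: [ds_0_slave]
        ms_ds_1:
          masterDataSourceName: ds_1
          slaveDataSourceNames: [ds_1_slave]
      tables:
        t_order:
          actualDataNodes: ms_ds_${0..1}.t_order_${0..1}
          tableStrategy:
            inline:
              shardingColumn: order_id
              algorithmExpression: t_order_${order_id % 2}
          keyGenerator:
            type: SNOWFLAKE
            column: order_id
        t_order_item:
          actualDataNodes: ms_ds_${0..1}.t_order_item_${0..1}
          tableStrategy:
            inline:
              shardingColumn: order_id
              algorithmExpression: t_order_item_${order_id % 2}
      bindingTables:
        - t_order,t_order_item
      defaultDatabaseStrategy:   # databases are sharded by user_id
        inline:
          shardingColumn: user_id
          algorithmExpression: ms_ds_${user_id % 2}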

  • Step 2: Prepare the data

    • Edit the my.cnf files of 33060 and 33070, add demo_ds_0 and demo_ds_1 to the master-slave synchronized databases, save, and restart the containers

      vim /var/mall/mysql/slave/conf/my.cnf
      vim /var/mall/mysql/master/conf/my.cnf
      
      docker restart mysql-master33060
      docker restart mysql-slave33070
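
      The added lines mirror the earlier binlog-do-db entries (append to the [mysqld] section of both files):

      binlog-do-db=demo_ds_0
      binlog-do-db=demo_ds_1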

  • Step 3: Create the demo_ds_0 and demo_ds_1 databases on master33060; slave33070 will create them automatically through replication
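
    For example:

    CREATE DATABASE demo_ds_0 DEFAULT CHARACTER SET utf8;
    CREATE DATABASE demo_ds_1 DEFAULT CHARACTER SET utf8;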

Start the test

  • On Windows, run bin/start.bat (on Linux, bin/start.sh)

  • If the startup succeeds, you can connect to Sharding-Proxy with a client tool. The default port is 3307, and the account/password is root/root
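
    For example, with the mysql command-line client (assuming the proxy runs locally):

    mysql -h 127.0.0.1 -P 3307 -u root -proot
    use sharding_db;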

Create table tests
  • From now on, all database operations are performed through the sharding_db schema of Sharding-Proxy

  • Build table SQL

    CREATE TABLE `t_order`(
     `order_id` bigint(20) NOT NULL AUTO_INCREMENT,
     `user_id` int(11) NOT NULL,
     `status` VARCHAR(50) ,
    PRIMARY KEY (`order_id`)
    )ENGINE=INNODB DEFAULT CHARSET=utf8;
    
    
    CREATE TABLE `t_order_item`(
    `order_item_id` bigint(20) NOT NULL,
    `order_id` bigint(20) NOT NULL,
    `user_id` int(11) NOT NULL,
    `content` VARCHAR(255) COLLATE utf8_bin DEFAULT NULL,
    `status` VARCHAR(50) COLLATE utf8_bin DEFAULT NULL,
    PRIMARY KEY (`order_item_id`)
    )ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
  • Look at the demo_ds_0 and demo_ds_1 libraries; each should now contain the sharded tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1

  • Insert records to test database and table sharding
  • INSERT into t_order (user_id,status) VALUES(1,1);
    INSERT into t_order (user_id,status) VALUES(2,2);
    INSERT into t_order (user_id,status) VALUES(3,3);
    
    • Observe t_order in sharding-proxy

    • Theoretical results:

      • Based on user_id, the records for user_id 1 and 3 should be in demo_ds_1, and the record for user_id 2 in demo_ds_0
      • Based on order_id, records with an odd order_id should be in sub-table t_order_1, and records with an even order_id in t_order_0

      Observe master33060 again
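
      To verify against the physical libraries, connect to master33060 directly and query each shard (each should hold only its own slice of the data):

      SELECT * FROM demo_ds_0.t_order_0;
      SELECT * FROM demo_ds_0.t_order_1;
      SELECT * FROM demo_ds_1.t_order_0;
      SELECT * FROM demo_ds_1.t_order_1;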

Test read/write separation
  • Manually insert a record on the Slave33070 node

    Theoretical result: the master does not replicate records inserted on the slave, so master33060 still has only three records, while the slave33070 node has four. Querying the order table through Sharding-Proxy follows master-write/slave-read, so the query should return four records