This article was adapted from the Internet

This series of articles will be organized into my GitHub repository for the Java Interview Guide. Check out my repository for more highlights

https://github.com/h2pl/Java-Tutorial

If you like, please click Star

The article was first posted on my personal blog:

www.how2playlife.com

This article is from the WeChat public account "Java Technology Jianghu" and is a study of the MySQL database. Part of the content is drawn from the network; to present the topic clearly, it also integrates a good deal of technical content I consider valuable and references several good blog posts. If there is any copyright infringement, please contact the author.

This series of posts takes you from introduction to advanced topics: from basic SQL usage, through the MySQL execution engine, indexes, and transactions, you will learn MySQL step by step and understand how to optimize SQL on that foundation — reducing SQL execution time and analyzing SQL performance through execution plans — and then move on to MySQL master-slave replication and deployment, so that you gain a complete picture of the MySQL technology stack and form your own knowledge framework.

If you have any suggestions or questions about this series of articles, you can also contact the author on the public account “Java Technology Jianghu”. You are welcome to participate in the creation and revision of this series of blog posts.

First, specific ways to scale MySQL

As business scale keeps expanding, appropriate solutions must be chosen to cope with the growing data volume and the increasing access pressure.

Ways to scale a database mainly include: business separation, master/slave replication, read/write separation, and database and table sharding. This article focuses on database and table sharding.

(1) Business separation

The article “Big Data and High Concurrency Solutions for Large Web applications” also explains why the business should be split.

At the beginning of a business, many applications adopt a centralized architecture to speed up rollout and iterate quickly. As the business system expands, the system becomes more complex and harder to maintain, development efficiency drops, resource consumption grows, and improving system performance through hardware alone becomes more and more expensive.

Therefore, in the early stage of selection, a good architecture design is an important guarantee for the later system expansion.

For example, an e-commerce platform contains several modules such as users, goods, reviews, and orders. The simplest approach is to create four tables — Users, Shops, Comment, and Order — in a single database.

However, as the business grows in size and traffic increases, we have to split the business. Each module then uses a separate database for storage, and different business functions access different databases. The original dependence on one database becomes a dependence on four databases, so four databases share the load and system throughput naturally improves.

(2) Master/slave replication

Generally, the master handles writes and the slave handles reads.

1. MySQL 5.6 database master/slave synchronization installation and configuration details

2. Common topologies, principle analysis, and a summary of how to improve the efficiency of MySQL master-slave replication

3. Use the mysqlreplicate command to quickly set up MySQL master-slave replication

The three articles above cover how to configure master and slave databases and how to implement read/write separation, so it is not repeated here; click through if needed.

The figure above shows the process of data synchronization between Master and Slave of MySQL.

The Slave obtains the binary log file from the Master and then replays the operations recorded in the log against its local copy. Since replication between Master and Slave is asynchronous, data on the Slave may lag behind the Master; only eventual consistency can be guaranteed.

(3) Database sub-library and sub-table

We know that no matter how well configured, every machine has its own physical limits. When our application reaches or exceeds the limit of a single machine, we can only enlist other machines to help or keep upgrading our hardware; the common solution is still to add more machines to share the load.

We also have to consider whether, as our business logic grows, our machines can grow linearly to meet demand. Splitting the database into multiple databases and tables can immediately improve system performance. Other reasons for sharding are not covered here; this article focuses on the concrete implementation strategies, starting in the next section.

Second, the implementation strategy of table sharding

Keyword: user ID, table capacity

Most database designs and business operations are related to the user ID, so routing by user ID is the most common sharding strategy. The user ID runs through the whole system as an important field; routing by it not only simplifies queries but also distributes data evenly across databases. (Of course, you can also split tables by category and so on — there are many routing strategies.)

Then assume that the order table ORDER holds the user’s order data, and the SQL script is as follows (for demonstration only, some details are omitted) :

CREATE TABLE `order` (
  `order_id` bigint(32) primary key auto_increment,
  `user_id` bigint(32),
   ...
)

When the data is large and the data is divided into tables, the first step is to determine how many tables the data needs to be evenly distributed, that is, the table capacity.

Assume that there are 100 tables for storage, then when we store data, we first carry out the operation of obtaining the user ID and query the corresponding table according to user_id%100, as shown in the following diagram:

For example, if user_id = 101, then we can get the value by using the following SQL statement:

select * from order_1 where user_id = 101

Here order_1 is derived from 101 % 100 = 1 and refers to the first order table after sharding.
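As a sketch, the routing above can be expressed in plain Java (the table count and table-name prefix are as in the example; the class and method names are made up for illustration):

```java
public class TableRouter {
    private static final int TABLE_COUNT = 100; // total number of order tables

    // Map a user ID to the name of the table holding that user's orders.
    public static String routeOrderTable(long userId) {
        long tableIndex = userId % TABLE_COUNT; // 101 % 100 = 1 -> order_1
        return "order_" + tableIndex;
    }

    public static void main(String[] args) {
        System.out.println(routeOrderTable(101)); // order_1
        System.out.println(routeOrderTable(200)); // order_0
    }
}
```

The caller computes the table name first and then issues the query against that table.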

Note:

In actual development, if you use MyBatis as the persistence layer, MyBatis already supports table sharding well. For example, the SQL above could be implemented with MyBatis like this:

Interface definition:

/**
 * @param tableNum
 * @param userId
 * @return Order List
 */
public List<Order> getOrder(@Param("tableNum") int tableNum, @Param("userId") int userId);

XML configuration mapping file:

<select id="getOrder" resultMap="BaseResultMap">
    select * from order_${tableNum}
    where user_id = #{userId}
</select>

Note that ${tableNum} is substituted into the SQL as plain text rather than bound as a parameter — that is what allows the table name itself to vary.
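To make the distinction concrete, here is a rough illustration of how a ${...} placeholder differs from a #{...} one. This mimics what MyBatis does; it is not the MyBatis API itself:

```java
public class MybatisStyleSql {
    // Mimics ${...}: plain text substitution into the SQL string before preparation.
    static String renderTableName(String sqlTemplate, int tableNum) {
        return sqlTemplate.replace("${tableNum}", String.valueOf(tableNum));
    }

    public static void main(String[] args) {
        // #{userId} would be left as a JDBC placeholder (?) and bound safely later.
        String template = "select * from order_${tableNum} where user_id = ?";
        String sql = renderTableName(template, 101 % 100);
        System.out.println(sql); // select * from order_1 where user_id = ?
    }
}
```

Because ${...} is raw text substitution, it should only ever receive values you compute yourself (like a table index), never user input.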

Note:

In addition, in actual development the user ID is more likely to be generated as a UUID, in which case we can hash the UUID to an integer value and then perform the modulo operation.
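A minimal sketch of that idea, assuming String.hashCode as the hash (any stable hash works; Math.floorMod keeps the result non-negative even when the hash code is negative):

```java
import java.util.UUID;

public class UuidShardKey {
    private static final int TABLE_COUNT = 100;

    // Hash a UUID string to a table index in [0, TABLE_COUNT).
    public static int tableIndex(String uuid) {
        return Math.floorMod(uuid.hashCode(), TABLE_COUNT);
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString();
        System.out.println("routed to order_" + tableIndex(id));
    }
}
```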

Three, the implementation strategy of database sharding

Table sharding can solve the query efficiency problem when a single table holds a large amount of data, but it cannot improve the database's capacity for concurrent operations, because the split tables still live in one database and are therefore limited by that database's IO performance.

So how do we spread the database IO load evenly? Clearly, splitting the data across multiple databases solves the performance problem of a single database.

The database sharding strategy is implemented very much like the table sharding strategy; the simplest approach routes by taking the modulus.

As in the example above, the user ID is taken modulo to obtain a specific database, and the keywords are likewise:

User ID and database capacity

The routing diagram is as follows:

In the figure above, the library capacity is 100.

Similarly, if the user ID is a UUID, hash first and then take the modulus.

Fourth, the implementation strategy of combined database and table sharding

As described above, table sharding solves the query performance problem of massive data in a single table, and database sharding relieves the concurrent access pressure on a single database.

Sometimes we need to address both problems at once: split individual tables and also split the database, so as to expand the system's concurrent processing capacity while improving single-table query performance — that is, use database and table sharding together.

Compared with the preceding two types of routing policies, a common routing policy is as follows:

1. intermediate variable = user_id % (number of databases * tables per database)
2. database index = floor(intermediate variable / tables per database)
3. table index = intermediate variable % tables per database

For example, if there are 256 databases and 1024 data tables in each database and user user_id = 262145, the following information can be obtained based on the routing policy:

1. intermediate variable = 262145 % (256 * 1024) = 1
2. database index = floor(1 / 1024) = 0
3. table index = 1 % 1024 = 1

In this case, user_id = 262145 will be routed to table 1 of the 0th database.
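The three-step rule above can be coded directly; plugging in the numbers from the example reproduces database 0, table 1 (the class name is illustrative):

```java
public class DbTableRouter {
    static final int DB_COUNT = 256;       // number of databases
    static final int TABLES_PER_DB = 1024; // tables per database

    // Returns {database index, table index} for a user ID.
    public static int[] route(long userId) {
        long mid = userId % ((long) DB_COUNT * TABLES_PER_DB); // intermediate variable
        int dbNo = (int) (mid / TABLES_PER_DB);                // integer division = floor
        int tableNo = (int) (mid % TABLES_PER_DB);
        return new int[] { dbNo, tableNo };
    }

    public static void main(String[] args) {
        int[] r = route(262145);
        System.out.println("db " + r[0] + ", table " + r[1]); // db 0, table 1
    }
}
```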

The schematic diagram is as follows:

Five, summary of database and table sharding

There are many strategies to choose from; the above is a relatively simple one based on the user ID. Others include partitioning by number segment or routing by hash. Those interested can explore them on their own.

As mentioned above, if the user ID is generated as a UUID, we need a separate hash step before taking the modulus. In fact, hashing is itself a sharding strategy, and when using hash routing its trade-offs should be understood. Advantage: uniform data distribution. Disadvantages: data migration is troublesome, and data cannot be allocated according to each machine's capacity.

The operations above improve query performance and concurrency, but there are still things to watch out for. For example: transactions that used to span tables within one database become distributed transactions; because records are spread across different databases and tables, multi-table join queries become difficult, and data cannot be queried without the routing field; and if we later need to scale the system further (changing the routing policy), it becomes very inconvenient and requires another round of data migration.

Finally, it should be pointed out that there are many middleware options available at present; the most common is Cobar, Taobao's middleware.

GitHub address: github.com/alibaba/cob…

The document address is github.com/alibaba/cob…

Cobar, Taobao's middleware, will not be introduced in detail in this article; it will be covered in later study.

In addition, Spring can also implement read-write separation of databases, which will be covered in future articles.

Six, summary

Above, we learned how to implement database read/write separation and database and table sharding. So, can we now build a scalable, high-performance, high-concurrency site? Apparently not yet! A large website uses far more than these techniques — they are only the most basic links, and there are many specifics we have not covered, such as database cluster management, cluster load balancing, disaster recovery, failover, transaction management, and so on. There is still much to learn and research.

In a word:

The way ahead is so long without ending, yet high and low I'll search with my will unbending.

The road ahead is beautiful and bright. The new journey of 2017 is full of progress!

Mycat implements master/slave replication, read/write separation, and separate libraries and tables

What is Mycat

A completely open-source large database cluster for enterprise application development

An enhanced database supporting transactions and ACID, able to stand in for MySQL

An enterprise-class database cluster that can be viewed as a replacement for an expensive Oracle cluster

A new kind of SQL server fusing in-memory caching, NoSQL, and HDFS big-data technologies

A new generation of enterprise database product combining traditional databases and the new distributed data warehouse

A novel database middleware product

The above content comes from Mycat's official website. Simply put, Mycat is a database middleware; to our application it behaves like a proxy. We only need to point the original data-source link at Mycat (if there were multiple data sources before, switch to a single one first). The benefit is that later, when data volume grows and we need sharding or read/write separation, no code changes are required — only the data-source link address changes.

Database and table sharding practice with Mycat

Haha, first of all, this is not meant as an introductory blog for Mycat — though it may read like one! It mainly explains data sharding in Mycat; the examples are tested against a local database and illustrated with figures.

Database sharding sounds very sophisticated, but as a developer you will run into it sooner or later — what must be learned still has to be learned; it is only a matter of when.

First, partitioning and table splitting

Partitioning stores a table's data and index files across multiple separate physical segments.

Mysql supports Range, List, Hash, and Key partition types. Range is commonly used:

RANGE partition: Allocates multiple rows to a partition based on column values belonging to a given contiguous interval.

LIST partitioning: Similar to partitioning by RANGE, except that LIST partitioning is selected based on column values matching a value from a discrete set of values.

HASH partition: A partition selected based on the return value of a user-defined expression computed using the column values of the rows to be inserted into the table. This function can contain any valid expression in MySQL that produces a non-negative integer value.

KEY partitioning: similar to HASH partitioning, except that the hashing function is supplied by the MySQL server rather than a user expression, and the partitioning key can be one or more columns, which need not be integers.

Table splitting logically divides what is, as a whole, one table into multiple tables, either horizontally or vertically. For example, a large table storing merchant information can be split by ranges of merchant number, with records in different ranges distributed into different tables.

Second, types of Mycat data sharding

Sharding in Mycat is similar to table splitting: when a database is too large — especially when writes are so frequent that a single host cannot support it — the database faces a bottleneck. We then distribute the data that lived in one database instance across multiple database instances (hosts), so that multiple machines serve the load, improving performance while keeping the system whole as the data is split.

Data sharding divides the data globally into related logical fragments: horizontal, vertical, or mixed. The discussion below focuses on Mycat's horizontal and vertical sharding. Note that Mycat is distributed, so it is normal for the separated shards to sit on different physical machines, cooperating via network communication.

Horizontal segmentation

Rows are distributed to multiple node databases according to some rule on a field, and each node holds a portion of the data. Horizontal partitioning can be understood simply as partitioning by rows: some rows of the table go to one node and other rows to other nodes, while from the perspective of the distributed whole they form a single table.

Vertical segmentation

A database consists of many tables, each corresponding to a different business function. Vertical sharding classifies the tables by business and distributes them onto different nodes. A vertical split is simple and clear: the split rules are explicit, application modules are well defined, and integration is easy; but once a single table's data volume grows past a certain point, it is difficult to scale further.

Hybrid segmentation

Is the combination of horizontal and vertical segmentation.

Three, Mycat vertical and horizontal segmentation in practice

1. Vertical segmentation

As said above, vertical segmentation splits mainly by business. Imagine a very large e-commerce system: we want to put the order table, transaction-flow table, user table, and user-comment table into different databases to improve throughput. The architecture looks roughly as follows:

Since the tests run on a single machine, there is only the host1 node; different tables still map to different databases, but all databases belong to the same database instance (host). Later, moving to different hosts only requires adding nodes.

The mycat configuration file is as follows:

server.xml

<user name="root">
    <property name="password">root</property>
    <!-- corresponds to the four logical libraries -->
    <property name="schemas">order,trade,user,comment</property>
</user>

schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- 4 logical libraries, bound to 4 different shard nodes -->
    <schema name="order" checkSQLschema="false" sqlMaxLimit="100" dataNode="database1" />
    <schema name="trade" checkSQLschema="false" sqlMaxLimit="100" dataNode="database2" />
    <schema name="user" checkSQLschema="false" sqlMaxLimit="100" dataNode="database3" />
    <schema name="comment" checkSQLschema="false" sqlMaxLimit="100" dataNode="database4" />
    <!-- 4 shard nodes -->
    <dataNode name="database1" dataHost="localhost1" database="database1" />
    <dataNode name="database2" dataHost="localhost1" database="database2" />
    <dataNode name="database3" dataHost="localhost1" database="database3" />
    <dataNode name="database4" dataHost="localhost1" database="database4" />
    <!-- the actual physical host -->
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="0"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="localhost:3306" user="root" password="root">
        </writeHost>
    </dataHost>
</mycat:schema>

Log in to the local MySQL and create the four physical databases, database1 through database4:

create database database1 character set utf8;
create database database2 character set utf8;
create database database3 character set utf8;
create database database4 character set utf8;

Run the startup_nowrap.bat file in the bin directory. If the output below appears, Mycat has started successfully; if not, check whether the four databases above have been created.

Log in to Mycat server with the following statement:

mysql -uroot -proot -P8066 -h127.0.0.1

Create a comment table in the COMMENT database and insert a data entry

Step 1 creates a new Comment table, step 2 inserts a record, and step 3 checks which data node the record was inserted into — database4.

2. Horizontal segmentation

server.xml

<user name="root">
    <property name="password">root</property>
    <property name="schemas">TESTDB</property>
</user>

schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
        <table name="travelrecord" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />
    </schema>
    <dataNode name="dn1" dataHost="localhost1" database="db1" />
    <dataNode name="dn2" dataHost="localhost1" database="db2" />
    <dataNode name="dn3" dataHost="localhost1" database="db3" />
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="0"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="hostM1" url="localhost:3306" user="root" password="root">
        </writeHost>
    </dataHost>
</mycat:schema>

rule.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
    <tableRule name="auto-sharding-long">
        <rule>
            <columns>id</columns>
            <algorithm>rang-long</algorithm>
        </rule>
    </tableRule>
    <function name="rang-long" class="io.mycat.route.function.AutoPartitionByLong">
        <property name="mapFile">autopartition-long.txt</property>
    </function>
</mycat:rule>

autopartition-long.txt in the conf directory:

# range start-end ,data node index
# K=1000,M=10000.
0-500M=0
500M-1000M=1
1000M-1500M=2

The above configuration creates a logical library named TESTDB and designates the table travelrecord for sharding, using the rang-long strategy, which shards by ranges of the id column's value. The concrete ranges are defined in autopartition-long.txt: records with id between 0 and 500*10000 are stored in db1's travelrecord table, records with id between 500*10000 and 1000*10000 in db2's travelrecord table, and so on. Now we insert two rows to verify that they land according to the sharding rule.
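As a cross-check, the ranges in autopartition-long.txt can be mirrored in a few lines of Java (the class is illustrative; Mycat's AutoPartitionByLong may treat the exact boundaries slightly differently):

```java
public class RangeShard {
    private static final long M = 10000L; // as in the mapFile: M = 10000

    // Data node index for an id, per autopartition-long.txt:
    // 0-500M -> 0, 500M-1000M -> 1, 1000M-1500M -> 2
    public static int nodeFor(long id) {
        if (id <= 500 * M) return 0;
        if (id <= 1000 * M) return 1;
        if (id <= 1500 * M) return 2;
        throw new IllegalArgumentException("id out of configured range: " + id);
    }

    public static void main(String[] args) {
        System.out.println(nodeFor(1));       // 0
        System.out.println(nodeFor(600 * M)); // 1
    }
}
```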

Create the db1, db2, and db3 databases:

create database db1 character set utf8;
create database db2 character set utf8;
create database db3 character set utf8;

This example only demonstrates range sharding on the id column's value. Mycat supports many other sharding algorithms, such as modulo, consistent hashing, and sharding by date; see the book Distributed Database Architecture and Enterprise Practice — Based on Mycat Middleware for an in-depth study.

Why do we need read-write separation

As for why we need read/write separation, I covered it in a previous article, and readers here likely already know. If you still want the background, see my earlier article "SpringBoot Mybatis read-write separation configuration", which also shows how to separate reads from writes in code.

MySQL primary/secondary replication

Master-slave replication is the prerequisite for read/write separation: however reads and writes are split, MySQL master-slave replication must be in place first. The key point is ensuring the data in the two libraries stays consistent (apart from the moment just after a write, when the slave may not yet have caught up) — if the two libraries' data diverges, read/write separation is meaningless. For how MySQL replication works, see "MySQL master/slave replication (binlog)".

Mycat read-write separation setting

Configure the Mycat user

Mycat users work like MySQL users. The main things to configure are the username and password used to connect to Mycat and the logical libraries the user may access. User information is configured in server.xml, as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="defaultSqlParser">druidparser</property>
        <!-- <property name="useCompression">1</property> 1 enables the MySQL compression protocol -->
        <!-- <property name="processorBufferChunk">40960</property> -->
        <!-- <property name="processors">1</property>
             <property name="processorExecutor">32</property> -->
        <!-- default is 65535; maximum text length for SQL parsing (64K) -->
        <!-- <property name="maxStringLiteralLength">65535</property> -->
        <!-- <property name="sequnceHandlerType">0</property> -->
        <!-- <property name="backSocketNoDelay">1</property> -->
        <!-- <property name="frontSocketNoDelay">1</property> -->
        <!-- <property name="processorExecutor">16</property> -->
        <!--
        <property name="mutiNodeLimitType">1</property> 0: small orders of magnitude (default); 1:
        <property name="mutiNodePatchSize">100</property>
        <property name="processors">32</property>
        <property name="processorExecutor">32</property>
        <property name="serverPort">8066</property>
        <property name="managerPort">9066</property>
        <property name="idleTimeout">300000</property>
        <property name="bindIp">0.0.0.0</property>
        <property name="frontWriteQueueSize">4096</property>
        <property name="processors">32</property>
        -->
    </system>
    <user name="raye">
        <property name="password">rayewang</property>
        <property name="schemas">separate</property>
    </user>
</mycat:server>

The user tag defines a user named raye; rayewang is the user's password, and separate is the logical library the user can access.

Configure the Mycat logical library

Mycat has a lot of configuration, but since we only use its read/write separation capability, not much of it is needed — only the basics. Likewise, this article only introduces the configuration related to read/write separation; for the rest, consult the documentation or other sources. The logical library is configured in schema.xml.

Let's start with some configuration tags in the Mycat logical library

schema

The schema tag is used to define a logical library. It has four attributes: dataNode, checkSQLschema, sqlMaxLimit, and name.

The dataNode attribute binds the logical library to a specific data node (database). In version 1.3, if dataNode is configured, sharded tables cannot be configured; version 1.4 allows configuring a default shard.

name defines the name of the current logical library, for reference when defining a user in server.xml.

When checkSQLschema is set to true, if we execute select * from separate.users;, Mycat rewrites the statement to select * from users;, avoiding the error ERROR 1146 (42S02): Table 'separate.users' doesn't exist. However, even with this value set to true, if the statement names a schema other than the one configured — for example select * from db1.users; — Mycat does not strip the db1 prefix, and an error is reported if that library is not defined. So it is best to write SQL statements without a schema prefix at all.

When sqlMaxLimit is set to a value, Mycat automatically appends a corresponding limit clause to each SQL statement executed without one. For example, with the value set to 100, running select * from users; behaves the same as select * from users limit 100;. Without this setting, Mycat returns all matching rows by default, which can mean far too much output, so in normal use it is recommended to set a value to reduce excessive data returns. SQL statements that explicitly specify their own limit size are not bound by this attribute. Note that this attribute does not take effect if the schema is not a sharded one; in that case you must add the limit clause manually.

Inside the schema tag there is a table tag that defines per-table sharding information; since we only do read/write separation, we will not use it and will not cover it here.

dataNode

The dataNode tag defines a data node in Mycat, which is what we commonly call a data shard; one dataNode tag is one independent data shard. dataNode has three attributes: name, dataHost, and database.

name defines the name of the data node. It must be unique, and it is referenced from the table and schema tags.

dataHost defines which database instance the shard belongs to; its value refers to the name attribute of a dataHost tag.

database defines which specific database on that instance the shard maps to, because a shard is defined by two dimensions: instance plus specific database. Since the tables and table structures created in each database are identical, this makes horizontal table splitting easy.

dataHost

dataHost is the tag that defines the actual database connections. It is the lowest-level tag in the Mycat logical library, directly specifying the concrete database instances, the read/write separation configuration, and the heartbeat statement. dataHost has seven attributes: name, maxCon, minCon, balance, writeType, dbType, and dbDriver, and two child tags: heartbeat and writeHost, where the writeHost tag can in turn contain a readHost tag.

name uniquely identifies the dataHost tag for use by dataNode tags.

maxCon specifies the maximum connections of each read/write instance's connection pool; both the writeHost and readHost tags nested inside use this value when instantiating the maximum size of their pools.

minCon specifies the minimum connections of each read/write instance's connection pool, i.e. the pool's initial size.

balance sets the read load-balancing type:

  1. balance="0": the read/write separation mechanism is disabled; all reads are sent to the currently available writeHost.
  2. balance="1": all readHosts and standby writeHosts take part in load balancing of select statements. Simply put, in a dual master/slave setup (M1->S1, M2->S2, with M1 and M2 in active/standby), M2, S1, and S2 all share the select load under normal conditions.
  3. balance="2": all reads are distributed randomly across writeHosts and readHosts.
  4. balance="3": all read requests are sent randomly to the readHosts of the writeHost; the writeHost carries no read load.

writeType sets the write load-balancing type. The current values are:

  1. writeType="0": all writes are sent to the first configured writeHost; if it fails, writes switch to the next surviving writeHost. The switch is recorded in the configuration file dnindex.properties and therefore persists across restarts.
  2. writeType="1": all writes are sent randomly to the configured writeHosts.

dbType specifies the type of back-end database. Currently the binary MySQL protocol is supported, as well as other databases reached via JDBC connections, such as MongoDB, Oracle, and Spark.

dbDriver specifies the driver used to connect to the back-end database; the available values are native and jdbc. native can be used for MySQL and MariaDB, since it speaks the binary MySQL protocol; other database types need JDBC driver support. (The PostgreSQL native protocol is supported from version 1.6.) To use JDBC, place a JDBC-4-compliant driver jar into Mycat's lib directory and check that the jar contains the file META-INF/services/java.sql.Driver, which holds the concrete driver class name, for example com.mysql.jdbc.Driver.

The heartbeat tag holds the statement used for heartbeat checks against the back-end database: for MySQL, select user(); for Oracle, select 1 from dual; and so on. The tag also has a connectionInitSql attribute, which holds any initial SQL to run when connecting — for Oracle, for example: alter session set NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'.

The writeHost and readHost tags both describe back-end database configuration to Mycat for instantiating back-end connection pools. The only difference is that writeHost specifies write instances and readHost read instances, grouped to meet the system's needs. Multiple writeHosts and readHosts can be defined within one dataHost, but if the back-end database behind a writeHost goes down, all readHosts bound to that writeHost become unavailable; the system then automatically detects the writeHost failure and switches to the standby writeHost. The two tags share the same attributes: host, url, password, user, weight, usingDecrypt, and so on.

host identifies the different instances; conventionally M1 is used for a writeHost and S1 for a readHost.

url is the connection address of the real database instance. With the native dbDriver it generally takes the form address:port; with JDBC or other drivers it must be written accordingly — when using JDBC, for example: jdbc:mysql://localhost:3306/.

user is the username for connecting to the real database instance.

password is the password for connecting to the real database instance.

weight is used on readHosts as the read-node weight. It is mainly useful when the database instance machines have different specs, so traffic can be adjusted according to the weights.

usingDecrypt toggles password encryption. The default is 0 (off); set it to 1 to enable encryption and use the encryption program to encrypt the password.

Note that readHost goes inside the writeHost tag, not on its own.

Here is my read/write separation profile

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="separate" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"/>
    <dataNode name="dn1" dataHost="localhost1" database="test" />
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3"
              writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- can have multi write hosts -->
        <writeHost host="hostM1" url="192.168.1.126:3307" user="root" password="123456">
            <!-- can have multi read hosts -->
            <readHost host="hostS2" url="192.168.1.126:3308" user="root" password="123456" />
        </writeHost>
    </dataHost>
</mycat:schema>

This was mostly explained above; since I am only using basic master-slave replication, the balance of my dataHost is set to 3.

Start Mycat and then connect to it with a database client to test whether the configuration works. The easiest check is to modify data directly on the slave, which makes it obvious which library a query actually ran against. Also, since I run Mycat in Docker, if you run Mycat directly on the system, check the official documentation for how to start it.