Preface

The industry has basic requirements for high availability of systems, which can be summarized as follows.

  • There is no single point of failure in the system architecture.

  • The availability of services can be guaranteed to the maximum extent.

In general, a system's high availability is measured in "nines": the percentage of time during which the system is guaranteed to be serving external requests. For example, to achieve 99.99% availability (HA), the total downtime in a year cannot exceed roughly 52.6 minutes.
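As a quick sanity check on these numbers, the downtime budget for a given number of nines can be computed directly. The sketch below assumes a 365-day year and uses awk, which is available on essentially any Linux server.

# Hedged sketch: compute the yearly downtime budget for a given availability percentage
availability=99.99
awk -v a="$availability" 'BEGIN {
    minutes_per_year = 365 * 24 * 60
    printf "Allowed downtime per year at %s%%: %.1f minutes\n", a, minutes_per_year * (100 - a) / 100
}'
# Prints: Allowed downtime per year at 99.99%: 52.6 minutes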

System high availability architecture

Since we want a highly available system, what kind of architecture do we need to build? A simplified view of the target architecture is shown below.

Server Planning

Due to my limited machine resources, I set up the high availability environment on five servers. You can expand the environment to more servers following this article; the setup steps are the same.

Host name    IP address        Installed services
binghe151    192.168.175.151   Mycat, Zookeeper, MySQL, HAProxy, Keepalived, Xinetd
binghe152    192.168.175.152   Zookeeper, MySQL
binghe153    192.168.175.153   Zookeeper, MySQL
binghe154    192.168.175.154   Mycat, MySQL, HAProxy, Keepalived, Xinetd
binghe155    192.168.175.155   MySQL

Note: HAProxy and Keepalived are best deployed on the same server as Mycat.

MySQL installation

For the MySQL installation itself, refer to: MySQL 8.x installation + GCC upgrade + CMake upgrade (personally tested, complete version).

Install the JDK

Since both Mycat and Zookeeper require a JDK to run, we need to install the JDK on each server.

Here, I use the binghe151 server as an example; the JDK is installed on the other servers in exactly the same way. The installation procedure is as follows.

(1) Go to the official JDK website and download JDK 1.8. The download address is: www.oracle.com/technetwork… .

Note: the JDK installation package I downloaded is jdk-8u212-linux-x64.tar.gz; if the JDK version has been updated, download the corresponding version.

(2) Upload the downloaded jdk-8u212-linux-x64.tar.gz installation package to the /usr/local/src directory of binghe151.

(3) Decompress the jdk-8u212-linux-x64.tar.gz file, as shown below.

tar -zxvf jdk-8u212-linux-x64.tar.gz

(4) Move the decompressed jdk1.8.0_212 directory to the /usr/local directory on the binghe151 server, as shown below.

mv jdk1.8.0_212/ /usr/local/

(5) Configure JDK environment variables, as shown below.

vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Make the system environment variables take effect, as shown below.

source /etc/profile

(6) View the JDK version, as shown below.

[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

If the JDK version information is displayed, JDK is successfully installed.

Install Mycat

Download Mycat 1.6.7.4 Release, unzip it to /usr/local/mycat on the server, and configure Mycat’s system environment variables (a sketch of these steps follows below). Then edit Mycat’s configuration files as follows.
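A minimal sketch of the download, unpacking, and environment-variable steps is shown below. The archive name is illustrative (check the actual name of the 1.6.7.4 Release package you downloaded), and the tarball is assumed to unpack into a directory named mycat.

# Sketch: unpack Mycat 1.6.7.4 Release (archive already downloaded to /usr/local/src; name is illustrative)
cd /usr/local/src
tar -zxvf Mycat-server-1.6.7.4-release-*-linux.tar.gz
mv mycat /usr/local/mycat

# Append the Mycat environment variables to /etc/profile and make them take effect
cat >> /etc/profile << 'EOF'
MYCAT_HOME=/usr/local/mycat
PATH=$MYCAT_HOME/bin:$PATH
export MYCAT_HOME PATH
EOF
source /etc/profile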

  • schema.xml

      
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

	<schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
		<!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
		<table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
			<childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
		</table>
		<table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
		<table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
		<table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
		<table name="serial" primaryKey="id" dataNode = "ordb"/>
		<table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
		<table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
		<table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
		
		<table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
		<table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
		<table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
		<table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
		<table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
		<table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
		
		<table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
		<table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
		<table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
		<table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
		<table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
		<table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
		
	</schema>
	
	<dataNode name="mycat" dataHost="binghe151" database="mycat" />
	 
	<dataNode name="ordb" dataHost="binghe152" database="order_db" />
	<dataNode name="prodb" dataHost="binghe153" database="product_db" />
	<dataNode name="custdb" dataHost="binghe154" database="customer_db" />
	
	<dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
	<dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
	<dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
	<dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />
	
	<dataHost name="binghe151" maxCon="1000" minCon="10" balance="1"
			  writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
	</dataHost>
	
	<dataHost name="binghe152" maxCon="1000" minCon="10" balance="1"
			  writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
	</dataHost>
	
	<dataHost name="binghe153" maxCon="1000" minCon="10" balance="1"
			  writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
	</dataHost>
	
	<dataHost name="binghe154" maxCon="1000" minCon="10" balance="1"
			  writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
		<heartbeat>select user()</heartbeat>
		<writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
	</dataHost>
	
</mycat:schema>
  • server.xml

      
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
	<system>
		<property name="useHandshakeV10">1</property>
        <property name="defaultSqlParser">druidparser</property>
		<property name="serverPort">3307</property>
		<property name="managerPort">3308</property>
		<property name="nonePasswordLogin">0</property>
		<property name="bindIp">0.0.0.0</property>
		<property name="charset">utf8mb4</property>
		<property name="frontWriteQueueSize">2048</property>
		<property name="txIsolation">2</property>
		<property name="processors">2</property>
		<property name="idleTimeout">1800000</property>
		<property name="sqlExecuteTimeout">300</property>
		<property name="useSqlStat">0</property>
		<property name="useGlobleTableCheck">0</property>
		<property name="sequenceHandlerType">1</property>
		<property name="defaultMaxLimit">1000</property>
		<property name="maxPacketSize">104857600</property>
		
		<property name="sqlInterceptor">
			io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
		</property>
		<property name="sqlInterceptorType">
			UPDATE,DELETE,INSERT
		</property>
		<property name="sqlInterceptorFile">/tmp/sql.txt</property>
	</system>
	
	<firewall>
		<whitehost>
			<host user="mycat" host="192.168.175.151"></host>
		</whitehost>
		<blacklist check="true">
			<property name="noneBaseStatementAllow">true</property>
			<property name="deleteWhereNoneCheck">true</property>
		</blacklist>
	</firewall>
	
	<user name="mycat" defaultAccount="true">
		<property name="usingDecrypt">1</property>
		<property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
		<property name="schemas">shop</property>
	</user>

</mycat:server>
  • rule.xml

      
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
	<tableRule name="order_master">
		<rule>
			<columns>customer_id</columns>
			<algorithm>mod-long</algorithm>
		</rule>
	</tableRule>
	
	<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
		<property name="count">4</property>
	</function>
</mycat:rule>
  • sequence_db_conf.properties
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat

The Mycat configuration above is only for reference; you don’t have to follow it exactly and can adapt it to your own business needs. The focus of this article is building a high availability environment for Mycat.

Create Mycat account in MySQL as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
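Before wiring the account into Mycat, it can be worth a quick sanity check that the account can actually log in from the Mycat host. A hedged sketch is shown below; the target host 192.168.175.152 is just an example.

# Sketch: verify the mycat account from the binghe151 server (target host is an example)
mysql -umycat -pmycat -h192.168.175.152 -P3306 -e "SELECT USER(), VERSION();"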

Install the Zookeeper cluster

After the JDK is installed and configured, we need to build the Zookeeper cluster. According to the server planning, the Zookeeper cluster is built on three servers: binghe151, binghe152 and binghe153.

1. Download the Zookeeper

Go to the Apache Zookeeper website and download the installation package. The download address is: mirrors.tuna.tsinghua.edu.cn/apache/zook… .

You can also run the following command on the binghe151 server to download Zookeeper-3.5.5.

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz

Run the preceding command to download the apache-zookeeper-3.5.5-bin.tar.gz installation package to the binghe151 server.

2. Install and configure Zookeeper

Note that steps (1), (2), and (3) are all performed on the binghe151 server.

(1) Decompress the Zookeeper installation package

Run the following command on the binghe151 server to decompress Zookeeper into the /usr/local/ directory and rename the Zookeeper directory to zookeeper-3.5.5.

tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin zookeeper-3.5.5

(2) Configure Zookeeper environment variables

Similarly, you need to configure Zookeeper environment variables in the /etc/profile file as follows:

ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH

Combined with the JDK environment variables configured earlier, the overall configuration in /etc/profile is as follows:

MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH

(3) Configure Zookeeper

First, rename the zoo_sample.cfg file in $ZOOKEEPER_HOME/conf ($ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg. The specific commands are as follows:

cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg

Then modify the zoo.cfg file as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888

Create folders data and dataLog in the Zookeeper installation directory.

mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog

Switch to the newly created data directory and create the myid file, as shown below:

cd /usr/local/zookeeper-3.5.5/data
vim myid

Write the number 1 to the file myid.
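Writing the file with echo is equivalent to editing it in Vim; a small sketch:

# Equivalent way to write the myid file on binghe151
echo "1" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
1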

3. Copy the Zookeeper and system environment variable files to other servers

Note that steps (1) and (2) are performed on the Binghe151 server.

(1) Copy Zookeeper to other servers

According to the server planning, Zookeeper is copied to the binghe152 and binghe153 servers as follows:

scp -r /usr/local/zookeeper-3.5.5 binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5 binghe153:/usr/local/

(2) Copy system environment variable files to other servers

According to the server planning, the system environment variable file /etc/profile is copied to binghe152 and binghe153 servers as follows:

scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/

You may be asked to enter a password.
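If you prefer scp to run without password prompts, you can distribute an SSH key from binghe151 first; a minimal sketch using standard OpenSSH tools:

# Optional: set up passwordless SSH from binghe151 to the other Zookeeper nodes
ssh-keygen -t rsa        # press Enter to accept the defaults
ssh-copy-id binghe152
ssh-copy-id binghe153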

4. Modify the myid file on the other servers

Change the content of the Zookeeper myid file on the binghe152 server to 2, and the content of the Zookeeper myid file on the binghe153 server to 3. The details are as follows:

Perform the following operations on the Binghe152 server:

echo "2" > /usr/local/ zookeeper - 2.6.2 / data/myid cat/usr /local/ zookeeper - 2.6.2 / data/myid 2Copy the code

Perform the following operations on the binghe153 server:

echo "3" > /usr/local/ zookeeper - 2.6.2 / data/myid cat/usr /local/ zookeeper - 2.6.2 / data/myid 3Copy the code

5. Enable environment variables

Perform the following operations on binghe151, binghe152, and binghe153 respectively to make the system environment variables take effect.

source /etc/profile

6. Start the Zookeeper cluster

Perform the following operations on binghe151, binghe152, and binghe153 respectively to start the Zookeeper cluster.

zkServer.sh start

7. Check the startup status of the Zookeeper cluster

  • Binghe151 server
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
  • Binghe152 server
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
  • Binghe153 server
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

The Zookeeper role on the binghe151 and binghe153 servers is follower, and the Zookeeper role on the binghe152 server is leader.

Initialize the Mycat configuration to the Zookeeper cluster

Note: The initialization of Zookeeper data is performed on binghe151 server because we have installed Mycat on binghe151 server.

1. View the initialization script

An init_zk_data.sh script file is provided in the bin directory under the Mycat installation directory, as shown below.

[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root   3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root   1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root  15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root   2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root   2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64

The init_zk_data.sh script file is used to initialize the configuration of Mycat in Zookeeper. The init_zk_data.sh script file reads the configuration file in the conf directory of the Mycat installation directory and initializes it into the Zookeeper cluster.

2. Copy the Mycat configuration file

First, let’s look at the file information in the conf directory of Mycat installation directory, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq-utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

Next, copy the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files from the conf directory under the Mycat installation directory to the zkconf directory under the conf directory, as shown below.

cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/

3. Write the Mycat configuration information to the Zookeeper cluster

Run the init_zk_data.sh script to initialize the configuration information in the Zookeeper cluster, as shown below.

[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh  
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done

According to the above information, Mycat successfully writes initial configuration information to Zookeeper.

4. Verify that the Mycat configuration information is successfully written to Zookeeper

You can log in to Zookeeper using the Zookeeper client command zkCli.sh to verify that Mycat's configuration information was successfully written to Zookeeper.

First, log in to Zookeeper, as shown below.

[root@binghe151 ~]# zkCli.sh 
Connecting to localhost:2181
################### N lines of output omitted ######################
Welcome to ZooKeeper!

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] 

Next, view the mycat information on the Zookeeper command line, as shown below.

[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3] 

You can see that there are six directories under /mycat/mycat-cluster-1. Next, check the information under the schema directory, as shown below.

[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]

Next, let’s take a look at the dataHost configuration, as shown below.

[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe151"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe51"."url":"192.168.175.151:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe152"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe52"."url":"192.168.175.152:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe153"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe53"."url":"192.168.175.153:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe154"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe54"."url":"192.168.175.154:3306"."password":"root"."user":"root"}]}]
Copy the code

The output is hard to read, but it is JSON. We can format it, and the result after formatting is as follows.

[{"balance": 1."maxCon": 1000."minCon": 10."name": "binghe151"."writeType": 0."switchType": 1."slaveThreshold": 100."dbType": "mysql"."dbDriver": "native"."heartbeat": "select user()"."writeHost": [{"host": "binghe51"."url": "192.168.175.151:3306"."password": "root"."user": "root"}]}, {"balance": 1."maxCon": 1000."minCon": 10."name": "binghe152"."writeType": 0."switchType": 1."slaveThreshold": 100."dbType": "mysql"."dbDriver": "native"."heartbeat": "select user()"."writeHost": [{"host": "binghe52"."url": "192.168.175.152:3306"."password": "root"."user": "root"}]}, {"balance": 1."maxCon": 1000."minCon": 10."name": "binghe153"."writeType": 0."switchType": 1."slaveThreshold": 100."dbType": "mysql"."dbDriver": "native"."heartbeat": "select user()"."writeHost": [{"host": "binghe53"."url": "192.168.175.153:3306"."password": "root"."user": "root"}]}, {"balance": 1."maxCon": 1000."minCon": 10."name": "binghe154"."writeType": 0."switchType": 1."slaveThreshold": 100."dbType": "mysql"."dbDriver": "native"."heartbeat": "select user()"."writeHost": [{"host": "binghe54"."url": "192.168.175.154:3306"."password": "root"."user": "root"}}]]Copy the code

As you can see, the dataHost node information we configured in Mycat's schema.xml file was successfully written to Zookeeper.

To verify that the Mycat configuration has been synchronized to the other Zookeeper nodes, we can also log in to Zookeeper on the binghe152 and binghe153 servers and check.

  • Binghe152 server
[root@binghe152 ~]# zkCli.sh 
Connecting to localhost:2181
################# Omit N lines of information ################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe151"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe51"."url":"192.168.175.151:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe152"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe52"."url":"192.168.175.152:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe153"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe53"."url":"192.168.175.153:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe154"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe54"."url":"192.168.175.154:3306"."password":"root"."user":"root"}]}]
Copy the code

Mycat configuration information is successfully synchronized to Zookeeper on binghe152 server.

  • Binghe153 server
[root@binghe153 ~]# zkCli.sh 
Connecting to localhost:2181
##################### N lines of output omitted #####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe151"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe51"."url":"192.168.175.151:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe152"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe52"."url":"192.168.175.152:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe153"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe53"."url":"192.168.175.153:3306"."password":"root"."user":"root"}]}, {"balance": 1,"maxCon": 1000,"minCon": 10,"name":"binghe154"."writeType": 0."switchType": 1,"slaveThreshold": 100,"dbType":"mysql"."dbDriver":"native"."heartbeat":"select user()"."writeHost": [{"host":"binghe54"."url":"192.168.175.154:3306"."password":"root"."user":"root"}]}]
Copy the code

As you can see, Mycat configuration information is successfully synchronized to Zookeeper on binghe153 server.

Configure Mycat to support Zookeeper startup

1. Configure Mycat on the Binghe151 server

On the binghe151 server, go to the conf directory of the Mycat installation directory and view the file information, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq-utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

You can see that there is a myid.properties file in the conf directory of Mycat. Next, use the Vim editor to edit this file, as shown below.

vim myid.properties 

The contents of the edited myid.properties file are shown below.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

Several important parameters are described as follows.

  • loadZk: whether to load the configuration from Zookeeper; true means yes, false means no.

  • zkURL: the Zookeeper connection address; multiple addresses are separated by commas (,).

  • clusterId: the ID of the Mycat cluster. The ID must be the same as the directory name under /mycat in Zookeeper, as shown below.

[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
  • myid: the ID of the current Mycat node; I named it mycat_ followed by the last three digits of the server's IP address.
  • clusterSize: the number of Mycat nodes in the cluster. Here Mycat is deployed on the binghe151 and binghe154 nodes, so the value is 2.
  • clusterNodes: all Mycat nodes in the cluster, listed by the node IDs specified in myid and separated by commas (,). Here I configured them as mycat_151,mycat_154.

2. Install the new Mycat on binghe154 server

Download and install the same version of Mycat on binghe154 and unzip it to /usr/local/mycat on binghe154.

You can also run the following command on the binghe151 server to copy the Mycat installation directory to binghe154 server.

[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local

Note: Don’t forget to configure the system environment variables for Mycat on the Binghe154 server.
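A simple way to do that is to reuse the /etc/profile already configured on binghe151, as in the sketch below.

# Sketch: copy the environment variables from binghe151 to binghe154 (run on binghe151)
scp /etc/profile binghe154:/etc/

# Then, on binghe154, make the variables take effect
source /etc/profile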

3. Modify the Mycat configuration on the Binghe154 server

Modify the myid.properties file in the conf directory under the Mycat installation directory on the binghe154 server, as shown below.

vim /usr/local/mycat/conf/myid.properties

The contents of the modified myid.properties file are shown below.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

4. Restart Mycat

Restart Mycat on binghe151 server and Binghe154 server respectively, as shown below.

Note: restart Mycat on the binghe151 server first, then restart Mycat on the binghe154 server.

  • Binghe151 server
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
  • Binghe154 server
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

View the startup log of Mycat on binghe151 and Binghe154 servers, as shown below.

STATUS | wrapper  | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper  | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2020/03/08 21:08:15 | Launching a JVM...
INFO   | jvm 1    | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO   | jvm 1    | 2020/03/08 21:08:16 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2020/03/08 21:08:16 |
INFO   | jvm 1    | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log

According to the log output, Mycat is successfully restarted.

At this point, Mycat on the binghe151 server has been restarted first, followed by Mycat on the binghe154 server. After the restart, the schema.xml, server.xml, rule.xml and sequence_db_conf.properties files in the conf directory of Mycat on the binghe154 server are the same as the Mycat configuration files on the binghe151 server. This is because Mycat on the binghe154 server reads its configuration from Zookeeper.

From now on, we only need to modify Mycat's configuration in Zookeeper; the configuration is automatically synchronized to every Mycat node, keeping the configuration of multiple Mycat nodes consistent.

Configuring a Virtual IP Address

Configure the virtual IP address on the binghe151 and binghe154 servers, as shown below.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

The virtual IP address configuration is as follows. The Binghe151 server is used as an example.

[root@binghe151 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.151  Bcast:192.168.175.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:25559422 (24.3 MiB)  TX bytes:55997016 (53.4 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.110  Bcast:192.168.175.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2934009 (2.7 MiB)  TX bytes:2934009 (2.7 MiB)

Note: After adding a VIP on the command line, the VIP information will disappear when the server restarts, so it is best to write the command to create a VIP into a script file, for example, /usr/local/script/vip.sh, as shown below.

mkdir /usr/local/script
vim /usr/local/script/vip.sh

The contents of the file are as follows.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

Next, add the /usr/local/script/vip.sh file to the server boot option, as shown below.

echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
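Note that on some distributions (CentOS 7, for example) both the script and /etc/rc.d/rc.local must be executable for this to run at boot; a hedged sketch:

# Make the VIP script and rc.local executable so the VIP is recreated at boot
chmod +x /usr/local/script/vip.sh
chmod +x /etc/rc.d/rc.local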

Configuring IP Forwarding

Configure the system kernel IP forwarding function on binghe151 and binghe154 servers, and edit the /etc/sysctl.conf file, as shown below.

vim /etc/sysctl.conf

Find the following line.

net.ipv4.ip_forward = 0

Change it to the following.

net.ipv4.ip_forward = 1

Save and exit the Vim editor, and run the following command for the configuration to take effect.

sysctl -p
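You can confirm that the kernel parameter has taken effect as shown below.

# Both commands should report a value of 1
sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward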

Install and configure the xinetd service

We need to install the xinetd service on the HAProxy servers, namely binghe151 and binghe154, to open port 48700.

(1) On the command line of each server, run the following command to install the xinetd service:

yum install xinetd -y

(2) Edit the /etc/xinetd.conf file as follows:

vim /etc/xinetd.conf

Check whether the following configuration exists in the file.

includedir /etc/xinetd.d

If this line is not present in the /etc/xinetd.conf file, add it; if it already exists, leave it unchanged.
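A quick way to check is to grep the file, for example:

# Check whether the includedir line is already present
grep "includedir" /etc/xinetd.conf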

(3) Create the /etc/xinetd.d directory, as shown in the following.

mkdir /etc/xinetd.d

Note: If the /etc/xinetd.d directory already exists, the following error will be reported when the directory is created.

mkdir: cannot create directory `/etc/xinetd.d': File exists

You should ignore this error message.

(4) Add mycat_status, the configuration file for the Mycat status detection service, to the /etc/xinetd.d directory, as shown below.

touch /etc/xinetd.d/mycat_status

(5) Edit the mycat_status file as shown below.

vim /etc/xinetd.d/mycat_status

The contents of the edited mycat_status file are as follows.

service mycat_status
{
flags = REUSE
socket_type = stream
port = 48700
wait = no
user = root
server =/usr/local/bin/mycat_check.sh
log_on_failure += USERID
disable = no
}

The following describes some xinetd configuration parameters.

  • socket_type: the packet handling mode; stream indicates TCP packets.

  • port: the port number that the xinetd service listens on.

  • wait: no means the service does not wait, that is, it runs in multi-threaded mode.

  • user: the user that runs the xinetd service.

  • server: the service script to be started.

  • log_on_failure: records failure logs.

  • disable: set to no to enable the xinetd service.

(6) Add the mycat_check.sh service script to the /usr/local/bin directory, as shown in the following.

touch /usr/local/bin/mycat_check.sh

(7) Edit the /usr/local/bin/mycat_check.sh file as follows:

vim /usr/local/bin/mycat_check.sh

The content of the edited file is as follows:

#!/bin/bash
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
/bin/echo -e "HTTP/1.1 200 OK\r\n"
else
/bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
/usr/local/mycat/bin/mycat start
fi

Grant executable permissions to the mycat_check.sh file, as shown below.

chmod a+x /usr/local/bin/mycat_check.sh

(8) Edit the /etc/services file as follows:

vim /etc/services

Add the following at the end of the file.

mycat_status  48700/tcp        # mycat_status

The port number must be the same as that configured in the /etc/xinetd.d/mycat_status file.

(9) Restart the xinetd service.

service xinetd restart

(10) Check whether the mycat_status service is started successfully, as shown below.

  • Binghe151 server
[root@binghe151 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   2776/xinetd
  • Binghe154 server
[root@binghe154 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   6654/xinetd

The result shows that the mycat_status service is listening successfully on both servers.

At this point, the Xinetd service is successfully installed and configured, that is, the Mycat status check service is successfully installed.

Install and configure HAProxy

We installed HAProxy directly on binghe151 and Binghe154 servers using the following commands.

yum install haproxy -y

After the installation, we need to configure HAProxy. The configuration file directory of HAProxy is /etc/haproxy. We check the file information in this directory, as shown below.

[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21  2016 haproxy.cfg

The haproxy.cfg file exists in the /etc/haproxy directory. Next, we modify the haproxy.cfg file; the content after modification is as follows.

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen  admin_status
      bind 0.0.0.0:48800
      stats uri /admin-status
      stats auth  admin:admin
listen    allmycat_service
      bind 0.0.0.0:3366
      mode tcp
      option tcplog
	  option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3
listen    allmycat_admin
      bind 0.0.0.0:3377
      mode tcp
      option tcplog
	  option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3

Next, start HAProxy on binghe151 server and Binghe154 server, as shown below.

haproxy -f /etc/haproxy/haproxy.cfg 
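After HAProxy starts, you can also hit the statistics page defined in the listen admin_status section above as a quick check; a sketch using curl with the credentials from that configuration (run against binghe151 here as an example):

# Query the HAProxy statistics page (port 48800, credentials admin:admin, as configured above)
curl -u admin:admin http://192.168.175.151:48800/admin-status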

Next, we connect to Mycat with the mysql command, using the virtual IP and the port that HAProxy listens on, as shown below.

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: Mycat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

As you can see, the connection to Mycat was successful.

Install Keepalived

1. Install and configure Keepalived

To install Keepalived, run the following command on the binghe151 and Binghe154 servers.

yum install keepalived -y

After the installation succeeds, the /etc/keepalived directory is generated. Next, we configure the keepalived.conf file in /etc/keepalived, as shown below.

vim /etc/keepalived/keepalived.conf
  • Configure the binghe151 server
! Configuration File for keepalived
vrrp_script chk_http_port {
  script "/etc/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}
vrrp_instance VI_1 {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 150
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  track_script {
    chk_http_port
  }
  virtual_ipaddress {
    192.168.175.110 dev eth0 scope global
  }
}
  • Configure the binghe154 server
! Configuration File for keepalived
vrrp_script chk_http_port {
  script "/etc/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}
vrrp_instance VI_1 {
  state SLAVE
  interface eth0
  virtual_router_id 51
  priority 120
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  track_script {
    chk_http_port
  }
  virtual_ipaddress {
    192.168.175.110 dev eth0 scope global
  }
}

2. Write a script to detect HAProxy

Next, you need to create check_haproxy.sh script in /etc/keepalived directory on binghe151 and Binghe154 respectively, as shown below.

#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
A=`ps -C haproxy --no-header |wc -l`
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
if [ $A -eq 0 ];then
   echo $STARTHAPROXY >> $LOGFILE
   $STARTHAPROXY >> $LOGFILE 2>&1
   sleep 5
fi
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
   exit 0
else
   exit 1
fi

Run the following command to grant executable permission to the check_haproxy.sh script:

chmod a+x /etc/keepalived/check_haproxy.sh 

3. Start Keepalived

Once configured, we can start Keepalived on binghe151 and Binghe154 servers, as shown below.

/etc/init.d/keepalived start

Check whether Keepalived is started successfully, as shown below.

  • Binghe151 server
[root@binghe151 ~]# ps -ef | grep keepalived
root       1221      1  0 20:06 ?        00:00:00 keepalived -D
root       1222   1221  0 20:06 ?        00:00:00 keepalived -D
root       1223   1221  0 20:06 ?        00:00:02 keepalived -D
root      93290   3787  0 21:42 pts/0    00:00:00 grep keepalived
  • Binghe154 server
[root@binghe154 ~]# ps -ef | grep keepalived
root       1224      1  0 20:06 ?        00:00:00 keepalived -D
root       1225   1224  0 20:06 ?        00:00:00 keepalived -D
root       1226   1224  0 20:06 ?        00:00:02 keepalived -D
root      94636   3798  0 21:43 pts/0    00:00:00 grep keepalived

As you can see, the Keepalived service started successfully on both servers.

4. Verify the virtual IP address bound to Keepalived

Next, we check to see if Keepalived on each server is bound to a virtual IP.

  • Binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever

You can see the following line in the output.

inet 192.168.175.110/32 scope global eth0

Keepalived on the Binghe151 server is bound to the virtual IP address 192.168.175.110.

  • Binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

You can see that Keepalived on the Binghe154 server is not bound to a virtual IP.

5. Test the virtual IP address drift

How to test the drift of virtual IP? First, we stop Keepalived on the Binghe151 server, as shown below.

/etc/init.d/keepalived stop

Next, look at Keepalived binding virtual IP on the Binghe154 server, as shown below.

[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

You can see the following information in the output result.

inet 192.168.175.110/32 scope global eth0

Keepalived on the Binghe154 server is bound to the virtual IP address 192.168.175.110. The virtual IP address drifts to the Binghe154 server.

6. Keepalived on binghe151 server preempts virtual IP

Next, we start Keepalived on the Binghe151 server, as shown below.

/etc/init.d/keepalived start

After successful startup, we check the virtual IP binding again, as shown below.

  • Binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever
  • Binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

Because Keepalived on the binghe151 server has a higher priority than Keepalived on the binghe154 server, Keepalived on binghe151 preempts the virtual IP again.

Configure the primary/secondary replication for MySQL

For simplicity, I have configured MySQL on binghe154 and Binghe155 as master/slave replication. You can also configure MySQL on other servers as required.

1. Edit the my.cnf file

  • Binghe154 server
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
  • Binghe155 server
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery

2. Synchronize MySQL data on the two servers

There is only one customer_db database on the binghe154 server; we export it with the mysqldump command, as shown below.

[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password: 

Next, we look at the binghe154.sql file.

more binghe154.sql

In the file, we can find the following information.

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;

That is, the current MySQL binary log file is mysql-bin.000042, and the current binary log position is 995.
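Instead of paging through the whole dump with more, the replication coordinates can also be located directly, for example:

# Locate the replication coordinates recorded by mysqldump --master-data=2
grep "CHANGE MASTER TO" binghe154.sql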

Next, we copy the binghe154.sql file to the Binghe155 server, as shown below.

scp binghe154.sql 192.168.175.155:/usr/local/src

On the binghe155 server, import the binghe154.sql script into MySQL, as shown below.

mysql -uroot -p < /usr/local/src/binghe154.sql

At this point, the data is initialized.

3. Create a primary/secondary replication account

In MySQL on binghe154 server, create a MySQL account for master/slave replication, as shown below.

mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

4. Configure replication links

Log in to MySQL on the binghe155 server and run the following command to configure the replication link.

mysql> change master to
    -> master_host='192.168.175.154',
    -> master_port=3306,
    -> master_user='repl',
    -> master_password='repl123456',
    -> MASTER_LOG_FILE='mysql-bin.000042',
    -> MASTER_LOG_POS=995;

Here, MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=995 are the values found in binghe154.sql.

5. Start the slave library

Start the slave library on the MySQL command line of the binghe155 server, as shown below.

mysql> start slave;

Check whether the secondary library is successfully started, as shown below.

mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.175.151
                  Master_User: binghe152
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 1360
               Relay_Log_File: relay-bin.000003
                Relay_Log_Pos: 322
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
################# Some of the output is omitted ##################

The Slave_IO_Running and Slave_SQL_Running values are both Yes, indicating that the primary and secondary replication environments of MySQL are successfully set up.

Finally, don’t forget to create the mycat account that Mycat uses to connect to MySQL, in MySQL on the binghe155 server, as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;

Configure Mycat read/write separation

Modify Mycat's schema.xml file to implement read/write separation between MySQL on the binghe154 and binghe155 servers. Modify the schema.xml file in the conf/zkconf directory of the Mycat installation directory, as shown below.

<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
        <table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
            <childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
        <table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
        <table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
        <table name="serial" dataNode="ordb" primaryKey="id"/>
        <table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
        <table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
        <table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
        <table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
        <table name="product_category" dataNode="prodb" primaryKey="category_id"/>
        <table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
        <table name="product_info" dataNode="prodb" primaryKey="product_id"/>
        <table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
        <table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
        <table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
        <table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
        <table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
        <table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
        <table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
        <table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
    </schema>
	
    <dataNode name="mycat" dataHost="binghe151" database="mycat"/>
    <dataNode name="ordb" dataHost="binghe152" database="order_db"/>
    <dataNode name="prodb" dataHost="binghe153" database="product_db"/>
    <dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>
	
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
    </dataHost>
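    <!-- Read/write separation for binghe154/binghe155 is governed by the attributes below:
         balance="1"   : SELECT statements are load-balanced across the readHost and the standby writeHost;
         writeType="0" : all writes go to the first available writeHost and only switch to the next one on failure;
         switchType="1": automatic switching is enabled, based on the heartbeat statement. -->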
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
            <readHost host="binghe55" url="192.168.175.155:3306" user="mycat" password="mycat"/>
		</writeHost>
         <writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
    </dataHost>
</mycat:schema>

Save and exit the Vim editor. Next, initialize the data in Zookeeper as shown below.

/usr/local/mycat/bin/init_zk_data.sh 

After the preceding command is executed successfully, the configuration is automatically synchronized to schema.xml in the conf directory of the Mycat installation directory on the binghe151 and binghe154 servers.
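If you want to double-check that the configuration really landed in Zookeeper, you can browse the Mycat znodes with the Zookeeper command-line client. This is only a sketch: the zkCli.sh path assumes Zookeeper is installed under /usr/local/zookeeper, and the exact znode path depends on the cluster name configured for Mycat.

# Connect to the local Zookeeper instance with the Zookeeper CLI.
/usr/local/zookeeper/bin/zkCli.sh -server 192.168.175.151:2181
# Inside the zkCli shell, list the Mycat configuration nodes, for example:
#   ls /mycat
#   ls /mycat/<your-cluster-name>/schema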

Next, start the Mycat service on the binghe151 and binghe154 servers respectively.

mycat restart
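After restarting, it is a good idea to confirm that Mycat actually came up. The quick check below assumes Mycat's default ports (8066 for data, 9066 for management) and the default log location under the installation directory.

# Check that Mycat is listening on its data (8066) and management (9066) ports.
netstat -ntlp | grep -E '8066|9066'

# Scan the startup log for errors; a line like "MyCAT Server startup successfully" indicates a clean start.
tail -n 50 /usr/local/mycat/logs/wrapper.log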

How to access the high availability environment

After the high availability environment is configured, upper-layer applications only need to connect to the IP address and port that HAProxy listens on. For example, use the mysql command to connect to the high availability environment, as shown below.

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop     |
+----------+
1 row in set (0.10 sec)

mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop        |
+-----------------------+
| customer_balance_log  |
| customer_inf          |
| customer_level_inf    |
| customer_login        |
| customer_login_log    |
| customer_point_log    |
| order_cart            |
| order_customer_addr   |
| order_detail          |
| order_master          |
| product_brand_info    |
| product_category      |
| product_comment       |
| product_info          |
| product_pic_info      |
| product_supplier_info |
| region_info           |
| serial                |
| shipping_info         |
| warehouse_info        |
| warehouse_proudct     |
+-----------------------+
21 rows in set (0.00 sec)

Here, I have only set up the read/write separation environment for MySQL on the binghe154 server. You can also configure master/slave replication and read/write separation for MySQL on the other servers according to your actual needs. With that, the whole environment achieves high availability of HAProxy, Mycat, MySQL, Zookeeper and Keepalived.
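To actually see the read/write separation working, you can connect to Mycat's management port (9066 by default) on binghe151 or binghe154 and inspect the per-host statistics. The sketch below assumes the same mycat account configured in Mycat's server.xml.

# Connect to the Mycat management port (not the data port 8066) and list the backend data sources.
mysql -umycat -pmycat -h192.168.175.151 -P9066 --default-auth=mysql_native_password -e "show @@datasource;"
# As you run SELECT statements through Mycat, the read-load counter of the binghe55 read node
# should increase, while writes are counted against the binghe54 write node.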

OK, that's enough for today. I'm Glacier, and I'll see you next time!

Glacier Original PDF

Follow the "Glacier Technology" WeChat official account:

Reply "Concurrent Programming" to receive the PDF of In-depth Understanding of High Concurrent Programming (1st edition).

Reply "Concurrent Source Code" to receive the PDF of Core Knowledge of Concurrent Programming (Source Code Analysis, 1st edition).

Reply "Limit Traffic" to receive the PDF of Distributed Solutions under Hundreds of Millions of Traffic.

Reply "Design Patterns" to receive the PDF of Simple Java23 Design Patterns.

Reply "Java8 New Features" to receive the PDF of the Java8 New Features Tutorial.

Reply "Distributed Storage" to receive the PDF of Learn Distributed Storage Techniques from Glacier.

Reply "Nginx" to receive the PDF of Learn Nginx Technology from Glacier.

Reply "Internet Engineering" to receive the PDF of Learn Internet Engineering Techniques from Glacier.

A big bonus

Search for the "Glacier Technology" WeChat official account and follow this programmer who thinks deeply, and read hardcore technical articles every day. Reply "PDF" in the official account to get the first-tier company interview materials and the original, super hardcore PDF technical documents I have prepared, as well as a number of resume templates (continuously updated). I hope everyone can find the job they want. Learning is a road with its ups and downs; keep going. And if you have already worked your way into the company of your choice, don't slack off: career growth, like learning new technology, never stops. With a bit of luck, we will meet again out there!

In addition, every PDF above is open sourced, and I will keep updating and maintaining them. Thank you for your long-term support of Glacier!

In closing

If you think Glacier's articles are good, please search for and follow the "Glacier Technology" WeChat official account and learn high concurrency, distributed systems, microservices, big data, Internet and cloud native technology with Glacier. The "Glacier Technology" official account publishes a large number of technical topics, and every article is packed with useful material! Many readers have read the articles on the account and successfully job-hopped to big companies; many others have made a technological leap and become the technical backbone of their companies! If you also want to improve your skills like they did, achieve a leap in technical ability, and get into a big company with a promotion and a raise, then follow the "Glacier Technology" WeChat official account. It publishes super hardcore technical content every day, so you will no longer be confused about how to improve your skills!