“This is my third day of the first Gwen Challenge 2022.”
MySQL database cluster solution
MySQL high availability architecture: master-slave backup
- To keep the service available when the database suddenly goes down, we need a high availability architecture. A master/slave backup of the database is the most common scenario.
- The usual topology is one master with one (or more) slaves.
- Under normal operation the master does the work while the slave continuously backs up the master's data. If the master unexpectedly goes down one day, a slave can take over immediately, with no data loss.
MySQL master-slave replication principle
How does the MySQL master node (master) replicate to a slave?

- First, binary logging must be enabled on the master.
- The master writes a record of every DML operation to its binary log as binary log events.
- The slave node is configured with the master's details: IP, port, binary log file name and position. The slave's I/O thread copies the master's binary log events into its own relay log in real time, and the slave's SQL thread replays those relay-log events to apply the same changes to its own data.
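As a conceptual sketch only (toy classes, not real MySQL internals), the flow above — the master logs each DML event, the slave copies new events into a relay log and replays them — can be modeled like this:

```python
# Toy model of binlog-based replication (illustrative only, not MySQL code).
class Master:
    def __init__(self):
        self.binlog = []   # binary log: ordered DML events
        self.data = {}

    def execute(self, key, value):
        self.data[key] = value
        self.binlog.append((key, value))   # every DML operation is logged

class Slave:
    def __init__(self):
        self.relay_log = []
        self.data = {}
        self.pos = 0   # binlog position already copied from the master

    def sync(self, master):
        # "I/O thread": copy new binlog events into the relay log
        new_events = master.binlog[self.pos:]
        self.relay_log.extend(new_events)
        self.pos = len(master.binlog)
        # "SQL thread": replay relay-log events against local data
        for key, value in new_events:
            self.data[key] = value

master, slave = Master(), Slave()
master.execute("user:1", "alice")
master.execute("user:2", "bob")
slave.sync(master)
print(slave.data == master.data)  # True: the slave has caught up
```

The slave only ever reads from the position it last reached, which is why the real setup below records a log file name and position.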
Note:
- To ensure data accuracy, confirm the following before setting up a primary/secondary backup:
  - The primary DB server and secondary DB server run the same database version.
  - The data on the primary DB server and secondary DB server is consistent.
  - Binary logging is enabled on the primary DB server, and the server_id of every node (primary and secondary) is unique within the cluster.
Environment Preparation:
- percona-5-7-23.tar — Percona, a MySQL derivative; we build the service with Docker. Here I load a prepared tar file into a Docker image directly. If you have network access, you can simply pull the latest image instead!
Creating a MySQL primary/secondary backup with Docker
- As it happens, I spent some time on Docker recently, so this build is done entirely through Docker. A conventional (non-Docker) installation works just as well.
Set up Master01 node:
# Directory where I install software:
[root@hadoop1 ~]# cd /usr/local
[root@hadoop1 local]# ls
apache-phoenix-5.0.0-hbase-2.0-bin  elasticsearch  haproxy-1-9-3.tar  kafka_2.10-0.10.2.1  mr-1.0-snapshot.jar  partition.csv  zookeeper-3.4.13  apache-tomcat-7.0.85  elasticsearch-6.2.4  hbase-2.1.0  kibana-6.2.4-linux-x86_64  Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz  percona-5-7-23.tar  apollo  hadoop  jdk1.8.0_171  logstash-6.3.0  mysql  wordcount.txt
# Load the tar file into an image
[root@hadoop1 local]# docker load --input percona-5-7-23.tar
f972d139738d: Loading layer [==================================================>]  208.8 MB/208.8 MB
793d15bf88da: Loading layer [==================================================>]  6.144 kB/6.144 kB
f34bc9e37309: Loading layer [==================================================>]  303.1 kB/303.1 kB
df7a5abc6139: Loading layer [==================================================>]  22.46 MB/22.46 MB
# ... output omitted
# View all images
# percona: MySQL derivative
# haproxy: proxy server
[root@hadoop1 ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
haproxy      1.9.3    d1ba0f09266d   2 years ago   72.1MB
percona      5.7.23   9db94bcf28b3   2 years ago   577MB

Now the fun starts 😀 MySQL master (primary) node setup: go back to the root directory and create a file directory.
[root@hadoop1 /]# cd /
[root@hadoop1 /]# mkdir -p /data/mysql/master01
Enter the directory for primary node 1:
[root@hadoop1 /]# cd /data/mysql/master01
Create two directories — conf for configuration files and data for data files — so they can be mounted as data volumes, making the container easy to manage:
[root@hadoop1 master01]# mkdir conf data
[root@hadoop1 master01]# ls
conf data
# Set file permissions to 777 (maximum) on Linux!
[root@hadoop1 master01]# chmod 777 * -R
[root@hadoop1 master01]# cd /data/mysql/master01/conf
# Create and edit the configuration file 👇👇
[root@hadoop1 conf]# vim my.cnf
# Create the container
# -v mounts data volumes: files are shared between the container and the host
# -p maps ports host:container (3306 here; don't confuse it with 3307/3316 used later)
# -e sets the database's default root password to root
[root@hadoop1 conf]# docker create --name percona-master01 -v /data/mysql/master01/data:/var/lib/mysql -v /data/mysql/master01/conf:/etc/my.cnf.d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
cde032cb98a0465f08cee6554e83a4d53ce2b72bc95f662746d712418d4d1c91
# Start the mysql container and follow its startup log: if startup errors occur, you will see them in real time!
[root@hadoop1 conf]# docker start percona-master01 && docker logs -f percona-master01
#...
[root@hadoop1 ~]# docker ps -a
CONTAINER ID   IMAGE            COMMAND                  CREATED        STATUS                        PORTS                                       NAMES
cde032cb98a0   percona:5.7.23   "/docker-entrypoint.…"   15 hours ago   Exited (255) 15 minutes ago   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp   percona-master01
my.cnf file contents:

# Basic MySQL configuration
[mysqld]
# Enable the binary log; the cluster's master must enable it, because
# master/slave communication works through the binary log (slave nodes can skip this)
log-bin=mysql-bin
# The server ID must not be repeated within the same cluster environment
server-id=1
- The slave node can omit log-bin=mysql-bin: a slave only reads the master's binary log, so it does not need to produce one of its own.
Testing the connection on the master node:

- View the binary log: note its file name and position, as these values are needed when configuring the slave.

Configuration on the master node:
# Create a replication account and grant it replication privileges
CREATE USER 'wf'@'%' IDENTIFIED BY 'wf';
GRANT REPLICATION SLAVE ON *.* TO 'wf'@'%';
# Reload the privilege tables
FLUSH PRIVILEGES;
# Check the master status (binary log file name and position)
SHOW MASTER STATUS;
# View configuration items related to binary logs
SHOW GLOBAL VARIABLES LIKE 'binlog%';
# View the server configuration items
SHOW GLOBAL VARIABLES LIKE 'server%';
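The File and Position columns printed by SHOW MASTER STATUS are exactly what the slave's CHANGE MASTER TO statement (shown later) needs. As a sketch, this hypothetical helper assembles that statement from a status row; the host, port, and wf/wf credentials default to the values used in this article:

```python
# Hypothetical helper (not part of MySQL): build the CHANGE MASTER TO statement
# for the slave from the File/Position values of SHOW MASTER STATUS.
def change_master_sql(status, host="192.168.1.110", port=3306,
                      user="wf", password="wf"):
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{host}', "
        f"MASTER_USER='{user}', "
        f"MASTER_PASSWORD='{password}', "
        f"MASTER_PORT={port}, "
        f"MASTER_LOG_FILE='{status['File']}', "
        f"MASTER_LOG_POS={status['Position']};"
    )

# Example status row as returned by SHOW MASTER STATUS on the master
stmt = change_master_sql({"File": "mysql-bin.000004", "Position": 154})
print(stmt)
```

This makes the dependency explicit: whenever the master's log file or position changes, the slave binding must be created from the current values.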
Note:
- If you hit [Err] 1055 – "Expression #1 of ORDER BY clause is not in GROUP BY clause", set sql_mode in the my.cnf configuration file: sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
Build slave node Slave01
- Basically the same as creating a Master!
Create a directory, create a data volume, write my.cnf
[root@hadoop1 mysql]# mkdir -p /data/mysql/slave01
[root@hadoop1 slave01]# cd /data/mysql/slave01
[root@hadoop1 slave01]# mkdir conf data
[root@hadoop1 slave01]# chmod 777 * -R
[root@hadoop1 slave01]# cd /data/mysql/slave01/conf
[root@hadoop1 conf]# vi my.cnf
Create a container
[root@hadoop1 conf]# docker create --name percona-slave01 -v /data/mysql/slave01/data:/var/lib/mysql -v /data/mysql/slave01/conf:/etc/my.cnf.d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
3ac806767c09ce3adb6c468f45f61020afeef701cd0162e439dcae01a040338a
# start container
[root@hadoop1 conf]# docker start percona-slave01 && docker logs -f percona-slave01
percona-slave01
- The slave's my.cnf does not need log-bin=mysql-bin; there is no need to enable binary logging on a slave, since it only reads the master's binary log. It still needs its own unique server-id.
Testing on the slave node:

- Adjust the values below to your own environment!
# Set the master information
CHANGE MASTER TO
MASTER_HOST='192.168.1.110',        # IP of the master node
MASTER_USER='wf',
MASTER_PASSWORD='wf',
MASTER_PORT=3316,                   # port of the master node (note the port!)
MASTER_LOG_FILE='mysql-bin.000004', # binary log file on the master to read from
MASTER_LOG_POS=154;                 # position within that log file
# Start synchronization
START SLAVE;
# Check the slave status
SHOW SLAVE STATUS;  # "Waiting for master to send event" means replication is running
# Possible error:
# ERROR 3021 (HY000): This operation cannot be performed with a running slave io thread; run STOP SLAVE first.
# This means the slave is already bound to a master; to bind it again, stop and reset the existing binding first.
STOP SLAVE;
RESET SLAVE;
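After START SLAVE, replication is healthy only when both the I/O thread and the SQL thread report "Yes" in SHOW SLAVE STATUS. A small sketch of that check (hypothetical helper; it assumes the status row has already been fetched into a dict keyed by the real SHOW SLAVE STATUS column names):

```python
# Hypothetical health check over a SHOW SLAVE STATUS row already fetched as a dict.
def replication_ok(status):
    # Both threads must be running: the I/O thread copies binlog events,
    # the SQL thread replays them from the relay log.
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

print(replication_ok({"Slave_IO_Running": "Yes",
                      "Slave_SQL_Running": "Yes"}))         # True
print(replication_ok({"Slave_IO_Running": "Connecting",
                      "Slave_SQL_Running": "Yes"}))         # False
```

If either thread shows anything other than "Yes", check the slave's error log and the CHANGE MASTER TO values (host, port, log file, position).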
Testing:
- To verify that master and slave are synchronized, create a database/table and insert new data directly on the master node, then check whether the slave machine has picked them up.
- Verification successful: as soon as the master changes, the slave synchronizes the data immediately!
MySQL high availability: primary/secondary backup
- MySQL primary/secondary backup is, generally speaking, not difficult; here I built it on one machine with Docker.
- In practice, you're essentially installing two databases, one as Master and one as Slave.
- The slave runs threads that read the master's executed SQL in real time and replay it to synchronize the data.
Mycat + MySQL read-write separation
The principle of read-write separation:

- A typical application reads from the database far more than it writes, so the read load is what puts pressure on the database.
- Solution: a database cluster in which one node is the master, responsible for writing data (we call it the write library), while all other nodes are slaves that serve reads (we call them read libraries) — a one-master, n-slave, master-slave read-write separation architecture!
This imposes three requirements on us:
1. The data in the read and write libraries must be consistent.
2. Writes must go to the write library.
3. Reads must go to the read libraries.
- 👈 Left figure: the database goes from a single node to multiple nodes; the master node accepts the data and synchronizes it to the slave nodes. The application must connect to both database nodes and decide, inside the program, whether each operation is a read or a write. This architecture has a problem: connecting to multiple nodes and routing reads/writes adds development complexity. It can be implemented inside the application (e.g. using Spring's AOP capabilities) or delegated to middleware.
- 👉 Right figure: the middleware solution for read-write separation. The application only needs to connect to the middleware; there is no need to connect to multiple database nodes, and the application no longer needs to distinguish reads from writes — it simply sends all operations to the middleware. Inside the middleware, reads are sent to the slave nodes and writes to the master node; everything is left to the middleware rather than to the database.
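To make the middleware's routing job concrete, here is a toy routing function (hypothetical, not Mycat code): SELECTs go to a read library, everything else to the write library. The container names are the ones from the master/slave setup above:

```python
# Toy read/write router, illustrating what the middleware does internally.
def route(sql, master="percona-master01", slaves=("percona-slave01",)):
    verb = sql.strip().split()[0].upper()
    if verb == "SELECT":
        return slaves[0]   # reads go to a read library (slave)
    return master          # INSERT/UPDATE/DELETE go to the write library (master)

print(route("SELECT * FROM wsm"))           # routed to percona-slave01
print(route("INSERT INTO wsm VALUES (1)"))  # routed to percona-master01
```

The application never sees this decision: it connects to one endpoint and issues plain SQL.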
Mycat learning
- Those in need can check out Mycat
Environment preparation:
- Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz Linux installation tar package
Environment building:
- Mycat installation is very simple: on both Windows and Linux it is basically unzip-and-use. What you need to pay attention to are its configuration files:
  - rule.xml sets the sharding rules
  - schema.xml sets the sharded logical library information
  - server.xml sets the Mycat user/password
# View my software directory
[root@hadoop1 ~]# cd /usr/local
[root@hadoop1 local]# ls
apache-phoenix-5.0.0-hbase-2.0-bin  elasticsearch  haproxy-1-9-3.tar  kafka_2.10-0.10.2.1  mr-1.0-snapshot.jar  Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz  percona-5-7-23.tar  apache-tomcat-7.0.85  elasticsearch-6.2.4  kibana-6.2.4-linux-x86_64  mycat  mysql  wordcount.txt  apollo  hadoop  jdk1.8.0_171  logstash-6.3.0  mycat2  partition.csv  zookeeper-3.4.13
# Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz is the zipped package
# mycat / mycat2: the unzipped Mycat directories
# start Mycat
[root@hadoop1 local]# cd /usr/local/mycat
[root@hadoop1 mycat]# cd bin
[root@hadoop1 bin]# ls
dataMigrate.sh init_zk_data.sh mycat rehash.sh startup_nowrap.sh wrapper-linux-ppc-64 wrapper-linux-x86-32 wrapper-linux-x86-64
# Mycat commands: ./mycat { console | start | stop | restart | status | dump }
[root@hadoop1 bin]# ./mycat start
Starting Mycat-server...
Check whether the service process is started
[root@hadoop1 bin]# jps
2721 WrapperSimpleApp
2742 Jps
- Linux unzip command: tar -zxvf <file-to-decompress>
Note:
- When Mycat fails to start, the error is easy to miss; check the logs to troubleshoot.
Configuration file:
- The configuration provided here is not complete — only the important modifications are shown.
- Also, beware of Chinese character encoding problems on Linux: it is not recommended to hand-edit these files on the server. Instead, edit them with a tool, upload/overwrite the files directly, and then run.
- For details, see: Blog.csdn.net/qq_45542380…
rule.xml
<tableRule name="mod-long">
<rule>
<columns>id</columns>
<algorithm>mod-long</algorithm>
</rule>
</tableRule>
<!-- ... -->
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <!-- how many data nodes -->
    <property name="count">1</property> <!-- Currently 1, since this is a single MySQL master-slave pair; set it to the number of shards. Changed to 2 later. -->
</function>
<!-- ... -->
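The mod-long rule configured above (io.mycat.route.function.PartitionByMod) is just a modulo over the data-node count: the sharding column (id) mod count picks the node. A tiny sketch of the idea (not Mycat's actual Java code):

```python
# Sketch of Mycat's mod-long sharding rule: id % count selects the dataNode.
def mod_long(sharding_id, count):
    return sharding_id % count   # index of the target dataNode

nodes = ["dn1", "dn2"]           # count=2, as configured later in this article
for row_id in range(4):
    print(row_id, "->", nodes[mod_long(row_id, len(nodes))])
```

With count=1 every row lands on dn1; with count=2, even ids go to dn1 and odd ids to dn2.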
schema.xml
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="wf" checkSQLschema="false" sqlMaxLimit="100">
<table name="wsm" dataNode="dn1" rule="mod-long" />
<!-- <table name="wsm" dataNode="dn1,dn2" rule="mod-long" /> -->
</schema>
<dataNode name="dn1" dataHost="cluster1" database="wsm" />
<!-- <dataNode name="dn2" dataHost="cluster2" database="wsm" /> -->
<dataHost name="cluster1" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="W1" url="192.168.1.110:3306" user="root" password="root">
<readHost host="W1R1" url="192.168.1.110:3307" user="root" password="root" />
</writeHost>
</dataHost>
<!-- <dataHost name="cluster2" maxCon="1000" minCon="10" balance="3" writeType="1" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="W1" url="192.168.1.110:3316" user="root" password="root">
        <readHost host="W1R1" url="192.168.1.110:3317" user="root" password="root" />
    </writeHost>
</dataHost> -->
</mycat:schema>
- What the file configures:
  - schema: the sharded logical library and its logical tables (the table names must match the physical database's tables)
  - dataNode: a data node; its database attribute specifies the corresponding physical database
  - dataHost: the database host configuration
- Note: each logical table name must be the same as the actual table name, and the number of dataNodes a table uses must match the sharding count!
- Refer to the link above for details!
server.xml
- Configure the Mycat user, its password, and the logical schemas it can access:
<user name="wf" defaultAccount="true">
<property name="password">wf</property>
<property name="schemas">wf</property>
</user>
Testing:
- Once the connection succeeds, you can operate the database wsm through Mycat.
Mycat + MySQL multi-node data sharding

Data sharding:
What is database sharding?

- In simple terms, it means distributing data that was stored in one database across multiple database hosts according to certain conditions, so as to spread the load of a single machine.
- There are three sharding modes: vertical sharding, horizontal sharding, and mixed sharding.
Build a second MySQL master/slave cluster:

- This repeats the earlier steps, so I won't explain it in detail.
- Just remember to change the ports and container names!
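Since only the container name and host port change between clusters, the docker create command can be templated. A hypothetical helper, following the paths and image tag used in this article:

```python
# Hypothetical command template for the percona containers used in this article.
def docker_create(node, host_port):
    return (f"docker create --name percona-{node} "
            f"-v /data/mysql/{node}/data:/var/lib/mysql "
            f"-v /data/mysql/{node}/conf:/etc/my.cnf.d "
            f"-p {host_port}:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23")

print(docker_create("master02", 3316))
print(docker_create("slave02", 3317))
```

This is only a string builder for readability; the actual commands are listed step by step below.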
Master02
Create a directory
mkdir -p /data/mysql/master02
cd /data/mysql/master02
mkdir conf data
chmod 777 * -R
Create a configuration file
cd /data/mysql/master02/conf
vim my.cnf
# Create the container
docker create --name percona-master02 -v /data/mysql/master02/data:/var/lib/mysql -v /data/mysql/master02/conf:/etc/my.cnf.d -p 3316:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
# Start it
docker start percona-master02 && docker logs -f percona-master02
- Don't forget that the Master02 node needs binary logging enabled in its my.cnf (with its own unique server-id), and that the replication user must be created and granted privileges again.
Slave02
mkdir -p /data/mysql/slave02
cd /data/mysql/slave02
mkdir conf data
chmod 777 * -R
Create a configuration file
cd /data/mysql/slave02/conf
vim my.cnf
# Create the container
docker create --name percona-slave02 -v /data/mysql/slave02/data:/var/lib/mysql -v /data/mysql/slave02/conf:/etc/my.cnf.d -p 3317:3306 -e MYSQL_ROOT_PASSWORD=root percona:5.7.23
# Start it
docker start percona-slave02 && docker logs -f percona-slave02
- Don't forget slave02's my.cnf (a unique server-id; log-bin is not required on a slave).
View all running containers:
[root@hadoop1 bin]# docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED        STATUS             PORTS                                       NAMES
a2f509b44da7   percona:5.7.23   "/docker-entrypoint.…"   18 hours ago   Up 15 seconds      0.0.0.0:3317->3306/tcp, :::3317->3306/tcp   percona-slave02
1d03a6315278   percona:5.7.23   "/docker-entrypoint.…"   18 hours ago   Up 4 minutes       0.0.0.0:3316->3306/tcp, :::3316->3306/tcp   percona-master02
3ac806767c09   percona:5.7.23   "/docker-entrypoint.…"   21 hours ago   Up About an hour   0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   percona-slave01
cde032cb98a0   percona:5.7.23   "/docker-entrypoint.…"   21 hours ago   Up About an hour   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp   percona-master01
Testing:
- Mycat now proxies both Master01 and Master02 (data nodes dn1 and dn2).
Summary

- When using Mycat, the application only needs to know Mycat's IP and port; all inserts, deletes, updates, and queries go through Mycat, which shards the data and separates reads from writes.
HAProxy + Mycat middleware cluster:
- Mycat acts as the proxy for the database. Under high concurrency it will inevitably hit single-node performance limits, so multiple Mycat nodes must be deployed; otherwise, if the single Mycat hangs, the MySQL services behind it cannot be reached even though they are still up.
haproxy
Environment building:
Mycat2 create:
- Copy the previous Mycat service directly; since both nodes run on the same IP, the port configuration must be changed:
[root@hadoop1 local]# ls
apache-phoenix-5.0.0-hbase-2.0-bin  elasticsearch  haproxy-1-9-3.tar  kafka_2.10-0.10.2.1  mr-1.0-snapshot.jar  Mycat-server-1.6.6.1-release-20181031195535-linux.tar.gz  percona-5-7-23.tar  apache-tomcat-7.0.85  elasticsearch-6.2.4  kibana-6.2.4-linux-x86_64  mycat  mysql  wordcount.txt  apollo  hadoop  jdk1.8.0_171  logstash-6.3.0  mycat2  partition.csv  zookeeper-3.4.13
# Build a multi-node Mycat: copy the directory and change its configuration!
[root@hadoop1 local]# cp mycat mycat2 -R
[root@hadoop1 local]# vim wrapper.conf
# Change the JMX port
wrapper.java.additional.7=-Dcom.sun.management.jmxremote.port=1985   # 1982 by default, changed to 1985
[root@hadoop1 local]# vim server.xml
# Change the service port and management port
<property name="serverPort">8067</property>   <!-- 8066 in the original mycat -->
<property name="managerPort">9067</property>  <!-- 9066 in the original mycat -->
Restart the service
Haproxy installation:
# Pull the image
docker pull haproxy:1.9.3
# Create the data volume and create the container...
[root@hadoop1 bin]# mkdir -p /haoke/haproxy
[root@hadoop1 bin]# docker create --name haproxy --net host -v /haoke/haproxy:/usr/local/etc/haproxy haproxy:1.9.3
b6ab17da4aaa0ef43a4a5253ab23b46366db9873e0b03a307291f0086613c820
Edit the configuration file
[root@hadoop1 bin]# vim /haoke/haproxy/haproxy.cfg
# Start the container... Print log
[root@hadoop1 bin]# docker restart haproxy && docker logs -f haproxy
haproxy
[WARNING] 109/153326 (1) : config : 'option forwardfor' ignored for proxy 'proxy-mysql' as it requires HTTP mode.
[NOTICE] 109/153326 (1) : New worker #1 (7) forked
[WARNING] 109/153328 (7) : Server proxy-mysql/mycat_1 is DOWN, reason: Layer4 timeout, check duration: 2000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 109/153329 (7) : Server proxy-mysql/mycat_2 is DOWN, reason: Layer4 timeout, check duration: 2023ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 109/153329 (7) : proxy 'proxy-mysql' has no server available!
The container is running successfully!
[root@hadoop1 bin]# docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS          PORTS                                       NAMES
b6ab17da4aaa   haproxy:1.9.3    "/docker-entrypoint.…"   2 minutes ago    Up 17 seconds                                               haproxy
a2f509b44da7   percona:5.7.23   "/docker-entrypoint.…"   39 minutes ago   Up 39 minutes   0.0.0.0:3317->3306/tcp, :::3317->3306/tcp   percona-slave02
Haproxy configuration file
vim /haoke/haproxy/haproxy.cfg
# Create the file; only the options this setup actually uses are explained:
# log: log management
# maxconn: maximum number of connections, 4000
global
log 127.0.0.1 local2
maxconn 4000
daemon
# Support HTTP...
defaults
mode http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
# Management page: listens on 4001 in http mode, with a stats URI and account/password
listen admin_stats
bind 0.0.0.0:4001
mode http
stats uri /dbs
stats realm Global\ statistics
stats auth admin:admin123
# proxy-mysql: listens on 4002 and forwards to the Mycat services
# balance roundrobin: round-robin across the two Mycat servers
listen proxy-mysql
bind 0.0.0.0:4002
mode tcp
balance roundrobin
option tcplog
server mycat_1 192.168.1.110:8066 check port 8066 maxconn 2000
server mycat_2 192.168.1.110:8067 check port 8067 maxconn 2000
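The balance roundrobin policy in the listen proxy-mysql section above simply hands connections to the mycat_1/mycat_2 backends in turn. A minimal sketch of that behavior:

```python
# Sketch of round-robin balancing over the two Mycat backends configured above.
import itertools

backends = ["192.168.1.110:8066", "192.168.1.110:8067"]
rr = itertools.cycle(backends)   # endless alternation between the backends

picks = [next(rr) for _ in range(4)]
print(picks)  # alternates 8066, 8067, 8066, 8067
```

So neither Mycat node is a single point of failure for new connections, and the load is split evenly between them.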
- On Linux, it is best to delete any Chinese comments from the configuration file to avoid encoding problems.
- From the page, you can see that two MyCat proxy services already exist.
- Test with mysql client:
- Because a single Mycat going down would affect normal service, we built a Mycat cluster architecture. But that leaves the application with multiple IPs to deal with.
- HAProxy aggregates the Mycat cluster's IPs behind a single entry point.
Someone will ask: what if HAProxy itself hangs?

- HAProxy rarely hangs! 😒
- It mainly acts as a proxy server, just forwarding traffic without doing heavy work. Under normal circumstances it almost never fails — and if it does, call the ops team!