Preface
For enterprise applications, continuous availability and accessibility of the database are critical, especially for Internet-facing businesses. Only when the database is continuously and reliably available can the enterprise generate value and deliver a high-quality, dependable service experience to its customers. Therefore, when designing a database architecture, you need to consider how to build a highly available architecture that suits your application.
As the most popular open source database, MySQL has many mature high-availability architecture solutions, covering availability levels from roughly 99% to 99.999%, so it can satisfy a wide range of availability requirements. These solutions mainly rely on replication between database hosts to keep data consistent, and use third-party open source software to provide load balancing and a unified query interface, which both reduces the complexity of application development and lowers operating costs. This chapter provides configuration guidance for two common MySQL high-availability architectures.
A brief introduction to MySQL high availability
In a single-point architecture, the client accesses the application system, the application system accesses the database directly, all data is stored in a single database system, and all read and write operations happen on one server host. Once that database system fails, the application cannot return to normal in a short time.
The database is usually the core system of an enterprise: it stores customer data, generated data, and business data. An unexpected outage, without a timely recovery or redundancy scheme, often translates directly into economic loss. High-availability database design typically uses replication, read/write splitting, load balancing, and similar techniques to reduce system downtime and improve service reliability. The following sections explain how to achieve high availability for the database.
MySQL primary/secondary replication
MySQL primary/secondary replication copies data from one MySQL server to another. The scheme does not require complex configuration, and data can be replicated from a single master host to any number of slave hosts. Replication is asynchronous by default; semi-synchronous replication was added in MySQL 5.5 to improve replication reliability.
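For example, semi-synchronous replication can be enabled through the plugins shipped with MySQL 5.5 and later. A minimal sketch (the plugin file names are those used by the official Linux builds):

-- On the master:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;
-- On each slave:
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave io_thread; start slave io_thread;   -- restart the I/O thread so the setting takes effect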
MySQL master-slave architecture design
The architecture consists of a single master server and multiple slave servers. The master server receives write requests from the application, and the slave servers receive read requests. Load-balancing devices such as LVS, HAProxy, or F5 can be placed between the slave servers and the application layer. The architecture separates read and write requests onto different machines, but the application needs separate connection pools so that read operations can be balanced across multiple slaves, improving the overall read capacity of the system. However, only the single master server can accept writes, so write operations cannot be load balanced and scalability is limited. In addition, if the master server fails, the system can no longer write data, which is a single point of failure. This approach suits scenarios without strict availability requirements, for example online backup: data is backed up from one slave server, avoiding any impact on the master.
As the following figure shows, the application can access multiple slave hosts so that load is spread across them. LVS can be added between the application layer and the slave servers for load balancing, and slave servers can be scaled out horizontally to improve the overall read performance of the system.
Configure the environment
The following four machines are prepared: one master server, two slave servers, and one load balancer. The specific settings are shown in the table (in the examples that follow, the two slave servers use 192.168.1.21 and 192.168.1.22, and the virtual IP served by LVS is 192.168.1.10).
Server installation and configuration
Install the MySQL database on the master server, slave server 1, and slave server 2. Upload the MySQL installation script to each server on which MySQL is to be installed and execute it. The script takes three parameters: the software package, the installation directory, and the configuration file; all three must be specified for the installation to succeed, as shown below.
#!/usr/bin/env bash
#==================================================
# FileName: mysql_install_local.sh
#==================================================
fullname=$1
prefix=$2
mycnf=$3
fname=`basename $fullname`
realname=`echo "$fname" | awk -F".tar" '{print $1}'`

if [ -z "$fullname" ]; then
    echo "Usage: $0 /home/xyz/mysql-xxx.tar.gz /path/to/install_prefix /path/to/my.cnf"
    exit 1
fi

echo "Create MySQL user"
tt=`grep mysql /etc/passwd`
if [[ -z $tt ]]; then
    groupadd -f mysql
    useradd -g mysql -d /dev/null -s /sbin/nologin mysql
fi

echo "Clean old MySQL"
rm -rf $prefix/mysql

echo "Unpacking..."
if [[ ! -d $prefix ]]; then
    mkdir -p $prefix
fi
tar xfz $fullname -C $prefix 2>> /tmp/mysqlinstall.log

echo "Setting up symlink mysql -> mysql-xxx..."
ln -s $prefix/$realname $prefix/mysql 2>> /tmp/mysqlinstall.log
echo "export PATH=\$PATH:$prefix/mysql/bin" >> /etc/profile
source /etc/profile

test -x $prefix/mysql/bin/mysqld
[ $? != 0 ] && exit 1
test -x $prefix/mysql/bin/mysql
[ $? != 0 ] && exit 1

if [[ -f /etc/my.cnf ]]; then
    mv /etc/my.cnf{,.old}
fi
cp $mycnf /etc/my.cnf

echo "Initial MySQL Database"
if [[ $PWD != $prefix/mysql ]]; then
    cd $prefix/mysql/
fi
./scripts/mysql_install_db --user=mysql --defaults-file=/etc/my.cnf 2>> /tmp/mysqlinstall.log | grep -i 'ok'
if [[ $? != 0 ]]; then
    echo "initial mysql database failed, see /tmp/mysqlinstall.log"
    exit 1
fi
cp -f support-files/mysql.server /etc/init.d/mysql
echo "Installation of MySQL is complete"
echo "You can start the MySQL server with /etc/init.d/mysql start"
After installing MySQL, you can configure it as follows:
1. Enable the binlog on the primary server and ensure that each server has a unique server ID. These settings should be written into the MySQL configuration file before the database is initialized.
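A minimal my.cnf sketch for these settings (the values shown are illustrative; each slave only needs its own unique server-id):

# /etc/my.cnf on the primary server (illustrative values)
[mysqld]
log-bin   = mysql-bin    # enable binary logging, required for replication
server-id = 1            # must be unique across the topology, e.g. 2 and 3 on the two slaves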
2. On the primary server, create an account for replication, as shown below.
mysql> grant replication slave on *.* to repl@'%' identified by 'repl_password';
Query OK, 0 rows affected (0.05 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
3. Configure the primary server information on secondary server 1 and secondary server 2, start replication, and check the replication status, as shown below.
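A sketch of the commands run on each secondary server; the primary server's IP address below is a placeholder, and the binlog file and position should be taken from SHOW MASTER STATUS on the primary:

mysql> change master to
    -> master_host='192.168.1.20',          -- placeholder for the primary server's IP
    -> master_port=3306,
    -> master_user='repl',
    -> master_password='repl_password',
    -> master_log_file='mysql-bin.000001',  -- from SHOW MASTER STATUS
    -> master_log_pos=106;
mysql> start slave;
mysql> show slave status\G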
Let’s take a look at some of the main variables.
- Master_Host: the IP address of the host being replicated from.
- Master_User: the account created on the primary server for replication.
- Master_Port: the port on which MySQL runs on the primary server.
- Slave_IO_Running: the running state of the I/O thread.
- Slave_SQL_Running: the running state of the SQL thread.
- Last_Error: when replication is interrupted, shows the cause of the error so replication can be repaired accordingly.
- Seconds_Behind_Master: the time difference between the I/O thread and the SQL thread on the slave; it also gives a rough indication of the network speed between the master and slave servers.
At this point, the MySQL primary and secondary servers have been configured.
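Before moving on, it is worth a quick check that replication really works; the database and table names below are arbitrary test objects:

-- On the primary:
mysql> create database repl_test;
mysql> create table repl_test.t1 (id int primary key);
mysql> insert into repl_test.t1 values (1);
-- A moment later, on each secondary:
mysql> select * from repl_test.t1;    -- should return the row inserted on the primary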
LVS installation configuration
You can configure LVS for load balancing as follows:
1. Install the LVS-related software on the LVS host using yum, as shown below.
root@LVS# yum install ipvsadm piranha
2. After the installation is complete, configure the LVS and edit the LVS configuration file.
root@LVS# vi /etc/sysconfig/ha/lvs.cf
serial_no = 26
primary = 192.168.1.23           # IP address of the LVS host's physical NIC
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 0
virtual mysql {
    active = 1
    address = 192.168.1.10 eth0:1    # virtual IP bound to the physical NIC
    vip_nmask = 255.255.255.0
    port = 3306
    use_regex = 0
    load_monitor = none
    scheduler = wrr                  # weighted round robin; wlc (weighted least-connection) also works
    protocol = tcp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server slave-1 {
        address = 192.168.1.21       # IP address of slave 1
        active = 1
        port = 3306
        weight = 1
    }
    server slave-2 {
        address = 192.168.1.22       # IP address of slave 2
        active = 1
        port = 3306
        weight = 1
    }
}
3. Install arptables_jf on secondary server 1 and secondary server 2, set up the ARP rules, start the arptables_jf service, and enable it to start automatically at boot. The commands for secondary server 1 are shown below; on secondary server 2, replace 192.168.1.21 with its own IP address, 192.168.1.22.
root@Slave-1# yum install arptables_jf
root@Slave-1# arptables -A IN -d 192.168.1.10 -j DROP
root@Slave-1# arptables -A OUT -s 192.168.1.10 -j mangle --mangle-ip-s 192.168.1.21
root@Slave-1# echo "ip addr add 192.168.1.10 dev lo" >> /etc/rc.local
root@Slave-1# service arptables_jf start
root@Slave-1# chkconfig --level 2345 arptables_jf on
Start the LVS service and set it to start automatically upon startup.
root@LVS# service pulse start
root@LVS# chkconfig --level 2345 pulse on
After LVS is started, you can run the ipvsadm command to check whether the LVS status is normal.
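The output should look roughly like the following (illustrative; the connection counters on your system will differ):

root@LVS# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.10:3306 wrr
  -> 192.168.1.21:3306            Route   1      0          0
  -> 192.168.1.22:3306            Route   1      0          0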
In the command output, the virtual IP address is 192.168.1.10, the forwarding port is 3306, and the real server IP addresses are 192.168.1.21 and 192.168.1.22. When an application accesses port 3306 on 192.168.1.10, the connection is automatically forwarded to the .21 and .22 machines, and the ActiveConn values change accordingly, indicating that LVS is working properly.
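A simple way to confirm the forwarding, assuming a test account that is allowed to connect from the application host, is to query through the virtual IP; running the command repeatedly should return the two slave hostnames in turn:

root@app# mysql -h 192.168.1.10 -P 3306 -u test -p -e "select @@hostname;"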
So far, you have learned how to configure a simple MySQL high-availability architecture. Of course, this architecture still has a single point of failure at the master server, which can be addressed by adding a standby machine for it. The following sections build on this architecture to further improve its availability.
MySQL+DRBD+HA
As the services of some growing enterprises develop rapidly, they have higher requirements on the scalability, availability, and performance of the database. The common solutions described above cannot meet the requirements for availability and reliability in such rapidly growing scenarios. Increasing pressure will lead to a large delay in replication between master and slave, which affects the consistency of database data and reduces its reliability.
Is there a way to solve this problem? DRBD can be used to replicate data at the block-device level, keeping the data on the two hosts highly consistent and achieving roughly 99.99% reliability. This scheme also takes scalability into account and adapts well to the read and write pressure that comes with rapid business growth.
What is a DRBD
DRBD (Distributed Replicated Block Device) is a Linux-based software component that consists of kernel modules and related programs and is used to build high-availability clusters. When data is written to a file on the local DRBD device, it is simultaneously sent to another host on the network.
DRBD mirrors an entire device over the network, as shown in the following figure. DRBD receives the data, writes it to the local disk, and then sends it to the peer host, which writes it to its own disk. Combined with failover software, this keeps the cluster highly available.
DRBD working principle diagram
MySQL+DRBD+HA architecture design
The architecture consists of two machines, one Active and one Standby. They are connected by Heartbeat and share a virtual IP address. Under normal conditions, the Active node provides service to the outside world. When the Active node suffers a software or hardware failure, Heartbeat automatically points the virtual IP address to the Standby node, which becomes the new Active node. After the failed node is repaired, it rejoins the system as the new Standby node.
DRBD technology is used to replicate data between Active and Standby nodes to reduce failover time. The following figure shows the detailed design.
Configure the environment
First, prepare two database hosts, as shown in the following table (in the examples that follow, db-01 uses the public IP 10.11.196.48 and the heartbeat IP 192.168.1.11, db-02 uses 10.11.196.49 and 192.168.1.12, and the virtual IP is 10.11.196.50).
Install the MySQL database on both hosts. Note that the MySQL data directory must be set to the mount point of the DRBD block device. The following uses the /data mount point as an example.
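For example, a sketch of the relevant my.cnf setting (the mysql subdirectory under /data is an assumed layout):

# /etc/my.cnf on both db-01 and db-02 (illustrative)
[mysqld]
datadir = /data/mysql    # must live on the file system mounted from the DRBD device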
Configuring the Heartbeat
To install and configure the Heartbeat, perform the following steps:
1. Install the Heartbeat software on both hosts using yum, as shown below.
root@db-01# yum install heartbeat
2. Configure the hosts file and add the host name and IP address to the hosts file.
root@db-01# cat /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost6 localhost6.localdomain6
10.11.196.48   db-01
10.11.196.49   db-02
192.168.1.11   db-01
192.168.1.12   db-02
3. Configure the NIC IP address and heartbeat IP address.
root@db-01# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.11.196.48
ONBOOT=yes

root@db-01# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.11      # 192.168.1.12 on db-02
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
4. The following uses db-01 as the NTP server as an example. First install the ntpd package via yum, and then configure it as shown below.
root@db-01# cat /etc/ntp.conf
# db-01 acts as the NTP server
restrict default nomodify notrap noquery
server 127.0.0.1
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict 127.0.0.1
driftfile /var/lib/ntp.drift
5. Start the NTP service and set automatic startup upon system startup. Configure a scheduled task on db-02 to update the time.
root@db-01# service ntpd start
root@db-01# chkconfig --level 2345 ntpd on
root@db-02# crontab -e
0-59/30 * * * * /usr/sbin/ntpdate 192.168.1.11
Once these preparations are complete, you can configure Heartbeat. Heartbeat 2.x has three relevant configuration files: ha.cf, authkeys, and haresources.
6. First configure the ha.cf file, which is the heartbeat core configuration file.
root@db-01# cat /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 5
initdead 30
udpport 694
bcast eth1                  # heartbeat interface; can also be written as "ucast eth1 <peer IP>"
auto_failback off
node db-01 db-02            # node names, must match the hostnames (uname -n)
ping 10.11.196.254          # gateway
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
deadping 5                  # must be less than deadtime
7. Then configure the authkeys file, which sets up the authentication mechanism for node communication, and tighten its permissions.
root@db-01# cat /etc/ha.d/authkeys
auth 1
1 crc
root@db-01# chmod 600 /etc/ha.d/authkeys
8. Configure the haresources file, which defines the mounted file system type and path, as well as the service name and virtual IP address.
root@db-01# cat /etc/ha.d/haresources
# format: node name, resources (Filesystem::device::mount point::fs type), service name, virtual IP
# the MailTo resource at the end is optional
db-01 drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql 10.11.196.50 MailTo::youremail@address.com::DRBDFailure
Note that because DRBD is used, the device name is /dev/drbd0 and the service name is mysql. This name must be the same as the name of the /etc/init.d/mysql script. Otherwise, mysql cannot be started in case of failover.
Heartbeat must be configured on both database hosts, so keep in mind that host-specific values, such as the hostname and IP addresses, need to be adjusted on the second machine.
Install and configure the DRBD
DRBD consists of two parts, a kernel module and the management tools. Starting with version 2.6.33, the Linux kernel includes the DRBD driver module, so if your kernel is 2.6.33 or later you only need to install the DRBD management tools. This section uses version 8.3.7 as an example; download the source package (drbd-8.3.7.tar.gz) as shown in the wget command below.
1. Install DRBD on both hosts, as shown below.
root@db-01# wget http://oss.linbit.com/drbd/8.3/drbd-8.3.7.tar.gz
root@db-01# tar zxf drbd-8.3.7.tar.gz
root@db-01# cd drbd-8.3.7
root@db-01# ./configure --with-utils --with-km
root@db-01# make && make install
root@db-01# cp scripts/drbd /etc/init.d/
root@db-01# chkconfig --add drbd
2. After the DRBD is installed, allocate a block device for the DRBD. The following uses /dev/sdb1 as an example.
root@db-01# cat /usr/local/etc/drbd.conf
global {
    usage-count yes;
}
common {
    protocol C;
    syncer {
        rate 100M;
    }
}
resource r0 {
    on db-01 {
        device     /dev/drbd0;
        disk       /dev/sdb1;
        address    192.168.1.11:7789;
        meta-disk  internal;
    }
    on db-02 {
        device     /dev/drbd0;
        disk       /dev/sdb1;
        address    192.168.1.12:7789;
        meta-disk  internal;
    }
}
3. Once the configuration is complete, load the kernel module and initialize the DRBD resource (this is required on both machines).
root@db-01# depmod
root@db-01# modprobe drbd
# create the DRBD metadata:
root@db-01# drbdadm create-md r0
root@db-01# drbdadm attach r0
root@db-01# drbdadm syncer r0
root@db-01# drbdadm connect r0
4. Promote one node to primary and create the file system on the DRBD device. This step needs to be performed on only one server; the following uses db-01 as an example.
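A sketch of this step for DRBD 8.3 (the commands below overwrite the peer's data, so double-check the resource name and direction before running them):

root@db-01# drbdadm -- --overwrite-data-of-peer primary r0
root@db-01# mkfs.ext3 /dev/drbd0
root@db-01# cat /proc/drbd    # wait until both sides report ds:UpToDate/UpToDate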
UpToDate indicates that the DRBD block devices on the two servers are fully synchronized and the DRBD configuration is successful. Note that a DRBD device can be mounted only when it is in the Primary state, as reported in /proc/drbd. Mount /dev/drbd0:
root@db-01# mkdir /data
root@db-01# mount /dev/drbd0 /data
5. Start the heartbeat service.
root@db-01# chkconfig --level 234 drbd on
root@db-01# service heartbeat start
Now that the entire configuration is complete, be sure to run some tests to verify that the services are running properly before you put them into production, and to simulate a failure to see if you can switch hosts automatically without interruption.
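For example, one simple failover drill, assuming the virtual IP 10.11.196.50 from the haresources file, is to stop Heartbeat on the active node and watch the resources move to the standby:

root@db-01# service heartbeat stop    # simulate a failure on the active node
root@db-02# ip addr show eth0         # the virtual IP 10.11.196.50 should now appear here
root@db-02# df -h /data               # /dev/drbd0 should now be mounted on db-02
root@db-02# service mysql status      # MySQL should have been started by Heartbeat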
Imagine a scenario in which the two DRBD hosts, one active and one standby, lose network connectivity with each other, so the standby can no longer stay in sync with the active host. Heartbeat detects the connection failure and promotes the standby to active, but its data is not identical to that of the original active host, resulting in data inconsistency. How should this situation be handled? If the dopd (DRBD outdate-peer daemon) feature of Heartbeat is enabled, the standby node's metadata is marked Outdated whenever its data falls out of sync, and Heartbeat will then refuse to fail over to that node. This keeps the data on the two machines consistent; the administrator can then investigate the cause and repair it manually. Sacrificing some availability for data consistency is worth it. Here is how to enable this feature.
Add the following to the Heartbeat ha.cf configuration file (needed on both hosts):
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
Reload the Heartbeat configuration as shown below.
root@db-01# /etc/init.d/heartbeat reload
Next, modify the common part of the DRBD configuration file drbd.conf to add the following:
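(The snippet below follows the standard dopd setup described in the DRBD 8.3 documentation; verify the outdater path on your system before using it.)

common {
    handlers {
        # mark the peer's metadata as Outdated when replication is broken
        fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }
    disk {
        fencing resource-only;
    }
}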
Reload the DRBD configuration file.
root@db-01# drbdadm adjust all
As an extension and enhancement of the scheme in the previous section, this solution upgrades the original single master server to an active/standby architecture, avoiding a single point of failure. In real production environments, the architecture often needs to be designed flexibly to cope with changing and diverse business requirements.