
Our production environment had only one MySQL host, which was a single point of failure, so we wanted to make MySQL highly available by running two MySQL hosts.

If one of the MySQL servers fails, the other can take over immediately.

Common MySQL high availability schemes include the following:

Keepalived + dual master, MHA, PXC, MMM, Heartbeat+DRBD, and so on.

Among these, Keepalived + dual master, MHA, and PXC are the most commonly used.

This article mainly introduces how to implement MySQL database high availability with Keepalived.

To implement MySQL HA with Keepalived + dual master, the data in the two MySQL databases must be kept exactly the same.

The basic idea is that the two MySQL servers are master and slave of each other (dual master), and Keepalived is configured with a virtual IP (VIP) that points to one of them; the application connects through the VIP.

If that MySQL database goes down, the application automatically switches to the other MySQL database, ensuring high availability of the system.

The topology and environment:

OS: CentOS 6.5 x86_64

MySQL version: MySQL 5.5.38

Keepalived: keepalived-1.2.20

MySQL VIP: 192.168.12.1

MySQL master1: 192.168.12.128

MySQL master2: 192.168.12.129

Part One: Configure master-master synchronization between the two MySQL servers

The first part of the process is that the master records changes to its binary log. Before each transaction that updates data completes, the master logs these changes: MySQL writes the transaction to the binary log, and only after the event has been written to the binary log does the master tell the storage engine to commit the transaction.

The next step is that the slave copies the master's binary log into its own relay log. First, the slave starts a worker thread, the I/O thread. The I/O thread opens an ordinary connection to the master, which then starts its binlog dump process. The binlog dump process reads events from the master's binary log; if it has caught up with the master, it sleeps and waits for the master to generate new events. The I/O thread writes these events to the relay log.

The SQL slave thread handles the last part of the process: it reads events from the relay log and replays them to update the slave's data so that it matches the master's. As long as this thread keeps up with the I/O thread, the relay log usually stays in the OS cache, so the overhead of the relay log is minimal.

Master-master synchronization simply means that the two machines are master and slave of each other: data written on either machine is synchronized to the other.

If a firewall is enabled on the MySQL hosts, disable it or create the appropriate rules.

1. Modify the MySQL configuration file

The binlog function must be enabled on both MySQL databases. To enable it, add the log-bin = mysql-bin option to the [mysqld] section of the MySQL configuration file. The server-id values of the two MySQL databases must not be the same: by default both are 1, so change one of them to 2.

The replication configuration in Master1 is as follows:

log-bin = mysql-bin

binlog_format = mixed

server-id = 1

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

auto-increment-increment = 2

auto-increment-offset = 1

Restart the mysqld service

#service mysqld restart

The replication configuration in Master2 is as follows:

log-bin = mysql-bin

binlog_format = mixed

server-id = 2

relay-log = relay-bin

relay-log-index = slave-relay-bin.index

auto-increment-increment = 2

auto-increment-offset = 2

Restart the mysqld service

#service mysqld restart

Note: Master1 and Master2 differ only in server-id and auto-increment-offset.

MySQL has auto-increment fields, so the following two parameters need to be set for master-master synchronization: auto_increment_offset and auto_increment_increment.

auto-increment-increment is the amount by which an auto-increment field grows on each insert. The default is 1; its value should be set to the total number of servers in the whole topology. This example uses two servers, so the value is set to 2.

auto-increment-offset sets the starting point (initial value) for auto-increment in each database. Because both servers use an increment of 2, their starting points must be different; otherwise primary key conflicts would occur when data is synchronized between the two servers.

Note: You can specify which databases to synchronize by adding the "binlog_do_db = database name" configuration item to the my.cnf file (more than one such line can be added).
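For example, if the databases to replicate were named test_db and app_db (hypothetical names), the my.cnf entries might look like this:

binlog_do_db = test_db
binlog_do_db = app_db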

2. Set Master1 as the master server of Master2

Create an authorized replication account on master1 that allows connections from master2 (192.168.12.129).

View the current binlog status of master1

On Master2, set Master1 as its master server and enable the slave function.

Check the status of the slave. The following two values must both be Yes, indicating that the slave can connect to the master properly:

Slave_IO_Running:Yes

Slave_SQL_Running:Yes
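The original screenshots of the commands are not reproduced here; a sketch of this step might look like the following (the account name repl, the password 123456, and the binlog file name/position are assumptions; the real file name and position must come from the SHOW MASTER STATUS output on master1):

On master1:

mysql> grant replication slave on *.* to 'repl'@'192.168.12.129' identified by '123456';
mysql> flush privileges;
mysql> show master status;

On master2:

mysql> change master to master_host='192.168.12.128', master_user='repl', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=107;
mysql> start slave;
mysql> show slave status\G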

3. Set Master2 as the master server of Master1

Create an authorized replication account on master2 that allows connections from master1 (192.168.12.128).

View the current binlog status of master2

On Master1, set Master2 as its master server and enable the slave function.

Check the status of the slave. The following two values must both be Yes, indicating that the slave can connect to the master properly:

Slave_IO_Running:Yes

Slave_SQL_Running:Yes
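The commands mirror those of step 2 with the hosts swapped; a sketch, under the same assumptions:

On master2:

mysql> grant replication slave on *.* to 'repl'@'192.168.12.128' identified by '123456';
mysql> flush privileges;
mysql> show master status;

On master1:

mysql> change master to master_host='192.168.12.129', master_user='repl', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=107;
mysql> start slave;
mysql> show slave status\G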

4. Test master-master synchronization

Create the database to be synchronized on master1, for example test_db, and create a test table, for example tab1, in test_db.

Check whether the master2 host has synchronized the data changes made on master1.

The query output shows that master2 has synchronized the data changes made on master1.

Insert data into table tab1 on the master2 host.

Check whether master1 has synchronized the data changes made on master2.
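A sketch of this test, using the example names test_db and tab1 from above (the column definitions are assumptions):

On master1:

mysql> create database test_db;
mysql> create table test_db.tab1 (id int auto_increment primary key, name varchar(20));

On master2 (the database and table should already have been replicated):

mysql> show tables from test_db;
mysql> insert into test_db.tab1 (name) values ('test');

Back on master1:

mysql> select * from test_db.tab1;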

Now any update on either MySQL server is synchronized to the other; MySQL master-master synchronization is complete.

Note: If the primary MySQL server already exists and the secondary MySQL server is only set up later, you must copy the databases to be synchronized from the primary MySQL server to the secondary MySQL server before configuring data synchronization, that is, export them from the primary and restore them on the secondary.
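A sketch of that copy, assuming the database is called test_db and using mysqldump:

On the primary server:

#mysqldump -uroot -p test_db > test_db.sql

Copy test_db.sql to the secondary server, then on the secondary server:

#mysql -uroot -p -e "create database test_db;"
#mysql -uroot -p test_db < test_db.sql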

Next, we set up Keepalived to complete the high availability part.

Keepalived is a cluster management software solution that ensures high availability of a cluster and prevents single points of failure.

Keepalived is based on VRRP, the Virtual Router Redundancy Protocol.

VRRP is a protocol for making routers highly available: N routers providing the same function form a router group, which contains one master and multiple backups. The master holds the VIP that serves external traffic and sends VRRP multicast packets (multicast address 224.0.0.18). When the backups stop receiving VRRP packets, they consider the master to be down, and a new master is elected from the backups according to VRRP priority. In this way the router (and the VIP) stays highly available.

Keepalived has three main modules: core, check and vrrp. The core module is the heart of Keepalived; it starts and maintains the main process and loads and parses the global configuration file. The check module is responsible for health checks and supports several common check modes: 1) TCP_CHECK, which works at layer 4; 2) HTTP_GET, which works at layer 5, sends an HTTP request to the specified URL, hashes the result with MD5 and compares it with the configured MD5 value, and removes the server from the server pool if they do not match; 3) SSL_GET, similar to HTTP_GET; 4) MISC_CHECK, which checks using a custom script; 5) SMTP_CHECK, which checks mail services over SMTP. The vrrp module implements the VRRP protocol.

Part Two: Keepalived installation and configuration

1. Install keepalived on master1 and Master2

Install the Keepalived package and set up service control.

Building and installing Keepalived from source requires support libraries such as kernel-devel, openssl-devel and popt-devel.

If they are not installed, install them with the rpm or yum tool.
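For example (on CentOS 6, assuming the standard repositories are available):

#yum -y install kernel-devel openssl-devel popt-devel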

Build and install Keepalived.

Run configure with the location of the Linux kernel sources and set the installation prefix to the root directory; an extra symbolic link to the kernel sources also needs to be created. After the configuration completes, run make and make install to install it.
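A sketch of the CentOS 6 build, assuming the kernel sources live under /usr/src/kernels/ for the running kernel:

#ln -s /usr/src/kernels/$(uname -r)/ /usr/src/linux
#cd keepalived-1.2.20
#./configure --prefix=/ --with-kernel-dir=/usr/src/kernels/$(uname -r)/
#make && make install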

Note: If you are not sure which dependencies Keepalived needs, go into the unpacked source directory and read the INSTALL file.

Whenever you install software, make it a habit to read the documentation shipped with the source, such as INSTALL, README and the doc directory; you can get a lot of useful information from it.

Note: when installing Keepalived on CentOS 7.2 you do not need to add --with-kernel-dir.

[root@localhost keepalived-1.2.20]# ./configure --prefix=/ && make && make install

Use the keepalived service.

After make install, the /etc/init.d/keepalived script is generated automatically, but you still need to register it manually so that the keepalived server can be managed with the service and chkconfig tools.
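A sketch of registering and starting the service:

#chkconfig --add keepalived
#chkconfig keepalived on
#service keepalived start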

Complete the Keepalived installation on master2 in the same way as on master1; the installation process is omitted here.

Note: If the firewall is enabled, you need to disable the firewall or create rules.

Note: if the firewall is running on CentOS 7.2, the rule for Keepalived's VRRP multicast can be configured as follows:

[root@localhost ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" destination address="224.0.0.18" protocol value="ip" accept'

success

[root@localhost ~]# firewall-cmd --reload

2. Modify Keepalived configuration files

Keepalived has only one configuration file, keepalived.conf, which mainly contains the following configuration sections: global_defs, vrrp_instance and virtual_server.

global_defs: sets the notification targets and the machine identifier used when a failure occurs.

vrrp_instance: defines a VIP area that provides service to the outside, together with its related attributes.

virtual_server: defines the virtual server.

The keepalived.conf file on master1:

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived      // a leading ! marks a comment

global_defs {
    router_id mysql-1           // identifier for this Keepalived server
}

vrrp_instance VI_1 {
    state BACKUP                // keepalived role; both nodes are configured as BACKUP,
                                // and priority decides which one becomes master
    interface eth0              // interface that HA monitoring uses
    virtual_router_id 51        // virtual router ID, a number from 0 to 255 used to distinguish
                                // the VRRP multicast of multiple instances; use a unique ID per
                                // VRRP instance and make sure it is the same on master2
    priority 100                // used to elect the master; the valid range is 1 to 255
                                // (values outside this range are treated as the default 100);
                                // set this to 50 on master2
    advert_int 1                // interval between VRRP advertisements, i.e. how often the master
                                // election takes place (can be regarded as the health check interval)
    nopreempt                   // do not preempt: a lower-priority node is allowed to stay master
                                // even after a higher-priority node starts up
    authentication {            // authentication block; the types are PASS and AH (IPSEC).
                                // PASS is recommended (only the first 8 characters of the password are used)
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {         // VIP block; specify the VIP address
        192.168.12.1
    }
}

virtual_server 192.168.12.1 3306 {      // virtual server: the VIP address and the service port,
                                        // separated by a space
    delay_loop 2                // health check interval, in seconds
    lb_algo rr                  // back-end scheduling algorithm; rr means round robin
    lb_kind DR                  // LVS load balancing mode: NAT, TUN or DR
    persistence_timeout 60      // session persistence time, in seconds. This option is useful for
                                // dynamic pages and gives the cluster a simple form of session
                                // affinity: requests from the same user keep going to the same
                                // node until the persistence time expires
    protocol TCP                // forwarding protocol: TCP or UDP

    real_server 192.168.12.128 3306 {   // real server node: the real server's IP address and port,
                                        // separated by a space.
                                        // Note: change this to 192.168.12.129 on master2
        weight 3                // node weight, expressed as a number; the larger the number, the
                                // higher the weight, used to distinguish servers with different performance
        notify_down /etc/keepalived/bin/mysql.sh    // script executed when the real server's MySQL
                                                    // service is detected as down
        TCP_CHECK {
            connect_timeout 3           // connection timeout
            nb_get_retry 3              // number of retries
            delay_before_retry 3        // interval between retries
            connect_port 3306           // health check port
        }
    }
}

This completes the keepalived.conf configuration on master1.

Start keepalived service

#/etc/init.d/keepalived start

The keepalived.conf file on master2:

The keepalived.conf file on master2 has the same configuration as master1's, except for three items: router_id, priority and real_server; everything else is identical.

The keepalived.conf file configured on master1 can be copied to master2 with the scp command; you only need to modify those three items afterwards.
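For example (path and address as in this setup):

#scp /etc/keepalived/keepalived.conf root@192.168.12.129:/etc/keepalived/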

Start keepalived service

#/etc/init.d/keepalived start

3. Add the MySQL check script

The following check script is added on both master1 and master2; when MySQL stops working, it automatically shuts down Keepalived on the local host so that the faulty machine is kicked out of the cluster (each machine only adds itself as a real server in its Keepalived configuration).

Once mysqld is running normally again, start the keepalived service manually.

#mkdir /etc/keepalived/bin

vi /etc/keepalived/bin/mysql.sh, with the following content:
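The script body is not reproduced in the original; a minimal sketch of what it typically contains (stopping the local keepalived so the VIP fails over to the other node) is:

#!/bin/bash
# Executed by keepalived's notify_down when the MySQL health check fails:
# stop keepalived on this host so that the VIP moves to the other node.
/etc/init.d/keepalived stop

Remember to make the script executable:

#chmod +x /etc/keepalived/bin/mysql.sh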

Perform the same operations on master2.

4. Test

Run the ip addr show dev eth0 command on master1 and master2 respectively to check which of them currently holds the cluster virtual IP (VIP).

Query result on the master1 host:

Query result on the master2 host:

The output shows that master1 currently holds the VIP (it is the primary) and master2 is the standby.

Stop the MySQL service and see whether Keepalived's health check triggers the script we wrote.

Stop the mysql service on master1

Query result on the master2 host:

This shows that stopping the MySQL service on the primary triggered the script we wrote and the failover happened automatically.

MySQL remote login test

Find a host with a MySQL client installed and log in through the VIP to see whether the login succeeds. Both MySQL servers must first be authorized to allow remote logins, for example:
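A sketch of the authorization on each MySQL server (the account name and password are assumptions):

mysql> grant all privileges on *.* to 'root'@'%' identified by '123456';
mysql> flush privileges;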

Test the login on the client

The client's access to the VIP address is answered by master1, because master1 is currently the primary MySQL server. Stop the MySQL service on master1 and run show variables like 'server_id'; on the client again.

Now the query from the client is answered by the master2 host: the failover succeeded.

Conclusion:

In general, this architecture is the most effort-saving choice for small and medium scale applications.

In the event of a master node failure, keepalived’s high availability mechanism enables a quick switch to the standby node.

In this scheme, there are a few things to note:

1. When Keepalived is used as the high availability solution, it is best to configure both nodes in BACKUP mode, to avoid the situation where, after an unexpected event (such as a split brain), preemption causes the same data to be written to both nodes and to conflict;

2. Set auto_increment_increment (the auto-increment step) and auto_increment_offset (the auto-increment starting value) to different values on the two nodes. The purpose is to avoid the case where the master node goes down unexpectedly before some binlogs have been copied to and applied on the slave, and the auto-increment values of data then written on the slave conflict with those already written on the original master; staggering the values from the start avoids this. Of course, if there is a suitable fault tolerance mechanism for resolving master-slave auto-increment ID conflicts, this is not necessary;

3. Do not give the slave server a poor configuration, otherwise replication delay is likely to occur. As the hot standby node, the slave's hardware configuration should not be lower than that of the master node;

4. If you are sensitive to replication delay, consider using the MariaDB branch, or move straight to the newer MySQL 5.7 release; with multi-threaded replication, the replication delay can be greatly reduced.

Thank you for reading; I sincerely hope this helps you!