MySQL High Availability Solution: DRBD + MySQL + RHCS

5. Install MySQL 5.6.42

Installation steps (for both machines)

[root@DRBD1 ~]# cd /opt/

[root@DRBD1 opt]# ls
mysql-5.6.42-linux-glibc2.12-x86_64.tar.gz

[root@DRBD1 opt]# tar -xvf mysql-5.6.42-linux-glibc2.12-x86_64.tar.gz

[root@DRBD1 opt]# ln -s /opt/mysql-5.6.42-linux-glibc2.12-x86_64 /opt/mysql

[root@DRBD1 opt]# mkdir /data/mysql/{data,log,tmp,run,undo} -p

[root@DRBD1 opt]# groupadd mysql

[root@DRBD1 opt]# useradd mysql -r -g mysql

[root@DRBD1 opt]# chown -R mysql:mysql /data/mysql/

[root@DRBD1 opt]# cd /opt/mysql/                                     

[root@DRBD1 mysql]# ./scripts/mysql_install_db --user=mysql

[root@DRBD1 mysql]# echo 'PATH=$PATH:/opt/mysql/bin' >> /etc/profile

[root@DRBD1 mysql]# source /etc/profile

[root@DRBD1 mysql]# cp support-files/mysql.server  /etc/init.d/mysqld

[root@DRBD1 mysql]# /etc/init.d/mysqld start
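Optionally, before starting the service, point mysqld at the directories created above with a configuration file. The following is a minimal sketch of /etc/my.cnf, assuming the paths used throughout this walkthrough (the options mirror the mysqld command line shown later in the failover tests); tune the rest for your own hardware:

# cat /etc/my.cnf
[mysqld]
basedir   = /opt/mysql
datadir   = /data/mysql/data
tmpdir    = /data/mysql/tmp
socket    = /data/mysql/run/mysql.sock
pid-file  = /data/mysql/data/mysql.pid
log-error = /data/mysql/log/error.log
port      = 3306

Because the data directory lives under /data, the DRBD mount point, MySQL's data follows the DRBD primary automatically during failover.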

6. Introduction to RHCS

1) What is RHCS

RHCS is short for Red Hat Cluster Suite, a collection of relatively inexpensive cluster tools that provide high availability, load balancing, and shared storage in a single cluster system, giving web and database applications a secure, stable operating environment. More concretely, RHCS is a fully functional cluster application solution: from front-end access to back-end data storage it provides an effective cluster architecture, ensuring both that the front-end application keeps serving requests and that the back-end data store stays safe. RHCS offers three cluster architectures: high availability clusters, load balancing clusters, and storage clusters.

2) Three core functions provided by RHCS

High availability clustering is the core function of RHCS. When an application fails, or the system hardware or network fails, the high availability management components of RHCS automatically and quickly switch the application from the failed node to another node. This node failover is transparent to clients, so the application continues to provide service without interruption.

RHCS provides load balancing clusters through LVS (Linux Virtual Server), an open-source and powerful IP-based load balancing technology. LVS consists of a load scheduler and service access nodes. Client requests can be distributed evenly across the service nodes, and a variety of load distribution policies can be defined: when a request arrives, the cluster uses the scheduling algorithm to decide which service node should handle it, and the chosen node responds to the client. LVS also provides failover for the service nodes: when a node can no longer provide service, LVS masks the failed node, removes it from the cluster, and smoothly shifts incoming requests to the remaining healthy nodes; when the failed node recovers, LVS automatically adds it back to the cluster. All of these switchovers are transparent to users, so the failover capability keeps the service running steadily and without interruption.

RHCS provides storage clusters through the GFS file system (Global File System). GFS allows multiple services to read from and write to a single shared file system at the same time; by placing shared data on a shared file system, the storage cluster eliminates the need to synchronize data between applications. GFS is a distributed file system that coordinates and manages the read and write operations of multiple service nodes on the same file system through its lock management mechanism.

3) Composition of RHCS cluster

RHCS is a collection of cluster tools, consisting of the following parts:

Cluster architecture manager

The cluster infrastructure is the basic suite of an RHCS cluster. It provides the fundamental functions that let the nodes work together as a cluster, and includes the distributed cluster manager (CMAN), membership management, the distributed lock manager (DLM), configuration file management (CCS), and fencing (FENCE).

High availability service manager

Provides node service monitoring and service failover: when a service on one node fails, the service is transferred to another healthy node.

Cluster configuration management tool

The latest versions of RHCS use luci to configure and manage the cluster. luci is a web-based cluster configuration tool with which you can easily build a powerful cluster system.

LVS(Linux Virtual Server)

LVS is an open-source load balancing package. Based on the specified load policies and algorithms, LVS distributes client requests sensibly across the service nodes to achieve dynamic, intelligent load balancing.
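As an illustration of the model described above (LVS is listed here as an RHCS component; this DRBD + RHCS walkthrough does not actually deploy it), a virtual service is defined with ipvsadm roughly like this. The VIP and real-server addresses below are hypothetical:

# ipvsadm -A -t 10.10.110.240:3306 -s rr                      # define a virtual service with round-robin scheduling
# ipvsadm -a -t 10.10.110.240:3306 -r 10.10.110.230:3306 -g   # add a real server in direct-routing mode
# ipvsadm -L -n                                               # show the current virtual server table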
In addition to the core components above, the following components can supplement the functionality of an RHCS cluster.

GFS(Global File System)

GFS is a cluster file system developed by Red Hat; the latest version is GFS2. The GFS file system allows multiple services to read and write the same disk partition simultaneously. Installing GFS requires the support of the underlying RHCS components.

CLVM(Cluster Logical Volume Manager)

Cluster Logical Volume Manager (CLVM) is an extension of LVM that allows the machines in a cluster to manage shared storage with LVM.
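On Red Hat systems, turning on CLVM typically looks like the sketch below; it assumes the lvm2-cluster package is installed, the cluster is quorate, and /dev/sdb is a hypothetical shared disk:

# lvmconf --enable-cluster            # set locking_type = 3 in /etc/lvm/lvm.conf
# service clvmd start                 # run the cluster LVM daemon on every node
# vgcreate -c y vg_shared /dev/sdb    # -c y marks the volume group as clustered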

iSCSI

iSCSI is a standard for transporting data blocks over IP networks, Ethernet in particular; it is a storage technology built on the IP storage model. RHCS can export and allocate shared storage over iSCSI.
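A typical initiator-side workflow with iscsiadm looks like this sketch; the portal address and IQN below are placeholders for your own storage target:

# yum install iscsi-initiator-utils -y
# iscsiadm -m discovery -t sendtargets -p 10.10.110.240                       # discover targets on the portal
# iscsiadm -m node -T iqn.2018-11.example:storage.lun1 -p 10.10.110.240 -l    # log in to the target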

GNBD(Global Network Block Device)

GNBD is a supplementary component of GFS that RHCS uses to allocate and manage shared storage. GNBD has a client side and a server side: the server exports one or more block devices or GNBD files, and the client imports them and uses them as local block devices. GNBD development has now stopped, and it is used less and less.

4) RHCS cluster structure

As a whole, an RHCS cluster is divided into three parts: the load balancing cluster, the high availability cluster, and the storage cluster, as shown in the figure:

[Figure: RHCS cluster topology]

The figure above shows a typical RHCS cluster topology, split into three layers: the LVS load balancing layer at the top, the Real Server (service node) layer in the middle, and the shared storage layer at the bottom, which provides shared storage space for the GFS file system.

5) RHCS cluster operation principle and function introduction

[Figure: RHCS cluster components and how they interact]

5.1 Distributed Cluster Manager (CMAN)
Cluster Manager (CMAN) is a distributed cluster management tool. It runs on every node of the cluster and handles cluster management tasks for RHCS. CMAN manages cluster membership, messaging, and notification. It monitors the running state of each node to track membership relationships; when a node in the cluster fails, the membership changes, CMAN promptly notifies the lower layers, and the cluster adjusts accordingly.
5.2 Lock Management (DLM)
The Distributed Lock Manager (DLM) is a basic component of RHCS that provides a common locking mechanism for the cluster. In an RHCS cluster, DLM runs on every node. GFS uses the lock manager to synchronize access to file system metadata, and CLVM uses it to synchronize updates to LVM volumes and volume groups. DLM does not require a dedicated lock management server; it uses a peer-to-peer locking model, which greatly improves performance. It also avoids the bottleneck of a full recovery when a single node fails, and because DLM requests are local, they require no network round trip and take effect immediately. Finally, through a layered mechanism, DLM supports parallel locking across multiple lock spaces.
5.3 Configuration File Management (CCS)
The Cluster Configuration System (CCS) manages the cluster configuration file and synchronizes it between nodes. CCS runs on every node and monitors the state of the single configuration file /etc/cluster/cluster.conf; whenever the file changes, CCS propagates the change to every node in the cluster, keeping all copies in sync. For example, if an administrator updates the configuration file on node A, once CCS detects the change it immediately pushes it to the other nodes. The RHCS configuration file is cluster.conf, an XML file containing the cluster name, cluster node information, cluster resources and services, fence devices, and so on, as described later.
5.4 FENCE Devices
A FENCE device is an essential part of an RHCS cluster; it prevents the "split brain" that unpredictable failures can otherwise cause. A FENCE device works through the hardware management interface of the server or storage itself, or through an external power management device, issuing hardware-level commands that restart or shut down a server or disconnect it from the network. FENCE works like this: when a host hangs or goes down unexpectedly, the standby machine invokes the FENCE device to restart the faulty host or isolate it from the network. When the FENCE operation completes successfully, a result is returned to the standby machine, which then takes over the host's services and resources. In this way the FENCE device releases the resources held by the abnormal node, guaranteeing that resources and services run on only one node at a time. RHCS FENCE devices fall into two categories: internal FENCE devices, such as the IBM RSA II card, the HP iLO card, and IPMI devices; and external FENCE devices, such as a UPS, a SAN switch, or a network switch.
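It is worth exercising a fence agent by hand before trusting the cluster with it. A sketch using fence_ilo from the fence-agents package, with the iLO addresses and credentials that appear in the cluster.conf later in this article:

# fence_ilo -a 10.10.110.231 -l root -p Yuelei66 -o status    # query power state through the iLO card
# fence_ilo -a 10.10.110.231 -l root -p Yuelei66 -o reboot    # the action the cluster issues on failure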
5.5 High Availability Service Manager
The high availability service manager monitors, starts, and stops cluster applications, services, and resources, providing management capability for cluster services. When a node fails, it transfers services from the failed node to another healthy node, and this transfer is automatic and transparent. RHCS manages cluster services through rgmanager, which runs on every cluster node; the corresponding process on the server is clurgmgrd. In an RHCS cluster, high availability involves two concepts: cluster services and cluster resources. A cluster service is an application service, such as Apache or MySQL. Cluster resources come in many kinds, such as an IP address, a script, or an ext3/GFS file system. In an RHCS cluster, a high availability service is combined with a failover domain, a set of cluster nodes eligible to run a particular service. Within a failover domain, each node can be given a priority that determines the order of service transfer when a node fails; if no priorities are assigned, the cluster's high availability service may move between any of the nodes. A failover domain therefore lets you both set the order in which a service moves between nodes and restrict a service to switching only among the nodes of that domain.
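rgmanager services can also be moved by hand with clusvcadm, which is useful for planned maintenance. Using the service and node names from this walkthrough:

# clusvcadm -r mysql_service -m DRBD2   # relocate the service to DRBD2
# clusvcadm -d mysql_service            # disable (stop) the service
# clusvcadm -e mysql_service -m DRBD1   # enable (start) the service on DRBD1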
5.6 Cluster Configuration Management Tools
RHCS offers several cluster configuration and management tools, including the GUI-based system-config-cluster and Conga, as well as command-line tools. system-config-cluster is a graphical tool for creating and configuring clusters; it has two parts, cluster node configuration and cluster management, used respectively to create the node configuration file and to maintain node running state. It was used in earlier RHCS versions. Conga is a newer web-based cluster configuration tool: it configures and manages cluster nodes through the web and consists of two parts, luci and ricci. luci is installed on one machine to configure and manage the cluster, while ricci is installed on every cluster node; luci communicates with each node through ricci. RHCS also provides powerful command-line management tools such as clustat, cman_tool, ccs_tool, fence_tool, and clusvcadm, whose usage is described below.
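For reference, the command-line tools named above are typically used like this (exact options vary slightly between RHCS releases):

# clustat -i 2        # redisplay cluster and service status every 2 seconds
# cman_tool status    # show quorum, votes, and cluster membership details
# cman_tool nodes     # list nodes and their join state
# fence_tool ls       # show fence domain membership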
5.7 Red Hat GFS
GFS is the storage solution provided with the RHCS cluster system. It allows multiple cluster nodes to share storage at the block level; each node sees the same storage space, and access to the data stays consistent. Put more practically, GFS is the cluster file system RHCS provides: multiple nodes can mount the same file system partition at the same time without corrupting its data, which is impossible with single-node file systems such as ext3 or ext2. To let multiple nodes read and write one file system simultaneously, GFS uses a lock manager to coordinate I/O: when one process writes a file, the file is locked and no other process may read or write it until the writing process finishes and releases the lock; only then can other readers and writers access the file. And when a node changes something on a GFS file system, the change is immediately visible to the other nodes through the RHCS underlying communication mechanism. When building an RHCS cluster, GFS usually runs on every node as the shared storage and is configured and managed with the RHCS management tools. Note the relationship between RHCS and GFS, which beginners easily confuse: the GFS component is not required to run RHCS; it is needed only when shared storage is required. Conversely, the RHCS base components must be installed on any node that runs a GFS file system.
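Creating and mounting a GFS2 file system on shared storage looks roughly like the sketch below. The device and mount point are hypothetical; the -t value must be <clustername>:<fsname>, and -j allocates one journal per node that will mount the file system:

# mkfs.gfs2 -p lock_dlm -t mysql_cluster:gfs01 -j 2 /dev/sdc1
# mount -t gfs2 /dev/sdc1 /mnt/gfs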

7. RHCS installation

1. Installation steps (run on both DRBD1 and DRBD2)

# yum install fence-virtd-multicast fence-virtd fence-virtd-libvirt -y

# yum install cman rgmanager -y

2. Configuration file (DRBD1 and DRBD2 are identical)

# vim /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="mysql_cluster">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="DRBD1" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device lanplus="" name="mysql1_fence" nodename="DRBD1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="DRBD2" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device lanplus="" name="mysql2_fence" nodename="DRBD2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_ilo" hostname="10.10.110.231" login="root" name="mysql1_fence" passwd="Yuelei66"/>
        <fencedevice agent="fence_ilo" hostname="10.10.110.230" login="root" name="mysql2_fence" passwd="Yuelei66"/>
    </fencedevices>
    <rm>
        <failoverdomains>
            <failoverdomain name="mysql_faildomain" ordered="1" restricted="0">
                <failoverdomainnode name="DRBD1" priority="1"/>
                <failoverdomainnode name="DRBD2" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <script file="/etc/init.d/mysqld" name="mysql_script"/>
            <fs device="/dev/drbd0" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/data" name="drbd_filesystem" options="noatime,nodiratime" self_fence="1"/>
            <ip address="10.10.110.229" monitor_link="1"/>
            <drbd name="res_drbd0" resource="r0"/>
        </resources>
        <service autostart="1" domain="mysql_faildomain" name="mysql_service" recovery="relocate">
            <ip ref="10.10.110.229"/>
            <drbd ref="res_drbd0">
                <fs ref="drbd_filesystem"/>
                <script ref="mysql_script"/>
            </drbd>
        </service>
    </rm>
</cluster>
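Before starting the cluster, it is worth validating this file. Note that the cman startup log in section 7.5.4 below shows Relax-NG validity errors for this configuration (the drbd resource type and the nodename attribute on the fence device are not in the stock schema); cman tolerates them, but a validator will flag them. A sketch, assuming the RHEL 6 tooling:

# ccs_config_validate    # check /etc/cluster/cluster.conf against the cluster schema
# cman_tool version -r   # after editing: bump config_version, then push the file to all nodes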

3. Start the cluster services (on both DRBD1 and DRBD2)

# service cman start

# service rgmanager start
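To survive reboots, enable the cluster services at boot but leave mysqld to rgmanager; if mysqld starts on its own, it will race the cluster for the DRBD mount. A sketch:

# chkconfig cman on
# chkconfig rgmanager on
# chkconfig drbd on
# chkconfig mysqld off   # rgmanager starts and stops mysqld through /etc/init.d/mysqld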

4. Verify information

[root@DRBD1 cluster]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:09:53 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD1           started

[root@DRBD1 cluster]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3a:b2:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.10.110.231/24 brd 10.10.110.255 scope global eth0
    inet 10.10.110.229/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fe3a:b2c8/64 scope link
       valid_lft forever preferred_lft forever

[root@DRBD1 cluster]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.42-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to powdba@'%' identified by 'abc123';
Query OK, 0 rows affected (0.12 sec)

[root@yuelei1 ~]# mysql -upowdba -h10.10.110.229 -pabc123
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.42-log MySQL Community Server (GPL)

mysql>

5. Test and verification

7.5.1 Create test data on DRBD1
[root@DRBD1 ~]# mysql -uroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.42-log MySQL Community Server (GPL)

mysql> use test
Database changed
mysql> create table test5 (id int,name char(2));
Query OK, 0 rows affected (0.16 sec)
mysql> select count(*) from test5;
+----------+
| count(*) |
+----------+
|      100 |
+----------+
1 row in set (0.00 sec)

(The statements that loaded the 100 test rows into test5 are omitted from the transcript.)
7.5.2 Shut down DRBD1 (simulated outage)
[root@DRBD1 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:18:04 2018
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 DRBD1                                                               1 Online, Local, rgmanager
 DRBD2                                                               2 Online, rgmanager

 Service Name                                                     Owner (Last)                                                     State        
 ------- ----                                                     ----- ------                                                     -----        
 service:mysql_service                                            DRBD1                                                            started      
[root@DRBD1 ~]# reboot


Broadcast message from root@DRBD1
        (/dev/pts/0) at 17:19 ...

The system is going down for reboot NOW!
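While the reboot is in progress, a simple client-side loop against the VIP makes the failover window visible. A hypothetical helper, run from a third machine such as yuelei1:

# while true; do mysql -upowdba -h10.10.110.229 -pabc123 -e 'select now()' 2>/dev/null || echo 'VIP not answering'; sleep 1; done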
7.5.3 DRBD2 takes over the service
[root@DRBD2 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:19:24 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, rgmanager
 DRBD2                                2 Online, Local, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD1           stopping

[root@DRBD2 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:19:31 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online
 DRBD2                                2 Online, Local, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           starting

[root@DRBD2 data]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:55:16 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Offline
 DRBD2                                2 Online, Local, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           started

[root@DRBD2 data]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  WFConnection Primary/Unknown UpToDate/DUnknown /data ext4 32G 4.4G 26G 15%

[root@DRBD2 data]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f4:d4:1d brd ff:ff:ff:ff:ff:ff
    inet 10.10.110.230/24 brd 10.10.110.255 scope global eth0
    inet 10.10.110.229/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fef4:d41d/64 scope link
       valid_lft forever preferred_lft forever

[root@yuelei1 ~]# mysql -upowdba -h10.10.110.229 -pabc123
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.42-log MySQL Community Server (GPL)

mysql> use test
Database changed
mysql> select count(*) from test5;
+----------+
| count(*) |
+----------+
|      100 |
+----------+
1 row in set (0.18 sec)
7.5.4 DRBD1 Restarts
[root@DRBD1 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Unconfigured . .

[root@DRBD1 ~]# service drbd start
Starting DRBD resources: [
     create res: r0
   prepare disk: r0
    adjust disk: r0
     adjust net: r0
]
.

[root@DRBD1 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Secondary/Primary UpToDate/UpToDate

[root@DRBD1 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman... tempfile:8: element device: Relax-NG validity error : Invalid attribute nodename for element device
Relax-NG validity error : Extra element fence in interleave
tempfile:4: element clusternodes: Relax-NG validity error : Element clusternode failed to validate content
tempfile:5: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
Relax-NG validity error : Extra element fencedevices in interleave
tempfile:21: element fencedevices: Relax-NG validity error : Element cluster failed to validate content
Configuration fails to validate
                                                           [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]

[root@DRBD1 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]

[root@DRBD1 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:49:21 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           stopping

[root@DRBD1 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 17:51:14 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD1           started

[root@DRBD1 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 32G 4.4G 26G 15%

[root@DRBD1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3a:b2:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.10.110.231/24 brd 10.10.110.255 scope global eth0
    inet 10.10.110.229/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fe3a:b2c8/64 scope link
       valid_lft forever preferred_lft forever

[root@DRBD1 mysql]# ps -ef|grep mysqld
root   3515     1  0 17:49 ?      00:00:00 /bin/sh /opt/mysql/bin/mysqld_safe --datadir=/data/mysql/data --pid-file=/data/mysql/data/mysql.pid
mysql  4968  3515  0 17:49 ?      00:00:02 /opt/mysql/bin/mysqld --basedir=/opt/mysql --datadir=/data/mysql/data --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/data/mysql/log/error.log --open-files-limit=65535 --pid-file=/data/mysql/data/mysql.pid --socket=/data/mysql/run/mysql.sock --port=3306
root   7812  1844  0 17:53 pts/0  00:00:00 grep mysqld
7.5.5 Kill mysqld on DRBD1 (pkill)
[root@DRBD1 data]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:41:41 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD1           started

[root@DRBD1 data]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 32G 4.4G 26G 15%

[root@DRBD1 data]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3a:b2:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.10.110.231/24 brd 10.10.110.255 scope global eth0
    inet 10.10.110.229/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fe3a:b2c8/64 scope link
       valid_lft forever preferred_lft forever

[root@DRBD2 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:41:57 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, rgmanager
 DRBD2                                2 Online, Local, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD1           started

[root@DRBD2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Secondary/Primary UpToDate/UpToDate

[root@DRBD1 data]# pkill mysqld
[root@DRBD1 data]# ps -ef|grep mysqld
root  18098  1844  0 19:42 pts/0  00:00:00 grep mysqld

[root@DRBD1 data]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Secondary/Secondary UpToDate/UpToDate

[root@DRBD1 data]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:42:52 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            (DRBD2)         recoverable

[root@DRBD1 data]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:43:17 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           starting

[root@DRBD1 data]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:43:36 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, Local, rgmanager
 DRBD2                                2 Online, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           started

[root@DRBD2 ~]# ps -ef|grep mysqld
root  30799      1  0 19:42 ?      00:00:00 /bin/sh /opt/mysql/bin/mysqld_safe --datadir=/data/mysql/data --pid-file=/data/mysql/data/mysql.pid
mysql 32239  30799  7 19:42 ?      00:00:02 /opt/mysql/bin/mysqld --basedir=/opt/mysql --datadir=/data/mysql/data --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/data/mysql/log/error.log --open-files-limit=65535 --pid-file=/data/mysql/data/mysql.pid --socket=/data/mysql/run/mysql.sock --port=3306
root  32333  17473  0 19:43 pts/0  00:00:00 grep mysqld

[root@DRBD2 ~]# clustat
Cluster Status for mysql_cluster @ Wed Nov 21 19:43:49 2018
Member Status: Quorate

 Member Name                      ID   Status
 ------ ----                      ---- ------
 DRBD1                                1 Online, rgmanager
 DRBD2                                2 Online, Local, rgmanager

 Service Name                     Owner (Last)    State
 ------- ----                     ----- ------    -----
 service:mysql_service            DRBD2           started

[root@DRBD2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
 0:r0/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 32G 4.4G 26G 15%

[root@DRBD2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f4:d4:1d brd ff:ff:ff:ff:ff:ff
    inet 10.10.110.230/24 brd 10.10.110.255 scope global eth0
    inet 10.10.110.229/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fef4:d41d/64 scope link
       valid_lft forever preferred_lft forever