MySQL 5.7 MGR Introduction and Installation

Article source: Teacher Tao's operation notes (WeChat official account)

Official introduction:

  • Dev.mysql.com/doc/refman/…
  • Dev.mysql.com/doc/refman/…
  • Dev.mysql.com/doc/refman/…

1. Introduce MGR

MySQL Group Replication (MGR) is a highly available, highly scalable, and highly reliable MySQL cluster service. Similar technologies include MariaDB Galera Cluster and Percona XtraDB Cluster. An MGR cluster consists of multiple instance nodes; a transaction can commit only after more than half of the nodes approve it. Each node in the cluster maintains a database state machine, which ensures transaction consistency across nodes.

MySQL group replication runs in single-primary or multi-primary mode. Traditional MySQL replication only solves data synchronization: if the master fails, a database administrator must intervene, and the application may need to change its database connection address or restart in order to recover. (Database middleware products such as MyCAT and Atlas can also be used here to hide the connection change from the application.) Group replication handles this at the database level: as long as a majority of the hosts in the cluster are available, the service is available; for example, a cluster of 3 servers tolerates one of them going down.
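The majority rule above can be sketched numerically. The snippet below is plain shell arithmetic (not MGR code) illustrating how many votes a group of N members needs to commit and how many failures it tolerates:

```shell
# Majority quorum: a group of N members needs floor(N/2)+1 votes to commit,
# and can therefore tolerate f = floor((N-1)/2) member failures.
for n in 3 5 7 9; do
  majority=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "members=$n majority=$majority tolerated_failures=$tolerated"
done
```

For the 3-node cluster built later in this article this gives majority=2 and one tolerated failure, matching the "3 servers, one allowed to go down" statement.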

1.1 MGR architecture

1.2 Two modes of group replication

MGR offers two modes: single-primary and multi-primary.

  • Single-primary mode: only one node in the group accepts writes; reads can go to any node, and the data stays consistent across the group.
  • Multi-primary mode: all server members can accept updates simultaneously, i.e. writes can be sent to any node in the group and all nodes can serve reads, with group-wide data consistency. Note: all nodes of an MGR cluster must be configured with the same mode; mixing modes is not allowed!

1.3 MGR Group Replication Features

MGR advantages:

  • High consistency: group replication is built on native replication and the Paxos protocol.
  • High fault tolerance: an automatic detection mechanism evicts a failed node while the remaining nodes keep serving; resource contention conflicts between nodes are resolved first-come-first-served, and a built-in split-brain protection mechanism is included.
  • High scalability: a new node automatically synchronizes state until it is consistent with the other nodes, and group membership information is maintained automatically.
  • High flexibility: installed as a plugin and available in two modes, single-primary and multi-primary.

Limitations:

  • Only the InnoDB storage engine is supported, and every table must have a primary key, which is used for write-set conflict detection.
  • Only IPv4 is supported, and the network requirements are high.
  • GTID must be enabled and the binary log format must be ROW, for primary election and write sets.
  • COMMIT may fail, similar to failure scenarios at a snapshot transaction isolation level.
  • An MGR group currently supports at most 9 nodes.
  • Savepoints combined with foreign keys are not supported; global constraint detection and partial rollback cannot be performed.
  • Binary log (binlog) Replication Event Checksums are not supported.
  • The SERIALIZABLE transaction isolation level is not supported in multi-primary (also called multi-write) mode.
  • Cascading foreign key constraints are not fully supported in multi-primary mode.
  • Multi-primary mode does not support concurrent DDL on the same database object from different nodes (and of RW transactions touching the same row on different nodes, the later one will fail).

1.4 Application Scenarios of MGR Group Replication

Some typical application scenarios of group replication:

  • Elastic replication – environments that need a very fluid replication topology, where the number of server nodes can grow and shrink dynamically with minimal side effects; for example, database services deployed in the cloud.
  • Highly available shards – sharding is a very popular way to scale out. MySQL group replication can provide highly available shards, with one replication group per shard.
  • Alternative to master-slave replication – the single-master model can suffer bottlenecks and single points of failure in certain environments; writing to an entire group may scale better.
  • Autonomic systems – you can also deploy MySQL group replication purely for the automatic failover capabilities it provides (described earlier and in this chapter).

2. Comparison of MGR principle and other replication

Reference source: www.cnblogs.com/luoahong/ar…

2.1 MySQL Asynchronous Replication

The slave does not need to acknowledge the master's transaction commit, and the master does not care whether the slave has received its binlog. After receiving the master's binlog, the slave first writes it to the relay log and then applies the SQL in the relay log asynchronously. Data can become inconsistent if the slave fails to receive the binlog or fails to apply the relay log, because the master never verifies that the slave received the relay log correctly.

In addition, in the case of high concurrency, the traditional master-slave replication, the slave node may have a large delay with the master (of course, the later versions of mysql have been optimized to introduce parallel replication, so as to reduce the delay of asynchronous replication).

2.2 MySQL Semi-synchronous Replication

Because of the shortcomings of traditional asynchronous replication, MySQL introduced semi-synchronous replication in version 5.5. Semi-synchronous replication improves on asynchronous replication: before the master commits a transaction, at least one slave must confirm that it has received the relay log and respond to the master. However, the slave still applies the relay log asynchronously, as shown in the following figure:

In other words, semi-synchronous replication adds one synchronous step to asynchronous replication: the master must wait for an ACK reply from a slave before committing the transaction.

2.3 MGR Group Replication

  • Dev.mysql.com/doc/refman/…

MySQL Group Replication (MGR) was officially introduced in 5.7.17 to address a defect of traditional asynchronous and semi-synchronous replication: data consistency cannot be guaranteed. A replication group consists of several nodes, and a transaction can be committed only after the majority of nodes (N/2 + 1) in the group have decided and approved it. Group replication was introduced to solve the data inconsistency problems of traditional asynchronous and semi-synchronous replication. It relies on a distributed consensus protocol (a variant of Paxos) to achieve eventual consistency of distributed data and provide a genuinely highly available data solution.

Group-based replication is a technique used in fault-tolerant systems. A replication group is a collection of servers (database instances) that can communicate with each other. At the communication layer, group replication implements a number of mechanisms, such as atomic message delivery and total ordering of messages. In short, a replication group is a set of nodes, each of which can execute transactions independently, while read-write transactions commit only after the other nodes in the group have coordinated. Therefore, when a transaction is ready to commit, an atomic broadcast is automatically made within the group to inform the other nodes what has changed / which transaction was executed.

Description:

  • MySQL group replication is a shared-nothing replication scheme in which every server member keeps its own full copy of the data.
  • For any RW transaction, the commit decision is not made unilaterally by the originating server but by the group.

3. Environment preparation

The plan is as follows:

role  IP              port  server-id
DB-1  192.110.103.41  3106  103413106
DB-2  192.110.103.42  3106  103423106
DB-3  192.110.103.43  3106  103433106

Note: Port 3306 on these machines is already occupied, so we use 3106 instead.

4. Install MySQL

4.1 Installation

The software can be installed either via yum or from a binary tarball; this article uses the binary installation.

1) yum installation

yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm
yum install -y mysql-community-server
# Start the MySQL server and enable automatic start on boot
systemctl start mysqld.service
systemctl enable mysqld.service

2) binary download software installation: dev.mysql.com/downloads/m…

# Download MySQL5.7
wget XXX/mysql-5.7.23-linux-glibc2.12-x86_64.tar.gz
tar -xvzf mysql-5.7.23-linux-glibc2.12-x86_64.tar.gz
mv mysql-5.7.23-linux-glibc2.12-x86_64 /usr/local/mysql-5.7.23
ln -s /usr/local/mysql-5.7.23 /usr/local/mysql

4.2 Initialization

DB_BASE_DIR='/data1/mysql_3106'

mkdir -p ${DB_BASE_DIR}/{data,etc,logs,tmp}
$ tree ${DB_BASE_DIR}
/data1/mysql_3106
├── data
├── etc
├── logs
└── tmp

# Initialize (insecure mode: root is created without a password)
cd /usr/local/mysql-5.7.23
/usr/local/mysql-5.7.23/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql-5.7.23 --datadir=${DB_BASE_DIR}/data

5. MGR installation and configuration

  • Dev.mysql.com/doc/refman/…
  • Dev.mysql.com/doc/refman/…

5.1 my.cnf configuration

Set the environment variables on all three machines, and remember to adjust CURRENT_IP accordingly.

# Environment variables on three machines.
PROG_BASE_DIR='/usr/local/mysql-5.7.23'
DB_BASE_DIR='/data1/mysql_3106'

CURRENT_IP='192.110.103.41'  # Change this on each node
NODE1_IP='192.110.103.41'
NODE2_IP='192.110.103.42'
NODE3_IP='192.110.103.43'
PORT='3106'
SERVICE_ID=`echo ${CURRENT_IP} | awk -F"." '{printf "%s%s", $3,$4 }'`
CLUSTER_NAME='mysqlmgr-test'
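As a sanity check, the SERVICE_ID derivation above (3rd and 4th octets of the node's IP) can be run for all three planned nodes; this is a plain-shell sketch assuming awk is available:

```shell
# Derive the server-id from the 3rd and 4th octets of each node's IP,
# exactly as the SERVICE_ID line above does.
for ip in 192.110.103.41 192.110.103.42 192.110.103.43; do
  id=$(echo "$ip" | awk -F'.' '{printf "%s%s", $3, $4}')
  echo "$ip -> server-id=$id"
done
# -> 10341, 10342, 10343
```

Since all three nodes share the 103.x subnet, the resulting server-ids are unique across the group, which MGR requires.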

Generating a Configuration File

#mkdir -p ${DB_BASE_DIR}/{data,etc,logs,tmp}
# Generate the configuration file
cat > ${DB_BASE_DIR}/etc/my.cnf << EOF
[client]
port = ${PORT}
socket = ${DB_BASE_DIR}/tmp/mysql.sock
default-character-set=utf8
 
[mysql]
no_auto_rehash
max_allowed_packet = 16M
prompt = '\u@\h [\d]> ' # 'user@host [schema]> '
default_character_set = utf8
 
[mysqldump]
max_allowed_packet = 16M
 
[mysqld_safe]
open_files_limit = 8192
user = mysql
pid_file = ${DB_BASE_DIR}/tmp/mysql.pid
 
[mysqld]
# Base
port = ${PORT}
socket = ${DB_BASE_DIR}/tmp/mysql.sock
server-id=$SERVICE_ID
user = mysql
basedir = ${PROG_BASE_DIR}
plugin-dir=${PROG_BASE_DIR}/lib/plugin
datadir = ${DB_BASE_DIR}/data
tmpdir = ${DB_BASE_DIR}/tmp
max_allowed_packet = 64M
character_set_server = utf8
collation_server = utf8_general_ci
max_connections=5000
max_user_connections=1000
thread_cache_size = 128
 
# Query Cache
#query_cache_type = 0 # <= MySQL 8.0.3
 
# Session variables
sort_buffer_size = 2M
tmp_table_size = 32M
read_buffer_size = 128k
read_rnd_buffer_size = 256k
join_buffer_size = 128k
 
# Other buffers and caches
table_definition_cache = 1400
table_open_cache = 2000
table_open_cache_instances = 16
 
# MySQL error log
log-error = ${DB_BASE_DIR}/logs/mysqld.err
innodb_print_all_deadlocks = 1
 
# Slow Query Log
slow_query_log = 1
slow_query_log_file = ${DB_BASE_DIR}/logs/mysql_slow.log
log_queries_not_using_indexes = 0
long_query_time = 1
min_examined_row_limit = 0
 
# General Query Log
general_log = 0
general_log_file = ${DB_BASE_DIR}/logs/mysql_general.log
 
# Performance Schema
# performance_schema = OFF
 
# Binary logging and Replication
skip-name-resolve
skip-slave-start
log_bin = mysql-bin
binlog_format = ROW
binlog_checksum = NONE # or CRC32
master_verify_checksum = OFF # ON if binlog_checksum = CRC32
slave_sql_verify_checksum = OFF # ON if binlog_checksum = CRC32
binlog_cache_size = 1M
binlog_stmt_cache_size = 3M
max_binlog_size = 512M
sync_binlog = 1
expire_logs_days = 7
log_slave_updates = 1
relay_log = mysql-relay-bin
relay_log_purge = 1
 
# MyISAM variables
key_buffer_size = 8M
myisam_recover_options = 'BACKUP,FORCE'
 
# MEMORY variables
max_heap_table_size = 64M # Should be greater or equal to tmp_table_size
 
# InnoDB variables
default_storage_engine = InnoDB
innodb_file_per_table = 1
innodb_data_home_dir=${DB_BASE_DIR}/data
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir=${DB_BASE_DIR}/data
innodb_buffer_pool_size = 4G
innodb_buffer_pool_instances = 8
innodb_stats_on_metadata=0
innodb_lock_wait_timeout=30
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 1024M
# innodb_write_io_threads = 8 # If you have a strong I/O system or SSD
# innodb_read_io_threads = 8 # If you have a strong I/O system or SSD
# innodb_io_capacity = 1000 # If you have a strong I/O system or SSD
 
# Group Replication parameter
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
 
slave_parallel_workers = 10
slave_preserve_commit_order = ON
slave_parallel_type = LOGICAL_CLOCK
 
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "${CURRENT_IP}:${PORT}1"
loose-group_replication_group_seeds = "${NODE1_IP}:${PORT}1,${NODE2_IP}:${PORT}1,${NODE3_IP}:${PORT}1"
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = FALSE # = multi-primary
loose-group_replication_enforce_update_everywhere_checks=ON # = multi-primary
report_host=${CURRENT_IP}
report_port=${PORT}
EOF

#useradd mysql
chown -R mysql.mysql ${DB_BASE_DIR}
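One subtle point in the template above: the group-communication addresses are built by shell string concatenation, so "${PORT}1" yields 31061 (the MySQL port 3106 with a literal 1 appended), not 3107. A minimal check:

```shell
PORT='3106'
CURRENT_IP='192.110.103.41'
# "${PORT}1" is string concatenation: "3106" + "1" -> "31061"
echo "group_replication_local_address = ${CURRENT_IP}:${PORT}1"
```

This is why the MGR member addresses throughout this article use port 31061 while the MySQL service itself listens on 3106.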

The results are as follows:

cat /data1/mysql_3106/etc/my.cnf
[client]
port = 3106
socket = /data1/mysql_3106/tmp/mysql.sock
default-character-set=utf8
 
[mysql]
no_auto_rehash
max_allowed_packet = 16M
prompt = '\u@\h [\d]> ' # 'user@host [schema]> '
default_character_set = utf8
 
[mysqldump]
max_allowed_packet = 16M
 
[mysqld_safe]
open_files_limit = 8192
user = mysql
pid_file = /data1/mysql_3106/tmp/mysql.pid
 
[mysqld]
# Base
port = 3106
socket = /data1/mysql_3106/tmp/mysql.sock
server-id=10341
user = mysql
basedir = /usr/local/mysql-5.7.23
plugin-dir = /usr/local/mysql-5.7.23/lib/plugin
datadir = /data1/mysql_3106/data
tmpdir = /data1/mysql_3106/tmp
max_allowed_packet = 64M
character_set_server = utf8
collation_server = utf8_general_ci
max_connections=5000
max_user_connections=1000
thread_cache_size = 128
 
# Query Cache
#query_cache_type = 0 # <= MySQL 8.0.3
 
# Session variables
sort_buffer_size = 2M
tmp_table_size = 32M
read_buffer_size = 128k
read_rnd_buffer_size = 256k
join_buffer_size = 128k
 
# Other buffers and caches
table_definition_cache = 1400
table_open_cache = 2000
table_open_cache_instances = 16
 
# MySQL error log
log-error = /data1/mysql_3106/logs/mysqld.err
innodb_print_all_deadlocks = 1
 
# Slow Query Log
slow_query_log = 1
slow_query_log_file = /data1/mysql_3106/logs/mysql_slow.log
log_queries_not_using_indexes = 0
long_query_time = 1
min_examined_row_limit = 0
 
# General Query Log
general_log = 0
general_log_file = /data1/mysql_3106/logs/mysql_general.log
 
# Performance Schema
# performance_schema = OFF
 
# Binary logging and Replication
log_bin = mysql-bin
binlog_format = ROW
binlog_checksum = NONE # or CRC32
master_verify_checksum = OFF # ON if binlog_checksum = CRC32
slave_sql_verify_checksum = OFF # ON if binlog_checksum = CRC32
binlog_cache_size = 1M
binlog_stmt_cache_size = 3M
max_binlog_size = 512M
sync_binlog = 1
expire_logs_days = 7
log_slave_updates = 1
relay_log = mysql-relay-bin
relay_log_purge = 1
 
# MyISAM variables
key_buffer_size = 8M
myisam_recover_options = 'BACKUP,FORCE'
 
# MEMORY variables
max_heap_table_size = 64M # Should be greater or equal to tmp_table_size
 
# InnoDB variables
default_storage_engine = InnoDB
innodb_file_per_table = 1
innodb_data_home_dir=/data1/mysql_3106/data
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir=/data1/mysql_3106/data
innodb_buffer_pool_size = 4G
innodb_buffer_pool_instances = 8
innodb_stats_on_metadata=0
innodb_lock_wait_timeout=30
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 8M
innodb_log_file_size = 1024M
# innodb_write_io_threads = 8 # If you have a strong I/O system or SSD
# innodb_read_io_threads = 8 # If you have a strong I/O system or SSD
# innodb_io_capacity = 1000 # If you have a strong I/O system or SSD
 
# Group Replication parameter
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
 
slave_parallel_workers = 10
slave_preserve_commit_order = ON
slave_parallel_type = LOGICAL_CLOCK
 
# Log the write set to the binary log while the server collects it. The write set is based
# on each row's primary key and is a unique identifier of the changed row; it is used to detect conflicts.
transaction_write_set_extraction = XXHASH64
# The group name must be a valid UUID; do not use any node's own GTID/server UUID.
# The group name must be the same on all nodes!
loose-group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" 
## Do not start group replication automatically when the MySQL instance boots; start it manually after configuration.
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "192.110.103.41:31061"
loose-group_replication_group_seeds = "192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061"
# To avoid automatically booting a second group with the same name every time you start, set it to OFF.
loose-group_replication_bootstrap_group = OFF
## Disable single-primary mode (this example tests multi-primary mode, so turn this parameter off and enable the multi-primary checks below).
loose-group_replication_single_primary_mode = FALSE # = multi-primary
loose-group_replication_enforce_update_everywhere_checks=ON # = multi-primary
report_host = 192.110.103.41
report_port = 3106
# Whitelist of client source IP addresses allowed to join group replication
# loose-group_replication_ip_whitelist = "192.110.0.0/16,127.0.0.1/8"


Main MGR Configuration description:

cat my.cnf |grep group_replication -i
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "192.110.103.43:31061"
loose-group_replication_group_seeds = "192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061"
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = FALSE # = multi-primary
loose-group_replication_enforce_update_everywhere_checks=ON # = multi-primary
  • Line 1 indicates that the node must collect a write set for each transaction and encode it as a hash value using the XXHASH64 hash algorithm.
  • The second line tells the plugin that the group named "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" needs to be joined or created.
  • Line 3 tells the plug-in not to automatically start group replication when starting the MySQL instance.
  • Line 4 tells the plugin to use 192.110.103.43:31061 as this instance's communication address with the other members of the group. The local address configured in group_replication_local_address must be reachable by all other members of the group.
  • Line 5 tells the plug-in that these hosts need to be contacted if nodes want to join the group. These are seed members, which are needed when a node wants to connect to the group. When a node wants to join the group, you need to contact one of them (seeds) and then ask the group to reconfigure the membership list to allow the node to be accepted in the group. Note that you do not need to list all the members of the group in this option, only the list of servers that the node should contact when it wants to join the group.
  • Lines 6-7 relate to single-primary/multi-primary mode; here they are configured for multi-primary mode.

In single-primary mode, set loose-group_replication_single_primary_mode = TRUE and loose-group_replication_enforce_update_everywhere_checks = FALSE.
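The seed list described in line 5 is simply a comma-separated list of host:port pairs. A quick plain-shell sketch (not part of MGR itself) of splitting and inspecting it:

```shell
SEEDS="192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061"
# Split the comma-separated seed list into one host:port pair per line
echo "$SEEDS" | tr ',' '\n' | while read -r member; do
  host=${member%:*}   # strip the trailing :port
  port=${member#*:}   # strip the leading host:
  echo "seed host=$host port=$port"
done
```

A joining node only needs to reach one of these seeds; the group then reconfigures its membership list to admit the new node.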

5.2 Setting a Replication Account (Performed on all Nodes)

Start/log in to MySQL

# Start the database
$ su - mysql -c "/usr/local/mysql-5.7.23/bin/mysqld_safe --defaults-file=/data1/mysql_3106/etc/my.cnf &"

$ tail -f /data1/mysql_3106/logs/mysqld.err
 
# Log in to the database
/usr/local/mysql-5.7.23/bin/mysql -S ${DB_BASE_DIR}/tmp/mysql.sock

Set up account

# Change the root password and create the required users
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root' PASSWORD EXPIRE NEVER;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';
create user root@'127.0.0.1' identified WITH mysql_native_password BY 'root';
grant all privileges on *.* to root@'127.0.0.1' with grant option;
flush privileges;
# Note: the accounts above use the mysql_native_password authentication plugin.

#reset master;
root@localhost : (none) > show master logs;
root@localhost : (none) > reset master;
Query OK, 0 rows affected (0.02 sec)

root@localhost : (none) > show master logs;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       150 |
+------------------+-----------+
1 row in set (0.00 sec)


# Set the replication account
SET SQL_LOG_BIN=0;
CREATE USER repl@'10.%.%.%' IDENTIFIED WITH 'mysql_native_password' BY 'repl';
GRANT SUPER, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO repl@'192.%.%.%';
GRANT SUPER, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO repl@'10.%.%.%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';

5.3 Installing the MGR Plug-in (On all Nodes)

Install the MGR plug-in

Install the MGR plug-in
root@localhost : (none) > INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.00 sec)

root@localhost : (none) >  SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| mysql_native_password      | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| BLACKHOLE                  | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
...
| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+

5.4 Starting MGR (multi-primary mode)

Start MGR, on the first node (41).

Perform this step only on the first node
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

Query information about the MGR group
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
1 row in set (0.00 sec)

Add other nodes to MGR: Run the following command on the other two machines.

Add other nodes to MGR, execute on other nodes
root@localhost : (none) > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+-----------+-------------+-------------+--------------+
| group_replication_applier |           |             |        NULL | OFFLINE      |
+---------------------------+-----------+-------------+-------------+--------------+
1 row in set (0.00 sec)

root@localhost : (none) > show global variables like '%seed%';
+-------------------------------+----------------------------------------------------------------+
| Variable_name                 | Value                                                          |
+-------------------------------+----------------------------------------------------------------+
| group_replication_group_seeds | 192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061 |
+-------------------------------+----------------------------------------------------------------+
1 row in set (0.00 sec)

root@localhost : (none) > START GROUP_REPLICATION;
Query OK, 0 rows affected (3.23 sec)

root@localhost : (none) > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
| group_replication_applier | e462be53-f3d7-11e9-9125-a0369fa6cce4 | 192.110.103.42 |        3106 | ONLINE       |
| group_replication_applier | ee4a9cec-f3d5-11e9-9ded-a0369fa6cd30 | 192.110.103.43 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
3 rows in set (0.00 sec)
root@localhost : (none) > SHOW STATUS LIKE 'group_replication_primary_member';
+----------------------------------+-------+
| Variable_name                    | Value |
+----------------------------------+-------+
| group_replication_primary_member |       |
+----------------------------------+-------+
root@localhost : test > show global variables like 'group_replication_single%';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | OFF   |
+---------------------------------------+-------+

All three nodes are in the ONLINE state, and since this is multi-primary mode, every node can accept writes.
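To check quorum at a glance, one can count the ONLINE members in the replication_group_members output. The sketch below is a hypothetical helper that parses a captured sample of that output (shown offline, so it does not need a live server; in practice the sample would come from `mysql -e` against the query above):

```shell
# Sample rows captured from performance_schema.replication_group_members
sample='group_replication_applier 509810ee 192.110.103.41 3106 ONLINE
group_replication_applier e462be53 192.110.103.42 3106 ONLINE
group_replication_applier ee4a9cec 192.110.103.43 3106 ONLINE'

total=$(echo "$sample" | grep -c 'applier')      # number of group members
online=$(echo "$sample" | grep -c 'ONLINE$')     # members in ONLINE state
majority=$(( total / 2 + 1 ))                    # votes needed to commit
echo "online=$online of $total (majority needed: $majority)"
```

If `online` ever drops below `majority`, the group loses quorum and stops accepting writes.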

Insert test data into any DB node, for example, node 1.

root@localhost : (none) > CREATE DATABASE test ;
root@localhost : (none) > use test;
Database changed
root@localhost : test > create table if not exists t1 (id int(10) PRIMARY KEY AUTO_INCREMENT,name varchar(50) NOT NULL);      
insert into t1 values(1,'zhang3');

Check on the other nodes: the data is visible everywhere.

root@localhost : (none) > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
+----+--------+
1 row in set (0.00 sec)

root@localhost : (none) > insert into test.t1 values(2,'li4');
Query OK, 1 row affected (0.01 sec)

root@localhost : (none) > select * from performance_schema.replication_connection_status \G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_recovery
               GROUP_NAME:
              SOURCE_UUID:
                THREAD_ID: NULL
            SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET:
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
              SOURCE_UUID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-6:1000003
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
2 rows in set (0.00 sec)


6. Switch between single-master mode and multi-master mode

6.1 Switching to single Master Mode

1) Multi-primary mode before switching:

root@localhost : performance_schema > show global variables like 'group_replication_single%';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | OFF   |
+---------------------------------------+-------+
1 row in set (0.00 sec)

root@localhost : performance_schema > show global variables like 'group_replication%';
+----------------------------------------------------+----------------------------------------------------------------+
| Variable_name                                      | Value                                                          |
+----------------------------------------------------+----------------------------------------------------------------+
| group_replication_allow_local_disjoint_gtids_join  | OFF                                                            |
| group_replication_allow_local_lower_version_join   | OFF                                                            |
| group_replication_auto_increment_increment         | 7                                                              |
| group_replication_bootstrap_group                  | OFF                                                            |
| group_replication_components_stop_timeout          | 31536000                                                       |
| group_replication_compression_threshold            | 1000000                                                        |
| group_replication_enforce_update_everywhere_checks | ON                                                             |
| group_replication_flow_control_applier_threshold   | 25000                                                          |
| group_replication_flow_control_certifier_threshold | 25000                                                          |
| group_replication_flow_control_mode                | QUOTA                                                          |
| group_replication_force_members                    |                                                                |
| group_replication_group_name                       | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa                           |
| group_replication_group_seeds                      | 192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061 |
| group_replication_gtid_assignment_block_size       | 1000000                                                        |
| group_replication_ip_whitelist                     | AUTOMATIC                                                      |
| group_replication_local_address                    | 192.110.103.41:31061                                           |
| group_replication_member_weight                    | 50                                                             |
| group_replication_poll_spin_loops                  | 0                                                              |
| group_replication_recovery_complete_at             | TRANSACTIONS_APPLIED                                           |
| group_replication_recovery_reconnect_interval      | 60                                                             |
| group_replication_recovery_retry_count             | 10                                                             |
| group_replication_recovery_ssl_ca                  |                                                                |
| group_replication_recovery_ssl_capath              |                                                                |
| group_replication_recovery_ssl_cert                |                                                                |
| group_replication_recovery_ssl_cipher              |                                                                |
| group_replication_recovery_ssl_crl                 |                                                                |
| group_replication_recovery_ssl_crlpath             |                                                                |
| group_replication_recovery_ssl_key                 |                                                                |
| group_replication_recovery_ssl_verify_server_cert  | OFF                                                            |
| group_replication_recovery_use_ssl                 | OFF                                                            |
| group_replication_single_primary_mode              | OFF                                                            |
| group_replication_ssl_mode                         | DISABLED                                                       |
| group_replication_start_on_boot                    | OFF                                                            |
| group_replication_transaction_size_limit           | 0                                                              |
| group_replication_unreachable_majority_timeout     | 0                                                              |
+----------------------------------------------------+----------------------------------------------------------------+
35 rows in set (0.00 sec)

2) On all nodes, change group_replication_single_primary_mode to ON.

# Execute on all nodes
root@localhost : none> stop group_replication;
Query OK, 0 rows affected (9.02 sec)

root@localhost : none> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | OFFLINE      |
+---------------------------+--------------------------------------+----------------+-------------+--------------+
1 row in set (0.00 sec)

root@localhost : none> set global group_replication_enforce_update_everywhere_checks=OFF;
Query OK, 0 rows affected (0.00 sec)

root@localhost : none> set global group_replication_single_primary_mode=ON;
Query OK, 0 rows affected (0.00 sec)

root@localhost : none> show global variables like 'group_replication_single%';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+
1 row in set (0.00 sec)

3) Execute on the first node

SET GLOBAL group_replication_bootstrap_group=ON; 
START GROUP_REPLICATION; 
SET GLOBAL group_replication_bootstrap_group=OFF;
root@localhost : performance_schema > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
1 row in set (0.00 sec)


4) Execute on the other nodes

root@localhost : test > SELECT * FROM performance_schema.replication_group_members;
root@localhost : test > START GROUP_REPLICATION; 
root@localhost : test > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
| group_replication_applier | e462be53-f3d7-11e9-9125-a0369fa6cce4 | 192.110.103.42 |        3106 | ONLINE       |
| group_replication_applier | ee4a9cec-f3d5-11e9-9ded-a0369fa6cd30 | 192.110.103.43 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
3 rows in set (0.00 sec)

5) Find the current primary. Unlike MySQL 8.0, in 5.7 the replication_group_members table does not show the member role directly; query the group_replication_primary_member status variable instead (e.g. SHOW STATUS LIKE 'group_replication_primary_member'):

# Query the primary member
SELECT ta.*, tb.MEMBER_HOST, tb.MEMBER_PORT, tb.MEMBER_STATE
FROM performance_schema.global_status ta, performance_schema.replication_group_members tb
WHERE ta.VARIABLE_NAME='group_replication_primary_member' AND ta.VARIABLE_VALUE=tb.MEMBER_ID;
+----------------------------------+--------------------------------------+---------------+-------------+--------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+----------------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_primary_member | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
+----------------------------------+--------------------------------------+---------------+-------------+--------------+

root@192.110.103.41 : (none) > show variables like 'group_replication_group_seeds';
+-------------------------------+----------------------------------------------------------------+
| Variable_name                 | Value                                                          |
+-------------------------------+----------------------------------------------------------------+
| group_replication_group_seeds | 192.110.103.41:31061,192.110.103.42:31061,192.110.103.43:31061 |
+-------------------------------+----------------------------------------------------------------+
1 row in set (0.00 sec)

root@192.110.103.41 : (none) > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
| group_replication_applier | e462be53-f3d7-11e9-9125-a0369fa6cce4 | 192.110.103.42 |        3106 | ONLINE       |
| group_replication_applier | ee4a9cec-f3d5-11e9-9ded-a0369fa6cd30 | 192.110.103.43 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
3 rows in set (0.00 sec)

root@192.110.103.41 : (none) > SHOW STATUS LIKE 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 |
+----------------------------------+--------------------------------------+
1 row in set (0.01 sec)

root@192.110.103.41 : (none) > select @@server_id;
root@192.110.103.41 : (none) > select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 509810ee-f3d7-11e9-a7d5-a0369fac2de4 |
+--------------------------------------+
1 row in set (0.00 sec)

6) Test write

root@192.110.103.42 : (none) > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
|  2 | li4    |
+----+--------+
2 rows in set (0.00 sec)

root@192.110.103.42 : (none) > insert into test.t1 values(3,'wang5');
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

# The master can accept inserts
mysql -h 192.110.103.41 -P 3106 -uroot -proot

root@192.110.103.41 : (none) > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
|  2 | li4    |
+----+--------+
2 rows in set (0.00 sec)

root@192.110.103.41 : (none) > insert into test.t1 values(3,'wang5');
Query OK, 1 row affected (0.00 sec)

root@192.110.103.41 : (none) > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
|  2 | li4    |
|  3 | wang5  |
+----+--------+
3 rows in set (0.00 sec)

The experiment above shows that in single-primary mode only the primary accepts writes, while the secondaries run with super_read_only enabled and reject DML.

6.2 Switching back to multi-master mode

Switching modes requires restarting group replication. Stop group replication on all nodes, then set group_replication_single_primary_mode=OFF and group_replication_enforce_update_everywhere_checks=ON.

Stop group replication (execute on all nodes):
stop group_replication;
set global group_replication_single_primary_mode=OFF;
set global group_replication_enforce_update_everywhere_checks=ON;
show global variables like 'group_replication_single%';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | OFF   |
+---------------------------------------+-------+
1 row in set (0.00 sec)
show global variables like 'group_replication_enforce_%';
+----------------------------------------------------+-------+
| Variable_name                                      | Value |
+----------------------------------------------------+-------+
| group_replication_enforce_update_everywhere_checks | ON    |
+----------------------------------------------------+-------+
1 row in set (0.00 sec)

Execute on any one node:
SET GLOBAL group_replication_bootstrap_group=ON; 
START GROUP_REPLICATION; 
SET GLOBAL group_replication_bootstrap_group=OFF;
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+

# Other nodes execute
START GROUP_REPLICATION; 

In multi-primary mode every ONLINE member now acts as a primary:

root@localhost : (none) > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_applier | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
| group_replication_applier | e462be53-f3d7-11e9-9125-a0369fa6cce4 | 192.110.103.42 |        3106 | ONLINE       |
| group_replication_applier | ee4a9cec-f3d5-11e9-9ded-a0369fa6cd30 | 192.110.103.43 |        3106 | ONLINE       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+

Write on node2 and node3 respectively to verify that both accept writes.

# Test write on node2
root@192.110.103.42 : test > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
|  2 | li4    |
|  3 | wang5  |
+----+--------+
3 rows in set (0.00 sec)

root@192.110.103.42 : test > insert into test.t1 values(4,'ma6');
Query OK, 1 row affected (0.01 sec)

# Test write on node3
root@192.110.103.43 : test > select * from test.t1;
+----+--------+
| id | name   |
+----+--------+
|  1 | zhang3 |
|  2 | li4    |
|  3 | wang5  |
|  4 | ma6    |
+----+--------+
4 rows in set (0.00 sec)

root@192.110.103.43 : test > insert into test.t1 values(5,'qian7');
Query OK, 1 row affected (0.01 sec)

7. Failover

7.1 Failover in Single-master Mode

When the primary node fails, a new primary is elected automatically. During the election, the surviving members form a new membership view and the candidate primaries are ranked by group_replication_member_weight. If all members run the same MySQL version, the node with the highest weight is elected as the next primary; if the weights are equal, the candidates' server_uuid values are compared in lexicographic order and the first one in the list wins. The newly elected primary is automatically set read-write, while the other nodes remain read-only secondaries.

If the MySQL version does not support group_replication_member_weight, the new primary is elected purely by lexicographic order of server_uuid. It is good practice to wait for the new primary to finish applying its relay log before rerouting client applications to it.
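The election order described above can be sketched as follows. This is an illustrative Python sketch, not MySQL's actual implementation; the member tuples are hypothetical and mirror the UUIDs used earlier in this article.

```python
# Illustrative sketch of MGR single-primary election ordering:
# candidates are ranked by group_replication_member_weight (higher
# first), with ties broken by the lexicographically smallest server_uuid.

def elect_primary(members):
    """members: list of (server_uuid, member_weight) tuples for ONLINE nodes.

    Returns the server_uuid of the member that would become primary.
    """
    # Sort by descending weight, then ascending (lexicographic) UUID.
    ranked = sorted(members, key=lambda m: (-m[1], m[0]))
    return ranked[0][0]

# Suppose node1 (the old primary) has failed and the survivors both
# keep the default weight of 50:
survivors = [
    ("e462be53-f3d7-11e9-9125-a0369fa6cce4", 50),  # node2
    ("ee4a9cec-f3d5-11e9-9ded-a0369fa6cd30", 50),  # node3
]
# Equal weights, so the lexicographically smaller UUID wins: node2.
print(elect_primary(survivors))
```

Raising a node's group_replication_member_weight (e.g. to 80) is therefore a simple way to pin the failover target regardless of UUID order.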

7.2 Failover in Multi-Master Mode

In multi-primary mode there is no single primary and therefore no election takes place on failure, because no node plays a special role. As long as a majority of members remains online, the group continues to serve reads and writes.
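In either mode, availability is bounded by the majority requirement mentioned in the introduction: a group of N members tolerates the loss of fewer than half of them. A minimal sketch of that arithmetic:

```python
# Majority (quorum) arithmetic for an MGR group of n members:
# the group keeps serving as long as the survivors n - f form a
# majority, i.e. n - f > n // 2.

def tolerated_failures(n: int) -> int:
    """Maximum number of members that can fail while a majority survives."""
    return (n - 1) // 2

for n in (3, 5, 7):
    print(f"{n} members tolerate {tolerated_failures(n)} failure(s)")
```

This is why the introduction states that a 3-node cluster allows exactly one node to go down, and why even-sized groups add cost without adding fault tolerance (4 nodes still tolerate only 1 failure).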

7.3 Searching for the Primary Node

The following shows how to query the current primary in single-primary mode. Unlike MySQL 8.0, replication_group_members in 5.7 does not display the member role; query the group_replication_primary_member status variable instead (e.g. SHOW STATUS LIKE 'group_replication_primary_member'):

SELECT ta.*, tb.MEMBER_HOST, tb.MEMBER_PORT, tb.MEMBER_STATE
FROM performance_schema.global_status ta, performance_schema.replication_group_members tb
WHERE ta.VARIABLE_NAME='group_replication_primary_member' AND ta.VARIABLE_VALUE=tb.MEMBER_ID;
+----------------------------------+--------------------------------------+---------------+-------------+--------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE |
+----------------------------------+--------------------------------------+---------------+-------------+--------------+
| group_replication_primary_member | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 | 192.110.103.41 |        3106 | ONLINE       |
+----------------------------------+--------------------------------------+---------------+-------------+--------------+

# Alternatively:
SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME='group_replication_primary_member';
# or:
SHOW STATUS LIKE 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 509810ee-f3d7-11e9-a7d5-a0369fac2de4 |
+----------------------------------+--------------------------------------+
1 row in set (0.01 sec)

References

  • Dev.mysql.com/doc/refman/…
  • Dev.mysql.com/doc/refman/…
  • MySQL MGR cluster setup
  • MySQL 5.7 Group Replication – Operation and Maintenance Summary
  • Steed jinlong: https://www.cnblogs.com/f-ck-need-u/p/9216828.html