1. Introduction
In the last few articles we covered RabbitMQ internals and usage in detail, as well as Spring Boot integration with RabbitMQ, all based on a single RabbitMQ machine.
In today's world of microservices, a single server that goes down makes it nearly impossible to provide a highly available service. To ensure high availability, we usually build a RabbitMQ cluster in production: even if one RabbitMQ machine fails, the remaining healthy RabbitMQ servers keep working and the application is not affected.
2. Principle of cluster architecture
In the last few articles we showed that RabbitMQ has the following basic components: queues, exchanges, bindings, virtual hosts, and so on, which form the basis of AMQP message communication. These components exist in the form of metadata, which RabbitMQ always records:
- Queue metadata: queue names and their properties
- Exchange metadata: exchange name, type, and properties
- Binding metadata: a simple table showing how to route messages to queues
- Vhost metadata: namespaces and security attributes for the queues, exchanges, and bindings inside a vhost
This metadata is essentially a lookup table containing exchange names and a list of bindings for each queue. When you publish a message to an exchange, the channel you are on matches the message's routing key against the exchange's binding list and routes the message accordingly.
This mechanism makes it easy to propagate exchanges across all nodes: all RabbitMQ does is copy the exchange metadata to every node, so every channel on every node can access the full set of exchanges.
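To make this concrete, here is a minimal Java client sketch that creates exactly the metadata listed above: an exchange, a queue, and the binding between them. This is a sketch only; the names are illustrative and a recent amqp-client library is assumed.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MetadataDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("node1"); // any cluster node works: metadata is replicated to every node

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // exchange metadata: name, type, and properties (durable)
            channel.exchangeDeclare("order.exchange", "direct", true);
            // queue metadata: name and properties
            channel.queueDeclare("order.queue", true, false, false, null);
            // binding metadata: the routing-table entry mapping a routing key to the queue
            channel.queueBind("order.queue", "order.exchange", "order.created");
            // publishing matches the routing key against the exchange's binding list
            channel.basicPublish("order.exchange", "order.created", null, "hi".getBytes());
        }
    }
}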
Suppose queue 1's contents live entirely on node 1. If a message producer connects to node 2 or node 3, neither of which holds queue 1's data, those two nodes mainly act as routing forwarders during publishing: based on their metadata, they forward the message to node 1, where it is ultimately stored in queue 1.
Similarly, if a message consumer connects to node 2 or node 3, those nodes also act as routing forwarders, pulling messages from queue 1 on node 1 for consumption.
Unlike the usual master-slave cluster architecture, a RabbitMQ cluster synchronizes only metadata; each queue's contents remain on its own server node.
This design is driven mainly by the cluster's performance and storage space:
- Storage space: the queue is where data is actually stored. If every cluster node kept a full copy of every queue, each node's storage footprint would be very large and the cluster's message backlog capacity would be very weak. For example, with 3 GB of queue content, a node with only 1 GB of storage would run out of space, meaning you could not increase backlog capacity by adding cluster nodes.
- Performance: the message publisher would need to copy each message to every cluster node, and every message would trigger disk activity, sharply increasing the performance load across the cluster.
Since each queue's contents stay on its own server node, this raises a new question: if the server hosting a queue dies, is all the queue data stored on that server lost?
RabbitMQ can store data on a single node in two ways:
- In-memory mode: data is stored in memory. If the server suddenly goes down and restarts, the queues attached to that node and their associated bindings are lost; consumers can reconnect to the cluster and recreate the queues.
- Disk mode: data is stored on disk. If the server suddenly goes down and restarts, the data is recovered automatically and the queue can serve messages again. Before the failed disk node recovers, consumers on other nodes cannot reconnect to the cluster and recreate that queue; if they keep declaring the queue on other nodes, they get a 404 NOT_FOUND error. This ensures that when the failed node rejoins the cluster after recovery, its queue messages are not lost, and it avoids the problem of the same queue existing redundantly on multiple nodes.
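Note that node-level disk storage is only half of the picture: at the queue level, the queue must be declared durable and its messages marked persistent before anything is written to disk at all. A minimal sketch with illustrative names:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DurabilityDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("node1");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true: the queue definition survives a broker restart on a disk node
            channel.queueDeclare("task.queue", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN marks the message itself for storage on disk
            channel.basicPublish("", "task.queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, "payload".getBytes());
        }
    }
}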
Every node in the cluster, whether a memory node or a disk node, keeps all metadata in memory; a disk node additionally persists all metadata to disk.
On a single-node RabbitMQ deployment, that node must be a disk node, which guarantees that all configuration and metadata can be recovered from disk after a failure or restart.
A RabbitMQ cluster requires at least one disk node; in practice you add two or more disk nodes so that the cluster can keep running if one of them fails. All other nodes are set up as memory nodes, which makes operations such as queue and exchange declarations faster and metadata synchronization more efficient.
3. Cluster deployment
To stay consistent with a production environment, CentOS 7 is used here, with three virtual machines. The three IP addresses are:
197.168.24.206
197.168.24.233
197.168.24.234
Open up the firewall so that the three servers can reach each other over the network!
3.1 Reset the hostnames
Because RabbitMQ cluster nodes connect to each other by hostname, you must make sure the hosts can ping one another by name, so reset the hostnames of the three servers (on CentOS 7 this can be done with hostnamectl):
# on 197.168.24.206
hostnamectl set-hostname node1
# on 197.168.24.233
hostnamectl set-hostname node2
# on 197.168.24.234
hostnamectl set-hostname node3
Edit the /etc/hosts file and add the following to /etc/hosts on all three machines:
sudo vim /etc/hosts
Add the following:
197.168.24.206 node1
197.168.24.233 node2
197.168.24.234 node3
3.2 Install RabbitMQ
RabbitMQ runs on Erlang, which makes it slightly trickier to install than most software, but this example uses RPM packages, so even a novice can complete the installation. The process is as follows!
3.2.1 Prepare the environment before installation
Run the following command to prepare the environment.
yum install lsof build-essential openssl openssl-devel unixODBC unixODBC-devel make gcc gcc-c++ kernel-devel m4 ncurses-devel tk tc xz wget vim
3.2.2 Download the RabbitMQ, Erlang, and socat installation packages
RabbitMQ 3.6.5 is used here; with RPM packages the installation is essentially one click, even for beginners.
Create a rabbitmq directory, in this example /usr/app/rabbitmq, and run the following commands there to download the installation packages.
- Download Erlang:
wget www.rabbitmq.com/releases/er…
- Download socat:
wget repo.iotti.biz/CentOS/7/…
- Download RabbitMQ:
wget www.rabbitmq.com/releases/ra…
Once the downloads finish, all three packages should be present in the directory.
3.2.3 Install the packages
After downloading, it is important to install the packages in sequence!
- Install Erlang:
rpm -ivh erlang-18.3-1.el7.centos.x86_64.rpm
- Install socat:
rpm -ivh socat-1.7.3.2-5.el7.lux.x86_64.rpm
- Install RabbitMQ:
rpm -ivh rabbitmq-server-3.6.5-1.noarch.rpm
After the installation completes, modify the RabbitMQ configuration. The default configuration file is in the /usr/lib/rabbitmq/lib/rabbitmq_server-3.6.5/ebin directory.
vim /usr/lib/rabbitmq/lib/rabbitmq_server-3.6.5/ebin/rabbit.app
Change the value of the loopback_users entry (setting it to an empty list lets the default guest account log in from other machines)!
Next, set the RabbitMQ node name on each of the three machines:
vim /etc/rabbitmq/rabbitmq-env.conf
Add the following line to the file:
NODENAME=rabbit@node1
The commands on the other two nodes are similar (NODENAME=rabbit@node2 and NODENAME=rabbit@node3). Save the file, then run the following commands to start the service.
# stop the service if it is already running
rabbitmqctl stop
# start the service in the background
rabbitmq-server -detached
Run the following command to check whether the service is started successfully.
lsof -i:5672
If port 5672 is being listened on, the service started successfully.
3.2.4 Start the web management console
Enter the following command to enable the console!
rabbitmq-plugins enable rabbitmq_management
Then open http://ip:15672 in a browser, where ip is the IP address of the CentOS host (15672 is the management plugin's default port).
The default account and password are both guest. If the page cannot be accessed, check whether the firewall is enabled and, if so, turn it off.
After logging in to the monitoring platform, the overview page is displayed!
3.3 Copy the Erlang cookie
In a RabbitMQ cluster, metadata synchronization relies on the nodes sharing the same Erlang cookie.
Here we copy the cookie file from node1 to node2 and node3. The file's permissions are 400, which does not allow it to be replaced, so first change the file's permissions on node1 to 777:
chmod 777 /var/lib/rabbitmq/.erlang.cookie
Copy it to node2 with scp; node3 is handled the same way:
scp /var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/
Finally, change the permissions back on each node:
chmod 400 /var/lib/rabbitmq/.erlang.cookie
3.4 Form the cluster
Run the following commands on node 2:
# stop the rabbitmq application
rabbitmqctl stop_app
# join node1's cluster so node2 and node1 form a cluster (node2 must be able to ping node1 by hostname)
rabbitmqctl join_cluster rabbit@node1
# start the rabbitmq application again
rabbitmqctl start_app
Node 3 operates similarly!
View the cluster status on any host:
rabbitmqctl cluster_status
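On RabbitMQ 3.6.x, the output looks roughly like the following (abridged, using this example's node names):

Cluster status of node rabbit@node1 ...
[{nodes,[{disc,[rabbit@node1,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node3,rabbit@node2,rabbit@node1]}]
...done.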
- Line 1: the current node's information
- Line 2: the node members in the cluster; disc indicates that these are disk nodes
- Line 3: the node members that are currently running
After logging in to the management console, you can clearly see that the three service nodes are now associated with one another!
If you want to remove a node from the cluster (take node 3 as an example), do the following!
# run on node1: remove node3 from the cluster
rabbitmqctl -n rabbit@node1 forget_cluster_node rabbit@node3
If RabbitMQ on the removed node cannot be started afterwards, delete its Mnesia data:
rm -rf /var/lib/rabbitmq/mnesia
Then start the service again!
3.5 Set up memory nodes
When a node joins the cluster, add the --ram flag to make it a memory node:
# join node1's cluster as a memory node
rabbitmqctl join_cluster rabbit@node1 --ram
The --ram flag marks the node as a memory node; if it is omitted, the node joins as a disk node by default.
If the node is already a disk node in the cluster, you can change it to a memory node with the following commands:
# stop the rabbitmq application
rabbitmqctl stop_app
# change the node to a memory node
rabbitmqctl change_cluster_node_type ram
# start the rabbitmq application again
rabbitmqctl start_app
3.6 Mirrored queues
As mentioned above, a queue is stored on only one node by default. If that node fails, all the metadata can be restored from a disk node, but the queue contents on a memory node cannot, so its messages are lost.
RabbitMQ has been aware of this problem for a long time and added a queue redundancy option in version 2.6: mirrored queues.
With a mirrored queue, the master queue still exists on only one node, and RabbitMQ synchronizes its messages to mirror queues on the other nodes. This is the classic master-slave mode, with the master queue's messages backed up on the slaves.
When no failure occurs, a mirrored queue works just like an ordinary queue, and producers and consumers notice no difference: published messages are still routed to the master queue, and the master queue spreads them to the remaining slave queues through a broadcast-like mechanism, a bit like a fanout exchange. Consumers still read messages from the master queue.
Once the master queue fails, the cluster elects the oldest slave queue as the new master, which is how high availability is achieved for queues. But do not abuse this mechanism: as noted above, this kind of queue redundancy means storage capacity cannot be increased by adding nodes, and it can create performance bottlenecks.
The command format is as follows:
rabbitmqctl set_policy [-p Vhost] Name Pattern Definition [Priority]
Parameter Description:
- -p Vhost: optional; apply the policy only to queues in the specified vhost
- Name: the name of the policy
- Pattern: the matching pattern (a regular expression) for queue names
- Definition: the mirroring definition, made up of three parts: ha-mode, ha-params, and ha-sync-mode
  - ha-mode: the mirroring mode; valid values are all / exactly / nodes. all mirrors the queue to every node in the cluster; exactly mirrors it to a fixed number of nodes, with the count given by ha-params; nodes mirrors it to the specific nodes listed in ha-params
  - ha-sync-mode: how mirrors are synchronized; automatic or manual
- Priority: optional; the priority of the policy
For example, declare a policy named ha-all that matches all queues (the pattern "^") and mirrors them to every node in the cluster:
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
There are many similar operations; refer to the official documentation for details.
4. Cluster load balancing
HAProxy provides high availability, load balancing, and proxying for TCP- and HTTP-based applications, and it supports virtual hosts. It is a free, fast, and reliable solution; according to official data, it can support up to 10G of concurrent traffic. HAProxy covers network proxying from layer 4 to layer 7, that is, all TCP protocols, which means HAProxy can even load balance MySQL. To add soft load balancing to a RabbitMQ cluster, HAProxy is a good choice.
4.1 Install HAProxy
HAProxy is also easy to install. Run the following command on a separate server:
yum install haproxy
Edit the HAProxy configuration file:
vim /etc/haproxy/haproxy.cfg
We just need to add the following configuration at the end of the file!
# RabbitMQ cluster listener
listen rabbitmq_cluster
    bind 0.0.0.0:5672
    mode tcp
    balance roundrobin
    server rmq_node1 197.168.24.206:5672 check inter 5000 rise 2 fall 3 weight 1
    server rmq_node2 197.168.24.233:5672 check inter 5000 rise 2 fall 3 weight 1
    server rmq_node3 197.168.24.234:5672 check inter 5000 rise 2 fall 3 weight 1

# haproxy monitoring page
listen monitor
    bind 0.0.0.0:8100
    mode http
    option httplog
    stats enable
    stats uri /stats
    stats refresh 5s
Binding configuration parameters:
- bind: the IP address and port that clients connect to
- balance roundrobin: a weighted round-robin load balancing algorithm
RabbitMQ cluster node configuration:
- server rmq_node1: the identifier for this RabbitMQ server inside HAProxy
- 197.168.24.206:5672: the address of the back-end RabbitMQ service
- check inter 5000: the interval, in milliseconds, at which the RabbitMQ service is health-checked; here, 5000
- rise 2: the number of consecutive successful health checks needed to consider the service available again after a failure; here, 2
- fall 3: the number of consecutive failed health checks after which HAProxy stops using the service; here, 3
- weight 1: the relative weight; servers with a higher weight receive proportionally more connections; here, 1
Start the HAProxy:
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
Open http://ip:8100/stats in a browser for monitoring and viewing!
5. Java client usage
If the HAProxy proxy server has been configured, you can connect directly to the proxy server's address.
// ConnectionFactory creates the physical connection to MQ
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost("197.168.24.207");  // proxy server address
connectionFactory.setPort(5672);              // proxy server port
// guest can only log in locally; to send messages through the proxy, create a dedicated user
connectionFactory.setUsername("admin");
connectionFactory.setPassword("admin");
connectionFactory.setVirtualHost("/");        // virtual host
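A complete send through the proxy might look like the following. This is a minimal sketch: the queue name demo.queue is hypothetical, and the admin/admin user is assumed to have been created as described above.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ProxySendDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory connectionFactory = new ConnectionFactory();
        connectionFactory.setHost("197.168.24.207"); // HAProxy address
        connectionFactory.setPort(5672);
        connectionFactory.setUsername("admin");      // assumed dedicated user
        connectionFactory.setPassword("admin");
        connectionFactory.setVirtualHost("/");

        try (Connection connection = connectionFactory.newConnection();
             Channel channel = connection.createChannel()) {
            // declare an illustrative durable queue and send one message through the proxy;
            // HAProxy forwards the connection to one of the three cluster nodes
            channel.queueDeclare("demo.queue", true, false, false, null);
            channel.basicPublish("", "demo.queue", null, "hello cluster".getBytes());
        }
    }
}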
If you don’t have a proxy server, use Spring's CachingConnectionFactory class to configure the cluster node addresses directly.
A Spring Boot project is used as an example. The configuration file is as follows:
spring.rabbitmq.addresses=197.168.24.206:5672,197.168.24.233:5672,197.168.24.234:5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.rabbitmq.virtual-host=/
The RabbitConfig configuration class is as follows:
@Configuration
public class RabbitConfig {

    /**
     * Initialize the connection factory
     */
    @Bean
    ConnectionFactory connectionFactory(@Value("${spring.rabbitmq.addresses}") String addresses,
                                        @Value("${spring.rabbitmq.username}") String userName,
                                        @Value("${spring.rabbitmq.password}") String password,
                                        @Value("${spring.rabbitmq.virtual-host}") String vhost) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setAddresses(addresses);
        connectionFactory.setUsername(userName);
        connectionFactory.setPassword(password);
        connectionFactory.setVirtualHost(vhost);
        return connectionFactory;
    }

    /**
     * Instantiate the RabbitAdmin operations class
     */
    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }

    /**
     * Instantiate the RabbitTemplate operations class
     */
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        // convert payloads to JSON before they go into the message queue
        rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
        return rabbitTemplate;
    }
}
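With this configuration in place, sending a message is a one-liner on the injected template. A minimal usage sketch follows; the service class and the queue name order.queue are illustrative, and the Jackson2JsonMessageConverter configured above serializes the payload to JSON:

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderSender {

    private final RabbitTemplate rabbitTemplate;

    public OrderSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void send(Object order) {
        // routes through the default exchange to a queue assumed to be named "order.queue"
        rabbitTemplate.convertAndSend("order.queue", order);
    }
}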
6. Summary
This article mainly introduced how a RabbitMQ cluster works and how to build a RabbitMQ cluster with load balancing.
Limited as the author's ability is, some parts of this article may not be explained well; if anything seems unreasonable, please leave a comment and let's discuss it together.
Three small favors ❤️
If you find this article helpful, I’d like to invite you to do three small favors for me:
- Like and share; your "likes and comments" are the motivation for my creation.
- Follow the public account "Java rotten pigskin" for original knowledge shared from time to time.
- And look forward to the follow-up articles 🚀
- [666] Scan the code to obtain the learning materials package