Docker Swarm is not used in this deployment; instead, the hosts file is configured to ensure communication between the nodes (see the example after the environment list below). The following details are provided.

The deployment environment

  • System: CentOS 8
  • Two servers: 10.1.1.1/10.1.1.2
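
For reference, a minimal /etc/hosts sketch for both servers might look like this (the hostnames and IPs match those used throughout this article; the extra_hosts entries in the compose file below achieve the same mapping inside the containers):

10.1.1.1 rabbit1
10.1.1.2 rabbit2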

Docker Compose file

version: '3'

services:
  rabbit1:
    container_name: rabbit1
    image: rabbitmq:3.7-management-alpine
    restart: always
    hostname: rabbit1
    extra_hosts:
      - "rabbit1:10.1.1.1"
      - "rabbit2:10.1.1.2"
    environment:
      - RABBITMQ_ERLANG_COOKIE=MY_COOKIE
      - RABBITMQ_DEFAULT_USER=MY_USER
      - RABBITMQ_DEFAULT_PASS=MY_PASS
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "15671:15671"
      - "15672:15672"
      - "25672:25672"

This is the docker-compose file for 10.1.1.1; when you deploy the second server, simply change rabbit1 to rabbit2. The same applies to additional servers, with each server's IP added to the extra_hosts parameter, as sketched below.
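
For illustration, the changed fields of the compose file on 10.1.1.2 might look like this (a minimal sketch; everything not shown stays the same as above):

services:
  rabbit2:
    container_name: rabbit2
    hostname: rabbit2
    extra_hosts:
      - "rabbit1:10.1.1.1"
      - "rabbit2:10.1.1.2"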

Start the service

Run the following commands on both servers:

# docker-compose up -d

Join the cluster

To use rabbit1 as the master node, run the following commands on rabbit2 to add it to the cluster:

# docker exec -it rabbit2 /bin/bash

rabbit2# rabbitmqctl stop_app
rabbit2# rabbitmqctl reset
rabbit2# rabbitmqctl join_cluster rabbit@rabbit1
rabbit2# rabbitmqctl start_app

By default, RabbitMQ joins the cluster as a disk node; if you want to add it as a RAM node, append --ram to the join_cluster command, as shown below.
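
For example, the join step from the sequence above then becomes:

rabbit2# rabbitmqctl join_cluster --ram rabbit@rabbit1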

To change the node type later, run the following command, passing either disc or ram:

# rabbitmqctl change_cluster_node_type ram

Note that rabbitmqctl stop_app must be run before changing the node type; the full sequence is sketched below.
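
Putting it together, converting rabbit2 to a RAM node might look like this (a minimal sketch following the same stop_app/start_app pattern as the join commands above):

rabbit2# rabbitmqctl stop_app
rabbit2# rabbitmqctl change_cluster_node_type ram
rabbit2# rabbitmqctl start_app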

Run the following command to check the cluster status:

# rabbitmqctl cluster_status

Note that since RAM nodes keep their internal database tables only in memory, this data must be synchronized from other nodes when a RAM node starts, so a cluster must contain at least one disk node.

HAProxy load balancing

HAProxy is also deployed with Docker. First, look at the haproxy.cfg configuration file:

# HAProxy configuration: TCP load balancing for the RabbitMQ cluster on
# port 5677, plus HTTP front ends for the stats page and the RabbitMQ
# management UI

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 5000ms
    timeout server 5000ms

listen rabbitmq_cluster
    bind 0.0.0.0:5677
    option tcplog
    mode tcp
    balance leastconn
    server  rabbit1 10.1.1.1:5672 weight 1 check inter 2s rise 2 fall 3
    server  rabbit2 10.1.1.2:5672 weight 1 check inter 2s rise 2 fall 3

listen http_front
    bind 0.0.0.0:8002
    stats uri /haproxy?stats

listen rabbitmq_admin
    bind 0.0.0.0:8001
    server rabbit1 10.1.1.1:15672
    server rabbit2 10.1.1.2:15672

Next, take a look at the docker-compose file for HAProxy:

version: '3'

services:
  haproxy:
    container_name: rabbit-haproxy
    image: haproxy
    restart: always
    hostname: haproxy
    network_mode: rabbitmq_default
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    ports:
      - "5677:5677"
      - "8001:8001"
      - "8002:8002"

Once started, you can access the RabbitMQ cluster management page using the HA address.
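
As a quick check, you can query the RabbitMQ management API through the proxy (a minimal sketch run on the HAProxy host; MY_USER/MY_PASS are the credentials from the compose file above, and /api/overview is the standard management API endpoint):

# curl -u MY_USER:MY_PASS http://127.0.0.1:8001/api/overview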

If you have an existing load balancer, such as LVS, you can skip this step as well.

At this point, the cluster is ready to work, but there is an important caveat.

Cluster modes

Normal mode

  • For a given queue, the message entities exist on only one node; nodes A and B hold only the same metadata, namely the queue structure.
  • When a message is enqueued on node A and a consumer pulls it from node B, RabbitMQ transfers the message between the two nodes on the fly, fetching the message entity from node A and delivering it to the consumer via node B.
  • Therefore, consumers should connect to every node and fetch messages from each of them. In other words, for the same logical queue, physical queues should be established on multiple nodes; otherwise, whether consumers connect to A or B, the exit will always be on A, which creates a bottleneck.
  • Another problem with this mode is that when node A fails, node B cannot fetch the unconsumed message entities from node A.
  • If the messages are persisted, they can be consumed only after node A recovers; if they are not persisted, they are lost.

Mirror mode

  • This mode solves the problem above. It differs from normal mode in that message entities are actively synchronized between mirror nodes, rather than pulled on demand when a consumer fetches them.
  • The side effects of this mode are obvious: besides degrading system performance, if there are many mirrored queues and a large volume of incoming messages, the synchronization traffic will consume a great deal of the cluster's network bandwidth.
  • Therefore, this mode is suitable for scenarios that require high reliability.

In my opinion, mirroring mode is safer in a production environment.

Using mirror mode requires only a simple configuration, from either the management page or the command line. The page method is not covered here; instead, here is how to set it up from the command line, which takes a single command.

Add:

# rabbitmqctl set_policy -p testvhost testha "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
Setting policy "testha" for pattern "^" to "{"ha-mode":"all","ha-sync-mode":"automatic"}" with priority "0" for vhost "testvhost" ...

Remove:

# rabbitmqctl clear_policy -p testvhost testha
Clearing policy "testha" on vhost "testvhost" ...

To view:

# rabbitmqctl list_policies -p testvhost
Listing policies for vhost "testvhost" ...
vhost   name    pattern apply-to        definition      priority
testvhost       testha  ^       all     {"ha-mode":"all","ha-sync-mode":"automatic"}    0

Parameter Description:

  • Virtual host: the vhost to which the policy applies.

  • Name: the policy name. It can be anything, but an ASCII-based name without spaces is recommended.

  • Pattern: a regular expression that matches the names of one or more queues (or exchanges). Any regular expression can be used; a single ^ matches everything, while ^test matches exchanges or queues whose names start with "test".

  • Apply to: the type of object the pattern applies to.

  • Priority: determines which of multiple matching policies wins; a larger value means a higher priority. Messages without a specified priority are treated as priority 0, and messages exceeding the queue's maximum priority are treated as having the maximum priority.

  • Definition: key/value pairs that are inserted into the optional parameter map of matching queues and exchanges.

    Ha-mode: the policy key. There are three modes:

    • all: mirror on all nodes in the cluster.
    • exactly: mirror on a fixed number of nodes (requires the ha-params parameter, an int; for example, 3 mirrors the queue on 3 random nodes in the cluster).
    • nodes: mirror on specific nodes (requires the ha-params parameter, an array; for example, ["rabbit@rabbit2", "rabbit@rabbit3"] specifies the rabbit2 and rabbit3 nodes).

    Ha-sync-mode: the queue synchronization mode:

    • manual: manual synchronization (the default). A new queue mirror will not receive existing messages; it only receives new ones. An existing queue can be synchronized explicitly, as shown in the example after this list.
    • automatic: automatic synchronization. When a new mirror is added, the queue synchronizes automatically. Queue synchronization is a blocking operation.
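
For illustration, here is how an exactly policy and a manual synchronization might look (a minimal sketch: testqueue is a hypothetical queue name, and sync_queue is the rabbitmqctl command that triggers manual synchronization):

# rabbitmqctl set_policy -p testvhost testha "^" '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"manual"}'
# rabbitmqctl sync_queue -p testvhost testqueue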

That’s all for this article. Please leave your comments.