RabbitMQ is based on the AMQP protocol, so applications written in different languages can exchange messages over a common wire protocol.

A closer look at the protocol

Core concepts

Server: also known as the broker; accepts client connections and implements the AMQP entity services.

Connection: a network connection between the application and a specific broker.

Channel: the network channel in which almost all operations take place; messages are read and written through channels. A client can establish multiple channels, each representing a session task.

Message: the data passed between the server and the application, consisting of properties and a body. Properties let you decorate messages with advanced features such as message priority and delay; the body is the actual message content.

Virtual host: used for logical isolation; it is the top level of message routing. A virtual host can contain multiple exchanges and queues, but it cannot contain two exchanges or two queues with the same name.

Exchange: receives messages and forwards them to bound queues based on routing keys.

Binding: a virtual link between an exchange and a queue. A binding can include a routing key.

Routing key: a routing rule that the exchange uses to determine how to route a message.

Queue: Message Queue, used to store messages.

Exchange

![](https://upload-images.jianshu.io/upload_images/20887676-e8c0c9ca8dff41a2?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

Exchange types: direct, topic, fanout, headers. Durable: whether the exchange is persisted (true means persisted). Auto-delete: when the last queue bound to the exchange is removed, the exchange itself is deleted.

Direct exchange: a message sent to a direct exchange is forwarded to the queue whose binding key exactly matches the routing key. The default exchange is a direct exchange that implicitly binds every queue by its queue name, so a message can be routed simply by using the queue name as the routing key; otherwise the producer's routing key must exactly match the binding key used by the consumer.

Topic exchange: a message sent to a topic exchange is forwarded to every queue whose binding pattern matches the routing key. The exchange performs a pattern match between the routing key and the binding: "#" matches zero or more words, and "*" matches exactly one word. For example, "log.#" matches "log.info.test", while "log.*" matches only keys such as "log.error".
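The wildcard matching described above can be sketched as a small recursive function. This is an illustrative re-implementation of topic-matching semantics, not RabbitMQ's actual code:

```python
def topic_matches(binding_key: str, routing_key: str) -> bool:
    """Simplified topic matching: '*' matches exactly one word,
    '#' matches zero or more words. Words are separated by '.'."""
    def match(pattern, words):
        if not pattern:
            return not words          # both exhausted -> match
        head, rest = pattern[0], pattern[1:]
        if head == "#":
            # '#' may consume zero or more words
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        if words and (head == "*" or head == words[0]):
            return match(rest, words[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))
```

For example, `topic_matches("log.#", "log.info.test")` is true, while `topic_matches("log.*", "log.info.test")` is false because `*` consumes only one word.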

Fanout exchange: ignores routing keys entirely; queues are simply bound to the exchange, and every message sent to the exchange is delivered to every bound queue. Fanout forwarding is the fastest of the three.

How to ensure 100% delivery of messages

What does reliable delivery mean on the producer side?

Ensure successful delivery of messages

Ensure that the MQ node is successfully received

Ensure that the producer receives an acknowledgement from the MQ node (broker)

Improve the message compensation mechanism

Reliability delivery guarantee scheme

Scheme 1: persist the message to the database and mark its status

![](https://upload-images.jianshu.io/upload_images/20887676-2e037ca7f590bc48?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
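The "persist and mark" scheme can be sketched as follows. All class and function names here are illustrative; the in-memory `MessageStore` stands in for a real database table:

```python
# Status values for a persisted message
SENDING, CONFIRMED = 0, 1
MAX_RETRY = 3

class MessageStore:                       # stands in for a database table
    def __init__(self):
        self.rows = {}                    # msg_id -> {"status", "tries", "body"}
    def save(self, msg_id, body):
        self.rows[msg_id] = {"status": SENDING, "tries": 0, "body": body}
    def mark_confirmed(self, msg_id):
        self.rows[msg_id]["status"] = CONFIRMED

def publish(store, broker, msg_id, body):
    store.save(msg_id, body)              # step 1: message falls into the database
    broker.append((msg_id, body))         # step 2: actually publish to MQ

def on_confirm(store, msg_id):
    store.mark_confirmed(msg_id)          # step 3: broker ack flips the status

def retry_job(store, broker):
    # step 4: a scheduled task re-sends messages still marked SENDING,
    # up to a maximum retry count
    for msg_id, row in store.rows.items():
        if row["status"] == SENDING and row["tries"] < MAX_RETRY:
            row["tries"] += 1
            broker.append((msg_id, row["body"]))
```

Messages that never get confirmed within `MAX_RETRY` attempts would then be handed to a manual compensation process.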

Delayed delivery of messages

![](https://upload-images.jianshu.io/upload_images/20887676-9a7cc2f2a03b9669?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

In high-concurrency scenarios, every DB operation costs performance, so we use a delay queue to save one database operation.

Message idempotency

If we perform an action 100 or 1000 times, the result must be the same every time for it to be idempotent. For example, a single thread executing an update such as count - 1 a thousand times in sequence always produces the same final result; but a thousand concurrent count - 1 updates without thread-safe handling may not, so a concurrent update operation is not idempotent. Applied to message queues, idempotency means that even if we receive the same message multiple times, the effect is the same as consuming it once.

How to avoid repeated message consumption in high concurrency situations

Option 1: unique ID + fingerprint code, deduplicated via a database primary-key constraint.

Advantages: Simple implementation

Disadvantages: Data write bottlenecks under high concurrency.

Option 2: use the atomicity of Redis (e.g. an atomic SETNX check) to implement it.

Using Redis for idempotency raises some questions:

If the data is also written to the database, how do we keep the database and the cache consistent (how can the Redis write and the database write both succeed or both fail)?

If the data is not written to the database but kept entirely in Redis, what is the strategy for synchronizing Redis back to the database? And can the cache write succeed 100% of the time?
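The SETNX-style atomic check mentioned above can be sketched like this. The `seen` dict is an illustrative stand-in for Redis; real code would call something like `redis.set(msg_id, 1, nx=True, ex=ttl)`:

```python
def setnx(seen: dict, key: str) -> bool:
    """Stand-in for Redis SETNX: set the key only if absent,
    return whether we were the one to set it."""
    if key in seen:
        return False
    seen[key] = 1
    return True

def consume(seen, processed, msg_id, body):
    if not setnx(seen, msg_id):      # duplicate delivery: skip it
        return False
    processed.append(body)           # business logic runs exactly once
    return True
```

A second delivery of the same `msg_id` fails the SETNX check, so the business logic is never executed twice.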

Confirm (message confirmation) and Return (message return)

Understanding the Confirm message-acknowledgement mechanism

Message confirmation means that after the producer publishes a message, once the broker has received it, the broker sends an acknowledgement back to the producer, telling it that the message arrived.

How to enable confirm acknowledgements:

Enable confirm mode on the channel: channel.confirmSelect()

Add a listener to the channel via addConfirmListener; it receives the success (ack) and failure (nack) results, on which messages can be resent or logged.
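The bookkeeping behind a confirm listener can be sketched as follows. This is an illustrative model of the semantics, not client-library code: each published message gets a sequence number, and an ack may carry a multiple flag, confirming every outstanding message up to that tag at once:

```python
class ConfirmTracker:
    def __init__(self):
        self.next_seq = 1
        self.outstanding = {}            # seq -> message body awaiting ack

    def publish(self, body):
        seq = self.next_seq
        self.next_seq += 1
        self.outstanding[seq] = body     # remember it until the broker confirms
        return seq

    def handle_ack(self, delivery_tag, multiple=False):
        # multiple=True confirms every outstanding tag <= delivery_tag
        tags = ([t for t in list(self.outstanding) if t <= delivery_tag]
                if multiple else [delivery_tag])
        for t in tags:
            self.outstanding.pop(t, None)

    def handle_nack(self, delivery_tag, multiple=False):
        # a real listener would resend or log the failed messages here
        pass
```

Anything left in `outstanding` after a timeout is a candidate for resending or logging.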

Return message mechanism

The Return message mechanism handles non-routable messages: the producer sends a message toward a queue by specifying an exchange and a routing key, and consumers listen to that queue for consumption.

In some cases, when a message cannot reach any queue, for example the specified routing key cannot be routed, we need a ReturnListener to catch these unreachable messages.

If mandatory is set to true, the listener receives unreachable messages and can process them; if set to false, the broker silently discards them.
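The mandatory flag's behaviour can be sketched with a toy router. This is an illustrative model (the names `bindings` and `returned` are invented for the sketch):

```python
def publish(bindings, routing_key, body, mandatory, returned):
    """Deliver body to every queue bound under routing_key.
    If nothing matches: return it to the producer when mandatory=True,
    otherwise drop it silently, mirroring the broker's behaviour."""
    queues = bindings.get(routing_key)
    if queues:
        for q in queues:
            q.append(body)
        return True
    if mandatory:
        returned.append((routing_key, body))   # the ReturnListener sees this
    # mandatory=False: the message is simply discarded
    return False
```
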

Custom listening on the consumer side

Rate limiting on the consumer side

Suppose we have a RabbitMQ server holding tens of thousands of unconsumed messages. The moment we open a consumer client, a huge burst of messages is pushed at once, but the consumer cannot handle them all at the same time.

This will cause your service to crash. There are other situations where there is a mismatch between your producer and consumer capabilities, where the producer side generates a lot of messages and the consumer side cannot consume as many messages in a high concurrency situation.

RabbitMQ provides a quality-of-service (QoS) feature: provided automatic acknowledgement is disabled, the broker will not push more than a set number of unacknowledged messages (the QoS can be set per consumer or per channel).

void basicQos(int prefetchSize, int prefetchCount, boolean global)

prefetchSize: size limit for a single message; 0 means no limit (it is usually left unlimited).

prefetchCount: a fixed value telling RabbitMQ not to push more than N unacknowledged messages to a consumer at a time. Once N messages are outstanding, delivery pauses until the consumer acks some of them.

global: whether the settings above apply at the channel level (true) or the consumer level (false).
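The prefetchCount behaviour can be sketched with a toy broker model. This is an illustrative simulation of the semantics, not client code:

```python
class QosBroker:
    """Stops pushing once `prefetch` messages are delivered but unacked;
    an ack frees a slot and delivery resumes."""
    def __init__(self, prefetch):
        self.prefetch = prefetch
        self.pending = []        # messages waiting on the broker
        self.unacked = []        # delivered but not yet acknowledged
        self.delivered = []      # everything pushed so far, in order

    def enqueue(self, msg):
        self.pending.append(msg)
        self._push()

    def ack(self, msg):
        self.unacked.remove(msg)
        self._push()             # a freed slot lets the next message through

    def _push(self):
        while self.pending and len(self.unacked) < self.prefetch:
            msg = self.pending.pop(0)
            self.unacked.append(msg)
            self.delivered.append(msg)
```

With prefetch = 2, enqueuing four messages delivers only the first two; each ack releases exactly one more.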

Consumer ACK and requeue

On the consumer side, if consumption fails because of a business exception, we can log the record and compensate later (a maximum retry count can also be added).

For serious problems such as a server crash, we need manual ack to guarantee that consumption on the consumer side actually succeeded.

Message back to queue

Requeueing means sending a message back to the broker when it has not been processed successfully.

Requeueing is usually not enabled in practice.

TTL queue/message

TTL (Time To Live) is the lifetime of a message or queue.

Support message expiration time, which can be specified when the message is sent.

Supports a queue-level expiration time: once a message has been in the queue longer than the queue's timeout, it is cleared automatically.
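The expiry rule can be sketched as a simple sweep over (enqueue-time, message) pairs. This is an illustrative model of the behaviour, not broker code:

```python
def is_expired(enqueued_at: float, ttl_seconds: float, now: float) -> bool:
    """A message is dead once it has sat in the queue longer than the TTL."""
    return (now - enqueued_at) > ttl_seconds

def sweep(queue, ttl_seconds, now):
    """Split a queue of (enqueued_at, msg) pairs into live and expired,
    as the broker does for a queue-level TTL."""
    live = [(t, m) for (t, m) in queue if not is_expired(t, ttl_seconds, now)]
    expired = [m for (t, m) in queue if is_expired(t, ttl_seconds, now)]
    return live, expired
```

In RabbitMQ, expired messages are either dropped or dead-lettered (see the dead-letter queue below), depending on configuration.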

Dead-letter queue

Dead Letter queue: DLX, dead-letter-exchange

With DLX, when a message becomes a dead letter in a queue (i.e. it will never be consumed by any consumer), it can be republished to another exchange, the DLX.

There are several ways in which a message can become dead letter:

Message rejected (basic.reject/basic.nack) with requeue=false (not requeued)

Message TTL expired

The queue has reached its maximum length

A DLX is an ordinary exchange, no different from any other; it can be specified on any queue, which in practice means setting an attribute on that queue.

When a message in the queue becomes dead, RabbitMQ automatically republishes it to the DLX, which routes it to another queue. Listening to that queue lets us process the message and compensate, which replaces the immediate parameter RabbitMQ previously supported.

Dead letter queue setup

Set up Exchange and Queue, then bind

Exchange: dlx.exchange (custom name)

Queue: dlx.queue (custom name)

Routing key: # (# matches any routing key, so every dead letter is routed)

We then declare the switch, queue, and bind normally, but we add a parameter to the queue:

arguments.put("x-dead-letter-exchange", "dlx.exchange");
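The dead-lettering flow can be sketched with a toy model. This is illustrative only; `declare_queue`, `reject`, and the `exchanges` dict are invented names that mimic the broker's behaviour:

```python
def declare_queue(arguments=None):
    """A queue carries its arguments, including an optional DLX."""
    return {"messages": [], "arguments": arguments or {}}

def reject(queue, msg, exchanges, requeue=False):
    """basic.reject/basic.nack: with requeue=False the message is
    dead-lettered to the queue's x-dead-letter-exchange, if one is set."""
    if requeue:
        queue["messages"].append(msg)    # goes back to the original queue
        return
    dlx = queue["arguments"].get("x-dead-letter-exchange")
    if dlx:
        exchanges[dlx].append(msg)       # republished to the DLX
    # no DLX configured: the message is simply dropped
```
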

RabbitMQ Cluster mode

Active/standby mode: implements a RabbitMQ high-availability cluster, also known as Warren mode. It is a good fit when concurrency and data volume are small. (Note the distinction from master-slave mode: in master-slave mode the master handles writes while slaves serve reads, whereas in active/standby mode the standby node serves no reads or writes and only acts as a backup.) If the active node fails, the standby node automatically takes over and provides service.

Cluster mode: the classic form is mirror (mirrored-queue) mode, which guards against data loss and is relatively simple to implement.

Mirrored queues are RabbitMQ's highly available data-synchronization solution, typically run with 2-3 nodes (3 nodes for a 100% message-reliability setup).

![](https://upload-images.jianshu.io/upload_images/20887676-7870f578d721b54e?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

Shovel mode is mainly used for remote replication across sites; together with RabbitMQ federation, it provides consistent and reliable transfer of AMQP messages between brokers.

The RabbitMQ deployment architecture uses a dual-center (or multi-center) mode: a RabbitMQ cluster is deployed in each of two or more data centers, each center's RabbitMQ service provides normal message service, and certain queues' messages are shared between centers.

The multi-live architecture is as follows:

![](https://upload-images.jianshu.io/upload_images/20887676-f200441d2443d8a8?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

The Federation plugin is a high-performance plugin that transmits messages between brokers or clusters without requiring them to form a cluster. The two sides can use different users or virtual hosts, and even different versions of Erlang or RabbitMQ. The plugin uses the AMQP protocol for communication and tolerates intermittent connectivity.

![](https://upload-images.jianshu.io/upload_images/20887676-e5e9203cc84eadb3?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

With federated exchanges, the downstream in effect pulls messages from the upstream proactively, but not all of them: only messages for which binding relationships have been explicitly defined on the downstream exchange (that is, for which an actual physical queue exists to receive them) are pulled from the upstream to the downstream.

Using the AMQP protocol for inter-broker communication, the downstream consolidates its binding relationships and sends bind/unbind commands to the upstream exchange.

Therefore, a federated exchange only receives messages for which subscriptions exist.

HAProxy is free, fast, and reliable software that provides high availability, load balancing, and proxying for TCP (layer 4) and HTTP (layer 7) applications, with support for virtual hosting.

HAProxy is especially suited to heavily loaded web sites that need persistence or layer-7 processing. Running on current hardware, it can support tens of thousands of concurrent connections, and its mode of operation makes it easy and safe to integrate into an existing architecture while keeping web servers from being exposed directly to the network.

Why does HAProxy perform so well?

A single-process, event-driven model significantly reduces context-switching overhead and memory footprint.

Wherever available, single buffering can complete read and write operations without copying any data, saving significant CPU clock cycles and memory bandwidth

Using the splice() system call on Linux 2.6 (>= 2.6.27.19), HAProxy can perform zero-copy forwarding, and on Linux 3.5 and above, zero-copy startup.

Memory allocators allow instant memory allocation in a fixed-size memory pool, which can significantly reduce the time it takes to create a session

Tree storage: uses the elastic binary tree (ebtree) the author developed years ago to achieve O(log(N)) operations with low overhead, used to keep timers and the run queue ordered and to manage the round-robin and least-connection queues.

Keepalived

Keepalived implements high availability mainly through the VRRP protocol. VRRP (Virtual Router Redundancy Protocol) was designed to solve the single point of failure of static routing.

When an individual node goes down, the network as a whole keeps running. On one hand, Keepalived provides configuration and management of LVS and health checks of the nodes behind LVS; on the other hand, it also provides high availability for system network services.

What Keepalived does

Manage LVS load balancing software

Implement health checks of LVS cluster nodes

Provide failover for system network services

How does Keepalived achieve high availability

Keepalived implements failover between pairs of highly available services via VRRP (Virtual Router Redundancy Protocol).

While Keepalived is working correctly, the Master node sends heartbeat messages to the standby node continuously (in multicast mode) to tell the standby node that it is still alive. When the Master node fails, it cannot send heartbeat messages. Therefore, the standby node cannot detect the heartbeat of the Master node and invokes its own takeover program to take over the IP resources and services of the Master node.

When the Master node recovers, the Backup node releases the IP resources and services it took over and reverts to its original standby role.
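The heartbeat-driven takeover and release described above can be sketched as a small state machine. This is an illustrative model of the behaviour, not Keepalived code; the class and field names are invented:

```python
TIMEOUT = 3   # consecutive missed heartbeats before takeover

class Backup:
    """Backup node: takes over the virtual IP when it stops hearing the
    master's heartbeats, and releases it when the master returns."""
    def __init__(self):
        self.missed = 0
        self.holds_vip = False

    def tick(self, heartbeat_received: bool):
        if heartbeat_received:
            self.missed = 0
            if self.holds_vip:
                self.holds_vip = False   # master is back: release the VIP
        else:
            self.missed += 1
            if self.missed >= TIMEOUT:
                self.holds_vip = True    # take over the master's IP/services
```

The essential point is that takeover is triggered purely by the absence of heartbeats, so the backup needs no explicit notification that the master died.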