• Q: Services interact with each other through MQ. When a consumer hits an unexpected exception, message consumption fails. How should this be handled to ensure reliable message consumption?

  • A:

    • Consider the following

      • When to ack

        • Ack whether consumption succeeds or fails: messages do not accumulate in MQ

        • Ack only on success: failed messages accumulate in MQ
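The two ack strategies can be sketched as follows. This is illustrative plain Java, not real RabbitMQ client code: `Channel` is a stand-in for the broker channel, and the failure-recording hook is a placeholder.

```java
// Stand-in for the broker channel (not the real RabbitMQ client API).
interface Channel {
    void ack(long deliveryTag);
    void nack(long deliveryTag, boolean requeue);
}

class AckDemo {
    // Strategy 1: always ack, so nothing accumulates in MQ; failures must
    // therefore be recorded locally (e.g. in a DB) for later compensation.
    static void alwaysAck(Channel ch, long tag, Runnable handler) {
        try {
            handler.run();
        } catch (RuntimeException e) {
            // record the failure somewhere for compensation
        } finally {
            ch.ack(tag); // acked regardless of outcome
        }
    }

    // Strategy 2: ack only on success; on failure the message is requeued
    // and accumulates in MQ until it is eventually consumed.
    static boolean ackOnSuccess(Channel ch, long tag, Runnable handler) {
        try {
            handler.run();
            ch.ack(tag);
            return true;
        } catch (RuntimeException e) {
            ch.nack(tag, true); // requeue => stays in MQ
            return false;
        }
    }
}
```

Strategy 1 shifts reliability to the consumer's own log; strategy 2 shifts it to the broker, at the cost of possible queue buildup.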

      • Consumer-side message log

        • Persist the message to the database as soon as it is received

          • If persisting fails, retry the insert

          • Once persisted successfully, run the subsequent business logic

        • Deduplicate by message ID: the same message cannot be inserted twice, but its status (e.g. retry count, execution state) can be updated

        • Record where the message came from

          • Queue name

          • The exchange

          • The routing key

          • The message data

        • State

          • Retry state
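A minimal sketch of the message log described above, with a `HashMap` standing in for the DB table (in practice, a table with a unique constraint on the message ID). All names and state strings here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class MessageLog {

    // One row per message: origin fields are immutable, status fields are not.
    static class Record {
        final String queue, exchange, routingKey, body;
        int retryCount = 0;
        String state = "RECEIVED";
        Record(String queue, String exchange, String routingKey, String body) {
            this.queue = queue; this.exchange = exchange;
            this.routingKey = routingKey; this.body = body;
        }
    }

    private final Map<String, Record> table = new HashMap<>(); // stands in for the DB

    // Insert-if-absent: a redelivered message with the same ID is not inserted again.
    public boolean save(String messageId, String queue, String exchange,
                        String routingKey, String body) {
        return table.putIfAbsent(messageId,
                new Record(queue, exchange, routingKey, body)) == null;
    }

    // On a failed attempt, only the status fields are updated.
    public void markFailure(String messageId) {
        Record r = table.get(messageId);
        r.retryCount++;
        r.state = "RETRYING";
    }

    public Record get(String messageId) { return table.get(messageId); }
}
```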
      • Consumption failure retry

        • Principle:

          • Messages can be re-consumed

          • You can customize the retry interval and retry times

          • Retries should work in a cluster: if a node fails, another node can pick up the retry so the business logic completes as soon as possible

          • Record the retry status

          • When the retry limit (duration or attempt count) is reached, update the record's state and, where appropriate, notify a human to intervene
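The retry principles above can be sketched as a plain loop. Names and state strings are illustrative, not from any framework; the comment marks where the retry-status record would be updated.

```java
import java.util.function.Consumer;

class RetryPolicy {
    final int maxAttempts;
    final long intervalMillis;

    RetryPolicy(int maxAttempts, long intervalMillis) {
        this.maxAttempts = maxAttempts;
        this.intervalMillis = intervalMillis;
    }

    // Re-consume the message up to maxAttempts times with a fixed delay;
    // returns the message's final state.
    String consumeWithRetry(String message, Consumer<String> handler) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                handler.accept(message);
                return "SUCCESS";
            } catch (RuntimeException e) {
                // record the retry status here (attempt number, last error, ...)
                if (attempt < maxAttempts) {
                    try { Thread.sleep(intervalMillis); }
                    catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
        // Retry limit reached: update the record and notify a human.
        return "NEEDS_MANUAL_INTERVENTION";
    }
}
```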

        • Alternatives

          • RabbitMQ has its own retry mechanism (detailed below)

          • Retry using the dead-letter mechanism

            • If the service is down or disconnected from the network, retrying cannot continue; the failure can only be recorded in the database to await compensation
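For the dead-letter route, the retry queue is typically declared with the standard RabbitMQ queue arguments shown below (`x-dead-letter-exchange`, `x-dead-letter-routing-key`, `x-message-ttl`). The exchange and routing-key values here are made-up examples.

```java
import java.util.HashMap;
import java.util.Map;

class DeadLetterArgs {
    // Arguments for a "retry holding" queue: messages sit here for the retry
    // delay, then expire and are routed back via the dead-letter exchange.
    static Map<String, Object> retryQueueArgs() {
        Map<String, Object> args = new HashMap<>();
        // Expired/rejected messages are routed to this exchange...
        args.put("x-dead-letter-exchange", "order.retry.exchange");
        // ...with this routing key...
        args.put("x-dead-letter-routing-key", "order.retry");
        // ...after sitting in the queue for 10 seconds (the retry delay).
        args.put("x-message-ttl", 10_000);
        return args;
    }
}
```

Because the delay lives in the broker rather than in consumer memory, this scheme survives consumer restarts, which is exactly what the in-memory retry cannot do.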
      • Compensation

        • Runs as a scheduled task

        • Compensates based on messages in the final-failure state; the compensation logic itself must be implemented by the business side
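The compensation step can be sketched as below (all names are illustrative): a job scans for messages left in the final-failure state and hands each one to a business-supplied callback.

```java
import java.util.List;
import java.util.function.Predicate;

class Compensator {
    static class FailedMessage {
        final String id, body;
        String state = "FAILED";
        FailedMessage(String id, String body) { this.id = id; this.body = body; }
    }

    // In production this would be a DB query run on a schedule (e.g. a
    // cron-style trigger); here it is a single pass over an in-memory list.
    static int compensate(List<FailedMessage> store, Predicate<FailedMessage> businessLogic) {
        int fixed = 0;
        for (FailedMessage m : store) {
            if (!"FAILED".equals(m.state)) continue;
            if (businessLogic.test(m)) {   // implemented by the business side
                m.state = "COMPENSATED";
                fixed++;
            }
        }
        return fixed;
    }
}
```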

  • RabbitMQ retry mechanism

    • With spring-rabbit, retries are performed in memory on the current consumer node; the message is not returned to the broker between attempts

      • Under the hood it uses the spring-retry library
    • Capabilities

      • Customizable retry intervals

        • Initial interval

        • Back-off multiplier (exponential factor)

        • Maximum interval

        • Maximum number of attempts
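How the four knobs interact, sketched as a plain computation: each interval is the previous one times the multiplier, capped at the maximum interval, with the number of intervals bounded by the maximum attempts.

```java
import java.util.ArrayList;
import java.util.List;

class Backoff {
    // Returns the sequence of delays between consecutive attempts.
    static List<Long> intervals(long initial, double multiplier,
                                long maxInterval, int maxAttempts) {
        List<Long> out = new ArrayList<>();
        double next = initial;
        // One delay between each pair of consecutive attempts.
        for (int i = 0; i < maxAttempts - 1; i++) {
            out.add((long) Math.min(next, maxInterval)); // cap at max interval
            next *= multiplier;
        }
        return out;
    }
}
```

With initial = 1000 ms, multiplier = 2, max interval = 10000 ms and 6 attempts, this yields delays of 1000, 2000, 4000, 8000, 10000 ms. In Spring Boot these knobs correspond to the `spring.rabbitmq.listener.simple.retry.*` properties (`initial-interval`, `multiplier`, `max-interval`, `max-attempts`).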

      • Retry process monitoring

        • Callback before the retry cycle starts

        • Callback on each failed attempt

        • Callback when the maximum number of retries is reached
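The three monitoring hooks can be sketched as below, loosely modeled on spring-retry's `RetryListener` (`open` / `onError` / `close`); the driver loop here is illustrative, not the spring-retry implementation.

```java
class MonitoredRetry {
    interface Listener {
        void open();                             // before the retry cycle starts
        void onError(int attempt, Exception e);  // on each failed attempt
        void exhausted();                        // max retries reached
    }

    // Runs the work up to maxAttempts times, firing the hooks along the way.
    static boolean run(int maxAttempts, Runnable work, Listener l) {
        l.open();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                work.run();
                return true;
            } catch (RuntimeException e) {
                l.onError(attempt, e);
            }
        }
        l.exhausted();
        return false;
    }
}
```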

      • When retries are exhausted, the message is handled by a MessageRecoverer policy

        • ImmediateRequeueMessageRecoverer

          • Requeues the message immediately

          • If there are other consumers, the message can be picked up by one of them

          • Otherwise the message retries endlessly; you need your own logic to break out of the loop

        • RejectAndDontRequeueRecoverer

          • Reject the message and do not re-queue

          • If the queue has a dead-letter exchange configured, the rejected message is routed there

          • Retries happen only on this node, until they are exhausted

        • RepublishMessageRecoverer

          • Republishes the message with exception information added to the headers (e.g. x-exception-stacktrace, x-exception-message)

          • A target exchange and routing key can be specified; otherwise default values are used

          • Consider processing failed messages asynchronously

      • How to update the message's status in the DB when retrying ends

        • If an attempt succeeds during retry, the consumer method can update the message's status directly

        • If consumption ultimately fails

          • A callback fires when the retry cycle is exhausted,
            • but that listener does not receive the message as a parameter
        • Exception handling errorHandler

          • This handler, which fires on each failure, can retrieve the message

            • Consider updating the message's retry count or status in the DB on each failure

            • The retry count then accumulates in the database,

            • but this increases DB pressure (one write per failed attempt)
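The error-handler approach can be sketched as below. This is illustrative, not the Spring `ErrorHandler` API: a hook that fires on every failed attempt, has access to the message, and bumps its retry count in the DB. The cost noted above is visible in the code: one DB write per failed attempt.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FailureTracker {
    // Stands in for the message-log table in the DB.
    private final Map<String, Integer> retryCounts = new ConcurrentHashMap<>();

    // Called from the error handler on each failed attempt; unlike the
    // retry-exhausted callback, the handler has the message (and its ID).
    void onFailure(String messageId) {
        retryCounts.merge(messageId, 1, Integer::sum); // one DB update per attempt
    }

    int retries(String messageId) {
        return retryCounts.getOrDefault(messageId, 0);
    }
}
```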