Today I want to talk about message queues. Hongjue can probably guess how people react when they hear the term "message queue"; the reactions roughly fall into the following categories.

The first group are beginners who have just started programming in college, have never used a message queue, and may even think a message queue is just code that calls new on a List or something. The second group have heard of message queues and have a rough idea of them, but don't really know what they are beyond the three phrases that come to mind whenever message queues are mentioned: peak shaving, asynchrony, and decoupling. The third group have used message queues and understand them to some extent, but don't know why they were designed this way, what their history is, or how they evolved into what they are today. **The fourth group already know message queues well and are reading this article as a review and refresher.** Which group do you belong to? No matter how much you already know about message queues, you'll learn something by the end of this article.

What is a message queue? Why do we use message queues at all? Is it really just to look impressive? Of course not. A technology usually emerges to solve a pain point, so let's start from that pain point and see what problem message queues were born to solve.

I'd guess that before starting work, or early in your career, you mostly dealt with single-machine systems: no matter what the business was, everything was crammed into one application, and in that situation you rarely ran into message queues. As the business grows and traffic increases, though, a monolithic system becomes hard to maintain and can no longer handle the rising concurrency, so the original single application has to be split into multiple services. Niuke, for example, uses a distributed architecture that splits the original monolith into a user service, a question-bank service, a job-search service, a discussion-forum service, and so on, with each node backed by a cluster to ensure high availability.

Even with such a microservices architecture, if the concurrency of a core business is too high, the system still can't handle it. Take the payment scenario on e-commerce platforms such as Taobao, Pinduoduo, or JD.com. You place an order on Taobao and pay, which calls the payment service; after the payment completes, the order status also needs to be updated, which calls the order service. Beyond these basic steps, placing an order usually also earns you reward points; the merchant receives the order notification and sends you a message confirming the order is correct; you can check your shipping status; and to recommend more suitable products, the system generates similar-item recommendations based on your order, and so on. And these are just the actions we can see with the naked eye after placing an order.

**If a single payment action has to call all of these services, wait for every one of them to respond successfully, and only then tell the user that the payment succeeded, the user's experience of the system will be very poor.** Suppose each call (sending the request + processing it + returning the response) takes about 50ms. The services listed above, payment, order, points, merchant notification, logistics, and recommendation, then add up to roughly 300ms in total.
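To make that arithmetic concrete, here is a minimal, purely illustrative sketch in Java. The 50ms per call and the service names are just the assumptions from the example above, not measurements from any real system:

```java
// Purely illustrative: simulates a payment flow that calls every downstream
// service synchronously, each taking ~50ms, so the user waits for the sum.
public class SynchronousPaymentDemo {

    // Stand-in for a remote call; the 50ms figure is the assumption from the text.
    static void callService(String name) throws InterruptedException {
        Thread.sleep(50);
        System.out.println("finished " + name);
    }

    public static void main(String[] args) throws InterruptedException {
        String[] services = {
                "payment", "order", "points", "merchant-notify", "logistics", "recommendation"
        };

        long start = System.currentTimeMillis();
        for (String s : services) {
            callService(s);            // each downstream call blocks the user
        }
        long elapsed = System.currentTimeMillis() - start;

        // Roughly 6 * 50ms = 300ms before the user ever sees "payment succeeded"
        System.out.println("user waited ~" + elapsed + " ms");
    }
}
```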

And that 300ms assumes every call succeeds. Suppose the payment service fails to call the points service and the retry also fails (perhaps the points service was briefly unavailable); the whole payment might then be treated as a failure. The payment service would feel wronged: I only handle payments, it was your points service that failed, and yet it's Taobao's GMV that takes the hit. At the root of it, the system is not decoupled. And this doesn't even account for high concurrency: when the system cannot handle that many requests at once, the response time the user sees multiplies, and a call chain this long can even bring the whole system down.

Imagine that what looks to you like a simple payment has to wait a long time for the system to respond. Is that a good user experience? Users will start to wonder whether the system is broken, and no serious e-commerce platform can afford that; it hands competitors an opportunity.

So is there a technology that solves this problem, where the payment service only cares about whether you paid for the order, while the points service, logistics service, and the rest still get handled eventually, but can never affect whether the payment itself succeeds?

To address this pain point, message queues were born. Your system isn't decoupled? I'll decouple it for you. You're calling other services synchronously? I'll make those calls asynchronous. Your concurrency is too high? I'll buffer the requests for you, and you take them from me when you can handle them. Message queues exist precisely to achieve peak shaving, asynchrony, and decoupling.
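To make this concrete, here is a minimal sketch of what "publish the event and return at once" could look like with the RocketMQ Java client. The topic name PAY_SUCCESS, the producer group name, and the name server address are all assumptions made up for illustration, not anything from a real system:

```java
import java.nio.charset.StandardCharsets;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

// Illustrative producer: after the payment itself succeeds, the payment service
// just publishes a "payment succeeded" event and returns to the user right away.
// Points, logistics, recommendations, etc. consume this event later, at their own pace.
public class PaymentEventProducer {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("payment_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876"); // assumed local name server
        producer.start();

        // The message body carries whatever downstream services need, e.g. the order id.
        Message msg = new Message("PAY_SUCCESS", "order",
                "orderId=10001".getBytes(StandardCharsets.UTF_8));

        SendResult result = producer.send(msg);   // once the broker stores it, the producer is done
        System.out.println("event published: " + result.getSendStatus());

        producer.shutdown();
    }
}
```

The point of the sketch is the shape of the interaction: the payment flow only has to wait for the broker to accept one message, instead of waiting for six downstream services to each respond in turn.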

So what exactly is a message queue? There's a very vivid analogy. Everyone has taken exams, whether in elementary school, middle school, high school, or college. Simply put, there are three parties around a message queue: the producer, the message queue itself, and the consumer. The producer is the student: the student produces a message, and the message is the answer sheet you fill in. When the exam ends (whether or not you've actually finished), you hand your paper to the invigilator. The invigilator acts as the server that stores messages in a particular way, that is, the message queue. The invigilator takes your paper but doesn't grade it on the spot (well, some exams are graded on site…). Once you hand the paper over and the invigilator nods, your answer sheet has been received; you can leave with peace of mind, which is the equivalent of your message being produced and successfully written into the message queue. The consumer is the teacher who grades the papers: the grading teacher takes messages (papers) from the message queue (the invigilator hands over a stack of papers) and consumes them (grades them). Papers that haven't been graded yet stay in the message queue, waiting for the grading teacher to consume them.

So a message queue is essentially a relay station: I don't care when you consume the message; I've already sent it and put it there, and you can take it whenever you're ready. It's like package delivery: if every package had to be handed to you at your door, some deliveries would always fail (you're not home, and so on). Put the package in a parcel locker instead, and delivery and pickup are decoupled.

With these two examples, you should already have a good sense of what a message queue is.

Back to the payment scenario above: to cope with high concurrency, the message is stored in a message queue. Consumers and producers don't know about each other. I put a message into the queue, and it doesn't matter to me who consumes it: that's decoupling. The producer doesn't have to wait for the consumers to finish before returning success (after the payment completes, it doesn't matter yet whether the points have been added, and so on): that's going from synchronous to asynchronous.

By now you probably have a general idea of what a message queue is. Once we learn how to use one, we should also explore how it works under the hood. Sir Hong is going to take you on a tour of RocketMQ. Ready?
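As a small taste of that tour, here is a minimal sketch of what the consumer side might look like with the RocketMQ Java client, say a points service reacting to the payment event published earlier. The group name, topic, and name server address are again assumptions for illustration only:

```java
import java.util.List;

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;

// Illustrative consumer: a points service that subscribes to the payment event
// and awards points whenever it gets to it, without the payment flow waiting on it.
public class PointsEventConsumer {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("points_consumer_group");
        consumer.setNamesrvAddr("127.0.0.1:9876");   // assumed local name server
        consumer.subscribe("PAY_SUCCESS", "*");      // same hypothetical topic as the producer

        consumer.registerMessageListener((MessageListenerConcurrently)
                (List<MessageExt> msgs, ConsumeConcurrentlyContext ctx) -> {
            for (MessageExt msg : msgs) {
                // In a real service this is where points would be credited.
                System.out.println("award points for: " + new String(msg.getBody()));
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS; // ack; otherwise the broker redelivers
        });

        consumer.start();
        System.out.println("points consumer started, waiting for payment events...");
    }
}
```

If the points service is briefly down, the unconsumed messages simply wait in the queue, just like the ungraded papers in the exam analogy.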

I hope everyone reads this article with a healthy dose of skepticism and keeps digging into how things actually work.

The road ahead is long and hard; what's past is prologue, and the future is the next chapter.

Looking forward to our next meeting!