Summary: This article explains why Ping An Insurance chose RocketMQ, and how to go about choosing a messaging middleware once you have decided to use one.
Author: Sun Yuanyuan | Senior developer at Ping An Life Insurance
Why RocketMQ
First, I want to talk about why we chose RocketMQ. In technology selection, the application scenario should be the first thing to think through clearly: only when the scenario is clearly identified do you have concrete goals and measurement criteria. Features common to all message-oriented middleware, such as asynchronous communication, decoupling, and peak shaving, are not covered here; those features determine whether your scenario needs message-oriented middleware at all. What follows is how to choose which message-oriented middleware to use once you have decided to use one.
Synchronous dual-write keeps business data safe and reliable
The most important requirement for business data is that it must not be lost. The first reason we chose RocketMQ is therefore its synchronous dual-write mechanism: a send is reported successful only after the message has been flushed successfully on both the master and the slave. With synchronous dual-write enabled, MQ's write performance inevitably drops compared with asynchronous flush and asynchronous replication, by roughly 20%. Even so, under a single master-slave architecture, write throughput for 1 KB messages still reaches 80,000+ TPS, which fully meets the requirements of most business scenarios. The lost performance can also be compensated for by scaling brokers horizontally, so performance can still meet business requirements under synchronous dual-write.
Performance remains strong in multi-topic scenarios
Second, a business system has many usage scenarios, and the wider the usage, the more topics get created. You therefore need to measure whether the middleware's performance holds up in a multi-topic scenario. In my own tests, randomly writing 1 KB messages across 10,000 topics achieved around 20,000 TPS on a single broker, which is much better than Kafka in the same scenario. This strong multi-topic performance is the second reason we chose RocketMQ, and it is determined by the underlying file storage structure. Both Kafka and RocketMQ achieve near-memory read/write speed through sequential file I/O and memory mapping. The difference is that RocketMQ writes the messages of all topics into a single commitLog file, whereas Kafka organizes storage with the topic as the basic unit, each independent of the others. In a multi-topic scenario Kafka therefore creates a large number of small files, and reading and writing many small files involves extra addressing, somewhat like random I/O, which hurts overall performance.
Supports transaction messages, sequential messages, delayed messages, message consumption failure retries, and so on
RocketMQ supports transaction messages, sequential messages, delayed messages, and retry on consumption failure, making it well suited to complex and changing business scenarios.
Active community, part of Alibaba's open source ecosystem
In addition, when choosing message-oriented middleware, consider community activity and the language the source code is written in. RocketMQ is developed in Java, which is friendly to Java developers: it is easier to read the source code to troubleshoot problems and to do secondary development on top of MQ. Most community members are based in China, which lowers the barrier to contributing, and we hope more people will join in contributing to this homegrown open source project.
Introduction and application of SPI mechanism
After explaining why RocketMQ was chosen, let's look at how we applied RocketMQ based on the SPI mechanism. SPI, short for Service Provider Interface, is a service discovery mechanism built into the JDK. My personal understanding is that it is interface-oriented programming that leaves an extension point for users. spring.factories in Spring Boot is also an application of the SPI idea. The diagram shows an application of SPI within RocketMQ, and our SPI-based RocketMQ client was inspired by exactly this usage: RocketMQ implements ACL permission validation through the AccessValidator interface and ships a default implementation. Because permission verification may be implemented differently in different organizations, the SPI mechanism exposes the interface as an extension point for custom development: when you need customization, you only need to re-implement AccessValidator, without doing much to the source code.
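To make the mechanism concrete, here is a minimal sketch of JDK SPI with `ServiceLoader`, the same pattern RocketMQ uses for `AccessValidator`. The `Validator` interface and `DefaultValidator` class are illustrative stand-ins, not RocketMQ's actual API; with no provider registered under META-INF/services, the loader finds nothing and we fall back to the default implementation.

```java
import java.util.Iterator;
import java.util.ServiceLoader;

// Hypothetical validator interface, modeled on RocketMQ's AccessValidator SPI.
interface Validator {
    String name();
}

// The default implementation, used when no custom provider is registered.
class DefaultValidator implements Validator {
    public String name() { return "default"; }
}

public class SpiDemo {
    public static Validator loadValidator() {
        // ServiceLoader scans META-INF/services/<interface FQN> on the
        // classpath for provider class names. A custom implementation is
        // picked up simply by registering it there, no source changes needed.
        Iterator<Validator> it = ServiceLoader.load(Validator.class).iterator();
        return it.hasNext() ? it.next() : new DefaultValidator();
    }

    public static void main(String[] args) {
        System.out.println(loadValidator().name());
    }
}
```

The key point is that the caller depends only on the interface; swapping the implementation is purely a packaging concern.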
Here is a simple model of our configuration file. Except for sendMsgService, consumeMsgConcurrently, and consumeMsgOrderly, everything in it is native RocketMQ configuration. Those three items, covering sending and consuming messages, are where the SPI mechanism applies: they name the interface implementations to load. Some of you may wonder whether SPI configuration files have to live under the META-INF/services directory. For ease of configuration management, we simply put them together with the MQ configuration; META-INF/services is only the default path, and changing it for easier administration does not violate the SPI mechanism.
proConfigs supports all native MQ configuration items, which decouples the configuration from the application implementation: the application only needs to focus on its business logic. Both the producer and consumer implementations and the topics a consumer subscribes to can be specified through the configuration file. The file also supports multiple nameserver environments, so a more complex application can send messages to, and consume messages from, several different RocketMQ environments. The consumer side provides two interfaces mainly to support RocketMQ's concurrent and orderly consumption. Next I will share how producers and consumers are initialized from this configuration file, starting with the core client process we abstracted out.
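The configuration model described above might look roughly like the following; every key name, class name, and address here is an illustrative assumption, not the actual Ping An client format:

```properties
# Native RocketMQ settings, passed through via proConfigs
proConfigs.namesrvAddr=192.168.0.1:9876;192.168.0.2:9876
proConfigs.producerGroup=PID_DEMO

# SPI extension points: fully qualified names of the business implementations
sendMsgService=com.example.mq.OrderSendService
consumeMsgConcurrently=com.example.mq.OrderConsumeService
consumeMsgOrderly=com.example.mq.OrderOrderlyConsumeService
```

The split mirrors the text: native MQ options stay opaque to the client, while the three SPI keys tell it which business classes to instantiate.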
Client core process details
As the figure shows, the client's core process is abstracted into three phases: start-up, run time, and termination. Start-up first loads the configuration file, the model just introduced; since configuration is completely decoupled from the application, the file must be loaded before the subsequent process can be initialized. The producer and consumer business-logic objects implemented by the application are then created, so they are available when the producers and consumers themselves are initialized. At run time, the client listens for configuration file changes and dynamically adjusts producer and consumer instances based on those changes; again, it is the decoupling of configuration from application that makes dynamic adjustment possible. Termination is simpler: close the producers and consumers and remove them from the container. Termination here means terminating producers and consumers, not the whole application. It can also occur during dynamic adjustment, so a terminated instance must be removed from the container to make room for subsequently initialized producers and consumers. With the basic process covered, the next section describes how the configuration file is loaded.
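The three phases above can be sketched as a small container class. This is a hedged illustration of the lifecycle logic only; the class and method names (ClientLifecycle, reload) are invented for the sketch and are not the actual client's API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the client lifecycle: start-up loads configured instances,
// run time reconciles against a changed configuration, termination removes
// closed instances from the container so later inits can replace them.
public class ClientLifecycle {
    private final List<String> consumers = new ArrayList<>();

    // Start period: configuration has been loaded; create the instances.
    public void start(List<String> configuredConsumers) {
        consumers.addAll(configuredConsumers);
    }

    // Run period: a configuration change triggers dynamic adjustment.
    public void reload(List<String> newConsumers) {
        // Termination: close instances no longer configured and remove
        // them from the container.
        consumers.removeIf(c -> !newConsumers.contains(c));
        // Start any newly configured instances.
        for (String c : newConsumers) {
            if (!consumers.contains(c)) consumers.add(c);
        }
    }

    public List<String> active() { return consumers; }
}
```

Real producers and consumers would of course be objects with start/shutdown calls rather than strings; only the reconcile-and-remove shape is the point here.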
How do I load a configuration file
Loading the configuration file is a relatively simple process; the main concern is accommodating older projects. The RocketMQ client supports a minimum JDK version of 1.6, so compatibility between old and new projects must be considered when packaging the client. Accordingly, our client's core package targets JDK 1.6. Early Spring projects generally place configuration files under the resources path, and for them we implemented our own configuration-file reading and watching. On top of the core package, we built a Spring Boot auto-configuration package for microservice projects, which uses Spring's own facilities for reading and watching configuration. Once the file is loaded, how do the producers and consumers implemented by the application get associated with RocketMQ's producers and consumers? Let me share that next.
How to associate producers and consumers with business implementations
First, let's look at how the consumer association works. Above is the MQ consumer's message listener, where the concrete business logic must be implemented; by associating the consumption logic implemented per the configuration file with this listener, the consumer named in the configuration is tied to the RocketMQ consumer. The consumer interface definition is simple: it consumes a message. The business type of the message is specified by a generic parameter; the implementation's concrete parameter type is retrieved when the consumer is initialized, and the message received from MQ is converted to that business type. This message-type conversion is uniformly encapsulated by the client. The return value of the consumption method is mapped, as required, to the statuses MQ provides, as shown briefly in the demo. When fetching a concrete consumer instance: if your consumption logic uses Spring-managed objects, the consumption-logic object itself should also be managed by Spring and retrieved from the Spring context; if you do not use Spring to manage it, the instance can be created by reflection.
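A minimal sketch of the generic consumer interface described above, assuming names of my own (ConsumeMsgConcurrently as the interface, OrderConsumer as a business implementation) rather than the actual client API. It shows the two mechanics the text mentions: reading the implementation's generic parameter via reflection at init time, and converting the raw MQ bytes to that business type before invoking the logic.

```java
import java.lang.reflect.ParameterizedType;
import java.nio.charset.StandardCharsets;

// Hypothetical SPI interface: the business type is declared via generics,
// and the return value maps onto MQ's consume status (true = acknowledged).
interface ConsumeMsgConcurrently<T> {
    boolean consume(T message);
}

// A business implementation declaring String as its message type.
class OrderConsumer implements ConsumeMsgConcurrently<String> {
    public boolean consume(String message) {
        return message != null && !message.isEmpty();
    }
}

public class ConsumerDemo {
    // At init time the client inspects the implementation's generic
    // parameter so it knows what type to convert incoming messages to.
    public static Class<?> messageType(Object consumer) {
        ParameterizedType t = (ParameterizedType)
                consumer.getClass().getGenericInterfaces()[0];
        return (Class<?>) t.getActualTypeArguments()[0];
    }

    public static void main(String[] args) {
        OrderConsumer c = new OrderConsumer();
        // Raw bytes from MQ, converted to the declared business type.
        byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
        String converted = new String(body, StandardCharsets.UTF_8);
        System.out.println(messageType(c).getSimpleName() + ": " + c.consume(converted));
    }
}
```

A real client would use a JSON codec for complex business types; plain UTF-8 decoding stands in for that conversion step here.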
Unlike the consumer, where the client fetches the logic object implemented in the application, the producer works the other way around: the initialized producer object must be passed into the application code. How do we hand the producer to the business application?
Producers implemented in business code need to inherit SendMessage, through which the business code obtains the RmqProducer object. This is an encapsulated producer whose send methods are normalized to conform to company specifications. The methods in this object also check the topic naming convention, since topics follow a unified naming rule.
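A hedged sketch of such a wrapped producer: the send method validates the topic name before delegating to the real RocketMQ producer. The class name echoes the RmqProducer mentioned above, but the naming rule shown (a TP_ prefix with upper snake case) is an assumed example, not the company's actual convention.

```java
import java.util.regex.Pattern;

// Sketch of an encapsulated producer that normalizes sending and
// enforces a topic naming convention before delegating to RocketMQ.
public class RmqProducerSketch {
    // Assumed convention for illustration: topics look like TP_ORDER_CREATE.
    private static final Pattern TOPIC_RULE = Pattern.compile("TP_[A-Z0-9_]+");

    public boolean send(String topic, byte[] body) {
        if (!TOPIC_RULE.matcher(topic).matches()) {
            throw new IllegalArgumentException(
                    "topic violates naming convention: " + topic);
        }
        // Here the real client would delegate to the underlying
        // RocketMQ producer's send method.
        return body != null && body.length > 0;
    }
}
```

Centralizing the check in the wrapper means no individual business team can ship a misnamed topic, which is the point of handing applications the wrapped object rather than a raw producer.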
How to dynamically adjust producers and consumers
Before talking about dynamic adjustment, we need to talk about when it actually takes place; implementing dynamic adjustment without a suitable use case would be rather unnecessary. Here are four scenarios in which the configuration file changes:
When the nameserver changes, all producers and consumers need to be reinitialized. This usually happens when MQ is migrated, or when the current MQ cluster becomes unavailable and an emergency switch to another MQ is needed.
When instances are added or removed, only the affected instances need to be started or closed. An instance is added, for example, when a new consumer must be brought up to consume a new topic; a consumer is removed, for example, when it must be shut down in an emergency to stop losses in time.
In the consumer-thread tuning scenario, we modified the source code slightly so that the application side can obtain the consumer's thread pool object and dynamically adjust its core thread count. When one consumer processes so much data that it consumes too much CPU and higher-priority messages cannot be handled in time, you can first shrink that consumer's thread count.
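Once the application can reach the consumer's thread pool, the adjustment itself is just the standard ThreadPoolExecutor API. The sketch below creates a pool directly for illustration; obtaining the actual consumer pool required the small source change described above, and the sizes shown are arbitrary.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of run-time tuning: shrink a noisy consumer's core thread count
// so higher-priority consumers get CPU time.
public class ThreadTuneDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor consumePool = new ThreadPoolExecutor(
                20, 64, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // Dynamic adjustment: takes effect without restarting the consumer.
        consumePool.setCorePoolSize(5);
        System.out.println(consumePool.getCorePoolSize());
        consumePool.shutdown();
    }
}
```

setCorePoolSize can later raise the count again once the backlog of the busy consumer is drained, which is why exposing the pool is enough for this scenario.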
Advantages of application
This article is the original content of Aliyun and shall not be reproduced without permission.