The purpose of this article is to explore a methodology for capturing data through event triggering so it can be processed later. This problem is usually solved with scheduled tasks, so the goal here is to eliminate scheduled tasks from the system wherever possible.
The use of scheduled tasks
In current system designs, the scheduled task is regarded as a very important component. Here are two scenarios that will serve as running examples throughout this article.
1. Summary of document details
For example, the sales details of an e-commerce supermarket need to be classified and summarized by certain conditions (location, channel, or even supplier and product) over the previous day's data, in order to produce total sales.
In this scenario, reaching for a scheduled task is almost a reflex: pick an idle time in the early morning and have the task pull all the data to be summarized.
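For concreteness, here is a minimal sketch of that reflexive baseline, using a plain `ScheduledExecutorService`. The class name `NightlySummaryJob`, the 02:00 run time, and the helper `delayUntilNext` are my own illustrative choices, not anything prescribed by the scenario.

```java
import java.time.*;

public class NightlySummaryJob {
    // Compute the delay from 'now' until the next occurrence of 'runAt' (e.g. 02:00).
    static Duration delayUntilNext(LocalDateTime now, LocalTime runAt) {
        LocalDateTime next = now.toLocalDate().atTime(runAt);
        if (!next.isAfter(now)) next = next.plusDays(1); // already past today's slot
        return Duration.between(now, next);
    }

    public static void main(String[] args) {
        // Schedule the summary to run every day at 02:00, an "idle" window.
        java.util.concurrent.ScheduledExecutorService pool =
            java.util.concurrent.Executors.newSingleThreadScheduledExecutor();
        Duration first = delayUntilNext(LocalDateTime.now(), LocalTime.of(2, 0));
        pool.scheduleAtFixedRate(
            () -> System.out.println("summarizing yesterday's sales ..."),
            first.toMillis(),
            Duration.ofDays(1).toMillis(),
            java.util.concurrent.TimeUnit.MILLISECONDS);
        pool.shutdown(); // demo only: don't keep the JVM alive
    }
}
```

This is exactly the pattern the rest of the article argues against: the job fires at 02:00 whether there is one row to summarize or a billion.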
2. Automatic change of contract status
For example, when a supermarket signs rebate contracts with suppliers, each contract carries an effective-time field and a current-status field. The default status is "pending". Once the current time reaches the effective time, the contract status changes to "effective".
This scenario also has an obvious "process everything at one point in time" character, so scheduled tasks again look like almost the only option.
Event-driven usage
What’s wrong with using scheduled tasks? Why do I recommend events here?
Now I'll redesign the functionality from the previous examples with event-based trigger logic that better matches how we naturally think. By comparing the two approaches, you should see direct and obvious differences.
1. Summary of document details
As noted above, summarizing means taking the data that needs to be processed, grouping it, and then condensing it according to specific summary conditions.
There are two key pieces of information here: ① the data to be processed, and ② the conditions by which to group and condense it. Let's start with the first angle: what data needs to be processed? Obviously, data from the previous day that has not yet been aggregated. But that's not really the question — the question is how to get hold of that data. Only once we can get it can we process it.
With a scheduled task it seems simple: filter out the batch by time window and a "processed" flag. But with a large volume of data — tens of millions or even billions of rows a day — it is impractical to scoop everything up and sort it in memory.

Some will say: no problem, we have experience with massive data — shard it. With sharding, we can fetch all open stores (say 100,000) through an interface over the underlying data, divide them into groups by some rule, and process each group in a different application instance (or sequentially within the same one). If the data lives in a relational database, adding an index on the store column basically solves the lookup.

But then there is another problem to consider: daily sales per store are not fixed, and some stores have no sales at all; meanwhile the sharding strategy is fixed, so which stores land in which shard is the same every day. The amount of data in different shards can therefore vary wildly, producing uneven computational load across shards.
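To make the "fixed sharding strategy" concrete, here is a minimal sketch of hash-based shard assignment. The class `StoreSharding` and its methods are hypothetical names of my own; the point is that the store-to-shard mapping depends only on the store id, never on that day's data volume — which is exactly the imbalance problem described above.

```java
import java.util.*;

public class StoreSharding {
    // Assign a store id to one of nShards buckets by hashing.
    // The mapping is fixed: the same store always lands in the same shard,
    // regardless of how much data it produced that day.
    static int shardOf(String storeId, int nShards) {
        return Math.floorMod(storeId.hashCode(), nShards);
    }

    // Partition a list of stores into shard buckets for parallel processing.
    static Map<Integer, List<String>> partition(List<String> stores, int nShards) {
        Map<Integer, List<String>> shards = new HashMap<>();
        for (String s : stores) {
            shards.computeIfAbsent(shardOf(s, nShards), k -> new ArrayList<>()).add(s);
        }
        return shards;
    }
}
```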
Instead of scheduled tasks, we have the following options, which I believe are better.
A. Delay summary
When a store writes data into the system, the system checks whether this is the store's first write of the current day. If it is, it creates a delayed summary event (implemented with a delayed MQ message or a scheduled thread pool — preferably not with a database, since that would require a scheduled task to sweep the table). This way, stores with no business volume are never summarized at all. To further reduce pressure on the system, the delay can also be spread out per store, instead of every store's summary firing at the same moment.
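A minimal sketch of this idea, using the scheduled-thread-pool variant. The class `DelayedSummary`, the `(store, day)` dedup key, and the one-minute jitter window are illustrative assumptions; a production version would use a delayed MQ message and, as the next paragraph notes, needs compensation for restarts.

```java
import java.util.*;
import java.util.concurrent.*;

public class DelayedSummary {
    // Daemon threads so a demo/test JVM can exit cleanly.
    private final ScheduledExecutorService pool =
        Executors.newScheduledThreadPool(2, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
    // Tracks which (store, day) pairs already have a summary scheduled.
    private final Set<String> scheduled = ConcurrentHashMap.newKeySet();

    // Call this on every write. Only the first write of a store on a given
    // day creates a delayed summary event; random jitter spreads the stores
    // out so they don't all summarize at the same instant.
    public boolean onWrite(String storeId, String day, Runnable summarize) {
        if (!scheduled.add(storeId + "#" + day)) return false; // already scheduled
        long jitterMs = ThreadLocalRandom.current().nextLong(60_000); // up to 1 min
        pool.schedule(summarize, jitterMs, TimeUnit.MILLISECONDS);
        return true;
    }
}
```

Note the key property: a store that writes nothing today never enters `scheduled`, so no summary work is ever created for it.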
This scheme does require compensation logic, though. With Java thread pools or Akka Actors, the pending events are lost if the system restarts; with MQ, messages can fail to be consumed. So we need monitoring and alerting mechanisms to back it up.
B. Real-time summary
Each time data is written, determine the summary dimension (bucket) it falls into and write the summary record; if the summary record already exists, update it in place.
The main problem in this process is concurrency safety. We can use a distributed lock, optimistic locking, or even MySQL's ON DUPLICATE KEY UPDATE (see www.manxi.info/mysqlDuplic…
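Here is a minimal in-memory sketch of a concurrency-safe upsert, using `ConcurrentHashMap.merge` as an analogue of the database upsert. The class `RealtimeSummary` and the `"store#day"` key format are my own illustrative choices; the MySQL equivalent is shown in the comment.

```java
import java.util.concurrent.*;

public class RealtimeSummary {
    // key = summary dimension (e.g. "store#day"), value = running total in cents.
    private final ConcurrentHashMap<String, Long> totals = new ConcurrentHashMap<>();

    // On every sale written, atomically insert-or-add the summary row.
    // A MySQL analogue (assuming a unique key on dim) would be:
    //   INSERT INTO summary (dim, total) VALUES (?, ?)
    //   ON DUPLICATE KEY UPDATE total = total + VALUES(total);
    public long record(String dim, long amount) {
        return totals.merge(dim, amount, Long::sum); // atomic upsert
    }

    public long total(String dim) {
        return totals.getOrDefault(dim, 0L);
    }
}
```

`merge` performs the read-modify-write atomically per key, so two concurrent writes to the same bucket cannot lose an update — the same guarantee the database-side upsert gives.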
2. Change of contract status
The hallmark of using scheduled tasks in this scenario is that, at some point in time, every record matching a rule gets processed. So we could run a task every morning to check whether any contracts need to become effective (or be marked expired) that day. But for any given contract, only one run out of 365 actually finds something to do; every other run scans and finds nothing — a waste of resources.
For this scenario, we can borrow from Redis's lazy cleanup strategy for expired keys. We don't need a mechanism that flips a contract's state at a precise (or near-precise) moment. If nothing touches the contract, it simply stays in its old state in the database: even past the effective date, a direct query would still show it as "pending". Only when business logic actually hits the contract do we find it in the returned data and update its state.
This method effectively removes the periodic scans from the system, but queries must now also fetch records still in their previous state. Where we previously needed only already-effective contracts, with the lazy strategy we must fetch pending contracts as well, then remove expired contracts from the effective set and promote newly effective ones from the pending set. At that point both the expired and the newly effective contracts need their stored states updated. We can handle these state changes asynchronously, because even if the update fails, the next query will still compute the correct result.
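A minimal sketch of this query-time (lazy) transition. The class `LazyContracts`, the three-state enum, and the `stale` output list are illustrative assumptions; the `stale` list stands in for the asynchronous persistence of changed states described above.

```java
import java.time.*;
import java.util.*;

public class LazyContracts {
    enum Status { PENDING, EFFECTIVE, EXPIRED }

    static class Contract {
        final String id;
        final LocalDate effectiveFrom;
        final LocalDate expiresOn;
        Status status = Status.PENDING; // stored state; may be stale
        Contract(String id, LocalDate from, LocalDate to) {
            this.id = id; this.effectiveFrom = from; this.expiresOn = to;
        }
    }

    // Lazy transition: derive the true status from the dates at query time,
    // flip the stored status if it is stale, and collect stale contracts so
    // their new state can be persisted asynchronously later.
    static List<Contract> effectiveContracts(Collection<Contract> all,
                                             LocalDate today,
                                             List<Contract> stale) {
        List<Contract> result = new ArrayList<>();
        for (Contract c : all) {
            Status actual = today.isBefore(c.effectiveFrom) ? Status.PENDING
                          : today.isAfter(c.expiresOn)      ? Status.EXPIRED
                          : Status.EFFECTIVE;
            if (actual != c.status) { c.status = actual; stale.add(c); }
            if (actual == Status.EFFECTIVE) result.add(c);
        }
        return result;
    }
}
```

Even if persisting the `stale` contracts fails, the next call recomputes the correct set from the dates — which is why the write-back can safely be asynchronous.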
Event-driven benefits
From the examples above we can see that replacing scheduled tasks with event triggers has both pros and cons. The upside is that the event mechanism is on-demand: it runs when needed and stays silent otherwise. The downside is that keeping the event mechanism healthy generally requires a set of auxiliary logic — but then, what logic doesn't need a compensation mechanism? It's just that some of it already exists and some has to be newly built.
In essence, a scheduled task is a stateless behavior that doesn't match object-oriented thinking, whereas event triggering simulates how a person actually handles a piece of work. Consider how operations staff summarized records before computerized management. Waiting until the next day to process everything that piled up the previous day — that is a scheduled task. If the backlog is too large for one person to handle at once, several people handle it together — that is a distributed task. And if even several people can't keep up for a while and the pressure builds, they naturally switch to handling each order as it arrives rather than sitting idle most of the day, and the pressure drops. For systems, too, real-time processing effectively flattens pressure spikes.
In fact, even manual handling of backlogged documents is not truly the prototype of the scheduled task; it happens only because a human can cope with it that way — a matter of human nature, not a change of model. It is hard to find a prototype of the scheduled task in human behavior at all. Humans do use alarm clocks, but an alarm clock's job is to remind, and then to repeat the reminder.
Let's simulate contract handling. Before computerized management, the contract clerk would keep several boxes: one for pending contracts, one for effective ones, and one for expired ones. When a contract is needed, he looks in the first two boxes, moves any newly effective contract from the first box to the second, and any expired one from the second to the third. Could he have moved those contracts earlier? Possibly, but there is no need — again, human nature: he is "lazy". And if he moved a contract today but then didn't touch it all day, why not have moved it tomorrow instead? There is no ideal time to move it; just move it when you use it.
Conclusion
So what’s the conclusion?
- For streaming data, I recommend real-time processing; the problem to solve is concurrency safety.
- For batch data, I recommend the lazy strategy; the problem to solve is idempotence.
Final words
If you like this idea but find yourself in a situation where you can't get rid of scheduled tasks, leave a comment and we can discuss it.