The framework is currently being tested and validated against complex business scenarios in the JD App backend, including demanding, high-concurrency workloads with massive numbers of users. Updates and bug fixes will be released as needed.
If you are interested in blockchain, you can refer to the author's other GVP project, Java Blockchain Primer.
If you just need to use this framework, read on. If you want to understand how the framework was implemented step by step, from receiving the requirements to the reasoning behind each decision, why each class is designed the way it is and why it has these methods, that is, how to develop the framework from 0 to 1, the author has opened a dedicated column on CSDN about developing middleware from 0, which covers this small framework among other things. JD internal colleagues can also find the author's ERP on CF.
Common parallel scenarios
1 The client requests a server interface that needs to invoke the interfaces of N other microservices
For example, to serve a "my orders" request, you need to invoke the user RPC, the commodity-detail RPC, the inventory RPC, the coupon service, and many others. These services may also depend on one another; for instance, a field from the user result is needed before data can be requested from another RPC service. When all the results are in, or the timeout expires, they are summarized and returned to the client.
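Using only the JDK's CompletableFuture, the shape of this scenario can be sketched as follows. All the service calls below are placeholders standing in for real RPC stubs:

```java
import java.util.concurrent.*;

public class OrderAggregator {
    // Placeholder "RPC" calls; in reality these would be remote service stubs.
    static String fetchUser(long id)         { return "user-" + id; }
    static String fetchCoupons(String user)  { return user + ":coupons"; }
    static String fetchItems(long orderId)   { return "items-" + orderId; }
    static String fetchStock(long orderId)   { return "stock-" + orderId; }

    public static String aggregate(long userId, long orderId) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Coupons depend on a field of the user result; items and stock are independent.
            CompletableFuture<String> coupons = CompletableFuture
                    .supplyAsync(() -> fetchUser(userId), pool)
                    .thenApplyAsync(OrderAggregator::fetchCoupons, pool);
            CompletableFuture<String> items =
                    CompletableFuture.supplyAsync(() -> fetchItems(orderId), pool);
            CompletableFuture<String> stock =
                    CompletableFuture.supplyAsync(() -> fetchStock(orderId), pool);

            // Summarize all results, or fail once the overall deadline expires.
            return CompletableFuture.allOf(coupons, items, stock)
                    .thenApply(v -> coupons.join() + "|" + items.join() + "|" + stock.join())
                    .get(1, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

This already shows the pain point: every dependency has to be wired by hand, and the code grows quickly as the dependency graph does.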
2 Perform N tasks in parallel, and decide whether to perform the next task based on the execution results of tasks 1 through N
For example, a user can log in by email, mobile phone number, or user name, but there is only one login interface. After the user initiates a login request, the database is searched by email, mobile phone number, and user name at the same time. If any one of the lookups succeeds, the login is considered successful and the user can proceed to the next step, instead of trying the email first, then the phone number, and so on.
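A JDK-only sketch of this first-success login. The three lookup functions are placeholders, each returning a user id or null if not found:

```java
import java.util.concurrent.*;

public class ParallelLogin {
    // Placeholder lookups; each returns a user id or null if not found.
    static String byEmail(String q) { return q.contains("@") ? "u-email" : null; }
    static String byPhone(String q) { return q.matches("\\d+") ? "u-phone" : null; }
    static String byName(String q)  { return q.equals("alice") ? "u-name" : null; }

    // Run the three lookups in parallel and take the first non-null hit.
    public static String login(String q) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            CompletionService<String> cs = new ExecutorCompletionService<>(pool);
            cs.submit(() -> byEmail(q));
            cs.submit(() -> byPhone(q));
            cs.submit(() -> byName(q));
            for (int i = 0; i < 3; i++) {
                String hit = cs.take().get();   // results arrive in completion order
                if (hit != null) return hit;    // first successful lookup wins
            }
            return null;                        // all three lookups missed
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```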
Another example: an interface limits the number of parameters per batch, and the information of at most 10 commodities can be queried at a time. If I have 45 commodities to query, I can query them in parallel as 5 batches and then merge the 5 batch results. Whether you require all queries to succeed, or return whatever subset succeeded to the customer, is up to you.
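A sketch of the batching scenario, assuming a placeholder batch query that echoes each id:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class BatchQuery {
    static final int BATCH = 10;   // the interface accepts at most 10 ids per call

    // Placeholder batch RPC: echoes each id as "sku-<id>".
    static List<String> queryBatch(List<Integer> ids) {
        return ids.stream().map(i -> "sku-" + i).collect(Collectors.toList());
    }

    public static List<String> queryAll(List<Integer> ids) {
        // Split the ids into batches of up to 10, query them in parallel, then merge.
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += BATCH) {
            batches.add(ids.subList(i, Math.min(i + BATCH, ids.size())));
        }
        List<CompletableFuture<List<String>>> futures = batches.stream()
                .map(b -> CompletableFuture.supplyAsync(() -> queryBatch(b)))
                .collect(Collectors.toList());
        return futures.stream()
                .flatMap(f -> f.join().stream())   // join preserves batch order in the merge
                .collect(Collectors.toList());
    }
}
```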
Or: an interface has five pre-tasks. Three of them must complete before subsequent execution; the other two are optional, and you can proceed to the next step as soon as the three are done. The other two then carry their values if they succeeded, or default values if they did not complete.
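A sketch of the three-must/two-optional pre-task pattern with CompletableFuture, where an unfinished optional task falls back to a default value:

```java
import java.util.concurrent.*;

public class PreTasks {
    public static String run() {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        try {
            // Three mandatory pre-tasks...
            CompletableFuture<String> m1 = CompletableFuture.supplyAsync(() -> "m1", pool);
            CompletableFuture<String> m2 = CompletableFuture.supplyAsync(() -> "m2", pool);
            CompletableFuture<String> m3 = CompletableFuture.supplyAsync(() -> "m3", pool);
            // ...and two optional ones, one of which is deliberately slow.
            CompletableFuture<String> o1 = CompletableFuture.supplyAsync(() -> "o1", pool);
            CompletableFuture<String> o2 = CompletableFuture.supplyAsync(() -> {
                sleep(5_000);          // still running when the must-tasks finish
                return "o2";
            }, pool);

            CompletableFuture.allOf(m1, m2, m3).join();   // wait only for the mandatory three
            // Optional tasks contribute their value if done, otherwise a default.
            String v1 = o1.getNow("default1");
            String v2 = o2.getNow("default2");
            return m1.join() + m2.join() + m3.join() + "-" + v1 + "-" + v2;
        } finally {
            pool.shutdownNow();
        }
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```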
3 Multiple batches of tasks requiring thread isolation
For example, multiple groups of tasks are unrelated to each other; each group requires an independent thread pool and is itself a combination of an independent set of execution units. This is similar to Hystrix's thread-pool isolation policy.
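A minimal sketch of per-group thread isolation, giving each group its own dedicated pool so a slow group cannot starve the others:

```java
import java.util.concurrent.*;

public class IsolatedGroups {
    // Each task group gets its own pool, similar in spirit to
    // Hystrix's thread-pool isolation.
    public static String runGroup(String name, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(2,
                r -> new Thread(r, "group-" + name));   // named threads aid debugging
        try {
            CompletableFuture<?>[] fs = new CompletableFuture[tasks];
            for (int i = 0; i < tasks; i++) {
                fs[i] = CompletableFuture.runAsync(() -> { /* one execution unit */ }, pool);
            }
            CompletableFuture.allOf(fs).get(1, TimeUnit.SECONDS);
            return name + ":done";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();   // the isolating pool is torn down with its group
        }
    }
}
```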
4. Single-machine workflow task scheduling
5. Other scenarios requiring ordered orchestration
The core of parallel scenarios: arbitrary orchestration
1 Serial execution of multiple execution units
2 Parallel execution of multiple execution units
3 Blocking wait: serial execution followed by multiple parallel executions
4 Blocking wait until multiple parallel executions have completed
5 Mixed serial and parallel dependencies
6 Complex combinations of the above
Possible requirements for parallel scenarios: a callback for each execution result
A traditional Future, or CompletableFuture, can orchestrate tasks to some extent and pass results to the next task; CompletableFuture has the then* family of methods, for example. But it cannot provide a callback for every execution unit. Say A executes successfully and B follows: I want A to fire a callback with its result after execution, so that I can monitor the current execution status or log something, and if A fails, log the exception.
CompletableFuture cannot do this.
My framework provides this callback capability. In addition, if a unit's execution fails or times out, a default value can be set when the execution unit is defined.
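As a rough illustration of the idea only (this is not the framework's actual API; the class and method names below are made up), a per-unit callback with a failure default can look like this:

```java
import java.util.concurrent.*;
import java.util.function.*;

public class CallbackWorker {
    // Hypothetical sketch: run one execution unit, report its outcome to a
    // listener either way, and fall back to a default value on failure.
    public static <V> V execute(Callable<V> worker, V defaultValue,
                                BiConsumer<Boolean, V> listener) {
        try {
            V result = worker.call();
            listener.accept(true, result);          // success callback with the result
            return result;
        } catch (Exception e) {
            listener.accept(false, defaultValue);   // failure callback; default value applies
            return defaultValue;
        }
    }
}
```

In the real framework, the listener would also distinguish timeout from exception; this sketch collapses both into the failure branch.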
Possible requirements for parallel scenarios: strong and weak dependencies in the execution order
As shown in Figure 3, A and B execute concurrently, followed by C.
In some cases, we want both A and B to complete before C can execute, and chaining CompletableFuture's allOf(futures...) with a then*() method does this.
In some cases, we want C to execute as soon as either A or B finishes, and chaining CompletableFuture's anyOf(futures...) with a then*() method does this.
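Both dependency styles can be shown with the JDK's CompletableFuture:

```java
import java.util.concurrent.*;

public class DependencyDemo {
    public static String mustAll() {
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> "A");
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> "B");
        // C runs only after BOTH A and B complete (strong dependency).
        return CompletableFuture.allOf(a, b)
                .thenApply(v -> a.join() + b.join() + "->C")
                .join();
    }

    public static String anyOne() {
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> "A");
        CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> "B");
        // C runs as soon as EITHER A or B completes (weak dependency).
        return CompletableFuture.anyOf(a, b)
                .thenApply(first -> first + "->C")
                .join();
    }
}
```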
My framework also provides similar functionality: when setting addDepend dependencies on a Wrapper, you can specify whether each depended-on task must complete. If any dependency is marked as must, you must wait for all must dependencies to complete before executing yourself.
If none of the dependencies are must, then the completion of any one dependency allows you to execute.
Note: with this mix of must and non-must dependencies, it is important that an execution unit is never executed repeatedly. For example, in Figure 4, suppose B completes and triggers A, and C finally completes and reaches A as well. At that point A is either already executing or has already completed (or failed), and A must not be executed again.
There is another scenario, shown in the following figure: A and D start in parallel. D finishes first, and the Result task executes before B and C have even started. Once Result is finished, B and C no longer need to run at all. Tasks B and C can be skipped when their nextWrapper already has a result or is already executing. I provide the checkNextWrapperResult method to control whether a task should still execute when its next task already has a result. Of course, this control is only valid when there is exactly one nextWrapper.
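The skip check can be sketched in miniature. This is purely illustrative and not the framework's internal code; the field and method names are invented:

```java
import java.util.concurrent.atomic.*;

public class SkipCheck {
    // Hypothetical shared slot holding the next task's result, if it already has one.
    static final AtomicReference<String> nextResult = new AtomicReference<>();

    // Before running, a unit checks whether its single next wrapper already
    // finished; if so, the unit skips itself instead of doing useless work.
    public static String runUnit(String name, String value) {
        if (nextResult.get() != null) {
            return name + ":skipped";   // next task already has a result; no need to run
        }
        return name + ":" + value;      // normal execution path
    }
}
```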
Possible requirements for parallel scenarios: relying on upstream execution results as input parameters
For example, A's result is a String while B works with an int, and B needs A's result as its own input parameter. In other words, A and B are not independent but result-dependent.
B cannot obtain A's result until A completes execution; before that, B only knows A's result type.
My framework supports this scenario as well: A's result wrapper class can be passed as B's input parameter at orchestration time. Although A has not yet executed and the wrapped value is still null, it is guaranteed that after A finishes, B's input parameter will have been assigned.
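With plain CompletableFuture, the same result-passing shape looks like this: B is declared against A's future result, and receives the actual value once A completes.

```java
import java.util.concurrent.*;

public class ResultPassing {
    public static int run() {
        // B's input is declared as A's (future) result at orchestration time;
        // when A finishes, its String result is handed to B, which returns an int.
        CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> "12345");
        CompletableFuture<Integer> b = a.thenApply(String::length);   // B consumes A's result
        return b.join();
    }
}
```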
Possible requirements for parallel scenarios: timeouts for a group of tasks
For a group of tasks, although the time of each execution unit within the group is not controllable, I can cap the execution time of the group as a whole. Set timeOut to control the execution threshold of the entire group.
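A JDK-only sketch of a group-level timeout, conceptually similar to the group timeOut setting: individual units run unbounded, but the wait for the whole group is capped.

```java
import java.util.concurrent.*;

public class GroupTimeout {
    // Bound the whole group's wall-clock time even though individual units are uncontrollable.
    public static String run(long unitMillis, long groupTimeoutMillis) {
        CompletableFuture<?> slow = CompletableFuture.runAsync(() -> {
            try { Thread.sleep(unitMillis); } catch (InterruptedException e) { }
        });
        CompletableFuture<?> fast = CompletableFuture.runAsync(() -> { });
        try {
            CompletableFuture.allOf(slow, fast).get(groupTimeoutMillis, TimeUnit.MILLISECONDS);
            return "completed";
        } catch (TimeoutException e) {
            return "group timeout";   // threshold exceeded for the group as a whole
        } catch (Exception e) {
            return "error";
        }
    }
}
```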
Possible requirements for parallel scenarios: high performance with a low thread count
The framework is lock-free throughout; no locks are used anywhere.
The number of threads created is small. For example, if A depends on B and C, A will run on the thread of whichever of B and C finishes last, without creating an additional thread.
AsyncTool characteristics
A concurrency framework that solves arbitrary multithreaded parallel execution, serial execution, blocking, dependencies, and callbacks; the execution order of the units can be combined arbitrarily, with full-link callbacks and timeout control.
Here A, B, and C are each a minimal execution unit (worker), which can be a piece of time-consuming code, an RPC call, or anything else; the framework does not limit what a worker does.
The framework arranges these workers in whatever execution order you want, and then you get the result.
Moreover, for each worker the framework provides a callback on the execution result and a custom default value on failure. For example, after A completes, A's listener receives a callback carrying A's execution result (success, timeout, or exception).
Once the execution units have been assembled to your needs, execution starts on the main thread and blocks until the last unit completes. You can also set a timeout for the entire group.
The framework supports execution units whose input parameters are the results of earlier units. For example, if execution unit B's input parameter is ResultA, the execution result of A, that is also supported. At orchestration time you can pre-set B's or C's input parameter to be A's result, even though A has not yet started. When A completes, its result is automatically passed to B's input parameter.
The framework is lock-free throughout.