Solomon_Xiao Ge plays with distributed microservice architecture: Dubbo asynchronous interface implementation principles and practice.


Background

To choose server-side asynchrony wisely, it helps to understand the provider's threading strategies in detail. This section walks through those strategies and then introduces the "secret weapon" that is often used together with server-side asynchrony.

Server-side threading strategies

Dubbo supports several NIO frameworks for its Remoting layer. Whether Netty, Mina or Grizzly, the implementations are very similar: all of them are event-driven, establishing network channels and reading the data stream. Taking Grizzly's documentation on threading strategies as a reference, there are generally four options, described below. As an RPC framework, Dubbo chooses the first strategy by default, because it cannot know whether a business service is CPU-intensive or IO-blocking, and the first strategy is the safest. Of course, once you understand these strategies, making a targeted choice based on your business scenario is even better.

Worker-thread strategy

The most common and widely used strategy: the IO thread delegates NIO event processing to a worker thread.

This strategy is highly scalable: the IO and worker thread pools can be resized independently as needed, and there is no risk that handling one NIO event interferes with other channels registered on the same Selector.

The downside is the cost of thread context switching.
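
A minimal sketch of this hand-off, assuming a plain Java NIO event loop; the class and method names (WorkerThreadDispatcher, handleRequest) are illustrative, not Dubbo internals:

import java.nio.channels.SelectionKey;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative worker-thread dispatch: the IO thread only picks up the event,
// then hands the (potentially slow) business processing to a worker pool.
public class WorkerThreadDispatcher {

    private final ExecutorService workers = Executors.newFixedThreadPool(200);

    // Called on the IO (selector) thread for every ready NIO event.
    public void onEvent(SelectionKey key) {
        Object request = decode(key);                  // cheap work stays on the IO thread
        workers.submit(() -> handleRequest(request));  // business logic runs on a worker
    }

    private Object decode(SelectionKey key) {
        return key.attachment();
    }

    private void handleRequest(Object request) {
        // CPU-heavy or blocking business logic is isolated from the IO thread here
    }
}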

Same-thread strategy

Probably the most efficient strategy. Unlike the first one, it processes the NIO event on the current IO thread itself, avoiding costly thread context switches.

The IO thread pool size can still be adjusted, so this strategy is also scalable. The disadvantage is equally obvious: the business processing must not block, because blocking would delay other NIO events arriving on the same IO thread.
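
In Dubbo, these two strategies roughly correspond to the protocol-level dispatcher setting: "all" (the default, dispatch requests to the worker pool) versus "direct" (handle everything on the IO thread). A hedged sketch using API configuration; the setter names assume org.apache.dubbo.config.ProtocolConfig from Dubbo 2.7+:

import org.apache.dubbo.config.ProtocolConfig;

// Sketch: selecting the worker-thread or same-thread style via the protocol's
// dispatcher attribute (assuming the Dubbo 2.7+ API configuration model).
public class DispatcherConfigExample {

    public static ProtocolConfig workerThreadStyle() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        protocol.setDispatcher("all");      // default: requests go to the worker pool
        protocol.setThreadpool("fixed");
        protocol.setThreads(200);           // worker pool size
        return protocol;
    }

    public static ProtocolConfig sameThreadStyle() {
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        protocol.setDispatcher("direct");   // everything handled on the IO thread
        return protocol;
    }
}

The same attributes can also be set in XML or properties configuration rather than through the Java API.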

Dynamic strategy

As mentioned above, the first two strategies have clear advantages and disadvantages. But what if a strategy could switch between them intelligently at run time, based on current conditions (load, collected statistics, and so on)?

This strategy can offer real benefits through finer control of resources, provided the condition-evaluation logic is kept lightweight; if the evaluation becomes too complex, it makes the strategy itself inefficient. By the way, I encourage you to pay attention to this strategy, since it is probably the best match for Dubbo server-side asynchrony. I have been following "Adaptive XX" and "Predictive XX" topics recently, and it is nice to see "Dynamic" here. As a production-grade microservice solution, Dubbo should be Adaptive, Predictive and Dynamic at the same time.
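
A toy sketch of the idea, assuming the "current condition" is simply how long recent handlers took; the threshold and class name are made up for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Toy dynamic strategy: run handlers inline (same-thread) while they stay fast,
// and fall back to the worker pool once they start taking too long.
public class DynamicDispatcher {

    private static final long INLINE_BUDGET_NANOS = 1_000_000;   // 1 ms, made up
    private final ExecutorService workers = Executors.newFixedThreadPool(200);
    private final AtomicLong lastCostNanos = new AtomicLong();

    public void dispatch(Runnable handler) {
        if (lastCostNanos.get() <= INLINE_BUDGET_NANOS) {
            long start = System.nanoTime();
            handler.run();                                   // fast path on the IO thread
            lastCostNanos.set(System.nanoTime() - start);    // feed the next decision
        } else {
            workers.submit(() -> {
                long start = System.nanoTime();
                handler.run();                               // slow path on a worker
                lastCostNanos.set(System.nanoTime() - start);
            });
        }
    }
}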

Leader-follower strategy

This strategy is similar to the first, except that instead of handing the NIO event processing to a worker thread, it hands control of the Selector to a worker thread and keeps the actual NIO event processing on the current IO thread. Because this strategy blurs the boundary between the worker and IO thread roles, I do not recommend it.
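
A toy sketch of the pattern, purely illustrative (the class name and locking scheme are assumptions, not how any particular NIO framework implements it):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.locks.ReentrantLock;

// Toy leader-follower loop: the SAME instance is run on several threads. Whoever
// holds the lock is the "leader" and waits on the shared Selector; after picking up
// a ready key it releases leadership (a follower takes over selecting) and handles
// the key itself on the current thread.
public class LeaderFollowerLoop implements Runnable {

    private final Selector selector;
    private final ReentrantLock leadership = new ReentrantLock();

    public LeaderFollowerLoop(Selector selector) {
        this.selector = selector;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            SelectionKey ready = null;
            leadership.lock();                       // become the leader
            try {
                if (selector.select(100) > 0) {
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    ready = it.next();
                    it.remove();
                }
            } catch (IOException e) {
                return;
            } finally {
                leadership.unlock();                 // promote a follower to leader
            }
            if (ready != null) {
                handle(ready);                       // the ex-leader processes the event
            }
        }
    }

    private void handle(SelectionKey key) {
        // decode the request and run the business logic on this (former leader) thread
    }
}

Several threads would run the same instance, for example: new Thread(loop).start() repeated four times over one LeaderFollowerLoop object.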

Coroutines and threads

From the point of view of CPU resource management, the smallest scheduling unit of both the OS and the JVM is the thread. A coroutine package added by the business application provides its own unit of execution, but underneath it is still built on threads; its core job is to save the current context and switch to another coroutine when IO blocking or lock waiting occurs. As for the low cost of coroutines and the more efficient use of the CPU, these come from the user-mode implementation and context design of the coroutine library; even so, it is advisable to run performance tests against your actual business scenarios.
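
Standard Java does not ship a coroutine package, but virtual threads (JDK 21) offer a comparable user-mode model: a blocked virtual thread is parked and its carrier (OS) thread is freed for other work. A minimal, purely illustrative sketch:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of the coroutine idea with JDK 21 virtual threads: blocking calls
// park only the virtual thread, not an OS thread.
public class VirtualThreadExample {

    public static void main(String[] args) {
        try (ExecutorService vexec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {           // cheap: no 10k OS threads created
                vexec.submit(() -> {
                    try {
                        Thread.sleep(100);               // stand-in for blocking IO; only
                    } catch (InterruptedException e) {   // the virtual thread is parked
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }  // close() waits for the submitted tasks to finish
    }
}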

Under Dubbo's default threading strategy there is a worker thread pool that executes the business logic, and the problem of ThreadPool Full occurs frequently. To release the worker thread as early as possible, the business service implementation starts another thread, at the price of yet another thread context switch; it also has to take care of passing link-level data (for example tracing information) and of flow-control exits. Of course, if Dubbo could switch to the Same-thread strategy and combine it with a coroutine library, server-side asynchrony would be a recommended approach.

Example

Let's experience the Dubbo server-side asynchronous interface through an example. The demo code is available on GitHub.

import org.apache.dubbo.rpc.AsyncContext;
import org.apache.dubbo.rpc.RpcContext;

public class AsyncServiceImpl implements AsyncService {

    @Override
    public String sayHello(String name) {
        System.out.println("Main sayHello() method start.");
        // Switch this invocation into async mode so the worker thread can return immediately.
        final AsyncContext asyncContext = RpcContext.startAsync();
        new Thread(() -> {
            // Bind the RPC context (attachments, etc.) to this newly created thread.
            asyncContext.signalContextSwitch();
            System.out.println("Attachment from consumer: " + RpcContext.getContext().getAttachment("consumer-key1"));
            System.out.println(" -- Async start.");
            try {
                Thread.sleep(500);   // simulate slow business logic
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // The response the consumer receives is the value written here.
            asyncContext.write("Hello " + name + ", response from provider.");
            System.out.println(" -- Async end.");
        }).start();
        System.out.println("Main sayHello() method end.");
        return "hello, " + name;
    }
}

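As noted earlier, spawning a new Thread per request adds yet another context switch and an unbounded number of threads. A common variant, sketched under the same Dubbo 2.7+ assumptions, moves the work to a shared, bounded executor instead (the pool size here is arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.dubbo.rpc.AsyncContext;
import org.apache.dubbo.rpc.RpcContext;

// Variant of the async provider: reuse one bounded pool instead of a new Thread per call.
public class AsyncServiceImpl implements AsyncService {

    private static final ExecutorService BIZ_POOL = Executors.newFixedThreadPool(32);

    @Override
    public String sayHello(String name) {
        final AsyncContext asyncContext = RpcContext.startAsync();
        BIZ_POOL.submit(() -> {
            asyncContext.signalContextSwitch();   // carry the RPC context onto the pool thread
            asyncContext.write("Hello " + name + ", response from provider.");
        });
        return null;   // the real response is the value written asynchronously above
    }
}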

Practical advice

  • From a purely event-driven or Reactive point of view, server-side asynchrony is something of a false proposition. The original motivation is that Dubbo's provider thread pool (200 threads by default) is not big enough, but in a truly event-driven model you only need about as many threads as CPU cores, so 200 is already plenty. As long as the business implements non-blocking, purely asynchronous processing, any extra threads are a waste of resources (a CompletableFuture-based sketch of that style follows this list).
  • To use server-side asynchrony, it is recommended that the provider adopt the Same-thread strategy together with a coroutine package.
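
For the fully non-blocking style mentioned above, Dubbo 2.7+ also lets a provider method return CompletableFuture directly, so no worker thread is parked while the result is being produced. A hedged sketch; the AsyncGreetingService interface and its implementation are assumed names for illustration, not from the demo above:

import java.util.concurrent.CompletableFuture;

// Sketch: a provider declaring CompletableFuture as its return type (supported by
// Dubbo 2.7+); the worker thread returns immediately and the framework sends the
// response whenever the future completes.
public interface AsyncGreetingService {
    CompletableFuture<String> sayHello(String name);
}

class AsyncGreetingServiceImpl implements AsyncGreetingService {
    @Override
    public CompletableFuture<String> sayHello(String name) {
        // supplyAsync uses the common ForkJoinPool here; a real service would plug in a
        // non-blocking client (DB driver, HTTP client, ...) that completes the future.
        return CompletableFuture.supplyAsync(() -> "Hello " + name);
    }
}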


Summary

Dubbo runs into all kinds of unusual requirement scenarios when supporting business applications, and server-side asynchrony gives users one way to deal with ThreadPool Full. When ThreadPool Full occurs and the current bottleneck is the CPU, do not use this approach; if the system load is not high, consider either increasing the number of worker threads or adopting server-side asynchrony.
