Authors: Li Zhixin and Yu Yu (AlexStocks) | Source: Alibaba Cloud Native official account
Since Dubbo was open-sourced in 2011, it has been adopted by a large number of small and medium-sized companies and remains the most popular RPC framework in China. In 2014, maintenance of Dubbo was suspended for a period of time due to an internal organizational restructuring at Alibaba. Later, with the launch of Spring Cloud, the two ecosystems combined to boost the popularity of microservices.
However, the world changes fast. Since container technology, represented by Docker, and container orchestration technology, represented by Kubernetes, stepped onto the stage, the cloud-native era has arrived. In this era, immutable infrastructure has brought both constants and changes to the existing middleware landscape: gRPC has unified the underlying communication layer; Protobuf has consolidated serialization protocols; and the service mesh, represented by Envoy + Istio, is gradually unifying the control plane and data plane of services.
Dubbogo's natural mission is bridging the gap between Java and Go: embracing the cloud-native era by taking advantage of Go, the de facto first language of cloud native, while keeping Go applications connected to Java applications. In 2020 the Dubbogo community set out to build three arrows:
- Dubbogo v1.5, aligned with Dubbo 2.7, which has been released;
- the dubbo-go-proxy project, in sidecar form, to be released in the near future;
- and Dubbogo 3.0, which is in progress.
Dubbogo 3.0 in one sentence: a new communication protocol, a new serialization protocol, a new application-level registration model, and new service governance capabilities! This article focuses on Dubbogo 3.0's new communication protocol and its application-level service registration and discovery model.
Dubbogo 3.0 vs gRPC
Only by knowing yourself and knowing your counterpart can you improve. The communication-layer improvements of Dubbogo 3.0 are mainly borrowed from gRPC.
Simply put, the gRPC protocol is the HTTP/2 protocol plus a set of specific header fields (the "grpc-" headers), with a specific serialization tool (Protobuf) packing and unpacking the data, thereby implementing RPC calls.
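As an aside, the length-prefixed message framing that gRPC places inside HTTP/2 DATA frames is simple enough to sketch directly (a minimal illustration based on the public gRPC-over-HTTP/2 wire format; `encodeGRPCMessage` is a name invented here for illustration, not a Dubbogo API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeGRPCMessage wraps a serialized payload in the gRPC
// length-prefixed message framing: a 1-byte compression flag
// followed by a 4-byte big-endian payload length.
func encodeGRPCMessage(payload []byte, compressed bool) []byte {
	buf := make([]byte, 5+len(payload))
	if compressed {
		buf[0] = 1
	}
	binary.BigEndian.PutUint32(buf[1:5], uint32(len(payload)))
	copy(buf[5:], payload)
	return buf
}

func main() {
	msg := encodeGRPCMessage([]byte("abc"), false)
	fmt.Printf("% x\n", msg) // prints: 00 00 00 00 03 61 62 63
}
```

The 1-byte flag records whether the payload is compressed, and the 4-byte big-endian length lets the receiver split concatenated messages back out of the HTTP/2 byte stream.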
As is well known, gRPC has almost no service governance capability, while Alibaba's existing Dubbo framework has both RPC and service governance capabilities and is, overall, no weaker than gRPC. However, in a context where "everyone uses gRPC", the new communication protocol of Dubbogo 3.0 must be perfectly compatible with gRPC and fully compatible with services developers have already deployed. On that basis it will carry forward the existing Dubbo protocol and service governance capabilities, and then roll out a series of new strategies, such as mesh support and application-level service registration.
Dubbogo 3.0 vs Dubbogo 1.5
At present, the Dubbo 2.7 protocol line already supports gRPC as far as possible. Developers can convert a PB IDL definition into framework-supported stubs with the protoc-gen-dubbo tool, and then, riding on the RPC flow of a gRPC Conn, pass the existing service governance capabilities down to gRPC, thereby supporting gRPC services.
Dubbo-go v1.5.x also supports stream calls to gRPC. Similar to unary RPC, streaming capability is incorporated into the framework by generating framework-supported stubs on top of the underlying gRPC stream calls. However, because the Dubbo v2.7.x / Dubbo-go v1.5.x framework itself does not model streaming calls, there is no upper-layer service governance support for gRPC stream calls.
The problem for developers is that when using the Dubbo 2.7 line of Dubbo-go for gRPC protocol transport, we feel more or less insecure.

The upcoming Dubbo-go 3.0 protocol will address this problem at its root.
Three levels of protocol compatibility
The author believes that a service framework's support for third-party protocols can be divided into three levels: application level, protocol level, and transport level.
If a framework merely encapsulates an interface on top of a protocol's SDK, its support can be considered application-level. Such a framework has to follow the interface of the underlying SDK and has poor extensibility.
A framework with protocol-level support provides everything from configuration down to the service governance layer itself, but beneath that it uses a fixed communication protocol for network transport. Such a framework solves the service governance problem, yet it cannot by itself fully fit a third-party protocol; wherever the fit is imperfect, support for that third-party protocol is incomplete, as with the gap in stream RPC support in dubbo-go v1.5 mentioned above.
To go further and support more third-party protocols, a framework must start from the transport layer: truly understand the specific fields of the third-party protocol and the frame model and data flow of the underlying protocol it relies on (such as HTTP/2), and then develop a data-interaction module fully consistent with the third-party protocol as the framework's bottom layer. The advantage is maximal protocol extensibility, allowing developers to optionally add fields compatible with the existing protocol and thus implement functions the existing protocol cannot, such as the backpressure strategy that Dubbogo 3.0 will support.
HTTP/2-based communication flow
The transmission process of a gRPC unary RPC call over HTTP/2 is as follows:
- The client sends the Magic message: `PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n`;
- the server receives it and checks whether it is correct;
- the client and server send SETTINGS frames to each other and send ACKs to confirm them;
- the client sends a HEADERS frame containing the gRPC fields, ending with End Headers;
- the client then sends a DATA frame containing the request of the RPC call, with End Stream as the data-end flag;
- the server invokes the function and obtains the result;
- the server sends a HEADERS frame containing the gRPC fields, ending with End Headers;
- the server then sends a DATA frame containing the response returned by the RPC call;
- finally, the server sends a HEADERS frame containing the RPC status and message, with End Stream marking the end of the RPC call.
The HTTP/2 HEADERS frame that carries the gRPC call information holds pseudo-headers such as `:method` and `:path` alongside `grpc-`-prefixed fields.
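For illustration, a typical gRPC request HEADERS frame carries fields like the following (generic example values drawn from the gRPC-over-HTTP/2 specification, not from the original article's figure):

```
:method: POST
:scheme: http
:path: /helloworld.Greeter/SayHello
:authority: example.com
content-type: application/grpc+proto
grpc-encoding: gzip
grpc-timeout: 1S
te: trailers
```

The trailing HEADERS frame that closes the call then carries `grpc-status` and `grpc-message`.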
In addition, in a gRPC stream call, DATA frames can be sent multiple times while the server streams results back; after the call, a HEADERS frame can be sent to terminate the RPC process and report the status information.
The Dubbogo 3.0 communication layer will use the same communication flow over the HTTP/2 protocol to guarantee interoperability with gRPC at the transport level.
Dubbogo 3.0's expected communication architecture
In addition to using HTTP/2 as the communication protocol, Dubbogo 3.0 will use the Google Protobuf-based Triple protocol (hereafter referred to as the dubbo3 protocol) as its serialization protocol, laying the foundation for Dubbo to support more programming languages in the future.
The current design of the Dubbogo 3.0 transmission model is as follows:
- To support both unary RPC and stream RPC, a data-flow structure is added on the server and client sides to complete data transfer in the form of asynchronous calls;
- the original TCP communication capability will continue to be supported;
- the dubbo3 protocol is supported on top of the HTTP/2 communication protocol, with a decode process compatible with gRPC's use of Protobuf, ensuring interoperability with gRPC services.
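The data-flow structure mentioned above can be pictured roughly as follows (a hypothetical simplification for illustration, not the actual Dubbogo 3.0 implementation): each call holds buffered send and receive queues, so unary and stream calls share one asynchronous transfer path.

```go
package main

import "fmt"

// message stands in for a serialized dubbo3/gRPC frame payload.
type message []byte

// dataStream is a toy model of the per-call send/receive structure:
// frames are pushed and pulled asynchronously through buffered channels,
// decoupling the caller from the transport goroutine.
type dataStream struct {
	sendQ chan message
	recvQ chan message
}

func newDataStream() *dataStream {
	return &dataStream{
		sendQ: make(chan message, 16),
		recvQ: make(chan message, 16),
	}
}

// Send enqueues a frame for the transport goroutine to write out.
func (s *dataStream) Send(m message) { s.sendQ <- m }

// Recv blocks until the transport goroutine delivers the next frame.
func (s *dataStream) Recv() message { return <-s.recvQ }

func main() {
	s := newDataStream()
	// A real transport loop would shuttle frames over HTTP/2;
	// here we loop one frame back to show the asynchronous hand-off.
	go func() { s.recvQ <- <-s.sendQ }()
	s.Send(message("request"))
	fmt.Println(string(s.Recv())) // prints: request
}
```

A stream call simply keeps pushing frames through the same queues, while a unary call sends exactly one frame each way.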
Application-level service registration and discovery
1. Introduction to application-level service registration and discovery
Dubbogo 3.0 adopts the next-generation service registration and discovery architecture, replacing the old "interface-level registration and discovery" with "application-level registration and discovery".
To put it simply, interface-level registration and discovery organizes data in the registry with RPC services as keys and instance lists as values, while the newly introduced "application-granularity service discovery" uses application names as keys and the list of instances deployed by the application as values. This brings two differences:
- the data mapping changes from RPC Service -> Instance to Application -> Instance;
- there is less data: the registry no longer holds RPC Service entries or their associated configuration information.
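The difference in registry layout can be sketched as follows (service names, application name, and addresses are hypothetical):

```
# Interface-level registration: one key per RPC service
com.example.UserService  -> [10.0.0.1:20000, 10.0.0.2:20000]
com.example.OrderService -> [10.0.0.1:20000, 10.0.0.2:20000]

# Application-level registration: one key per application
shop-app -> [10.0.0.1:20000, 10.0.0.2:20000]
```

With, say, ten interfaces per application, the interface-level layout stores roughly ten times as many entries for the same two instances.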
Under the application-granularity model, the amount of data stored and pushed by the registry is proportional to the number of applications and instances; address-push pressure grows only when the number of applications, or of instances per application, grows.
For the interface-granularity model, the amount of data is positively correlated with the number of interfaces, which, given that an application usually publishes multiple interfaces, is typically tens of times larger than under application granularity. Another key point is that interface definitions are largely an internal matter on the business side; interface granularity therefore makes cluster-size estimation opaque, whereas growth in instances and applications is usually planned by the operations side and is controllable.
Industrial and Commercial Bank of China has measured both models in production: the application-level registration model reduces the amount of data in the registry to 1.68% of the original, and the new model allows ZooKeeper to easily handle services and nodes at the 100,000 scale.
2. Metadata center synchronization mechanism
Because the amount of data in the registry shrinks, RPC-service-related data disappears from it, leaving only application-to-instance data. To ensure that the Consumer side can still correctly perceive the missing RPC service data, we established a separate communication channel between Consumer and Provider. There are currently two options for metadata synchronization:
- a built-in MetadataService;
- an independent metadata center, which coordinates the data through a centralized metadata cluster.
3. Compatibility with older versions of Dubbo-go
To keep the change transparent to users of older Dubbo-go versions and avoid the extensibility cost of explicitly specifying providers, we designed a set of RPC-service-to-application-name mappings, so that the conversion from RPC service to provider application name can be automated on the Consumer side as far as possible.
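The mapping can be pictured as a simple lookup table maintained alongside the registry (names below are hypothetical, and in the real framework the mapping is stored and synchronized centrally rather than hard-coded):

```go
package main

import "fmt"

// serviceToApps maps an RPC service (interface) name to the
// applications that provide it; one interface may be served by
// several applications, so the value is a slice of app names.
var serviceToApps = map[string][]string{
	"com.example.UserService":  {"shop-app"},
	"com.example.OrderService": {"shop-app", "order-app"},
}

// appsFor resolves the provider application names a consumer
// should subscribe to for a given RPC service.
func appsFor(service string) []string {
	return serviceToApps[service]
}

func main() {
	fmt.Println(appsFor("com.example.OrderService")) // prints: [shop-app order-app]
}
```

With this table in place, a consumer that only knows the interface name can still locate the right application keys in the application-level registry.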
Dubbogo 3.0 is intended to be compatible with Dubbo v2.6.x, Dubbo v2.7.x, and Dubbo v3.0.x.
Unified routing support
Conceptually, routing selects a subset of IP addresses from an existing address list according to specific routing rules. Addresses are filtered by the configured rules, and the intersection of all rules is taken; multiple routers form a routing chain, like an assembly line. The final destination address set is selected from the full address table, and a concrete IP address is then chosen by the load-balancing policy.
1. Routing chain
The logic of a routing chain can be expressed as target = rn(...r3(r2(r1(src)))). The internal logic of each router can be abstracted as follows: it partitions the full address set addrs-all into n disjoint address pools addrs-pool-1 ... addrs-pool-n according to its own rules, then intersects the input address list addrs-in with the selected pool to produce its output. Applying this router by router completes the computation of the entire routing chain.
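The composition target = rn(...r2(r1(src))) can be sketched as a chain of filters over an address list (a minimal illustration with made-up router rules, not the framework's actual Router interface):

```go
package main

import (
	"fmt"
	"strings"
)

// router narrows an input address list according to its own rule.
type router func(addrs []string) []string

// routeChain applies each router in order, feeding the output of
// one router in as the input of the next: target = rn(...r2(r1(src))).
func routeChain(routers []router, src []string) []string {
	out := src
	for _, r := range routers {
		out = r(out)
	}
	return out
}

// keepPrefix builds a toy router that keeps only addresses in the
// pool identified by the given prefix (e.g. a region or tag pool).
func keepPrefix(prefix string) router {
	return func(addrs []string) []string {
		var kept []string
		for _, a := range addrs {
			if strings.HasPrefix(a, prefix) {
				kept = append(kept, a)
			}
		}
		return kept
	}
}

func main() {
	src := []string{"hz-10.0.0.1:20000", "hz-10.0.0.2:20000", "sh-10.0.1.1:20000"}
	target := routeChain([]router{keepPrefix("hz-")}, src)
	fmt.Println(target) // prints: [hz-10.0.0.1:20000 hz-10.0.0.2:20000]
}
```

The failover and fallback behaviors described below then decide what to do when a router in this chain yields an empty or unusable subset.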
2. Failover
A failover field can be configured in the routing-rule file. When address lookup against one subset fails, failover selects other subsets and tries them in sequence until an address is found; otherwise, the lookup fails with no address.
3. Fallback routing
In the routing-rule configuration, a match without any conditions can be configured; the final result is that at least one subset is always selected, providing protection against an empty address list.
As one of the three arrows the Dubbogo community set out to build in 2020, to launch in early 2021, Dubbogo 3.0 will bring a different and refreshing development experience.
If you have any questions, you are welcome to join the community DingTalk group: 31363295.
Dubbogo 3.0 is currently being developed in collaboration with the official Dubbo team at Alibaba's middleware group.
The Alibaba Cloud middleware team is looking for developers interested in Dubbo3 (Java & Go), Dapr, and Arthas. You can contact Northlatitude on DingTalk or by email at [email protected].
About the authors
Li Zhixin (GitHub ID: LaurenceLiZhixin), development engineer on the Alibaba Cloud native middleware team, developer in the Dubbogo community, and a Software Engineering student at Sun Yat-sen University; he is skilled in Go and focuses on cloud native, microservices, and related technical directions.
Yu Yu (GitHub @AlexStocks), leader of the dubbo-go project and community, is a front-line programmer with more than 10 years of experience in server-side infrastructure R&D. He has contributed to well-known projects such as Muduo, Pika, Dubbo, and Sentinel-go, and currently works on container orchestration and Service Mesh in the Trusted Native Department of Ant Financial.