One day, as my mother was heading out, she said to my father and me, "We need to give the house a good cleaning." She stuck a note on the door that read "Care for the environment, everyone is responsible," and then she left.

I thought to myself: I would rather go watch my morning glories, I have no time to clean. So I added a stroke to the characters for "everyone" (人人), and the note became "Care for the environment, adults are responsible" (大人有责). Dad didn't feel like cleaning either, so he added a couple more strokes of his own, and the note became "Care for the environment, the wife is responsible" (夫人有责). When Mother came back and read the note, she said, "Oh? What's going on here?"

The background of Dubbo

Heroes emerge in troubled times: with the development of the Internet and the ever-growing scale of web applications, the conventional vertical application architecture can no longer cope. Distributed service architectures and mobile computing architectures are imperative, and a governance system is urgently needed to ensure the orderly evolution of the architecture.

Single Application Architecture

When site traffic is low, only one application is needed, and all functions are deployed together to reduce deployment nodes and costs. At this point, a data access framework (ORM) that simplifies create, read, update and delete (CRUD) work is key.

Vertical Application Architecture

As traffic grows, the speed-up gained by adding machines to a single application becomes smaller and smaller. One way to improve efficiency is to split the application into several unrelated applications. At this point, a web framework (MVC) that accelerates front-end page development is key.

Distributed Service Architecture

As the number of vertical applications increases, interaction between applications becomes inevitable. Core business logic is extracted as independent services, gradually forming a stable service center, so that front-end applications can respond to changing market demands more quickly. At this point, a distributed service framework (RPC) for business reuse and integration is key.

Mobile Computing Architecture

As the number of services grows, problems such as capacity evaluation and the waste of resources by small services gradually emerge. A scheduling center then needs to be added to manage cluster capacity in real time based on access pressure and improve cluster utilization. At this point, a resource scheduling and governance center (SOA) for improving machine utilization is key.

Personal understanding of various architectures

The various architectures mentioned above, in my understanding:

  • Single application architecture: this is essentially the classic SSM (Spring + SpringMVC + MyBatis) or SSH (Spring + Struts + Hibernate) setup, where all functions are deployed in a single project. In this architecture, a data access framework (ORM) designed to simplify CRUD work is key.

  • Vertical application architecture: as business volume grows, a single application ends up covering too much. Take an e-commerce system as an example: the product functions (new products, categories, brands, etc.), the marketing functions (coupons, promotions, etc.) and the trading functions all sit in one application. The project takes ten minutes to start, and every small change requires a full release, which causes great trouble. So the application needs to be split into several separate applications that are deployed independently. Every application needs its own front-end pages, so a web framework (MVC) that accelerates front-end page development is key in this architecture.

  • Distributed service architecture: after the vertical split, there are more and more applications, and the interaction between them becomes closer and closer. It then becomes easy to end up with circular dependencies between applications, and they have to be released together as one bundle. So the core business logic is extracted as independent services, gradually forming a stable service center, so that front-end applications can respond to changing market demands more quickly. At this point, a distributed service framework (RPC) for business reuse and integration is key under this architecture.

  • Mobile computing architecture: after the business is split into services, there are more and more of them. Because services differ in importance, some marginal services still take up dedicated resources, which leads to waste. In this case, a scheduling center should be added to manage cluster capacity in real time based on access pressure and improve cluster utilization. At this point, a resource scheduling and governance center (SOA) for improving machine utilization is key.

What the hell is Dubbo

The distributed service architecture above addresses the problem of having too many applications after the vertical split. In this scenario, an RPC framework for business reuse and integration is needed, and Dubbo is exactly that: a high-performance, Java-based, open-source RPC framework.

What is an RPC framework

RPC stands for Remote Procedure Call. Put simply, one node requests a service provided by another node.

Since it is a remote service call, it is worth comparing it with a local service call:

  • Local service call: if you need to increment the age of a local teacher object by 1, you can implement an addAge() method that takes the teacher object, updates its age, and returns it. The body of the locally called method is located through a function pointer.

  • Remote service invocation: the same operation, except that the addAge() method lives on the server side.

What does it take to implement a remote service invocation? Since the body of the function lives on a remote machine, how do you tell that machine that you want to call this method, and how do you get the arguments there and the result back? That is exactly what an RPC framework has to do.
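To make the comparison concrete, here is a minimal sketch of the local version of the example; the Teacher class and the addAge() method are hypothetical names used purely for illustration:

```java
// Local version of the example: caller and callee live in the same JVM,
// so the call is just a jump to a local method address and the object is shared memory.
public class Teacher {
    private int age;

    public Teacher(int age) {
        this.age = age;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}

class LocalTeacherService {
    // Increment the teacher's age by 1 and return the same object.
    public Teacher addAge(Teacher teacher) {
        teacher.setAge(teacher.getAge() + 1);
        return teacher;
    }
}
```

Turning this into a remote invocation means the addAge() implementation runs in another process, so the Teacher argument has to be serialized, sent across the network, executed there, and the result sent back. The rest of this section looks at what that requires.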

Conditions for remote service invocation

A remote service invocation has to satisfy the following three conditions:

  • First, the client needs to tell the server which function to call. Both sides maintain a mapping between functions and call IDs: when the client makes a remote call, it looks up the ID for the function and sends it, and the server looks up the function by that ID and executes its code (finding the method).

  • Second, the client needs to pass its local parameters to the remote function. In a local call, arguments are simply pushed onto the stack, but a remote call does not happen in the same memory space, so the parameters cannot be passed directly. The client therefore has to turn them into a byte stream and send them to the server, and the server has to turn that byte stream back into a format it can read. This is the serialization and deserialization process (finding the parameters).

  • Once the data is ready, it needs to be transferred. The network transport layer passes the call ID and the serialized parameters to the server, and then passes the serialized result of the computation back to the client. This is why various network protocols (TCP, HTTP, etc.) are needed. The demo in the next section implements exactly these three steps with a JDK dynamic proxy, Java serialization, and a plain TCP socket.

RPC demo

Client:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RPCClient<T> {

    @SuppressWarnings("unchecked")
    public static <T> T getRemoteProxyObj(final Class<?> serviceInterface, final InetSocketAddress addr) {
        // 1. Wrap the local interface call in a JDK dynamic proxy; the proxy performs the remote call.
        return (T) Proxy.newProxyInstance(serviceInterface.getClassLoader(),
                new Class<?>[]{serviceInterface}, new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        Socket socket = null;
                        ObjectOutputStream output = null;
                        ObjectInputStream input = null;
                        try {
                            // 2. Create a Socket client and connect to the remote service provider at the given address.
                            socket = new Socket();
                            socket.connect(addr);
                            // 3. Send the interface name, method name, parameter types and arguments to the server.
                            output = new ObjectOutputStream(socket.getOutputStream());
                            output.writeUTF(serviceInterface.getName());
                            output.writeUTF(method.getName());
                            output.writeObject(method.getParameterTypes());
                            output.writeObject(args);
                            // 4. Read the result returned by the server.
                            input = new ObjectInputStream(socket.getInputStream());
                            return input.readObject();
                        } finally {
                            if (input != null) { input.close(); }
                            if (output != null) { output.close(); }
                            if (socket != null) { socket.close(); }
                        }
                    }
                });
    }
}
```

Server:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.lang.reflect.Method;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.HashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ServiceCenter implements Server {

    private static ExecutorService executor =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    private static final HashMap<String, Class> serviceRegistry = new HashMap<String, Class>();
    private static boolean isRunning = false;
    private static int port;

    public ServiceCenter(int port) {
        ServiceCenter.port = port;
    }

    @Override
    public void start() throws IOException {
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(port));
        System.out.println("Server Start.....");
        try {
            while (true) {
                // Hand every incoming connection over to the thread pool.
                executor.execute(new ServiceTask(server.accept()));
            }
        } finally {
            server.close();
        }
    }

    @Override
    public void register(Class serviceInterface, Class impl) {
        serviceRegistry.put(serviceInterface.getName(), impl);
    }

    @Override
    public boolean isRunning() {
        return isRunning;
    }

    @Override
    public int getPort() {
        return port;
    }

    @Override
    public void stop() {
        isRunning = false;
        executor.shutdown();
    }

    private static class ServiceTask implements Runnable {
        Socket client = null;

        public ServiceTask(Socket client) {
            this.client = client;
        }

        @Override
        public void run() {
            ObjectInputStream input = null;
            ObjectOutputStream output = null;
            try {
                // 1. Read the interface name, method name, parameter types and arguments sent by the client.
                input = new ObjectInputStream(client.getInputStream());
                String serviceName = input.readUTF();
                String methodName = input.readUTF();
                Class<?>[] parameterTypes = (Class<?>[]) input.readObject();
                Object[] arguments = (Object[]) input.readObject();
                // 2. Look up the registered implementation class and invoke the target method by reflection.
                Class serviceClass = serviceRegistry.get(serviceName);
                if (serviceClass == null) {
                    throw new ClassNotFoundException(serviceName + " not found!");
                }
                Method method = serviceClass.getMethod(methodName, parameterTypes);
                Object result = method.invoke(serviceClass.newInstance(), arguments);
                // 3. Serialize the result and write it back to the client.
                output = new ObjectOutputStream(client.getOutputStream());
                output.writeObject(result);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (output != null) {
                    try { output.close(); } catch (IOException e) { e.printStackTrace(); }
                }
                if (input != null) {
                    try { input.close(); } catch (IOException e) { e.printStackTrace(); }
                }
                if (client != null) {
                    try { client.close(); } catch (IOException e) { e.printStackTrace(); }
                }
            }
        }
    }
}
```

The client only needs to know the interface ServiceProducer on the server side. During execution, the server invokes the actual implementation ServiceProducerImpl for the concrete instance, which is the usual object-oriented pattern of a parent-class (interface) reference pointing to a subclass object.
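To tie the two halves together, here is a minimal usage sketch. The Server, ServiceProducer and ServiceProducerImpl types are not shown above; the versions below are assumptions written only to make the demo runnable end to end:

```java
import java.io.IOException;
import java.net.InetSocketAddress;

// Assumed minimal Server interface that ServiceCenter implements.
interface Server {
    void start() throws IOException;
    void register(Class serviceInterface, Class impl);
    boolean isRunning();
    int getPort();
    void stop();
}

// Assumed service interface known to both client and server.
interface ServiceProducer {
    String sendData(String data);
}

// Assumed implementation, deployed only on the server side.
class ServiceProducerImpl implements ServiceProducer {
    @Override
    public String sendData(String data) {
        return "Server received: " + data;
    }
}

public class RPCDemo {
    public static void main(String[] args) throws Exception {
        // Start the service provider in a separate thread and register the implementation.
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Server serviceCenter = new ServiceCenter(8088);
                    serviceCenter.register(ServiceProducer.class, ServiceProducerImpl.class);
                    serviceCenter.start();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        Thread.sleep(1000); // crude wait for the server socket to be ready

        // The consumer only knows the interface; the dynamic proxy does the remote call.
        ServiceProducer proxy = RPCClient.getRemoteProxyObj(
                ServiceProducer.class, new InetSocketAddress("localhost", 8088));
        System.out.println(proxy.sendData("hello rpc"));
    }
}
```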

Coming back to Dubbo: since Dubbo is an RPC framework, it has to satisfy the three conditions above. Next, let's look at what Dubbo contains by going through the modules of the Dubbo project.

Dubbo's module breakdown

In the framework design section of the official Dubbo documentation, there is the following diagram:

From the figure we can clearly see the dependency relationships between the modules. In fact, the figure only shows the key module dependencies; some modules, such as the dubbo-bootstrap cleanup module, are not included. Next, I will give a brief introduction to each module, so that we at least understand what each one does.

Dubbo module

dubbo-registry — Registry module

A cluster mode based on addresses delivered by the registry, plus an abstraction over the various registries.

dubbo-registry-api: abstracts registry registration and discovery, implements some common methods, and lets subclasses focus only on the key methods.

The remaining packages are implementations for specific registries. Dubbo's registry implementations include Multicast, ZooKeeper, Redis, Simple, Consul, Eureka, Nacos, and so on; this module encapsulates the registry implementations that Dubbo supports.
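For illustration, a registry address is configured as a URL whose protocol prefix selects the implementation; assuming the ZooKeeper registry, it typically looks like `zookeeper://127.0.0.1:2181`, and switching to another registry is largely a matter of changing that prefix.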

dubbo-cluster — Cluster module

To avoid a single point of failure, applications are now typically deployed on at least two servers, and heavily loaded services get even more. So the number of providers of the same service in one environment is greater than one, and the service consumer faces a question: which provider should it pick for an invocation? It also has to consider what to do when an invocation fails: retry, throw an exception, or simply log it.

To address these issues, Dubbo defines the cluster interface Cluster and the Cluster Invoker. The purpose of Cluster is to merge multiple service providers into a single Cluster Invoker and expose that Invoker to the service consumer. As a result, the consumer only needs to make remote calls through this one Invoker, leaving it to the cluster module to decide which provider to call and what to do if the call fails.

In short, it masquerades multiple service providers as a single provider, covering load balancing, fault tolerance, routing and so on. The cluster address list can be statically configured or delivered by the registry.

The cluster module is the middle layer between the service provider and the service consumer. It shields the consumer from the individual providers so that the consumer can focus on the remote invocation itself, such as sending requests and receiving the data the provider returns. That is what the cluster module does.

  • Configurator package: Dubbo's basic design principle is to use the URL as the uniform format for configuration information, and all extension points carry configuration by passing URLs around. This package generates configuration information according to those uniform rules.

  • Directory package: a Directory represents multiple Invokers, and its contents change as the registry pushes service changes. An Invoker is the abstraction of a Provider that can be called as a Service; it encapsulates the Provider's address and the Service interface information.

  • Loadbalance package: encapsulates the load-balancing implementations, which select one specific Invoker from the many available; if the selected Invoker fails, another one has to be selected (see the sketch after this list).

  • Merger package: encapsulates the merging of returned results, so that results from different groups can be aggregated into a single method result; it supports multiple data structure types.

  • Router package: encapsulates the implementation of routing rules, which determine the target servers for a Dubbo service invocation. There are two kinds of routing rules, condition-based and script-based, and they are extensible.

  • Support package: encapsulates the various Invokers and Cluster implementations, including the cluster fault-tolerance modes, group-aggregation clusters and their related Invokers.
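To make the Cluster/Invoker idea above concrete, here is a heavily simplified sketch in plain Java. It is not Dubbo's real API: the interface, the random selection and the failover retry below are illustrative assumptions only.

```java
import java.util.List;
import java.util.Random;

// Simplified stand-in for Dubbo's Invoker concept (not the real interface).
interface SimpleInvoker {
    Object invoke(String method, Object[] args) throws Exception;
}

// "Masquerade multiple providers as one": the consumer holds a single invoker,
// which load-balances across providers and retries on failure (failover-style).
class SimpleFailoverClusterInvoker implements SimpleInvoker {
    private final List<SimpleInvoker> providers; // address list, static or pushed by the registry
    private final int retries;
    private final Random random = new Random();

    SimpleFailoverClusterInvoker(List<SimpleInvoker> providers, int retries) {
        this.providers = providers; // assumed non-empty for this sketch
        this.retries = retries;
    }

    @Override
    public Object invoke(String method, Object[] args) throws Exception {
        Exception last = null;
        // Fault tolerance: try up to (retries + 1) providers before giving up.
        for (int i = 0; i <= retries; i++) {
            // Load balancing: pick one provider at random.
            SimpleInvoker selected = providers.get(random.nextInt(providers.size()));
            try {
                return selected.invoke(method, args);
            } catch (Exception e) {
                last = e; // remember the failure and re-select
            }
        }
        throw last;
    }
}
```

The consumer only ever holds this single cluster invoker, which is the "masquerade multiple service providers as one provider" idea described above; Dubbo's real Cluster, Directory, LoadBalance and Router abstractions split the same responsibilities across the packages listed here.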

dubbo-common — Common logic module

Includes utility classes and common models.

The utility classes are shared helper methods, and the common models are models used in a uniform format throughout the project, such as the URL mentioned above.

This has now moved under org.apache.dubbo.
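For illustration, a Dubbo URL packs the protocol, address, service interface and parameters into one string. A provider URL typically looks something like `dubbo://192.168.1.10:20880/com.foo.BarService?version=1.0.0&timeout=3000`; the host, interface and parameters here are made-up values.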

dubbo-config — Configuration module

The config module is Dubbo's external API: users use Dubbo through Config, and it hides Dubbo's internal details. Dubbo provides four configuration methods, namely XML configuration, property configuration, API configuration and annotation configuration, and the config module implements all four.

  • dubbo-config-api: implements the API configuration and the property configuration.
  • dubbo-config-spring: implements the XML configuration and the annotation configuration.
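To get a feel for the API configuration style, here is a rough sketch based on my understanding of the org.apache.dubbo.config classes. The GreetingService interface, the implementation and the addresses are made up, and the exact configuration classes and setters can differ between Dubbo versions, so treat this as an illustration rather than a reference:

```java
import org.apache.dubbo.config.ApplicationConfig;
import org.apache.dubbo.config.ReferenceConfig;
import org.apache.dubbo.config.RegistryConfig;
import org.apache.dubbo.config.ServiceConfig;

public class ApiConfigSketch {

    // Hypothetical service interface shared by provider and consumer.
    public interface GreetingService {
        String hello(String name);
    }

    // Hypothetical implementation, present only on the provider side.
    public static class GreetingServiceImpl implements GreetingService {
        @Override
        public String hello(String name) {
            return "hello, " + name;
        }
    }

    // Provider side (normally its own process): expose the implementation through a registry.
    static void exportProvider() {
        ServiceConfig<GreetingService> service = new ServiceConfig<GreetingService>();
        service.setApplication(new ApplicationConfig("demo-provider"));
        service.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
        service.setInterface(GreetingService.class);
        service.setRef(new GreetingServiceImpl());
        service.export();
    }

    // Consumer side (normally another process): obtain a proxy for the same interface.
    static void consume() {
        ReferenceConfig<GreetingService> reference = new ReferenceConfig<GreetingService>();
        reference.setApplication(new ApplicationConfig("demo-consumer"));
        reference.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
        reference.setInterface(GreetingService.class);
        GreetingService greetingService = reference.get();
        System.out.println(greetingService.hello("dubbo"));
    }
}
```

The XML and annotation styles express the same information; which one you use is mostly a matter of how the rest of the application is wired together.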

dubbo-rpc — Remote call module

Abstracts the various protocols, as well as dynamic proxies. It covers only one-to-one calls and does not concern itself with cluster management. For remote calls the protocol is the central piece, and Dubbo provides many protocol implementations, but the official recommendation is to use Dubbo's own dubbo protocol.

  • dubbo-rpc-api: abstracts dynamic proxies and the various protocols to implement one-to-one calls.
  • The other packages are implementations of the individual protocols.
  • This module relies on the dubbo-remoting module to abstract the various protocols.

dubbo-remoting — Remote communication module

This is effectively the implementation layer of the dubbo protocol; if RPC uses the RMI protocol, this package is not needed. It provides client and server communication based on a variety of frameworks, such as Grizzly, Netty, Tomcat and so on. All RPC protocols other than RMI use this module.

  • dubbo-remoting-api: defines the client and server interfaces.
  • dubbo-remoting-grizzly: client and server implemented on Grizzly.
  • dubbo-remoting-http: client and server implemented on Jetty or Tomcat.
  • dubbo-remoting-mina: client and server implemented on Mina.
  • dubbo-remoting-netty: client and server implemented on Netty 3.
  • dubbo-remoting-netty4: client and server implemented on Netty 4.
  • dubbo-remoting-p2p: the P2P server used by the multicast registry.
  • dubbo-remoting-zookeeper: wraps the ZooKeeper client used to communicate with the ZooKeeper server.

dubbo-container — Container module

A simple Main entry point loads Spring and starts the services. Because services usually do not need the features of a web container such as Tomcat or JBoss, there is no need to load them inside a web container.

  • dubbo-container-api: defines the Container interface and implements the Main method that loads the services.
  • The other three submodules provide the corresponding containers for Main to load.
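For reference, and as an assumption on my part since the entry point differs between versions, starting a service this way usually means running the container's Main class with the containers to load as arguments, for example `java org.apache.dubbo.container.Main spring` (or `com.alibaba.dubbo.container.Main` in the pre-Apache 2.6.x line).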

dubbo-monitor — Monitoring module

A service for counting service calls, recording call durations, and tracing call chains.

  • dubbo-monitor-api: defines the monitor-related interfaces and implements the filters needed for monitoring.
  • dubbo-monitor-default: the default implementation of Dubbo's monitoring features.

dubbo-demo — Sample module

A quick-start example that includes a service provider and a consumer, uses the multicast registry, and is configured with XML. It is described in the official documentation.

Quick start sample address

dubbo-filter — Filter module

This module provides some built-in filters.

  • dubbo-filter-cache: provides a cache filter.
  • dubbo-filter-validation: provides a parameter validation filter.

dubbo-plugin — Plugin module

  • dubbo-qos: provides online operations and maintenance commands.
  • dubbo-auth: a plug-in that provides auth validation.

dubbo-serialization — Serialization module

This module encapsulates the support implementation of various serialization frameworks.

  • dubbo-serialization-api: defines the interfaces for serialization and for data input and output.
  • The other packages are the implementations for the corresponding serialization frameworks. These serialization frameworks are built into Dubbo, and serialization also supports extension.
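As a point of reference (my understanding, worth checking against the version you use): the dubbo protocol uses hessian2 serialization by default, and alternatives such as Kryo or FST can be plugged in through this module's extension mechanism.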

Dubbo's Maven configuration files

  • dubbo-bom/pom.xml: uses a Maven BOM to define the Dubbo version numbers in one place.

Both dubbo-test and dubbo-demo refer to dubbo-bom/pom.xml in their POM files.

  • dubbo-dependencies-bom/pom.xml: uses a Maven BOM to define the version numbers of the third-party libraries that Dubbo depends on.

  • all/pom.xml: defines the packaging script that builds dubbo. When using the Dubbo library, this is the POM file to import.
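To show what the BOM is for: a consuming project imports it in its own pom.xml so that individual Dubbo artifacts no longer need explicit versions. The coordinates below are my assumption and should be checked against the Dubbo version you actually use:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Import the Dubbo BOM: it pins the versions of the dubbo artifacts. -->
    <dependency>
      <groupId>org.apache.dubbo</groupId>
      <artifactId>dubbo-bom</artifactId>
      <version>${dubbo.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- No <version> needed here; it comes from the imported BOM. -->
  <dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo</artifactId>
  </dependency>
</dependencies>
```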

Follow me and don't get lost

Well folks, that's all for this article… Stay tuned for weekly updates on common technology stacks!! If you think this article is well written and that this "little novice monk" has something to offer, please give it a like 👍, a follow ❤️ and a share 👥; it really helps me a lot!!

Freeloading is no good, and creating content is not easy. Everyone's support and recognition is the biggest motivation for my writing. See you in the next article!

The Lost Little Novice Monk | original article

If there are any mistakes in this blog, please point them out in the comments. Thank you very much!