RPC and Dubbo
Evolution of system architecture
1. Single application architecture
An architecture in which all functional modules are coded, compiled, packaged, and deployed as one project in a single Tomcat container; maintenance cost is low. Advantages:
- Fast early development; a small team can iterate quickly.
- Simple architecture: a plain MVC application that needs only an IDE to develop and debug.
- Easy to test: unit tests or a browser are enough.
- Easy to deploy
Disadvantages:
- Over time, as the business grows and features are iterated, the project becomes bloated and business coupling becomes severe.
- Adding new business features becomes difficult.
- Core business and peripheral business are mixed together, so problems in one affect the other.
2. Vertical architecture
To solve the problems of the monolithic architecture, systems began to be split vertically by business: the original monolith is divided into multiple independent applications, each deployed separately. Advantages:
- Can be optimized for different modules;
- Convenient horizontal scaling and load balancing, improving fault tolerance.
- Systems are independent of each other, so new business iterations will be more efficient.
Disadvantages:
- Services call each other, and if a service's port or IP address changes, the calling systems must be updated manually.
- After cluster construction, load balancing is complicated.
- Service invocation methods are not uniform: some calls go through HttpClient, some through WebService, and interface protocols are inconsistent.
- Poor service monitoring: beyond port- and process-level monitoring, there is no visibility into metrics such as call success rate, failure rate, or total elapsed time.
3. SOA Architecture
Service-Oriented Architecture (SOA): the idea is to divide the system into appropriately sized, independently deployable modules according to the actual business. The modules are independent of each other and communicate via technologies such as WebService or Dubbo.
RPC
1. RPC core and general flow of call
1) Remote Procedure Call (RPC): simply put, one node requests a service provided by another node.
Call procedure (a minimal stub sketch follows this list):
- Client: the service caller.
- Client Stub: stores the server's address information, packages the client's request parameters into a network message, and sends it to the server over the network.
- Server Stub: receives and unpacks the request message sent by the client, then invokes the local service to process it.
- Server: the real provider of the service.
- Network Service: the underlying transport, either TCP or HTTP.
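A minimal sketch of what a client stub might look like, using a JDK dynamic proxy to package each call into a request message. The RpcRequest class and sendOverNetwork method are hypothetical placeholders, not part of any framework:

```java
import java.io.Serializable;
import java.lang.reflect.Proxy;

// Hypothetical request message the stub packages for the wire.
class RpcRequest implements Serializable {
    String interfaceName;
    String methodName;
    Object[] args;
}

public class ClientStubSketch {
    // Creates a proxy that packages each method call into an RpcRequest.
    @SuppressWarnings("unchecked")
    static <T> T stub(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (proxy, method, args) -> {
                    RpcRequest req = new RpcRequest();
                    req.interfaceName = iface.getName();
                    req.methodName = method.getName();
                    req.args = args;
                    return sendOverNetwork(req); // placeholder: serialize + transmit
                });
    }

    static Object sendOverNetwork(RpcRequest req) {
        throw new UnsupportedOperationException("transport not shown");
    }
}
```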
2. Complete RPC architecture diagram
A typical RPC application scenario involves components such as service discovery, load balancing, fault tolerance, network transport, and serialization; the RPC protocol defines how the program performs network transport and serialization.
3. RPC core functions and implementation
- Service addressing
- Serialization and deserialization of data streams
- Network transmission
1. Service addressing
- In RPC, every function must have its own ID, and this ID is unique across all processes.
- The client must attach this ID when making a remote call, so the client and the server each maintain a mapping table between functions and call IDs (sketched below).
- When the client makes a remote call, it looks up the table, finds the corresponding call ID, and passes it to the server. The server looks up its own table to determine which function the client wants to call, then executes that function's code.
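A minimal sketch of such a mapping table on the server side, assuming the call ID is the "interface#method" string; the registry class and its method names are illustrative, not from any framework:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CallIdRegistry {
    // Call ID (e.g. "com.foo.BarService#sayHello") -> target method
    private final Map<String, Method> table = new ConcurrentHashMap<>();
    private final Map<String, Object> instances = new ConcurrentHashMap<>();

    // Register every public method of a service implementation.
    public void register(Class<?> iface, Object impl) {
        for (Method m : iface.getMethods()) {
            String callId = iface.getName() + "#" + m.getName();
            table.put(callId, m);
            instances.put(callId, impl);
        }
    }

    // Look up and invoke the function the client asked for.
    public Object invoke(String callId, Object[] args) throws Exception {
        Method m = table.get(callId);
        if (m == null) throw new IllegalArgumentException("unknown call ID: " + callId);
        return m.invoke(instances.get(callId), args);
    }
}
```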
2. Serialization and deserialization
In a local call, we simply push the arguments onto the stack and the function reads them from the stack. In a remote call, however, the client and server are different processes and cannot pass parameters through memory. The client must therefore convert the parameters into a byte stream and send it to the server, which converts the byte stream back into a format it can read.
Definition:
Serialization: the process of converting an object into a binary stream.
Deserialization: the process of converting a binary stream back into an object.
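A minimal example using Java's built-in serialization (in practice, RPC frameworks usually use more compact formats such as Protobuf or Hessian):

```java
import java.io.*;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        String[] params = {"hello", "world"};

        // Serialization: object -> byte stream
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(params);
        }
        byte[] bytes = bos.toByteArray(); // this is what travels over the network

        // Deserialization: byte stream -> object
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            String[] restored = (String[]) in.readObject();
            System.out.println(restored[0] + " " + restored[1]);
        }
    }
}
```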
3. Network transmission
The network transport layer passes the call ID and the serialized parameter bytes to the server, and then passes the serialized call result back to the client.
Therefore, to implement an RPC framework, you only need to implement the following three basic components (a combined sketch follows this list):
- Call ID mapping: you can use function name strings directly or integer IDs; the mapping table is usually a hash table.
- Serialization and deserialization: you can write your own, or use a library such as Protobuf or FlatBuffers.
- Network transport: you can write raw Socket code, or use a framework such as Netty.
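A minimal sketch of the transport step using a raw Socket, tying the three components together; it reuses the CallIdRegistry idea sketched earlier (the dispatch method here is a placeholder) and omits error handling:

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class RpcServerSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(20880)) {
            while (true) {
                try (Socket socket = server.accept();
                     ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
                     ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
                    // 1. Read the call ID and deserialize the arguments from the wire
                    String callId = (String) in.readObject();
                    Object[] callArgs = (Object[]) in.readObject();
                    // 2. Look up and execute the target function (see CallIdRegistry above)
                    Object result = dispatch(callId, callArgs);
                    // 3. Serialize the result and send it back to the client
                    out.writeObject(result);
                }
            }
        }
    }

    static Object dispatch(String callId, Object[] args) {
        return "echo:" + callId; // placeholder for CallIdRegistry.invoke
    }
}
```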
Dubbo
Classic RPC framework
Call relationship description:
- The service container is responsible for starting, loading, and running the service provider.
- At startup, service providers register their services with the registry.
- At startup, service consumers subscribe to the registry for the services they need.
- The registry returns a list of service provider addresses to the consumer, and if there are changes, the registry pushes the change data to the consumer based on the long connection.
- The service consumer selects one provider from the address list to call, based on a soft load-balancing algorithm; if the call fails, it selects another.
- Service consumers and providers accumulate call counts and call durations in memory and send the statistics to the monitoring center once a minute.
Dubbo's layered architecture:
- Config: configuration layer
- Proxy: service proxy layer
- Registry: service registration and discovery
- Cluster: routing and load balancing (round-robin, weight, consistent hashing)
- Monitor: monitoring layer
- Protocol: remote invocation layer
- Exchange: information exchange layer
- Transport: network transport layer (e.g., Netty)
- Serialize: data serialization layer
Dubbo's architecture is characterized by connectivity, robustness, scalability, and the ability to evolve toward future architectures.
Connectivity
- The registry is responsible for registering and looking up service addresses, acting like a directory service. Service providers and consumers interact with it only at startup; the registry does not forward requests, so its load is light.
- The monitoring center is responsible for counting the number and duration of service invocations. Statistics are first aggregated in memory and then sent to the monitoring center server once a minute for presentation in reports.
- The service provider registers its services with the registry and reports invocation times to the monitoring center; these times do not include network overhead.
- The service consumer obtains the provider address list from the registry, calls a provider directly according to the load-balancing algorithm, and reports invocation times to the monitoring center; these times include network overhead.
- Connections among the registry, service providers, and service consumers are all long connections; the monitoring center is the exception.
- The registry senses a provider's presence through the long connection; if a provider goes down, the registry immediately pushes an event to notify consumers.
- Even if both the registry and the monitoring center go down, already-running providers and consumers are unaffected, since consumers cache the provider list locally.
- Both the registry and the monitoring center are optional; service consumers can connect directly to service providers.
Robustness
- If the monitoring center goes down, the system keeps working; only some sampled data is lost.
- If the database goes down, the registry can still serve service-list queries from its cache, but it cannot register new services.
- If one registry in a peer cluster goes down, clients automatically switch to another one.
- If all registries go down, service providers and consumers can still communicate through their local caches.
- Service providers are stateless; if any one instance goes down, the service is unaffected.
- If all service providers go down, the consumer application becomes unavailable and reconnects indefinitely, waiting for a provider to recover.
Scalability
- Registries form a peer cluster; deployment instances can be added dynamically, and all clients automatically discover the new registries.
- Service providers are stateless; deployment instances can be added dynamically, and the registry pushes the new provider information to consumers.
Dubbo in Spring Boot
1. Installation
- Install ZooKeeper
- Install the monitoring center, Dubbo Admin
2. Usage
Configuration is done with configuration classes and annotations.
A service must use Dubbo's annotations, as sketched below.
See the official documentation for details.
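A minimal sketch of annotation-based usage, assuming Apache Dubbo's Spring Boot starter; the BarService interface and class names are illustrative:

```java
import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;
import org.springframework.stereotype.Component;

interface BarService {
    String hello(String name);
}

// Provider side: expose the implementation with Dubbo's @DubboService
// (not Spring's @Service) so it gets registered with the registry.
@DubboService
class BarServiceImpl implements BarService {
    public String hello(String name) { return "hello " + name; }
}

// Consumer side: @DubboReference injects a remote proxy, not a local bean.
@Component
class BarClient {
    @DubboReference(check = false, timeout = 1000)
    private BarService barService;

    String call() { return barService.hello("dubbo"); }
}
```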
3. Timeout configuration
```xml
<dubbo:reference interface="com.foo.BarService" check="false"
    timeout="1000"/>
```
The timeout is resolved in the following order of precedence:
- Method level first, then interface level, then global configuration.
- At the same level, the consumer's configuration takes precedence over the provider's.
The provider's configuration is passed to the consumer through the registry as URL parameters.
Common configurations for Dubbo
Startup check:
```xml
<dubbo:reference interface="com.foo.BarService" check="false" />
```
Timeout:
```xml
<dubbo:reference interface="com.foo.BarService" check="false"
    timeout="1000"/>
```
Retry count
Retries may be configured for idempotent operations (query, delete, modify); they must not be configured for non-idempotent operations (create).
```xml
<dubbo:reference id="orderService"
    interface="com.end.dubbo.api.service.OrderService" retries="3" />
```
Multi-version grouping
```xml
<dubbo:reference id="orderService"
    interface="com.end.dubbo.api.service.OrderService" group="g1" />
```
API and SPI
SPI (Service Provider Interface): the caller defines the interface, implementers provide different implementations of it, and the caller chooses the implementation it needs. Common SPI uses:
- Database drivers
- Logging frameworks
- Dubbo extension point development
- Spring Boot auto-configuration
SPI implementation in the JDK
Principle:
In ServiceLoader's load method, we first obtain the context class loader and then construct a ServiceLoader, which holds a lazy iterator. The lazy iterator uses a BufferedReader to read the provider configuration file under the META-INF/services path (the file is named after the interface's fully qualified name), parses the implementation class names listed in the file, and loads each class by its full path through the class loader.
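A minimal usage example; the Robot interface and its implementations are illustrative:

```java
import java.util.ServiceLoader;

// The interface the caller defines.
interface Robot {
    void sayHello();
}

public class JdkSpiDemo {
    public static void main(String[] args) {
        // Reads implementation class names from the file
        // META-INF/services/<fully qualified name of Robot>
        // (one class name per line) and instantiates every one
        // lazily during iteration -- there is no loading by name.
        ServiceLoader<Robot> loader = ServiceLoader.load(Robot.class);
        for (Robot robot : loader) {
            robot.sayHello();
        }
    }
}
```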
Disadvantages:
- Implementations cannot be loaded on demand;
- There is no IoC capability;
- ServiceLoader is not thread-safe, so concurrent use can cause thread-safety issues.
SPI implementation in Dubbo
Dubbo's SPI configuration files map an extension name to an implementation class, which enables on-demand loading and easy extension:
```properties
man=dubbo.impl.Man
woman=dubbo.impl.Woman
```
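A minimal sketch of loading an extension by name with Dubbo's ExtensionLoader, assuming entries like the ones above live in a file under META-INF/dubbo/ named after the interface; the Person interface and extension names are illustrative:

```java
import org.apache.dubbo.common.extension.ExtensionLoader;
import org.apache.dubbo.common.extension.SPI;

// The extension point must be marked with Dubbo's @SPI annotation.
@SPI("man") // "man" is the default extension name
interface Person {
    void speak();
}

public class DubboSpiDemo {
    public static void main(String[] args) {
        // Loads only the implementation registered under the key "woman"
        // in META-INF/dubbo/<interface FQN> -- on demand, unlike JDK SPI.
        Person person = ExtensionLoader.getExtensionLoader(Person.class)
                                       .getExtension("woman");
        person.speak();
    }
}
```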
Thread dispatch model
1. Configuration
```xml
<dubbo:protocol name="dubbo" port="20880" threadpool="fixed" threads="200"
    iothreads="8" accepts="0" queues="100" dispatcher="all"/>
```
Netty
There are two types of threads in Netty: boss threads and worker threads
1) Boss thread
Function: accepts client connections and registers each accepted connection with a worker thread.
Count: typically, the server starts one boss thread per bound port.
2) Worker threads
Function: handles I/O events registered on its connections.
Count: number of CPU cores + 1.
Note: a worker thread can serve multiple connections, but a connection is registered on only one worker thread.
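A minimal Netty bootstrap sketch showing the two thread groups; the port and handler setup are illustrative:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyThreadsSketch {
    public static void main(String[] args) throws InterruptedException {
        // Boss group: accepts connections on the bound port (one thread).
        EventLoopGroup boss = new NioEventLoopGroup(1);
        // Worker group: handles I/O events for registered connections.
        EventLoopGroup worker = new NioEventLoopGroup(
                Runtime.getRuntime().availableProcessors() + 1);
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // codec and business handlers would be installed here
                 }
             });
            b.bind(20880).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
```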
Five dispatch strategies for thread pools
- All (the default): all messages are dispatched to the thread pool, including requests, responses, connection events, disconnection events, heartbeats, and so on. That is, after the worker (I/O) thread receives an event, it submits the event to the business thread pool and goes back to handling I/O.
- Direct: all messages are executed directly on the I/O thread; nothing is dispatched to the thread pool.
- Message: only request and response messages are dispatched to the thread pool; other messages such as connection, disconnection, and heartbeat events are executed directly on the I/O thread.
- Execution: only request messages are dispatched to the thread pool; responses and other events such as disconnections and heartbeats are executed directly on the I/O thread.
- Connection: on the I/O thread, connection and disconnection events are queued and executed one by one; other messages are dispatched to the thread pool.
Dubbo's four common thread pools (a JDK-level sketch follows this list):
- Fixed: a fixed-size thread pool; threads are created at startup, never closed, and always held.
- Cached: a cached thread pool; threads idle for one minute are automatically removed and recreated on demand.
- Limited: an elastic thread pool whose size only grows and never shrinks. The purpose of growing without shrinking is to avoid the performance problems caused by sudden heavy traffic arriving right after a shrink.
- Eager: creates worker threads eagerly. When the number of tasks is greater than corePoolSize but less than maximumPoolSize, new workers are created first to process tasks. When the number of tasks exceeds maximumPoolSize, tasks are put into a blocking queue, and RejectedExecutionException is thrown when the queue is full. (Compared with cached: cached directly throws an exception when tasks exceed maximumPoolSize rather than queueing them.)
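A rough sketch of what the fixed and cached strategies correspond to in JDK terms; the parameters mirror the <dubbo:protocol> attributes shown above, but this is an approximation, not Dubbo's actual source:

```java
import java.util.concurrent.*;

public class DubboPoolSketch {
    // "fixed": threads created up front and never reclaimed.
    static Executor fixed(int threads, int queues) {
        return new ThreadPoolExecutor(threads, threads,
                0L, TimeUnit.MILLISECONDS,
                queues == 0 ? new SynchronousQueue<>()
                            : new LinkedBlockingQueue<>(queues));
    }

    // "cached": idle threads reclaimed after one minute, rebuilt on demand.
    static Executor cached(int cores, int threads, int queues) {
        return new ThreadPoolExecutor(cores, threads,
                60_000L, TimeUnit.MILLISECONDS,
                queues == 0 ? new SynchronousQueue<>()
                            : new LinkedBlockingQueue<>(queues));
    }
}
```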
Dubbo's thread dispatch model, overall steps (subject to the dispatch strategy; the default, all, is used as the example; a simplified sketch follows):
- The main thread on the client side issues a request, obtains a Future, and blocks on get().
- The server receives the request on a worker thread (Netty communication model) and submits it to the server-side thread pool for processing.
- After the server-side thread finishes processing, it returns the result to the client's worker thread pool (Netty communication model); the worker thread then submits the response to the client-side thread pool for processing.
- The client thread fills the Future with the response and wakes up the waiting main thread, which retrieves the result and returns it to the caller.
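A simplified sketch of the client-side blocking pattern using CompletableFuture; the network round trip is simulated with another thread standing in for the client-side thread pool:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DispatchFlowSketch {
    public static void main(String[] args) throws Exception {
        // Stands in for the client-side thread pool that fills the future.
        ExecutorService clientPool = Executors.newFixedThreadPool(2);

        // 1. The main thread issues the request and gets a Future.
        CompletableFuture<String> future = new CompletableFuture<>();
        clientPool.submit(() -> {
            // 2-3. Simulates the round trip: the server pool processes the
            // request and the response arrives back on a client thread.
            future.complete("response"); // 4. fill the future, wake up main
        });

        // The main thread blocks here until the response arrives.
        System.out.println(future.get());
        clientPool.shutdown();
    }
}
```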