Last time we covered the history of distributed architecture and the issues a distributed architecture needs to consider; this time we continue with distributed architecture.

Lightweight architectures use HTTP + Nginx

How do you solve load balancing, fault tolerance, service configuration, and health checks? Do you write each of them by hand, or is there a ready-made solution that provides these functions out of the box? Nginx supports all of them, which is why enterprises adopt the HTTP + Nginx approach for lightweight architectures.
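To make that trade-off concrete, here is a minimal sketch in plain Java of what "coding it one by one" looks like: hand-rolled round-robin plus failover over a hard-coded server list (the addresses are made up). Nginx's upstream module replaces exactly this kind of code and adds health checks and central configuration on top.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hand-rolled load balancing + fault tolerance over a hard-coded server list.
public class HandRolledLoadBalancer {
    private final List<String> servers = List.of("http://192.168.0.11:8080",
                                                 "http://192.168.0.12:8080");
    private final AtomicInteger next = new AtomicInteger();

    public String call(String path) {
        // Try every server once: round-robin for load balancing, skip-on-error for fault tolerance.
        for (int attempt = 0; attempt < servers.size(); attempt++) {
            String base = servers.get(Math.abs(next.getAndIncrement()) % servers.size());
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(base + path).openConnection();
                conn.setConnectTimeout(1000);
                if (conn.getResponseCode() == 200) {
                    return base;               // in real code you would read the response body here
                }
            } catch (Exception e) {
                // fall through and try the next server
            }
        }
        throw new IllegalStateException("all servers failed");
    }
}
```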

If Nginx itself fails, the whole service stops working, so Keepalived can be added in front of Nginx to keep it highly available.

Load balancing is handled inside Nginx, and Nginx itself can be split vertically by business. But what if a request from server1 needs to go directly to serverN? The architecture is too lightweight to support that.

Some people like to say that when performance is not enough you just add servers. Whether that works depends on whether the system supports elastic scaling; if it does not, adding servers is useless.

  • Advantages

It’s easy and fast, and it costs almost nothing to learn

  • Applicable scenario

Lightweight distributed systems and partially distributed architectures.

  • Bottlenecks

HTTP transmission, JSON serialization, operations efficiency, development efficiency, and the centralized load on Nginx.

1. HTTP transport

HTTP transmission itself is relatively heavy: every request carries a request line, request headers, and a request body, so more bytes go over the wire. With RPC you do not have to worry about this.
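A rough, self-contained illustration of that overhead (the URL, host, and binary framing below are made up for the example): for a tiny payload, the HTTP headers can dwarf the body, while a binary RPC frame only needs a small prefix.

```java
// Compares the size of a literal HTTP/1.1 request against a hypothetical compact RPC frame.
public class HttpOverheadDemo {
    public static void main(String[] args) {
        String body = "{\"id\":42}";                       // 9-byte JSON payload
        String httpRequest =
                "POST /user/get HTTP/1.1\r\n" +
                "Host: api.example.com\r\n" +              // hypothetical host
                "Content-Type: application/json\r\n" +
                "Content-Length: " + body.length() + "\r\n" +
                "\r\n" +
                body;
        // A minimal binary RPC frame might carry only a length prefix, a method id
        // and the payload (this framing is invented purely for comparison).
        int rpcFrameBytes = 4 /* length */ + 2 /* method id */ + body.length();

        System.out.println("HTTP request bytes: " + httpRequest.length()); // ~110 bytes
        System.out.println("Sketch RPC frame bytes: " + rpcFrameBytes);    // ~15 bytes
    }
}
```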

2. JSON serialization

JSON serialization efficiency is genuinely not high; it is even lower than Java's built-in binary serialization, and the biggest bottleneck is JSON parsing.
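Rather than taking that on faith, it can be measured. The sketch below times JSON parsing against JDK binary deserialization for the same simple object; it assumes Jackson is on the classpath, and the exact numbers depend heavily on the object shape and library versions.

```java
import com.fasterxml.jackson.databind.ObjectMapper;   // assumes Jackson on the classpath
import java.io.*;

public class SerializationCompare {
    // Simple DTO; must be Serializable for the JDK path.
    public static class User implements Serializable {
        public long id = 1L;
        public String name = "test";
    }

    public static void main(String[] args) throws Exception {
        User user = new User();
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(user);

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(user);
        }
        byte[] jdkBytes = buffer.toByteArray();

        // Very rough timing loop: parse the same JSON vs. deserialize the same bytes.
        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            mapper.readValue(json, User.class);
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(jdkBytes))) {
                in.readObject();
            }
        }
        long t3 = System.nanoTime();
        System.out.printf("JSON parse: %d ms, JDK deserialize: %d ms%n",
                (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);
    }
}
```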

3. Operation and maintenance efficiency

Every server (server1, server2, and so on) has to be configured in Nginx by hand, which increases the workload for the operations staff.

4. Development efficiency

Also not high: requests still have to be parsed by hand and payloads assembled by hand, and both are tedious.

5. Centralized load on Nginx

All layer-to-layer communication goes through Nginx, so the Nginx hub carries the entire load. That can never be as fast as a direct connection; after all, there is a middleman in between.

  • Given these bottlenecks, a large system needs a more specialized solution that does the following:

Spring Cloud and Dubbo are both designed around these points.

1. Decentralization: clients connect directly to servers
2. Dynamic registration and discovery
3. Soft load balancing
4. Efficient and stable network transmission
5. Highly fault-tolerant serialization

  • (1) Registry logic

1. The service provider dynamically registers its information with the registry.
2. The client fetches the provider information from the registry and stores it in a local cache.
3. The registry monitors provider status in real time and notifies clients of any changes.
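A minimal in-memory sketch of those three steps (real registries such as ZooKeeper, Eureka, or Nacos add heartbeats, persistence, and clustering):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Providers register, consumers subscribe and cache, changes are pushed out.
public class ServiceRegistry {
    // service name -> provider addresses
    private final Map<String, Set<String>> providers = new ConcurrentHashMap<>();
    // service name -> subscriber callbacks (the clients' local-cache updaters)
    private final Map<String, List<Consumer<Set<String>>>> listeners = new ConcurrentHashMap<>();

    // 1. A provider registers its address for a service.
    public void register(String service, String address) {
        providers.computeIfAbsent(service, s -> ConcurrentHashMap.newKeySet()).add(address);
        notifyListeners(service);
    }

    // 3. The registry notices a provider going away (e.g. missed heartbeats) and notifies clients.
    public void unregister(String service, String address) {
        Set<String> addresses = providers.get(service);
        if (addresses != null) {
            addresses.remove(address);
            notifyListeners(service);
        }
    }

    // 2. A client subscribes; it gets the current provider list to store in its local cache.
    public Set<String> subscribe(String service, Consumer<Set<String>> onChange) {
        listeners.computeIfAbsent(service, s -> new CopyOnWriteArrayList<>()).add(onChange);
        return providers.getOrDefault(service, Collections.emptySet());
    }

    private void notifyListeners(String service) {
        Set<String> current = providers.getOrDefault(service, Collections.emptySet());
        listeners.getOrDefault(service, Collections.emptyList())
                 .forEach(listener -> listener.accept(current));
    }
}
```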

  • (2) Call logic

1. Load balancing
2. Fault tolerance
3. Transparency to the service caller: just as when working with a database, you only program against the corresponding interface and the framework completes the call for you.
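A sketch of the consumer side in plain Java: a dynamic proxy gives the caller transparency, round-robin over the locally cached provider list gives load balancing, and trying the next provider on failure gives fault tolerance. The transport call is only a placeholder.

```java
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RpcClient {
    private final AtomicInteger index = new AtomicInteger();

    @SuppressWarnings("unchecked")
    public <T> T refer(Class<T> iface, List<String> cachedProviders) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface},
                (proxy, method, args) -> {
                    RuntimeException last = null;
                    // Fault tolerance: try each provider at most once (failover).
                    for (int i = 0; i < cachedProviders.size(); i++) {
                        // Load balancing: simple round-robin over the cached provider list.
                        String address = cachedProviders.get(
                                Math.abs(index.getAndIncrement()) % cachedProviders.size());
                        try {
                            return doRemoteCall(address, method.getName(), args);
                        } catch (RuntimeException e) {
                            last = e;   // remember the failure and try the next provider
                        }
                    }
                    throw last != null ? last : new IllegalStateException("no providers available");
                });
    }

    // Placeholder: a real implementation serializes the request and sends it over the wire.
    private Object doRemoteCall(String address, String methodName, Object[] args) {
        throw new UnsupportedOperationException("transport not implemented in this sketch");
    }
}
```

A caller would then just write `UserService users = client.refer(UserService.class, cachedProviders)` and invoke methods on the interface as if it were local.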

  • (3) Transmission module

Mina, Servlet Container, Netty
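As an example of the transport layer, here is a minimal Netty server bootstrap (Netty dependency assumed; the port and the echo handler are placeholders for a real RPC codec and dispatcher):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

// Minimal Netty server bootstrap: the transport layer an RPC framework builds on.
public class RpcTransportServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup worker = new NioEventLoopGroup();  // handles I/O
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, worker)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline()
                              .addLast(new StringDecoder())
                              .addLast(new StringEncoder())
                              .addLast(new SimpleChannelInboundHandler<String>() {
                                  @Override
                                  protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                      // A real framework would decode an RPC request here,
                                      // invoke the target service and write back the result.
                                      ctx.writeAndFlush("echo:" + msg);
                                  }
                              });
                        }
                    });
            ChannelFuture future = bootstrap.bind(20880).sync(); // 20880: Dubbo's default port
            future.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
```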

  • (4) Serialization module

Kryo, Hessian, Java, Protobuf, JSON, XML
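In most frameworks the serialization module is just a small interface that these implementations plug into. A sketch with JDK serialization as the baseline:

```java
import java.io.*;

// Kryo, Hessian, Protobuf, JSON, XML implementations all sit behind the same contract.
public interface Serializer {
    byte[] serialize(Object obj) throws IOException;
    <T> T deserialize(byte[] data, Class<T> type) throws IOException;

    // JDK serialization as the baseline implementation (objects must be Serializable).
    class JdkSerializer implements Serializer {
        @Override
        public byte[] serialize(Object obj) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return bytes.toByteArray();
        }

        @Override
        public <T> T deserialize(byte[] data, Class<T> type) throws IOException {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
                return type.cast(in.readObject());
            } catch (ClassNotFoundException e) {
                throw new IOException(e);
            }
        }
    }
}
```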

  • Logic for all RPC frameworks

Mainstream framework comparison

  • Spring Cloud

It is a whole technology stack:

Service discovery: Netflix Eureka
Client-side load balancing: Netflix Ribbon
Circuit breaker: Netflix Hystrix
Service gateway: Netflix Zuul
Distributed configuration: Spring Cloud Config
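A minimal consumer sketch, assuming the spring-cloud-starter-netflix-eureka-client dependency and a hypothetical service registered under the name user-service: the @LoadBalanced RestTemplate lets Ribbon resolve the logical service name to a concrete instance.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

// Registers this application with the discovery server (Eureka) and exposes a
// load-balanced RestTemplate for calling other registered services by name.
@SpringBootApplication
@EnableDiscoveryClient
public class ConsumerApplication {

    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }
}

// Somewhere in a controller or service:
//   String user = restTemplate.getForObject("http://user-service/users/1", String.class);
```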

  • Dubbo

Provider: the service provider that exposes services
Container: the container the service runs in
Consumer: the consumer that invokes services
Registry: the service registration and discovery center
Monitor: the monitoring center that collects call statistics (optional)

Official site: dubbo.apache.org/zh-cn/docs/…
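A provider sketch using Dubbo's API configuration (XML and annotations are more common in practice); the ZooKeeper address and the UserService names are assumptions for illustration only.

```java
import org.apache.dubbo.config.ApplicationConfig;
import org.apache.dubbo.config.RegistryConfig;
import org.apache.dubbo.config.ServiceConfig;

// The container process starts the provider, which exposes UserService and
// registers it with the registry (a local ZooKeeper is assumed here).
public class ProviderBootstrap {
    public static void main(String[] args) throws Exception {
        ServiceConfig<UserService> service = new ServiceConfig<>();
        service.setApplication(new ApplicationConfig("user-provider"));
        service.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));
        service.setInterface(UserService.class);
        service.setRef(new UserServiceImpl());
        service.export();                      // exposes the service and registers it

        System.in.read();                      // keep the container process alive
    }

    public interface UserService {
        String findName(long id);
    }

    public static class UserServiceImpl implements UserService {
        @Override
        public String findName(long id) {
            return "user-" + id;
        }
    }
}
```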

PS: The mainstream Java microservice frameworks are Spring Cloud and Dubbo. Their designs are both rooted in distributed design ideas, centered on service registration, discovery, invocation, and load balancing. Make sure you understand this design approach; it is the best way to learn Spring Cloud and Dubbo. We will start with Dubbo.