Service dependencies
In distributed architectures, dependencies between services are common, and a business invocation often relies on multiple underlying services.
As shown in the figure below, with synchronous invocation, when the member service is unavailable, the order service's request threads block waiting on it. Under a large volume of requests, the order service's thread resources may be exhausted, leaving it unable to serve external requests. This unavailability then travels up the invocation chain, triggering an avalanche effect across services.
In the evolution of microservices, in order to maximize the advantages of microservices and ensure the high availability of the system, some service support components are needed to assist the effective collaboration between services, which is the scope of service governance.
Service registration and discovery
Servitization reduces tight coupling between systems and makes them easier to maintain and scale horizontally. It can also safeguard system availability through flow control, isolation, and degradation. The following describes the existing servitization design.
The core of microservice governance is service registration and discovery, so the choice of component depends largely on how it solves that problem. There are many open source frameworks in this area, the most common being Zookeeper and Eureka.
When Zookeeper is used as the registry, its CP design (consistency and partition tolerance) means availability must be supplemented separately. Typically, service provider information is cached on the calling side.
Load balancing
Load balancing is integrated into service consumption in the form of a library. When a service consumer needs to access a service, its built-in load balancing component queries the service registry for the list of available service providers, selects a target address according to a load balancing policy, and finally sends the request to that target service.
Random strategy:
Randomly select one node from the list of available service nodes.
Polling strategy:
Call the available service nodes in sequence.
Weighted polling strategy:
Poll the available service nodes according to fixed weights.
Minimum active number strategy:
Track the number of in-flight requests on each available service node and select the node with the fewest.
Local first policy:
The caller and provider of a service may be deployed on the same machine; invoking the local instance avoids the performance penalty of a network call.
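The random, polling, and weighted polling strategies above can be sketched in a few lines. This is a minimal illustration, not the client's actual implementation; the node addresses and weights in the usage example are made up.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of three client-side selection policies over a provider list.
public class LoadBalancer {
    private final AtomicInteger cursor = new AtomicInteger();

    // Random strategy: pick any available node.
    public String random(List<String> nodes) {
        return nodes.get(ThreadLocalRandom.current().nextInt(nodes.size()));
    }

    // Polling (round-robin) strategy: walk the list in sequence.
    public String roundRobin(List<String> nodes) {
        int i = Math.floorMod(cursor.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }

    // Weighted polling: each node is chosen in proportion to its fixed weight.
    public String weightedRoundRobin(List<String> nodes, int[] weights) {
        int total = 0;
        for (int w : weights) total += w;
        int slot = Math.floorMod(cursor.getAndIncrement(), total);
        for (int i = 0; i < nodes.size(); i++) {
            slot -= weights[i];
            if (slot < 0) return nodes.get(i);
        }
        throw new IllegalStateException("unreachable for positive weights");
    }
}
```

The minimum active number strategy additionally requires the client to increment a counter before each call and decrement it afterward, which is why it is usually built into the invocation client rather than layered on top.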
Service invocation client
The service invocation client provides transparent and efficient RPC remote invocation. It embeds service governance policies such as service registration and discovery, load balancing of service invocations, and service isolation and fault tolerance, and it provides service monitoring and governance capabilities.
Here, the Hystrix command pattern is used to encapsulate REST invocations. Policies such as isolation, timeout, traffic limiting, degradation, and load balancing are persisted in Zookeeper, discovered through the service discovery mechanism, and applied to service invocation. Successes and failures of services are reported to InfluxDB using Spring asynchronous event notifications.
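The essence of the command pattern described above is to run the REST call on a worker thread, bound it with a timeout, and fall back to degradation logic on any failure. The sketch below illustrates just that idea using only the JDK; the real HystrixCommand adds circuit breaking, metrics, and per-command thread pools on top of it. The class and pool size are illustrative, not from the actual client.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal sketch of a Hystrix-style command: timeout-bounded call plus fallback.
public class RestCommand<T> {
    // Daemon threads so a blocked worker cannot keep the JVM alive.
    private static final ExecutorService POOL =
        Executors.newFixedThreadPool(4, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

    private final Supplier<T> call;      // the REST invocation
    private final Supplier<T> fallback;  // degradation logic
    private final long timeoutMs;

    public RestCommand(Supplier<T> call, Supplier<T> fallback, long timeoutMs) {
        this.call = call;
        this.fallback = fallback;
        this.timeoutMs = timeoutMs;
    }

    public T execute() {
        Future<T> f = POOL.submit(call::get);
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (Exception e) {   // timeout, interruption, or call failure
            f.cancel(true);       // free the worker thread
            return fallback.get();
        }
    }
}
```

Because the caller's thread only waits up to `timeoutMs`, a stalled dependency can no longer block request threads indefinitely, which addresses the avalanche scenario described earlier.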
Service governance
Service monitoring
Hystrix Dashboard
The Hystrix Dashboard is used to monitor Hystrix indicators in real time. The real-time feedback from the Hystrix Dashboard helps us quickly identify problems in the system.
For monitoring a cluster environment, Turbine (provided by Netflix) can be used. Download the Turbine war package from the Maven central repository (search.maven.org), deploy it, and modify the cluster node configuration. Then add the Turbine address http://localhost:{port}/turbine.stream?cluster=default as a monitored stream in the Hystrix Dashboard.
```properties
turbine.aggregator.clusterConfig=default
turbine.instanceUrlSuffix=:8080/gateway/hystrix.stream
turbine.ConfigPropertyBasedDiscovery.default.instances=10.66.70.1,10.66.70.2,10.66.70.3
```
The Hystrix Dashboard represents the health of each instance through color, degrading from green through yellow and orange to red. The size of the solid circle also scales with the instance's request volume: the larger the traffic, the larger the circle. This makes it easy to spot faulty or heavily loaded instances among a large number of instances.
Grafana monitoring
A Spring interceptor records service invocation logs, which are collected, analyzed, and reported to InfluxDB; Grafana then visualizes service invocation information in near real time.
The success and failure counts collected by the client are reported to InfluxDB via Spring asynchronous events. Grafana visualizes the monitoring data and pushes alerts for service exceptions.
Service governance
The service discovery and governance class UML diagram for the service invocation client is as follows:
The service governance platform manages the resource groups of each service and manages core services in independent resource groups. It monitors service invocation pressure, average latency, error counts, and invocation trends, and dynamically adjusts the timeout, traffic limiting, degradation, resource pool, and load balancing policies of individual services.
Resource isolation
Thread pool isolation is applied to service invocations, so that thread exhaustion caused by one failing service cannot propagate to others. Core flows such as login, registration, product information, order, and payment can be given dedicated thread pools on top of the basic thread isolation, ensuring that core services are not affected.
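The bulkhead idea above can be sketched as one bounded pool per dependency, so a stalled member service cannot exhaust the threads used to call the order service. Pool sizes, queue depth, and service names below are illustrative assumptions, not values from the text.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bulkhead sketch: each dependency gets its own bounded thread pool.
public class Bulkheads {
    private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    private ExecutorService poolFor(String service) {
        return pools.computeIfAbsent(service, s ->
            new ThreadPoolExecutor(4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(8),            // bounded queue
                r -> {                                   // daemon worker threads
                    Thread t = new Thread(r, "pool-" + s);
                    t.setDaemon(true);
                    return t;
                },
                new ThreadPoolExecutor.AbortPolicy()));  // reject when saturated
    }

    // Throws RejectedExecutionException when the service's pool is saturated,
    // failing fast instead of consuming the caller's threads.
    public <T> Future<T> submit(String service, Callable<T> call) {
        return poolFor(service).submit(call);
    }
}
```

Giving core flows (login, order, payment) their own pools means a flood of slow calls to a non-core dependency is rejected at that dependency's bulkhead rather than starving the core services.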
Circuit breaking
Different timeout policies can be applied to different interface requests. When a call times out, the service degradation logic is executed to avoid dragging the service down. If the number of exceptions from a dependent service exceeds the threshold, the circuit to that service is opened, and Hystrix periodically checks whether the service has recovered.
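The open-then-periodically-retry behavior can be captured by a small state machine: the circuit opens after a run of failures, then allows a trial request once a cool-down has elapsed, as Hystrix's periodic recovery check does. This is a minimal sketch; the threshold and cool-down values are illustrative, and real Hystrix uses a rolling error percentage rather than a consecutive-failure count.

```java
// Minimal circuit-breaker state machine: closed -> open -> half-open trial.
public class CircuitBreaker {
    private final int failureThreshold;
    private final long coolDownMs;
    private int consecutiveFailures = 0;
    private long openedAt = -1;

    public CircuitBreaker(int failureThreshold, long coolDownMs) {
        this.failureThreshold = failureThreshold;
        this.coolDownMs = coolDownMs;
    }

    // Closed: always allow. Open: allow one trial after the cool-down.
    public synchronized boolean allowRequest(long nowMs) {
        if (consecutiveFailures < failureThreshold) return true;
        return nowMs - openedAt >= coolDownMs;
    }

    // A success closes the circuit again.
    public synchronized void recordSuccess() { consecutiveFailures = 0; }

    // The failure that reaches the threshold opens the circuit.
    public synchronized void recordFailure(long nowMs) {
        consecutiveFailures++;
        if (consecutiveFailures == failureThreshold) openedAt = nowMs;
    }
}
```

While the circuit is open, callers skip straight to the degradation logic, so a broken dependency costs a fast local check instead of a blocked thread.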
Degradation
If a service invocation fails, times out, the circuit breaker is open, or the thread pool or semaphore capacity is exceeded, the service executes its fallback logic, with support for fault-tolerance strategies such as fail-fast and fail-safe.
Traffic limiting
Gateway-level traffic limiting can be combined with Nginx rate limiting to prevent traffic peaks from overloading the system and affecting core services.
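A common way to implement the gateway-side limiting described above is a token bucket: requests consume tokens that refill at a fixed rate, so a burst is absorbed up to the bucket's capacity and sustained excess traffic is rejected. This sketch is an illustration of the technique, not the gateway's actual limiter; capacity and refill rate are illustrative.

```java
// Token-bucket rate limiter: capacity bounds bursts, refill rate bounds throughput.
public class TokenBucket {
    private final long capacity;
    private final double refillPerMs;
    private double tokens;
    private long lastRefillMs;

    public TokenBucket(long capacity, double refillPerSecond, long nowMs) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;          // start full
        this.lastRefillMs = nowMs;
    }

    // Returns true if the request may proceed, false if it should be rejected.
    public synchronized boolean tryAcquire(long nowMs) {
        tokens = Math.min(capacity, tokens + (nowMs - lastRefillMs) * refillPerMs);
        lastRefillMs = nowMs;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```

Rejected requests can be answered immediately with a degradation response, which is cheaper than queueing them and risking the thread exhaustion discussed earlier.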
Load Balancing Policy
Different load balancing policies can be applied to different services, including polling, weighted polling, random, local first, and minimum active policies.
Service Mesh
A Service Mesh is an infrastructure layer that handles communication between services; it is essentially an abstraction layer on top of TCP/IP. In practice it is usually implemented as a set of lightweight network proxies deployed alongside the application, without the application needing to be aware of them. Service Mesh is a newer approach to service governance that sinks governance from the application layer down into the infrastructure layer.
The Service Mesh is deployed on each host as an independent proxy process that can be shared by multiple consumer applications on that host for service discovery and load balancing.
The Service Mesh separates the logic for service discovery, load balancing, circuit breaking, and traffic limiting out of the consumer's process into an independent proxy, which then handles service discovery, routing (load balancing), circuit breaking, traffic limiting, security control, and monitoring.
Service Mesh has the following characteristics:
- An intermediate layer of communication between applications
- Lightweight Web proxy
- The application is not aware
- Decouple application retry, timeout, monitoring, tracing, and service discovery
Current open source Service Mesh solutions include Linkerd (from Buoyant) and Istio, led by vendors such as Google and IBM. Linkerd is more mature and stable, while Istio has a richer and more powerful design and a relatively stronger community.