Today's session begins; your feedback and suggestions are welcome.

The three elements of microservices architecture are business modeling, technology architecture, and the R&D process.

The first element of microservice architecture: business modeling

Why consider this element first? The essential difference between microservices architecture and traditional SOA lies in the granularity of the services and in the business-oriented, componentized nature of the services themselves. For service modeling, we first need to define the categories of services and their relationship to the business, drawing domain boundaries as clearly as possible.

For service modeling, it is recommended to use the Domain-Driven Design (DDD) method: identify each subdomain within the domain, judge whether these subdomains are independent, and consider the interactions between them in order to delineate the boundary of each Bounded Context.

When dividing a domain, the mainstream classification approach distinguishes three types of subdomains: core subdomains, supporting subdomains, and generic subdomains. The system's core business belongs to the core subdomain; a subdomain that focuses on supporting the business is a supporting subdomain; and capabilities that serve as general infrastructure belong to the generic subdomain. Take an e-commerce system as an example.

Services are modeled around business capabilities, which often form a hierarchical structure. In my experience, the services in a business architecture can be divided into the following types: basic services, generic services, custom services, and other services. Here is an example diagram of business service layering based on an e-commerce scenario, as follows:

Every industry and company has different business systems and product forms, so I will not expand too much on the application scenarios of business modeling here. In the rest of the course, however, we will use a concrete case to show how to complete the business modeling of a system based on DDD, helping you master how to apply DDD to domain modeling in day-to-day development.

The second element of microservices architecture: technology architecture

In this course, I have also distilled eight technical systems based on the mainstream microservice implementation technologies in the industry: service communication, service governance, service routing, service fault tolerance, service gateway, service configuration, service security, and service monitoring.

Service communication

For microservices architecture, we focus on network connection patterns, I/O models, and service invocation patterns.

We know that there are two basic types of network connection based on the TCP protocol, commonly known as long connections and short connections. The Dubbo framework uses long connections, while Spring Cloud, which we cover in this course, uses short connections.

Another concern for communication between services is the I/O model. The I/O model can also be implemented in a variety of ways, including blocking and non-blocking I/O. In terms of service gateways, services like Netflix’s Zuul are blocking I/O, while Spring’s homegrown Spring Cloud Gateway uses non-blocking I/O.

Another topic in service communication is invocation, for which there are two main mechanisms: synchronous invocation and asynchronous invocation. To simplify things for developers, an async-to-sync mechanism is usually adopted: developers write synchronous method calls, while the framework itself performs the remote processing asynchronously based on mechanisms such as Future.
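As a rough illustration of this async-to-sync idea, here is a generic sketch that is not tied to any particular RPC framework; all names in it are made up:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration: the caller sees a synchronous API,
// while the "framework" performs the remote call asynchronously.
public class SyncOverAsyncClient {

    // Stand-in for the framework firing the remote call on another thread.
    private CompletableFuture<String> invokeRemoteAsync(String request) {
        return CompletableFuture.supplyAsync(() -> "response-for-" + request);
    }

    // What the developer actually calls: a plain synchronous method.
    public String invoke(String request) throws Exception {
        CompletableFuture<String> future = invokeRemoteAsync(request);
        // The framework blocks on the Future internally, turning async into sync.
        return future.get(3, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new SyncOverAsyncClient().invoke("hello"));
    }
}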

Service governance

A service registry is a repository of routing information needed for service invocation, a medium of interaction between service providers and service consumers, and acts as a service registration and discovery server. Major microservice frameworks such as Dubbo and Spring Cloud build service registries based on distributed system coordination tools such as Zookeeper and Eureka.

Service routing

Spring Cloud and other mainstream microservice frameworks also have built-in client load balancing components such as Ribbon.

On the other hand, the starting point of load balancing is service distribution rather than routing, and common static and dynamic load balancing algorithms cannot achieve fine-grained routing management. For that we can use routing rules. A common implementation of routing rules is the whitelist or blacklist: the addresses of the services to be routed are added to a routing pool that controls whether they are visible for routing. Routing rules are likewise a common feature of microservice development frameworks.

Service fault tolerance

There are a number of technical components related to service fault tolerance, including cluster fault tolerance policies represented by Failover, service isolation mechanisms represented by thread-level and process-level isolation, service rate limiting mechanisms represented by the sliding window and token bucket algorithms, and the service circuit breaker mechanism. In terms of technical implementation, some of these mechanisms in Spring Cloud are included in the service gateways described below, while others are packaged as separate development frameworks, such as the Spring Cloud Circuit Breaker component, which is specifically designed to implement service circuit breakers.

Service gateway

The core point of a service gateway is that all clients and consumers access microservices through a unified gateway and handle all non-business functions at the gateway layer.

In terms of functional design, the service gateway may provide capabilities such as authentication, monitoring, caching, request management, and static response handling, while also converting message formats between client and server. On the other hand, flexible routing policies can also be defined at the gateway layer; for certain APIs we need to set restrictions such as whitelists and routing rules. In this course, we will introduce these capabilities based on two gateways, Netflix Zuul and Spring Cloud Gateway.

Service configuration

In a microservice architecture, given the number of services and the dispersion of configuration information, it is generally necessary to introduce the design idea of a configuration center and the related tools. Like a registry, a configuration center is a basic component of a microservice architecture and serves as a central management point; the difference is that the configuration center manages configuration information rather than service instance information.

To meet the above requirements, the configuration center usually relies on the distributed coordination mechanism, which ensures that the configuration information can be managed consistently and in real time among services in the distributed environment. A leading open source distributed coordination framework such as Zookeeper can be used to build a configuration center. Of course, Spring Cloud also provides a dedicated configuration center implementation tool, Spring Cloud Config.

Service security

Generally speaking, access security is built around authentication and authorization: we first need to identify the user and then determine whether that user may access the specified resource. From the perspective of an individual microservice, each service integrates with the authorization server to obtain an access token. From the perspective of interactions among multiple services, we need to ensure that tokens propagate effectively between microservices. Internally, we can restrict access to service resources using different access policies.

In order to achieve secure access to microservices, we usually use OAuth2 protocol to realize the authorization mechanism for access to services, and use JWT technology to build a lightweight authentication system. The Spring family also provides the Spring Security and Spring Cloud Security frameworks to complete the building of these components.

Service monitoring

In a microservices architecture, once the number of services reaches a certain level, we inevitably encounter two core problems. One is how to manage the invocation relationships between services; the other is how to track the process and results of a business flow. This requires building a distributed service tracing mechanism.

Establishing a distributed tracing mechanism requires the creation, collection, storage, and querying of call-chain data, as well as operational and visual management of that data. It is hard for a single tool or framework to accomplish all of these tasks, so when developing microservice systems we usually integrate multiple frameworks for tracing; Spring Cloud, for example, provides an integration of Spring Cloud Sleuth with Zipkin.

The third element of microservices architecture: the R&D process

Martin Fowler also put forward the r&d management concept of organizing teams around “business functions” when introducing microservices architecture.

When looking for ways to break up a large application, the development process often revolves around product teams, project management, and large front-end and server-side teams, often referred to as functional teams. Any requirement, large or small, will lead to cross-team collaboration, increasing communication and collaboration costs.

Microservices architectures tend to organize services around business functions rather than technical capabilities. As a result, teams are cross-functional, and each service is built around the business and can be deployed into production independently. Since that is not the focus of this course, we will not go into it further.

Introducing Spring Cloud

Spring Boot, on which Spring Cloud is based, has become the most popular development framework in the Java EE field; it is used to simplify the setup and development of Spring applications.

In terms of design, Spring Boot makes full use of convention over configuration and automatic configuration. Compared to traditional Spring applications, Spring Boot streamlines development with starter-based dependency management, simplified deployment, and built-in application monitoring.

Spring Cloud contains a great many components, and we do not intend to go through all of them in detail; instead we will walk through the eight core components needed to develop a microservice system, as shown in the figure below.

Case driven

The Internet of Things and smart wearable devices are developing rapidly. Imagine a daily scenario in which patients monitor their health through smart wearables such as wristbands and portable pulse monitors, and the health data is reported to a cloud platform in real time. When the platform detects an abnormality in a user's health data, it performs health interventions, manually or automatically, to keep the user healthy. This is a very typical business scenario in the broader health industry, and it is where our case comes from.

Domain driven

From the perspective of domain modeling, we can divide the system into three subdomains, namely:

The user subdomain handles user management: users can register as system users and modify or delete user information, and the subdomain provides an entry point for validating user information.

The device subdomain handles device management: medical staff can query a user's wearable devices to obtain device details and retrieve current health information from a device.

The intervention subdomain handles health intervention management: medical staff can generate health interventions based on a user's current health information, and can also query the status of the interventions they have submitted.

In terms of subdomain classification, the user subdomain is relatively clear-cut and should be treated as a generic subdomain. Health intervention is the core business of SpringHealth, so it is the core subdomain. The device subdomain is classified here as a supporting subdomain.

Based on the above analysis, SpringHealth can be divided into three microservices, namely user-service, device-service, and intervention-service. The following figure shows the basic architecture of SpringHealth, in which intervention-service needs to interact remotely with user-service and device-service in a REST-style manner.

Service design

Service list

When we use Spring Cloud to build a complete microservice technology solution, some technical components need to operate in the form of independent services, including:

(1) Registry service. We name this service eureka-server.

(2) Configuration center service. We name this service config-server.

(3) API gateway service. For Zuul and Spring Cloud Gateway, we set up two separate services, zuul-server and gateway-server, and run one of them as needed.

(4) Security authorization service. We name this service auth-server.

(5) The last infrastructure-type service in the case is the Zipkin service. It is not strictly required; whether to build it depends on whether we need to visualize service access links. For this we set up a separate zipkin-server service.

This division is just one scenario.

Service data

In this case, we will also establish three independent databases for the three business services, and the access information of the databases will be centrally managed through the configuration center, as shown in the figure below:

Use Spring Cloud for service governance

Service governance

Architecturally, state change management can adopt a publish-subscribe model: the service provider publishes a service according to its service definition, and service consumers subscribe to the services they are interested in and obtain various metadata, including the service address. The publish-subscribe idea is also reflected in status change push: when a service definition in the registry changes, the registry actively pushes the change to the consumers of that service.

From this publish-subscribe design idea comes the service listening mechanism, which ensures that service consumers can monitor service status updates in real time. It is a passive change notification solution, usually implemented with listeners and callbacks, as shown in the figure below.

The service registry

Build a registry based on Eureka (no longer recommended, as the vendor is no longer actively maintaining it)

Build a single point Eureka server

Create a Maven project and name it eureka-server. At the same time, we introduce the spring-cloud-starter-eureka-server dependency, which is the main JAR package in Spring Cloud for implementing the Spring Cloud Netflix Eureka functionality:

Create the Spring Boot bootstrap class EurekaServerApplication, as shown below. A service annotated with @EnableEurekaServer acts as a Eureka server component.
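A minimal sketch of such a bootstrap class might look like the following (the package name is assumed for illustration):

package com.springhealth.eureka; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer // marks this application as a Eureka server component
public class EurekaServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}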

Eureka also provides a number of configuration items for developers. These configuration items can be divided into three categories. One category controls Eureka server behavior and starts with eureka.server; one covers configuration requirements from the client's perspective and starts with eureka.client; the last category focuses on the service instance itself as registered with Eureka and starts with eureka.instance. Note that, in addition to acting as a server-side component, Eureka can also register itself with other Eureka servers as a client, in which case the client configuration items apply.

Now we add the following configuration to the application.yml file of the eureka-server project. Since we do not want the Eureka server to register itself, registerWithEureka and fetchRegistry are both set to false.

Build the Eureka server cluster

We typically need to build a cluster of Eureka servers to ensure the availability of the registry itself. Unlike traditional cluster construction, Eureka treats itself as a service: Eureka servers can register with one another, achieving mutual registration and thereby forming a cluster. In Eureka, this high-availability deployment mode is called Peer Awareness mode.

Now let's prepare two Eureka service instances, eureka1 and eureka2. In Spring Boot, two configuration files, application-eureka1.yml and application-eureka2.yml, provide the relevant configuration items. The application-eureka1.yml configuration file is as follows:

The content of the application-eureka2.yml configuration file is as follows:

The key to building a Eureka cluster lies in using the client configuration item eureka.client.serviceUrl.defaultZone to point to the other Eureka servers in the cluster. So a Eureka cluster is essentially built by each server registering itself as a service with the other registries, forming a set of mutually registered registries that synchronize their service lists. Obviously, in this scenario both the registerWithEureka and fetchRegistry configuration items should keep their default value of true, so we do not need to set them explicitly.

If you are setting up this cluster environment on a single host, the hostnames eureka1 and eureka2 in the eureka.instance.hostname configuration item are obviously not resolvable, so you need to add the following entries to the host's hosts file.

127.0.0.1 eureka1

127.0.0.1 eureka2

Understand the Eureka server implementation principle

Service registration (Register) is the most basic concept of service governance. Each microservice with an embedded Eureka client completes service registration by providing the Eureka server with basic information related to service discovery, such as its IP address and endpoints.

Because the Eureka client interacts with the server through short connections, the Eureka client needs to actively report its runtime status at regular intervals to renew its lease (Renew).

A service cancellation (Cancel) means that the Eureka client actively informs the Eureka server that it no longer wants to be registered. When the Eureka client does not send a renewal message to the Eureka server for a period of time, the server considers that the service instance is no longer running and removes it from the service list (Evict).

Eureka service storage source code analysis

For a registry, we first need to focus on how it stores its data. In Eureka, the InstanceRegistry interface and its implementation classes (located in the com.netflix.eureka.registry package) take on this responsibility. The class hierarchy of InstanceRegistry looks like this:

From the above it is not hard to see that Spring Cloud also has an InstanceRegistry (located in the org.springframework.cloud.netflix.eureka.server package), which is actually a wrapper around the Netflix InstanceRegistry implementation. The data structure Eureka uses to hold registration information is found in AbstractInstanceRegistry, an implementation class of the InstanceRegistry interface, as follows:

private final ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>> registry = new ConcurrentHashMap<String, Map<String, Lease<InstanceInfo>>>();

As you can see, this is a two-layer HashMap built on the thread-safe ConcurrentHashMap in the JDK. The key of the first-layer ConcurrentHashMap is spring.application.name, i.e., the service name, and the value is another ConcurrentHashMap. The key of the second-layer map is the instanceId, the unique instance ID of the service, and the value is a Lease object.

Eureka uses the concept of a Lease to abstract service registration information. A Lease object holds the service instance information along with several timestamps for that instance, such as the registration time registrationTimestamp and the latest renewal time lastUpdateTimestamp. The following figure represents this data structure graphically:

InstanceRegistry itself extends two very important interfaces in Eureka, namely the LeaseManager interface and the LookupService interface. The LeaseManager interface is defined as follows:
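(Shown here in condensed form; the generic parameter T stands for the lease holder, which in the registry is InstanceInfo.)

public interface LeaseManager<T> {
    void register(T r, int leaseDuration, boolean isReplication);      // service registration
    boolean cancel(String appName, String id, boolean isReplication);  // service cancellation
    boolean renew(String appName, String id, boolean isReplication);   // service renewal (heartbeat)
    void evict();                                                       // service eviction
}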

Obviously, LeaseManager covers the core operations of the Eureka registry model, such as service registration, renewal, cancellation, and eviction, focusing on managing the service registration process. The LookupService interface, in contrast, focuses on managing applications and service instances and is defined as follows:
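(Condensed sketch; javadoc omitted.)

public interface LookupService<T> {
    Application getApplication(String appName);        // look up an application by name
    Applications getApplications();                     // all registered applications
    List<InstanceInfo> getInstancesById(String id);     // instances for a given instance id
    InstanceInfo getNextServerFromEureka(String virtualHostname, boolean secure);
}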

Internally, on the registry server, the different operations such as registration, renewal, cancellation, and eviction follow basically the same workflow: they all operate on the service store and then synchronize the operation to the other Eureka nodes. Here we take the register method, which implements service registration, as an example. The register method is very long; after trimming the source code we get the following process:

Eureka service cache source code parsing

Another core function of the Eureka server-side component is to provide a list of services. To improve performance, the Eureka server caches a list of all registered services and updates the cached data through a timing mechanism.

We know that service consumers need a way to obtain the details of specific service instances registered on the Eureka server.

In Eureka, all access to the server goes through RESTful resources. The ApplicationResource class (in the com.netflix.eureka.resources package) provides the entry point for fetching registration information by application. Let's look at the getApplication method of this class; the core code is as follows:

You can see that a cacheKey is built and the responseCache.get(cacheKey) method is called directly to return a string and build the response. The core of this is the get method of ResponseCacheImpl.

You can see that there are two caches in the code above, readOnlyCacheMap and readWriteCacheMap. readOnlyCacheMap is a JDK ConcurrentMap, while readWriteCacheMap uses the LoadingCache type from the Google Guava Cache library. When the LoadingCache is created, the source of the cached data is produced by calling the generatePayload method.

In this generatePayload method, the getApplications method in the AbstractInstanceRegistry section above is called to get the application information and put it in the cache. So we can associate the registration information with the cache information.

Here's a design and implementation tip. Splitting the cache into a read-only readOnlyCacheMap and a read-write readWriteCacheMap gives a better separation of responsibilities. But because both caches hold the same data, we need to make sure they stay in sync as readWriteCacheMap is updated. For this, ResponseCacheImpl provides a scheduled task, CacheUpdateTask, as follows:

Obviously, this scheduled task mainly updates data from readWriteCacheMap to readOnlyCacheMap.
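To make this design tip concrete, here is a small self-contained sketch of the same two-level cache pattern. It is an illustration only, not the actual Eureka code, and all names in it are made up:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustration of the readOnly / readWrite two-level cache pattern described above.
public class TwoLevelResponseCache {

    // Fast, read-mostly view served to most requests.
    private final Map<String, String> readOnlyCacheMap = new ConcurrentHashMap<>();
    // Authoritative cache, updated whenever the registry changes.
    private final Map<String, String> readWriteCacheMap = new ConcurrentHashMap<>();

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public TwoLevelResponseCache() {
        // Counterpart of CacheUpdateTask: periodically push read-write entries into the read-only map.
        scheduler.scheduleWithFixedDelay(this::syncReadOnlyCache, 30, 30, TimeUnit.SECONDS);
    }

    public String get(String key) {
        // Serve from the read-only map first; fall back to the read-write map on a miss.
        String payload = readOnlyCacheMap.get(key);
        if (payload == null) {
            payload = readWriteCacheMap.get(key);
            if (payload != null) {
                readOnlyCacheMap.put(key, payload);
            }
        }
        return payload;
    }

    public void put(String key, String payload) {
        readWriteCacheMap.put(key, payload);
    }

    private void syncReadOnlyCache() {
        for (Map.Entry<String, String> entry : readWriteCacheMap.entrySet()) {
            readOnlyCacheMap.put(entry.getKey(), entry.getValue());
        }
    }
}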

Eureka high availability source code parsing

As we learned earlier, Eureka's high availability deployment mode is called Peer Awareness mode. Correspondingly, in the InstanceRegistry class hierarchy we have also seen its extension interface PeerAwareInstanceRegistry and that interface's implementation class PeerAwareInstanceRegistryImpl.

Sticking with the service registration scenario, PeerAwareInstanceRegistryImpl also has a register method, as shown below:

Here we see a very important replicateToPeers method, which is used to synchronize state between server nodes. The core code for the replicateToPeers method is as follows:

To understand this operation, we first need to understand Eureka's cluster model. The relevant code lives in the com.netflix.eureka.cluster package, which contains the PeerEurekaNode and PeerEurekaNodes classes representing nodes, as well as an HttpReplicationClient interface for transferring data between nodes. The replicateInstanceActionsToPeers method invokes different PeerEurekaNode methods depending on the Action; for example, for the StatusUpdate action it invokes the PeerEurekaNode's statusUpdate method, which in turn executes the following code.

replicationClient.statusUpdate(appName, id, newStatus, info);

This code completes the communication between PeerEurekaNodes. replicationClient is an instance of the HttpReplicationClient interface, which is defined as follows:

The HttpReplicationClient interface inherits from the EurekaHttpClient interface, which belongs to the Eureka client component; we will discuss it in detail when we introduce the fundamentals of the Eureka client in the next class. Here we only need to know that Eureka provides JerseyReplicationClient (located in the com.netflix.eureka.transport package), a Jersey-based implementation of HttpReplicationClient. Taking the statusUpdate method as an example, its implementation looks like this:

This is a typical RESTful resource-based invocation using the ApacheHttpClient4 utility class. Through the above analysis, we have covered the internal workings of the Eureka server along its main dimensions.

Service discovery

Implementing service registration

We first need to ensure that the Maven project declares a dependency on the Eureka client component spring-cloud-starter-netflix-eureka-client, as shown below.

Then let's look at the user-service bootstrap class, which introduces a new annotation, @EnableEurekaClient. Of course, as we move forward you will find that the unified @SpringCloudApplication annotation can be used to combine the effects of @SpringBootApplication and @EnableEurekaClient.
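For reference, a minimal sketch of such a bootstrap class (package and class names assumed for illustration):

package com.springhealth.user; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient // registers this service with the Eureka server on startup
public class UserApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserApplication.class, args);
    }
}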

The configuration of user-service is as follows:

The serviceUrl configuration item was introduced in the previous lesson: serviceUrl.defaultZone specifies the address of the Eureka server.

Of course, if we build a Eureka server cluster based on the Peer Awareness mode introduced in the last lesson, the content of the eureka.client.serviceUrl.defaultZone configuration item should be "http://eureka1:8761/eureka/,http://eureka2:8762/eureka/" so that it points to the current cluster environment.

Implementing service discovery

After we have successfully created and started the User-Service, we can see in Eureka’s interface that the service has been registered. We can obtain the basic information of the service, such as the service name, IP address, port and availability, as well as check the running status of the current service by accessing statusPageUrl, healthCheckUrl and other addresses. More importantly, basic data such as leaseInfo, which is directly related to the service registration process, can help us understand how Eureka works as a registry.

Understand Eureka client fundamentals

For Eureka, both providers and consumers of microservices are its clients. Service providers focus on functions such as service registration, service renewal and service offline, while service consumers focus on obtaining service information. At the same time, for service consumers, caching mechanisms are generally in place to improve the performance of service acquisition and to continue to use the service if the registry is not available.

In Netflix Eureka, a dedicated client package is provided and a client interface, EurekaClient, is abstracted. The EurekaClient interface inherits from the LookupService interface, which is in fact also a parent of the InstanceRegistry interface we introduced earlier. EurekaClient provides a series of extensions to LookupService, but these are not the main point; instead we should focus on its class hierarchy, as shown below:

As you can see, the EurekaClient interface has an implementation class, DiscoveryClient (in the com.netflix.discovery package), which contains the core processing logic for both service providers and service consumers and provides the register, renew, and other methods we met when introducing the Eureka server fundamentals. The implementation of the DiscoveryClient class is quite complex, so let's focus on this line of code in its constructor:

initScheduledTasks();

By analyzing the code in this method, we can see that the system initializes a number of scheduled tasks, including cache refresh (cacheRefresh), heartbeat, and service instance replication (InstanceInfoReplicator). Cache refresh targets service consumers, while heartbeat and instance replication are oriented toward service providers. Next, we discuss the client-side operations of service registration and discovery from these two perspectives.

Service provider operation source code parsing

The service provider focuses on functions such as service registration, service renewal, and service offline, which it performs using the RESTful APIs provided by the Eureka server. For reasons of space, the service provider's operation flow is illustrated here with service registration as an example.

In the DiscoveryClient class, service registration is done by the register method, as shown below. For simplicity, we have trimmed the code to omit non-core parts such as logging:

The register method above is executed in the run method of the InstanceInfoReplicator class. Logically, the code is very simple: the service provider registers itself with the Eureka server and then determines whether the operation succeeded based on the returned result. Obviously, the key line is eurekaTransport.registrationClient.register(), through which DiscoveryClient launches the remote request.

First let's look at the EurekaTransport class, an inner class of DiscoveryClient that defines the registrationClient variable used to implement service registration. registrationClient is of type EurekaHttpClient, an interface defined as follows:
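(Condensed here to the methods mentioned below; the real interface wraps results in EurekaHttpResponse and defines several more operations.)

public interface EurekaHttpClient {
    EurekaHttpResponse<Void> register(InstanceInfo info);
    EurekaHttpResponse<Void> cancel(String appName, String id);
    EurekaHttpResponse<InstanceInfo> sendHeartBeat(String appName, String id, InstanceInfo info, InstanceStatus overriddenStatus);
    EurekaHttpResponse<Void> statusUpdate(String appName, String id, InstanceStatus newStatus, InstanceInfo info);
    EurekaHttpResponse<Applications> getApplications(String... regions);
    // ... other query methods omitted ...
    void shutdown();
}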

You can see that the EurekaHttpClient interface defines a number of low-level REST APIs for the Eureka server, including register, cancel, sendHeartBeat, statusUpdate, getApplications, and so on. In principle, remote communication between the Eureka client and server is nothing more than RESTful HTTP requests, but the design and implementation are quite sophisticated, so the class hierarchy is relatively complex. Let's look at EurekaHttpClientDecorator, an implementation class of the EurekaHttpClient interface; as its name suggests, it is a Decorator, as shown below:

We can see that EurekaHttpClientDecorator wraps EurekaHttpClient by defining an abstract method execute(RequestExecutor requestExecutor); this wrapping is a manifestation of the proxy mechanism.

Next let's look at how an EurekaHttpClient is built. Eureka provides the EurekaHttpClientFactory to build specific EurekaHttpClient instances, a typical application of the factory pattern. The EurekaHttpClientFactory interface is defined as follows:
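(In outline; only the two factory methods.)

public interface EurekaHttpClientFactory {
    EurekaHttpClient newClient();  // create a (possibly wrapped) EurekaHttpClient
    void shutdown();
}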

Eureka contains a group of EurekaHttpClientFactory implementation classes, including RetryableEurekaHttpClient and MetricsCollectingEurekaHttpClient, located in the com.netflix.discovery.shared.transport.decorator package. At the same time, the com.netflix.discovery.shared.transport package also provides a EurekaHttpClients utility class that can create an EurekaHttpClient wrapped successively by RedirectingEurekaHttpClient, RetryableEurekaHttpClient, and SessionedEurekaHttpClient, as follows:

This is one branch of the EurekaHttpClient creation process, which encapsulates and proxies the request path through layers of wrappers. For the actual remote calls, Eureka provides another line of classes: the original EurekaHttpClient is created through a TransportClientFactory, whose interface is defined as follows:

TransportClientFactory also has a set of implementation classes, some named and some anonymous. Take the named class JerseyEurekaHttpClientFactory as an example: it is located in the com.netflix.discovery.shared.transport.jersey package, and the Jersey client is obtained through EurekaJerseyClient, which in turn uses an ApacheHttpClient4 object to complete the REST call.

To conclude, here’s another Eureka design and implementation tip, the so-called High Level API and Low Level API, as shown below:

For the higher-level APIs, a series of wrappers are created, primarily through the decorator pattern, to produce the target EurekaHttpClient. The lower-level APIs are mostly implementations of the HTTP remote calls; Netflix offers a Jersey-based version, while Spring Cloud offers a RestTemplate-based version, which we will talk about later.

Service consumer operation source code parsing

As we mentioned when introducing the registry model, service consumers can be equipped with caching mechanisms to speed up service routing. For Eureka, the client component DiscoveryClient also has this caching capability.

The Eureka client completes the cache flush operation through scheduled tasks. We have mentioned previously that the initScheduledTasks method in DiscoveryClient is used to initialize various scheduled tasks. For cache flush, the scheduler initialization process is as follows:

The most important action for a service consumer is to obtain service registration information. In the refreshRegistry method here, we find that after a series of validations, the fetchRegistry method is finally called to update the registration information as follows. For simplicity, we’ve trimmed the code partially, leaving only the main flow:

The methods highlighted in the comments are the interesting ones. Since the logic of getAndStoreFullRegistry is relatively simple, we will focus on the getAndUpdateDelta method to learn how Eureka implements incremental data updates. The trimmed getAndUpdateDelta method code looks like this:

Reviewing the Eureka server side fundamentals, we know that the Eureka server side keeps a cache of service registries.

The Eureka documentation states that the data retention time is three minutes, and the Eureka client’s scheduling mechanism will flush the local cache every 30 seconds. In principle, the Eureka client can keep its data consistent with the Eureka server as long as it constantly retrieves the updated data from the server. However, if the client does not get the updated data within 3 minutes, the data on the server side will be inconsistent with that on the client side. This is an issue that must be considered in this update mechanism, and it is also a point of concern for us when designing similar scenarios.

To solve the above problems, Eureka adopts the consistent HashCode method. Each increment returned by the Eureka server carries a consistent HashCode. This HashCode is compared with the consistent HashCode calculated by the Eureka client using the local service list data. If the two are inconsistent, it indicates that there is a problem with the incremental update. At this point, a full update is required.

In Eureka, the method for calculating the consistent HashCode is as follows. As you can see, this method performs the encoding calculation based on the service registration instance information and returns a String result:

As a conclusion, the process of Eureka client cache periodic update is shown in the figure below, which is basically consistent with the process of service registration. That is to say, in Eureka, as clients of Eureka server, service providers and service consumers adopt the same system to complete the interaction with the server.

Load balancing

Spring Cloud also has a load balancer that works with Eureka, the Ribbon component. The interaction between Eureka and the Ribbon is shown below:

Today, we’ll take a closer look at how the Ribbon can be used to implement load balancing. The Ribbon is a client load balancing tool. The Ribbon automatically connects service instances based on a built-in load balancing algorithm. You can also design and implement custom load balancing algorithms and embed them in the Ribbon. At the same time, the Ribbon client provides a comprehensive set of supporting mechanisms to ensure the reliability and fault tolerance of the service invocation process, including connection timeouts and retries. The Ribbon is a typical implementation of client-side load balancing, so it needs to be embedded within service consumers.

The core functions of the Ribbon

1. Use the @LoadBalanced annotation.

The @LoadBalanced annotation is used to decorate the RestTemplate utility class that initiates the HTTP request and to automatically embed client-side load balancing in that utility class. Developers do not need to do any special development or configuration for load balancing.
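For example, declaring a load-balanced RestTemplate typically looks roughly like this (the configuration class name is assumed for illustration):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {  // hypothetical configuration class

    @Bean
    @LoadBalanced  // requests made through this RestTemplate are resolved via client-side load balancing
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}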

2. Use the @RibbonClient annotation.

The Ribbon also allows you to fully control client load balancing behavior using the @RibbonClient annotation. This is useful in some specific scenarios where you need to customize the load balancing algorithm, and you can use this capability to achieve a more fine-grained load balancing configuration.

Use DiscoveryClient to obtain service instance information

Next, let’s demonstrate how to get service instance information in Eureka based on the service name. You can easily do this with DiscoveryClient.

First, we get the full list of service names currently registered with Eureka, as follows:

List<String> serviceNames = discoveryClient.getServices();

Based on this list of service names, you can get all the services you are interested in and further get instance information about these services:

List<ServiceInstance> serviceInstances = discoveryClient.getInstances(serviceName);

The ServiceInstance object represents a ServiceInstance and contains a lot of useful information, defined as follows:
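(A condensed view of its core methods; later Spring Cloud versions add a few more.)

public interface ServiceInstance {
    String getServiceId();              // logical service name
    String getHost();                   // host of the instance
    int getPort();                      // port of the instance
    boolean isSecure();                 // whether HTTPS is used
    URI getUri();                       // full URI of the instance
    Map<String, String> getMetadata();  // instance metadata key-value pairs
}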

Obviously, once we have a ServiceInstance list, we can implement client load balancing based on common random, polling, and other algorithms, as well as various custom routing mechanisms based on service URI information and so on. Once you have identified the ultimate target service for load balancing, you can use HTTP utility classes to make remote calls based on the address information of the service.

In the Spring world, the most common way to access HTTP endpoints is to use the RestTemplate utility class, so let’s do a little recap. Before demonstrating how to use the RestTemplate, let’s add an HTTP endpoint to the User-service of the SpringHealth case, as shown below:

We then build a test class to access this HTTP endpoint. If we can get the service definition in the registry, we can call the service through ServiceInstance, as follows:
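A sketch along those lines (the /users/{userName} endpoint and the demo class name are assumptions for illustration):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class UserServiceLookupDemo {  // hypothetical demo class

    @Autowired
    private DiscoveryClient discoveryClient;

    public String getUser(String userName) {
        // Fetch the instances registered under the userservice name.
        List<ServiceInstance> instances = discoveryClient.getInstances("userservice");
        // For simplicity, pick the first instance in the list.
        ServiceInstance instance = instances.get(0);

        String url = instance.getUri().toString() + "/users/" + userName;  // hypothetical endpoint
        ResponseEntity<String> result =
                new RestTemplate().exchange(url, HttpMethod.GET, null, String.class);
        return result.getBody();
    }
}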

As you can see, the RestTemplate utility class makes it easy to issue an HTTP request using the URL from a ServiceInstance. In the sample code above, we take the first service instance from the list with instances.get(0) and then use the exchange() method of RestTemplate to wrap the whole HTTP request and obtain the result.

Call the service with the @LoadBalanced annotation

If you know how to use RestTemplate, load balancing with the Ribbon in Spring Cloud is easy. All you need to do is add an annotation to the RestTemplate and that’s it.

Next, we continue our demonstration using the user-service described earlier. Because load balancing is involved, we first need to run at least two instances of the User-Service service. On the other hand, in order to display the results of the call in a load-balancing environment, we add logs to the UserController to facilitate viewing console output at run time. The code for the refactored UserController is shown below.

We know that intervention-service accesses user-service to generate health intervention information; for user-service, intervention-service is a client. We create the RestTemplate in the InterventionApplication class of intervention-service with the @LoadBalanced annotation. The InterventionApplication class now looks like this:

With the work ready for the intervention-Service, you can now write code for a remote call to the User-Service. We add a new UserServiceClient class to the intervention Service project and add the following code:
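A sketch of what this might look like, assuming a minimal UserMapper with just a user name (the real classes in the case may differ):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class UserServiceClient {

    @Autowired
    private RestTemplate restTemplate;  // the @LoadBalanced RestTemplate created in InterventionApplication

    public UserMapper getUserByUserName(String userName) {
        // "userservice" is the logical service name in the registry, not a real hostname.
        ResponseEntity<UserMapper> response = restTemplate.exchange(
                "http://userservice/users/{userName}",
                HttpMethod.GET, null, UserMapper.class, userName);
        return response.getBody();
    }
}

// Minimal stand-in for the data transfer object mentioned below.
class UserMapper {
    private String userName;

    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
}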

As you can see, the code above injects the RestTemplate and then makes a remote call to user-service via its exchange() method. Note, however, that this RestTemplate already has client-side load balancing, because we added the @LoadBalanced annotation when creating it in the InterventionApplication class. Also note that "userservice" in the URL "http://userservice/users/{userName}" is the service name configured in user-service, i.e., the name under which it is registered in the registry. As for the UserMapper class, it is just a data transfer object used for serialization.

Customize load balancing policies using the @RibbonClient annotation

In the previous demo we barely noticed the Ribbon component at all. When load balancing is driven by the @LoadBalanced annotation, the Ribbon uses its built-in mechanism; by default it applies a round-robin strategy, and we have no control over which load balancing algorithm takes effect. In some cases, however, more fine-grained control of the load balancing process is required, and that is where the @RibbonClient annotation comes in.

Typically, we need to specify the target service name and a load balancing configuration class. So, to use the @RibbonClient annotation, we first create a separate configuration class that specifies the load balancing rule. The following code demonstrates a custom configuration class, SpringHealthLoadBalanceConfig:
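(A minimal sketch following the usual Ribbon pattern of exposing an IRule bean.)

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.netflix.loadbalancer.IRule;
import com.netflix.loadbalancer.RandomRule;

@Configuration
public class SpringHealthLoadBalanceConfig {

    @Bean
    public IRule ribbonRule() {
        // Replace the default RoundRobinRule with a random strategy.
        return new RandomRule();
    }
}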

The obvious purpose of this configuration class is to replace the Ribbon's default load balancing policy, RoundRobin, with RandomRule. We can return any implementation of the IRule interface here as needed; the definition of the IRule interface will be discussed in the next lesson.

With SpringHealthLoadBalanceConfig in place, we can apply this configuration class when invoking a particular service, achieving fine-grained control over client-side load balancing. Sample code that uses SpringHealthLoadBalanceConfig in intervention-service to access user-service is shown below:
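(A sketch; here the annotation sits on a dedicated configuration class, though it could equally be placed on the bootstrap class.)

import org.springframework.cloud.netflix.ribbon.RibbonClient;
import org.springframework.context.annotation.Configuration;

@Configuration
@RibbonClient(name = "userservice", configuration = SpringHealthLoadBalanceConfig.class)
public class UserServiceRibbonConfig {  // hypothetical class carrying the annotation
}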

Notice that in @RibbonClient we set the target service name to userservice and the configuration class to SpringHealthLoadBalanceConfig. The random load balancing policy RandomRule is now used every time user-service is accessed.

Comparing the @LoadBalanced and @RibbonClient annotations: for ordinary load balancing scenarios, the @LoadBalanced annotation is usually all that is needed to achieve client-side load balancing; if we want to customize the Ribbon's runtime behavior, we use the @RibbonClient annotation.

Netflix Ribbon basic architecture

As a client-side load balancing tool, the Ribbon does two things: first, it obtains the list of servers from the registry; second, it selects one service from that list to invoke.

The core classes in the Netflix Ribbon

Netflix Ribbon's core interface, ILoadBalancer, is designed around the two questions above. The interface is located in the com.netflix.loadbalancer package and is defined as follows:
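(Condensed; deprecated methods omitted.)

public interface ILoadBalancer {
    void addServers(List<Server> newServers);   // add a list of server instances
    Server chooseServer(Object key);            // pick a server for the given key
    void markServerDown(Server server);         // mark a server as unavailable
    List<Server> getReachableServers();         // servers currently considered up
    List<Server> getAllServers();               // all known servers, up or down
}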

The class layer structure of the ILoadBalancer interface is as follows:

AbstractLoadBalancer is an abstract class that defines only two abstract methods and does not form a template-method structure, so let's look directly at the ILoadBalancer interface. Its basic implementation class is BaseLoadBalancer, which implements the core functions of load balancing. The class is very large and messy, so we trim it down to get to the point.

Let’s start by reviewing some of the core components that BaseLoadBalancer contains as a load balancer. Three of the most important are the following.

1. IRule

The IRule interface is an abstraction of the load balancing policy and can be implemented to provide a variety of applicable load balancing algorithms, as we saw in the introduction of the @RibbonClient annotation last class. The interface is defined as follows:
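(In outline.)

public interface IRule {
    Server choose(Object key);               // select a server according to the policy
    void setLoadBalancer(ILoadBalancer lb);  // give the rule access to the load balancer's state
    ILoadBalancer getLoadBalancer();
}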

Obviously, choose is the core method of this interface, and the various load balancing algorithms discussed below are built around it.

2. IPing

The IPing interface determines whether the target service is alive. The definition is as follows:
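(In outline.)

public interface IPing {
    boolean isAlive(Server server);  // "ping" the server to check whether it is still up
}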

You can see that there is only one isAlive() method in the IPing interface, which “Ping” the service to get a response and determine whether the service is available.

3. LoadBalancerStats

The LoadBalancerStats class records real-time running information about load balancing and is used as runtime input for load balancing policies.

Note that BaseLoadBalancer internally maintains two thread-safe server lists, allServerList and upServerList, so the addServers, getReachableServers, and getAllServers methods defined by the ILoadBalancer interface are mainly concerned with maintaining and managing these lists. Take the addServers method as an example; it is implemented as follows:

Obviously, the process is to merge the existing list of service instances, allServerList, with the incoming newServers into a new list, and then call setServersList to overwrite the old list with the new one.

For load balancing, what we should focus on is the implementation of the chooseServer method in the ILoadBalancer interface, which we can easily imagine must integrate the implementation of a specific load balancing policy through the IRule interface described earlier. The chooseServer method in BaseLoadBalancer looks like this:

Sure enough, the choose method of the IRule interface is used. Let's take a look at which load balancing algorithms the IRule interface in the Ribbon provides.

Load balancing policies in Netflix Ribbon

Generally speaking, load balancing algorithm can be divided into two categories, namely static load balancing algorithm and dynamic load balancing algorithm. Static load balancing algorithms are easy to understand and implement, typically including Random, Round Robin, and Weighted Round Robin algorithms. All static algorithms involving weights can be converted to dynamic algorithms because weights can be dynamically updated during run time. For example, in dynamic polling algorithm, weight values are based on continuous monitoring and updating of each server. In addition, assigning connections based on real-time performance analysis of servers is a common dynamic strategy. Typical dynamic algorithms include source IP hash algorithm, minimum number of connections algorithm, service invocation delay algorithm and so on.

Back to Netflix Ribbon, the class layer structure of the IRule interface is shown below:

As you can see, the load balancing strategies implemented in Netflix Ribbon are very rich, including stateless static policies such as RandomRule and RoundRobinRule, as well as dynamic policies such as AvailabilityFilteringRule and WeightedResponseTimeRule that route based on the servers' real-time running status.

Also seen in the figure above is the RetryRule retry policy, which implements a retry mechanism for the selected load balancing policy. Retry is technically a service fault tolerance mechanism rather than a load balancing mechanism, but the Ribbon also has built-in functionality for this.

Static policies are relatively simple. RetryRule is not a strict load balancing policy, so here we focus on the different dynamic policies that the Ribbon implements.

1. BestAvailableRule strategy

This strategy inspects each server and selects the one with the lowest number of concurrent (active) requests.

2. WeightedResponseTimeRule strategy

This policy is related to the response time of requests: obviously, the longer the response time, the more limited the service's responsiveness, and the lower the weight it should be assigned. The response time calculation relies on the LoadBalancerStats in the ILoadBalancer interface described earlier. WeightedResponseTimeRule periodically reads the average response times from LoadBalancerStats and updates the weight of each service instance. The weight calculation itself is relatively simple: roughly, an instance's weight is the sum of all instances' average response times minus that instance's own average response time, so faster instances receive larger weights.

3. AvailabilityFilteringRule strategy

Filter out back-end servers that are in a persistent connection failure or high concurrency state by checking the running status of each server recorded in LoadBalancerStats.

Spring Cloud Netflix Ribbon

Spring Cloud provides Spring Cloud Netflix Ribbon, a dedicated integration layer for Netflix Ribbon.

Spring Cloud Netflix Ribbon acts as a client of Netflix Ribbon, and our application services in turn act as clients of Spring Cloud Netflix Ribbon. The relationships among Netflix Ribbon, Spring Cloud Netflix Ribbon, and the application services, together with the core entry points, are as follows:

This time we start from the @LoadBalanced annotation at the application service layer, move into Spring Cloud Netflix Ribbon, and from there into Netflix Ribbon, forming a closed loop of load balancing management.

The @LoadBalanced annotation

If you have used Spring Cloud Netflix Ribbon, you may wonder why a RestTemplate created with the @LoadBalanced annotation automatically becomes capable of client-side load balancing. This is one of the most frequently asked interview questions.

In fact, Spring Cloud Netflix Ribbon contains an auto-configuration class, LoadBalancerAutoConfiguration, which maintains a list of RestTemplate objects annotated with @LoadBalanced. During initialization, the customize method of a RestTemplateCustomizer is invoked for every such RestTemplate; the customization consists of adding a LoadBalancerInterceptor to the target RestTemplate, as shown below:

LoadBalancerInterceptor intercepts requests as they are made. As you can see, its constructor takes a LoadBalancerClient object, and its intercept method essentially uses that LoadBalancerClient to perform the actual load balancing. The LoadBalancerInterceptor class looks roughly like this:
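(A condensed sketch following the shape of the Spring Cloud source, with logging and argument checks trimmed, rather than a verbatim quote.)

import java.io.IOException;
import java.net.URI;

import org.springframework.cloud.client.loadbalancer.LoadBalancerClient;
import org.springframework.cloud.client.loadbalancer.LoadBalancerRequestFactory;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

public class LoadBalancerInterceptor implements ClientHttpRequestInterceptor {

    private final LoadBalancerClient loadBalancer;
    private final LoadBalancerRequestFactory requestFactory;

    public LoadBalancerInterceptor(LoadBalancerClient loadBalancer,
                                   LoadBalancerRequestFactory requestFactory) {
        this.loadBalancer = loadBalancer;
        this.requestFactory = requestFactory;
    }

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        // The host part of the URI is the logical service name, e.g. "userservice".
        URI originalUri = request.getURI();
        String serviceName = originalUri.getHost();
        // Delegate to LoadBalancerClient, which picks an instance and executes the request.
        return this.loadBalancer.execute(serviceName,
                this.requestFactory.createRequest(request, body, execution));
    }
}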

The intercept method directly calls the Execute method of LoadBalancerClient to load balance the request.

The LoadBalancerClient interface

LoadBalancerClient is a very important interface defined as follows:
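(In outline; javadoc omitted.)

public interface LoadBalancerClient extends ServiceInstanceChooser {

    // Execute a request against an instance chosen by the load balancer for serviceId.
    <T> T execute(String serviceId, LoadBalancerRequest<T> request) throws IOException;

    // Execute a request against a specific, already chosen service instance.
    <T> T execute(String serviceId, ServiceInstance serviceInstance,
                  LoadBalancerRequest<T> request) throws IOException;

    // Rebuild a URI of the form http://{serviceId}/path into a concrete host:port URI.
    URI reconstructURI(ServiceInstance instance, URI original);
}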

There are two overloaded execute methods, which perform a service call against the service instance identified by the load balancer. The reconstructURI method builds the service URI: it uses the host and port of the chosen ServiceInstance, plus the endpoint path of the service, to construct an address that can actually be accessed.

LoadBalancerClient inherits from the ServiceInstanceChooser interface, which is defined as follows:
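(It declares a single method.)

public interface ServiceInstanceChooser {
    // Choose one ServiceInstance of the given service to send a request to.
    ServiceInstance choose(String serviceId);
}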

From the perspective of load balancing, we should focus on the implementation of the choose method. The concrete implementation is RibbonLoadBalancerClient, which implements the LoadBalancerClient interface and is located in the spring-cloud-netflix-ribbon project; at this point the code flow moves from the application into Spring Cloud Netflix Ribbon.

In RibbonLoadBalancerClient, the implementation class of the LoadBalancerClient interface, the choose method finally calls the getServer method, as shown below:

The loadBalancer object is the implementation class for the ILoadBalancer interface in the Netflix Ribbon described earlier. In this way, we connect Spring Cloud Netflix Ribbon with the overall collaboration process of Netflix Ribbon.

That concludes today's session; comments and corrections are very welcome!