Notes on Some of Eureka's Mechanisms
### Background
Eureka is a distributed service registry that chooses AP in terms of CAP: availability and partition tolerance are guaranteed through a number of implementation details. I have recently been reading the Eureka source code, and this post records some of those details, covering the following points:

1. Eureka Server startup: the registry
2. Eureka Client startup: the service instance
3. Service registration: the map data structure
4. Eureka Server cluster: registry synchronization and the multi-level queue batch-task mechanism
5. Full registry fetch: the multi-level cache mechanism
6. Incremental registry fetch: the consistent-hash comparison mechanism
7. Heartbeat mechanism: service lease renewal (renew)
8. Service offline: cancel
9. Service failure: fault detection and the self-protection mechanism under network failures on the Eureka Server
### How some of Eureka's mechanisms work
Eureka, as the registry of choice for Spring Cloud, needs no introduction in terms of features and highlights, but some of its mechanisms are worth noting below.
Let's start with an overall flow and architecture diagram. It is not a comprehensive diagram of every mechanism detail, just a rough picture of the overall process.
As the figure shows, when a service (eureka-client) starts, it registers itself with eureka-server as a provider, periodically renews its lease with the server (the heartbeat mechanism, renewLease in Eureka), and periodically pulls (fetches) the registry from the server so that the locally cached registry stays consistent with the server.
### Service registration
Service registration in Eureka is relatively simple and does not involve much machinery. During client initialization, registration is scheduled and sent as an HTTP request via Jersey; the server receives it and stores the instance metadata in an in-memory map, which can be inspected through the status console. Below is a simple service registration flow.
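To make the storage described above concrete, here is a minimal sketch of the kind of nested in-memory map the server keeps (application name → instance id → lease). The class and field names (SimpleRegistry, SimpleLease) are illustrative placeholders, not the actual Eureka classes; in the real code the structure lives in AbstractInstanceRegistry as a nested map of leases of InstanceInfo.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch of the in-memory registry structure: application name ->
 * (instance id -> lease). Names are illustrative, not the real Eureka classes.
 */
public class SimpleRegistry {

    /** Holds one instance's metadata plus its lease bookkeeping. */
    static class SimpleLease {
        final Map<String, String> metadata;   // host, port, status, ...
        volatile long lastRenewalTimestamp;

        SimpleLease(Map<String, String> metadata) {
            this.metadata = metadata;
            this.lastRenewalTimestamp = System.currentTimeMillis();
        }
    }

    // appName -> (instanceId -> lease), mirroring the nested map layout
    private final ConcurrentHashMap<String, Map<String, SimpleLease>> registry =
            new ConcurrentHashMap<>();

    /** Register (or re-register) an instance under its application name. */
    public void register(String appName, String instanceId, Map<String, String> metadata) {
        registry.computeIfAbsent(appName, k -> new ConcurrentHashMap<>())
                .put(instanceId, new SimpleLease(metadata));
    }
}
```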
Client registration and startup process
From the eureka-client side, we can also see that there are heartbeat renewals as well as registry pulls and updates.
On the Eureka client, the concrete implementation lives in the DiscoveryClient class; the code details can be found in the source. Below is a flow chart of client registration and startup.
As the chart shows, at registration/startup time, three scheduled thread-pool tasks are initialized to perform the functions mentioned above: the heartbeat (lease renewal) task, the registry cache-refresh (pull) task, and the instance-info replicator that pushes local instance changes back to the server.
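As a rough illustration of that scheduling (not the actual DiscoveryClient code, which wraps its tasks in retry-aware supervisor tasks), a heartbeat task and a cache-refresh task can be wired up with a ScheduledExecutorService; the 30-second intervals are the usual defaults.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Illustrative sketch of the client-side scheduling: one task renews the lease
 * (heartbeat), one refreshes the local registry cache.
 */
public class ClientScheduler {

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    public void start(Runnable sendHeartbeat, Runnable refreshRegistryCache) {
        // Heartbeat / lease renewal, every 30 seconds by default
        scheduler.scheduleWithFixedDelay(sendHeartbeat, 30, 30, TimeUnit.SECONDS);
        // Registry (delta) fetch, also every 30 seconds by default
        scheduler.scheduleWithFixedDelay(refreshRegistryCache, 30, 30, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```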
As mentioned earlier, the client periodically pulls and refreshes the registry on a schedule. On the client side there are two pull mechanisms:
- Incremental (delta) sync
- Full sync
The following describes the client's update flow, which runs every 30 seconds. On the server side, a queue holds the incremental registry changes made in the last 3 minutes.
The client merges the additions, deletions, and modifications returned by the server into its local registry, computes a hash over the resulting full local registry, and performs a consistency check against the hash value returned by the server. If the two hashes differ, a full pull is performed to restore consistency.
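A minimal sketch of that consistency check, assuming the hash is built by concatenating per-status instance counts (which is how Eureka's apps hash code is commonly described, e.g. "DOWN_2_UP_5_"); treat the exact format as illustrative.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Sketch of the client-side "apps hash code" comparison after merging a delta. */
public class DeltaConsistencyCheck {

    /** Build a hash like "DOWN_2_UP_5_" from instance statuses, sorted by status name. */
    static String appsHashCode(List<String> instanceStatuses) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String status : instanceStatuses) {
            counts.merge(status, 1, Integer::sum);
        }
        StringBuilder sb = new StringBuilder();
        counts.forEach((status, count) -> sb.append(status).append('_').append(count).append('_'));
        return sb.toString();
    }

    /** Returns true if a full registry fetch is needed because the hashes diverged. */
    static boolean needsFullFetch(List<String> mergedLocalStatuses, String serverHash) {
        return !appsHashCode(mergedLocalStatuses).equals(serverHash);
    }
}
```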
Three-level cache mechanism on the server side
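As a hedged sketch of the idea (not the actual ResponseCacheImpl code, which uses a Guava LoadingCache for the read-write level): client reads are served from a read-only map that is periodically synced from a read-write cache, which in turn loads from the real registry on a miss and is invalidated whenever the registry changes.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

/**
 * Sketch of the server-side multi-level cache for registry responses:
 * readOnlyCacheMap (served to clients) <- readWriteCacheMap <- actual registry.
 */
public class ResponseCacheSketch {

    private final ConcurrentHashMap<String, String> readOnlyCacheMap = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, String> readWriteCacheMap = new ConcurrentHashMap<>();
    private final Function<String, String> loadFromRegistry;   // serializes the registry for a key

    public ResponseCacheSketch(Function<String, String> loadFromRegistry) {
        this.loadFromRegistry = loadFromRegistry;
        // Periodically copy read-write entries into the read-only map (default: every 30s)
        ScheduledExecutorService syncTask = Executors.newSingleThreadScheduledExecutor();
        syncTask.scheduleWithFixedDelay(
                () -> readOnlyCacheMap.putAll(readWriteCacheMap), 30, 30, TimeUnit.SECONDS);
    }

    /** Client reads hit the read-only map first, then fall through to the read-write cache. */
    public String get(String key) {
        String payload = readOnlyCacheMap.get(key);
        if (payload == null) {
            payload = readWriteCacheMap.computeIfAbsent(key, loadFromRegistry);
            readOnlyCacheMap.put(key, payload);
        }
        return payload;
    }

    /** Register/cancel/evict invalidates the read-write level; the read-only copy catches up on the next sync. */
    public void invalidate(String key) {
        readWriteCacheMap.remove(key);
    }
}
```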
### Server startup and cluster deployment
Looking at the server-side project layout, the eureka-server module contains essentially only a web.xml, plus some configuration under resources.
The actual startup logic is wired through web.xml, via the listener EurekaBootStrap from the eureka-core module:
```xml
<!-- Web application initialization: server initialization code runs here -->
<listener>
    <listener-class>com.netflix.eureka.EurekaBootStrap</listener-class>
</listener>
```
The general startup flow is shown in the figure below. EurekaBootStrap implements the ServletContextListener interface; in its contextInitialized method it initializes the Eureka server environment and context, and then starts the server.
```java
public void contextInitialized(ServletContextEvent event) {
    try {
        initEurekaEnvironment();
        initEurekaServerContext();

        ServletContext sc = event.getServletContext();
        sc.setAttribute(EurekaServerContext.class.getName(), serverContext);
    } catch (Throwable e) {
        logger.error("Cannot bootstrap eureka server :", e);
        throw new RuntimeException("Cannot bootstrap eureka server :", e);
    }
}
```
To ensure high availability, eureka-server should be deployed as a cluster. As the initial overall architecture diagram shows, when a server starts, it registers with the other servers as a eureka-client and pulls the registry from them, so that the registry information stays in sync across the different servers.
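A hedged sketch of that startup synchronization; the PeerClient and LocalRegistry interfaces are hypothetical placeholders standing in for the real peer-communication and registry classes.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of registry sync when a new Eureka server joins the cluster:
 * it acts as a client towards its peers, pulls the full registry, and registers
 * every instance locally, marked as a replication so it is not re-propagated.
 */
public class StartupSync {

    interface PeerClient {
        /** Full registry fetch from one peer: appName -> list of instance metadata maps. */
        Map<String, List<Map<String, String>>> fetchFullRegistry();
    }

    interface LocalRegistry {
        void register(String appName, Map<String, String> instance, boolean isReplication);
    }

    /** Pull from the first reachable peer and replay its registry into the local one. */
    public static void syncUp(List<PeerClient> peers, LocalRegistry local) {
        for (PeerClient peer : peers) {
            try {
                Map<String, List<Map<String, String>>> apps = peer.fetchFullRegistry();
                apps.forEach((appName, instances) ->
                        instances.forEach(instance -> local.register(appName, instance, true)));
                return; // one successful peer is enough for the initial sync
            } catch (RuntimeException e) {
                // try the next peer
            }
        }
    }
}
```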
Below are some in-depth discussions on service state maintenance.
### Service state maintenance
Service offline (cancel)
The server maintains instance information for each service. When a service shuts down, it must notify eureka-server so the service can be taken offline: it is removed from the registry and the caches are invalidated, which completes the offline process. The detailed flow is shown in the figure below.
As the diagram shows, the server maintains a recentlyChangedQueue which, as the name implies, holds the changes of the last 3 minutes; a scheduled background task, created when the registry is constructed, periodically cleans out entries older than that window. Offline handling also invalidates the cache, specifically the readWriteCacheMap, the read-write cache, which is explained later.
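A minimal sketch of that queue and its eviction task, with the 3-minute retention and a 30-second cleanup interval hard-coded for illustration; in the real code the queue and its retention task live in AbstractInstanceRegistry.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of the "recently changed" queue used to serve delta (incremental) fetches. */
public class RecentChangeQueueSketch {

    static class RecentChange {
        final String instanceId;
        final long timestamp = System.currentTimeMillis();
        RecentChange(String instanceId) { this.instanceId = instanceId; }
    }

    private static final long RETENTION_MS = TimeUnit.MINUTES.toMillis(3);

    private final ConcurrentLinkedQueue<RecentChange> recentlyChangedQueue = new ConcurrentLinkedQueue<>();

    public RecentChangeQueueSketch() {
        // Background task: drop entries older than the retention window (every 30s here)
        ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
        cleaner.scheduleWithFixedDelay(this::evictExpired, 30, 30, TimeUnit.SECONDS);
    }

    /** Called on register / cancel / status change so delta fetches can see it. */
    public void recordChange(String instanceId) {
        recentlyChangedQueue.add(new RecentChange(instanceId));
    }

    private void evictExpired() {
        long cutoff = System.currentTimeMillis() - RETENTION_MS;
        Iterator<RecentChange> it = recentlyChangedQueue.iterator();
        while (it.hasNext()) {
            if (it.next().timestamp < cutoff) {
                it.remove();
            } else {
                break; // queue is in insertion order, so the rest are newer
            }
        }
    }
}
```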
Automatic fault detection on the server, service instance eviction, and the self-protection mechanism under network failures
Besides a normal shutdown, a service may crash or die with an OOM and never get the chance to shut down gracefully and deregister itself; in that case it still sits in the server's registry. A mechanism is therefore needed to detect such faults automatically, evict the faulty instances from the service list, and finally let clients learn about the change through their periodic registry pulls. Some details of the process are shown below.
There are several key factors in this process (a simplified expiry check is sketched after the list):
- Lease renewal, i.e. the heartbeat (client side)
- Lease / heartbeat duration (server side)
- Compensation time (to account for delays caused by server-side pauses such as GC/STW)
- The timestamp of the last heartbeat
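Putting those factors together, here is a hedged sketch of the expiry check: an instance is treated as faulty when its last heartbeat is older than the lease duration plus a compensation time that accounts for the eviction task itself running late (for example because of a long GC pause). The names and the 90-second duration are illustrative defaults, not the exact Lease code.

```java
import java.util.concurrent.TimeUnit;

/** Sketch of the server-side lease expiry check with compensation time. */
public class LeaseExpiryCheck {

    static final long LEASE_DURATION_MS = TimeUnit.SECONDS.toMillis(90); // illustrative default lease duration

    /**
     * @param lastHeartbeatMs timestamp of the last renew received from the client
     * @param compensationMs  extra time added when the eviction task itself ran late
     *                        (e.g. because of a long GC / stop-the-world pause on the server)
     */
    static boolean isExpired(long lastHeartbeatMs, long compensationMs, long nowMs) {
        return nowMs > lastHeartbeatMs + LEASE_DURATION_MS + compensationMs;
    }

    /** Compensation = how much later than expected the eviction task actually ran. */
    static long compensationTime(long lastEvictionRunMs, long expectedIntervalMs, long nowMs) {
        long elapsed = nowMs - lastEvictionRunMs;
        return Math.max(elapsed - expectedIntervalMs, 0);
    }
}
```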
In addition, to avoid evicting healthy services just because heartbeats are lost to a network fault, the self-protection mechanism (if enabled) is consulted before eviction; it keeps a certain buffer so that an anomaly does not cause all services to be removed.
And even when eviction does happen, only a certain proportion of instances is removed at random in each pass, never all of them at once.
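The following sketch combines both safeguards: self-protection skips eviction entirely when too few heartbeats arrived in the last minute (compared with roughly 85% of the expected count, the commonly cited default threshold), and when eviction does run, only a bounded random subset of the expired instances is removed in one pass. The threshold and ratio here are illustrative.

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

/** Sketch of self-protection and proportional random eviction on the server. */
public class EvictionPolicySketch {

    static final double RENEWAL_PERCENT_THRESHOLD = 0.85; // illustrative default
    static final double EVICTION_RATIO = 0.15;            // at most this fraction of the registry per pass

    /** Self-protection: if too few heartbeats arrived in the last minute, skip eviction entirely. */
    static boolean evictionAllowed(int renewsInLastMinute, int expectedRenewsPerMinute) {
        return renewsInLastMinute > expectedRenewsPerMinute * RENEWAL_PERCENT_THRESHOLD;
    }

    /** Evict only a random, bounded subset of the expired instances in one pass. */
    static List<String> pickInstancesToEvict(List<String> expiredInstanceIds, int registrySize, Random random) {
        int evictionLimit = (int) (registrySize * EVICTION_RATIO);
        int toEvict = Math.min(expiredInstanceIds.size(), evictionLimit);
        Collections.shuffle(expiredInstanceIds, random);   // randomize which expired leases go first
        return expiredInstanceIds.subList(0, toEvict);
    }
}
```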
### Batch processing mechanism for state synchronization between Eureka Servers
Eureka-server is deployed as a cluster, and as we know, a client's registration, renewal, or offline request lands on one server at random, so it is essential that the servers keep their state in sync with each other.
Looking at the source, you can see that every registration, cancellation, or eviction is replicated to the other peer nodes (PeerEurekaNodes). During this replication, Eureka uses a three-level queue and then submits the requests in batches. Registrations and offline events are not that frequent, but heartbeat renewals arrive constantly, so batching the requests is essential for state synchronization between servers.
The following figure shows the synchronization process
The actual logic is implemented in the eureka-core module.
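Below is a much-simplified sketch of the batching idea, not the real dispatcher code: replication tasks are queued as they arrive, a worker drains up to a batch worth of them, and the whole batch is shipped to the peer in a single request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/** Sketch of batching peer-replication tasks instead of sending one request per event. */
public class ReplicationBatcher {

    /** One replication event: register / cancel / heartbeat for a given instance. */
    public static class ReplicationTask {
        final String action;
        final String instanceId;
        public ReplicationTask(String action, String instanceId) {
            this.action = action;
            this.instanceId = instanceId;
        }
    }

    private static final int MAX_BATCH_SIZE = 250;   // illustrative batch size
    private final BlockingQueue<ReplicationTask> acceptorQueue = new LinkedBlockingQueue<>();

    /** Called for every local register / cancel / renew that must be replicated to a peer. */
    public void submit(ReplicationTask task) {
        acceptorQueue.offer(task);
    }

    /** Worker loop: drain up to a batch of tasks, then ship them in one request to the peer. */
    public void runWorker(Consumer<List<ReplicationTask>> sendBatchToPeer) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            // Block briefly for the first task, then opportunistically collect more for this batch
            ReplicationTask first = acceptorQueue.poll(500, TimeUnit.MILLISECONDS);
            if (first == null) {
                continue;
            }
            List<ReplicationTask> batch = new ArrayList<>();
            batch.add(first);
            acceptorQueue.drainTo(batch, MAX_BATCH_SIZE - 1);
            sendBatchToPeer.accept(batch);   // single HTTP call carrying the whole batch
        }
    }
}
```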
There are inevitably omissions and shortcomings; please point them out. Feedback and discussion are welcome.