1. What is Eureka
Eureka is part of Spring Cloud's Netflix microservices suite and serves as the service registration and discovery module. Eureka contains both server-side and client-side components. The server side, also known as the service registry, provides service registration and discovery. Eureka supports high availability: when a shard in the cluster fails, Eureka enters self-protection mode, which allows service discovery and registration to continue during the failure. When the failed shard recovers, the other shards in the cluster synchronize their state again.
The client components include service consumers and service providers. While the application is running, the Eureka client registers its services with the registry and periodically sends heartbeats to renew its service lease. At the same time, it can query the currently registered service information from the server, cache it locally, and periodically refresh the service status.
Eureka, like the ride-hailing platform Didi, manages and records information about service providers. Instead of finding services themselves, service invokers tell Eureka what they need, and Eureka tells them which services fit their needs. At the same time, service providers and Eureka monitor each other through a heartbeat mechanism: when a service provider has problems, Eureka naturally removes it from the service list. This enables automatic registration, discovery, and status monitoring of services.
(Figure: Eureka schematic diagram)
• Eureka: A service registry (which can be a cluster) that exposes its address to the outside world
• Provider: Register your information with Eureka after startup (address, what services are provided)
• Consumer: Subscribe to Eureka and Eureka sends the consumer a list of all provider addresses for the corresponding service, updated periodically
• Heartbeat (renewal): the provider periodically reports its status to Eureka via HTTP to renew its lease
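The roles above map to a small amount of configuration on the provider side. As a sketch (the application name and port here are hypothetical, and the registry is assumed to run at the conventional address localhost:8761), a provider's application.yml might look like this:

    spring:
      application:
        name: user-service     # the service name that consumers look up in Eureka
    server:
      port: 8081
    eureka:
      client:
        service-url:
          defaultZone: http://localhost:8761/eureka/   # address of the registry

With this in place, the client starter registers the instance under "user-service" at startup and begins sending heartbeats automatically.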
3.1 Three core roles in Eureka architecture:
• Service registry (Eureka Server): records each service's address and status and exposes its own address to clients.
• Service provider: the application that provides a service. It can be a Spring Boot application or any other technology, as long as it exposes REST-style services.
• Service consumer: the application that gets the list of services from the registry, so it knows where each service provider is and how to invoke it.
3.2 Highly available Eureka Server
Service synchronization: the Eureka servers also register with each other as services. When a service provider registers with one node in the Eureka Server cluster, that node synchronizes the service information to every other node in the cluster. Therefore, no matter which node of the Eureka Server cluster a client accesses, it can obtain the complete service list.
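As a sketch of this peer registration (host names and ports are hypothetical), each server's application.yml simply points its defaultZone at the other peer:

    # application.yml on peer1 (running at http://peer1:8761)
    eureka:
      client:
        service-url:
          defaultZone: http://peer2:8762/eureka/   # register with and fetch from the peer

    # application.yml on peer2 (running at http://peer2:8762)
    eureka:
      client:
        service-url:
          defaultZone: http://peer1:8761/eureka/

Each server acts as a client of the other, which is what drives the replication described above.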
3.3 Service Providers
Service renewal: after registration is complete, the service provider periodically sends a heartbeat (a REST request to Eureka Server) to tell the server "I am still alive". This is called service renewal.
There are two important parameters that modify the behavior of service renewal:
eureka:
instance:
lease-renewal-interval-in-seconds: 30
lease-expiration-duration-in-seconds: 90
• lease-renewal-interval-in-seconds specifies the renewal (heartbeat) interval. The default is 30 seconds.
• lease-expiration-duration-in-seconds specifies the lease expiration time. The default is 90 seconds.
That is, by default the service sends a heartbeat to the registry every 30 seconds to prove that it is alive. If no heartbeat arrives within 90 seconds, Eureka Server considers the service to be down and removes it from the service list. Do not change these two values in a production environment; the defaults are sufficient.
3.4 Service consumers
When a service consumer starts, it checks the value of eureka.client.fetch-registry (true by default). If true, it pulls a read-only backup of the Eureka Server service list and caches it locally. The data is then re-fetched and updated every 30 seconds. This interval can be modified with the following parameter:
eureka:
client:
registry-fetch-interval-seconds: 5
In production, we do not need to change this value.
4. Eureka Cluster Working Principles
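From the client's point of view, a Eureka cluster is simply a list of registry addresses. As a sketch (hosts and ports are hypothetical), a provider or consumer lists all peers in defaultZone so it can fall back to another node if one is unreachable:

    eureka:
      client:
        service-url:
          # list every peer, comma-separated; the client tries the next
          # address when one node is unavailable
          defaultZone: http://peer1:8761/eureka/,http://peer2:8762/eureka/

Because the peers replicate registrations to each other, registering with any one node is enough for the whole cluster to learn about the service.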
5. Eureka self-protection mechanism
5.1 Overview
Protection mode is used when a network partition exists between Eureka Server and its clients. Once in protection mode, Eureka Server attempts to protect the information in its service registry and will not delete its data; that is, it will not unregister any microservices.
(Figure: the highlighted warning text on the Eureka dashboard shows that the server has entered self-protection mode.)
5.2 Why is there a self-protection mechanism
So that healthy Eureka Client instances are not wrongly discarded, Eureka Server does not immediately remove a client's registration when the network connection between them is lost.
5.3 What is the self-protection mechanism
(Figure: the self-protection mechanism)
By default, a Eureka Client periodically sends heartbeat packets to the Eureka Server. If the server does not receive a heartbeat from a client within a certain period (90 seconds by default), it removes that service instance from the registry. However, if a large proportion of instances stop sending heartbeats within a short window, the cause is more likely a network problem than mass instance failure (for example, the network is down but the clients themselves are still running). In that case Eureka Server enables the self-protection mechanism and does not remove the instances. A registry that evicted services merely because heartbeats were delayed by network latency would be making a serious error, since the clients are still alive and able to send heartbeats; the self-protection mechanism exists to solve exactly this problem.
In self-protection mode, Eureka Server protects the information in the service registry and does not deregister any service instances.
Its design philosophy is that it is better to keep stale registration information than to blindly unregister service instances that might still be healthy. To sum up, self-protection mode is a safety measure against network partitions: Eureka would rather keep all microservices (healthy and unhealthy alike) than blindly write off any microservice that might be healthy. Self-protection mode makes Eureka clusters more robust and stable.
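Spring Cloud exposes this behavior through server-side properties. As a sketch, self-protection can be disabled for development or testing (not recommended in production), and the trigger threshold can be tuned:

    eureka:
      server:
        enable-self-preservation: false   # disable self-protection (dev/test only)
        renewal-percent-threshold: 0.85   # default: enter protection mode when fewer
                                          # than 85% of expected heartbeats arrive

Disabling self-preservation makes eviction of stale instances faster and more predictable on a developer machine, at the cost of the partition tolerance described above.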