Series of articles:

SpringCloud source code series (1) – Registry Eureka initialization

SpringCloud source code series (2) – Registry Eureka service registration and renewal

SpringCloud source code series (3) – Registry Eureka fetching the registry

Service offline

Eureka Client offline

When the eureka client service shuts down, the shutdown method of DiscoveryClient is triggered to shut down the eureka client. Let's look at the shutdown method to see how the eureka client goes offline.

  • First it removes the registered status change listener, since there is no longer any need to listen for instance status changes
  • It then cancels a series of scheduled tasks, stopping interactions with eureka-server such as the heartbeat timer, and releases those resources
  • Then it calls unregister to deregister, which calls the server-side DELETE /apps/{appName}/{instanceId} interface to take the instance offline
  • Finally it shuts down some other resources, such as EurekaTransport
@PreDestroy
@Override
public synchronized void shutdown() {
    if (isShutdown.compareAndSet(false, true)) {
        logger.info("Shutting down DiscoveryClient ...");

        // Remove the status change listener
        if (statusChangeListener != null && applicationInfoManager != null) {
            applicationInfoManager.unregisterStatusChangeListener(statusChangeListener.getId());
        }

        // Stop scheduled tasks and release resources:
        // instanceInfoReplicator, heartbeatExecutor, cacheRefreshExecutor,
        // scheduler, cacheRefreshTask, heartbeatTask
        cancelScheduledTasks();

        // If APPINFO was registered
        if (applicationInfoManager != null
                && clientConfig.shouldRegisterWithEureka()
                && clientConfig.shouldUnregisterOnShutdown()) {
            applicationInfoManager.setInstanceStatus(InstanceStatus.DOWN);
            // Call eureka-server's offline interface to take the instance offline
            unregister();
        }

        // Continue to release resources
        if (eurekaTransport != null) {
            eurekaTransport.shutdown();
        }
        heartbeatStalenessMonitor.shutdown();
        registryStalenessMonitor.shutdown();
    }
}

void unregister() {
    // It can be null if shouldRegisterWithEureka == false
    if (eurekaTransport != null && eurekaTransport.registrationClient != null) {
        try {
            logger.info("Unregistering ...");
            // Unregister: DELETE /apps/{appName}/{instanceId}
            EurekaHttpResponse<Void> httpResponse = eurekaTransport.registrationClient.cancel(instanceInfo.getAppName(), instanceInfo.getId());
            logger.info(PREFIX + "{} - deregister status: {}", appPathIdentifier, httpResponse.getStatusCode());
        } catch (Exception e) {
            logger.error(PREFIX + "{} - de-registration failed{}", appPathIdentifier, e.getMessage(), e);
        }
    }
}

The Eureka Server service is offline

Following the DELETE /apps/{appName}/{instanceId} interface, you can find the cancelLease method of InstanceResource, which is the entry point for client deregistration.
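To get a feel for this interface, here is a minimal sketch of calling it by hand with the JDK's HttpClient. The server address, app name and instance id are illustrative assumptions; with Spring Cloud the eureka-server endpoints are typically served under the /eureka path.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManualDeregisterSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical server address and instance id -- adjust to your environment
        String url = "http://localhost:8761/eureka/apps/DEMO-CONSUMER/192.168.0.10:demo-consumer:8081";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).DELETE().build();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        // 200 means the lease was removed; 404 means it was not found (e.g. already evicted)
        System.out.println("deregister status: " + response.statusCode());
    }
}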

Entering the registry's cancel method, you can see that it is similar to the previous interfaces: it calls the parent registry's cancel method to take the instance offline, and then calls replicateToPeers to replicate the operation to the other nodes in the cluster. The parent's cancel method in turn actually calls the internalCancel method.

@DELETE
public Response cancelLease(@HeaderParam(PeerEurekaNode.HEADER_REPLICATION) String isReplication) {
    try {
        // Take the instance offline
        boolean isSuccess = registry.cancel(app.getName(), id, "true".equals(isReplication));

        if (isSuccess) {
            return Response.ok().build();
        } else {
            return Response.status(Status.NOT_FOUND).build();
        }
    } catch (Throwable e) {
        return Response.serverError().build();
    }
}

public boolean cancel(final String appName, final String id, final boolean isReplication) {
    // Take the instance offline
    if (super.cancel(appName, id, isReplication)) {
        // Replicate to the cluster
        replicateToPeers(Action.Cancel, appName, id, null, null, isReplication);

        return true;
    }
    return false;
}

public boolean cancel(String appName, String id, boolean isReplication) {
    // Call the internal method to take the instance offline
    return internalCancel(appName, id, isReplication);
}

Let’s look at the internalCancel method:

  • First fetch the lease information of all instances of the service from the registry by service name, then remove the instance's lease information by instance ID
  • Add the removed instance to recentCanceledQueue, a circular queue of recently offlined instances
  • Take the lease offline. Note that this sets the offline time evictionTimestamp to the current time
  • Then set the instance's ActionType to DELETED, and add the offlined instance to recentlyChangedQueue, the queue of recent changes
  • Then invalidate the readWriteCacheMap cache; whenever a service instance changes, the cache must be cleared. However, it may take up to 30 seconds before readWriteCacheMap syncs to readOnlyCacheMap
  • Finally, decrement the expected number of renewing clients by 1, and then update the per-minute renewal threshold
protected boolean internalCancel(String appName, String id, boolean isReplication) {
    read.lock();
    try {
        CANCEL.increment(isReplication);
        // Fetch the service's lease information by service name
        Map<String, Lease<InstanceInfo>> gMap = registry.get(appName);
        Lease<InstanceInfo> leaseToCancel = null;
        if (gMap != null) {
            // Remove the instance's lease information by instance ID
            leaseToCancel = gMap.remove(id);
        }
        // Add the removed instance ID to the recently offlined queue
        recentCanceledQueue.add(new Pair<Long, String>(System.currentTimeMillis(), appName + "(" + id + ")"));
        InstanceStatus instanceStatus = overriddenInstanceStatusMap.remove(id);

        if (leaseToCancel == null) {
            CANCEL_NOT_FOUND.increment(isReplication);
            return false;
        } else {
            // Set evictionTimestamp to the current timestamp
            leaseToCancel.cancel();
            InstanceInfo instanceInfo = leaseToCancel.getHolder();
            String vip = null;
            String svip = null;
            if (instanceInfo != null) {
                // Set the instance's ActionType to DELETED
                instanceInfo.setActionType(ActionType.DELETED);
                // Add to the recently changed queue
                recentlyChangedQueue.add(new RecentlyChangedItem(leaseToCancel));
                // Update the last updated timestamp
                instanceInfo.setLastUpdatedTimestamp();
                vip = instanceInfo.getVIPAddress();
                svip = instanceInfo.getSecureVipAddress();
            }
            // Invalidate the cache
            invalidateCache(appName, vip, svip);
            logger.info("Cancelled instance {}/{} (replication={})", appName, id, isReplication);
        }
    } finally {
        read.unlock();
    }

    synchronized (lock) {
        if (this.expectedNumberOfClientsSendingRenews > 0) {
            // Expected number of renewing clients - 1
            this.expectedNumberOfClientsSendingRenews = this.expectedNumberOfClientsSendingRenews - 1;
            // Update the per-minute renewal threshold
            updateRenewsPerMinThreshold();
        }
    }
    return true;
}

Service failure

When a service stops, the shutdown method of DiscoveryClient is triggered to shut down the eureka client and a notification is sent to the registry. However, if the client crashes or is shut down abnormally, the registry cannot receive the offline notification. For this case the registry has a scheduled task that uses the heartbeat to determine whether a client instance is down, and then removes the failed instance.

Initialization of the scheduled task that removes instances

In the last steps of EurekaBootStrap initialization, the registry's openForTraffic is called to do some final settings. As its last step, it calls super.postInit to finish initialization, which creates a scheduled task that periodically removes expired instances.

registry.openForTraffic(applicationInfoManager, registryCount);
public void openForTraffic(ApplicationInfoManager applicationInfoManager, int count) {
    // Expected number of clients sending renewals
    this.expectedNumberOfClientsSendingRenews = count;
    // Update the per-minute renewal threshold
    updateRenewsPerMinThreshold();
    // ...
    // Set the instance status to UP
    applicationInfoManager.setInstanceStatus(InstanceStatus.UP);
    // Call the parent's post-initialization
    super.postInit();
}

Let's look at the postInit method:

  • First it starts the counter that tracks renewals in the last minute
  • Then it creates a scheduled task that periodically removes expired instances, running every 60 seconds by default
protected void postInit() {
    // Start the counter that counts renewals in the last minute
    renewsLastMin.start();
    if (evictionTaskRef.get() != null) {
        evictionTaskRef.get().cancel();
    }
    // The task that periodically evicts expired instances
    evictionTaskRef.set(new EvictionTask());
    evictionTimer.schedule(evictionTaskRef.get(),
            serverConfig.getEvictionIntervalTimerInMs(),
            // Executed every 60 seconds by default
            serverConfig.getEvictionIntervalTimerInMs());
}

Periodically remove expired instances

1. The scheduled task that removes instances

As you can see, each time the EvictionTask runs it first obtains a compensation time, because the interval between EvictionTask executions may exceed the configured 60 seconds, for example due to GC pauses or local time drift causing timing inaccuracies. The evict method is then called to remove expired instances.

When calculating time differences, this idea of compensation time is worth learning from: possible inaccuracies in the measured interval should be taken into account.

class EvictionTask extends TimerTask {
    private final AtomicLong lastExecutionNanosRef = new AtomicLong(0L);

    @Override
    public void run() {
        try {
            // Get the compensation time, because the interval between EvictionTask executions may be longer
            // than the configured 60 seconds, e.g. due to GC pauses or local time drift
            long compensationTimeMs = getCompensationTimeMs();
            logger.info("Running the evict task with compensationTime {}ms", compensationTimeMs);
            evict(compensationTimeMs);
        } catch (Throwable e) {
            logger.error("Could not run the evict task", e);
        }
    }

    long getCompensationTimeMs() {
        long currNanos = getCurrentTimeNano();
        long lastNanos = lastExecutionNanosRef.getAndSet(currNanos);
        if (lastNanos == 0L) {
            return 0L;
        }
        // Elapsed time since the last execution
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(currNanos - lastNanos);
        // Compensation time = elapsed time - eviction task interval (default 60 seconds)
        long compensationTime = elapsedMs - serverConfig.getEvictionIntervalTimerInMs();
        return compensationTime <= 0L ? 0L : compensationTime;
    }

    long getCurrentTimeNano() {  // for testing
        return System.nanoTime();
    }
}

2. Remove the instance

The process for removing instances is as follows:

  • First determine whether lease expiration is enabled (this mainly concerns the self-protection mechanism, discussed in the next section).
  • Iterate over the registry, determine whether each instance has expired, and add expired instances to a collection.
  • Calculate the limit on the number of instances that can be removed, mainly for self-protection, to avoid removing too many instances at once.
  • Then randomly select at most that many expired instances from the collection to remove.
  • Removing an instance is simply a call to the offline method internalCancel, which removes the instance from the registry, adds it to the recent change queue, invalidates the cache, and so on, as described in the service offline section.
public void evict(long additionalLeaseMs) {
    logger.debug("Running the evict task");

    // Whether lease expiration is enabled
    if (!isLeaseExpirationEnabled()) {
        logger.debug("DS: lease expiration is currently disabled.");
        return;
    }

    List<Lease<InstanceInfo>> expiredLeases = new ArrayList<>();
    for (Entry<String, Map<String, Lease<InstanceInfo>>> groupEntry : registry.entrySet()) {
        Map<String, Lease<InstanceInfo>> leaseMap = groupEntry.getValue();
        if (leaseMap != null) {
            for (Entry<String, Lease<InstanceInfo>> leaseEntry : leaseMap.entrySet()) {
                Lease<InstanceInfo> lease = leaseEntry.getValue();
                // Determine whether the instance's lease has expired
                if (lease.isExpired(additionalLeaseMs) && lease.getHolder() != null) {
                    // Add to the list of expired leases
                    expiredLeases.add(lease);
                }
            }
        }
    }

    // Get the number of instances registered in the registry
    int registrySize = (int) getLocalRegistrySize();
    // Threshold of instances to retain: number of registered instances * renewal percent threshold (default 0.85)
    int registrySizeThreshold = (int) (registrySize * serverConfig.getRenewalPercentThreshold());
    // The number of instances that can be removed at a time is limited to 15% of the registry
    int evictionLimit = registrySize - registrySizeThreshold;

    // Take the smaller of the expired count and the eviction limit
    int toEvict = Math.min(expiredLeases.size(), evictionLimit);
    if (toEvict > 0) {
        Random random = new Random(System.currentTimeMillis());
        for (int i = 0; i < toEvict; i++) {
            // Randomly pick toEvict expired leases from expiredLeases
            int next = i + random.nextInt(expiredLeases.size() - i);
            Collections.swap(expiredLeases, i, next);
            Lease<InstanceInfo> lease = expiredLeases.get(i);

            String appName = lease.getHolder().getAppName();
            // Instance ID
            String id = lease.getHolder().getId();
            EXPIRED.increment();
            // Call the offline method
            internalCancel(appName, id, false);
        }
    }
}

3. Expired instances are removed in batches

As you can see, expired instances are not all removed at once: a limit toEvict is calculated, and only toEvict expired instances are randomly removed in one run, a mechanism of batching plus random selection.

For example, if there are 20 instances in the registry, the maximum number that can be removed in one run is evictionLimit = 20 - 20 * 0.85 = 3. That is, even if there are 5 expired instances, only 3 of them are removed randomly this time; the other two will be removed in a later run of the eviction task.

This batching plus random selection mechanism can cause expired instances to take quite a long time to go offline, especially in development environments where services are frequently started and stopped.
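The arithmetic above is easy to verify with a quick sketch (illustrative numbers only, not Eureka code):

public class EvictionLimitSketch {
    public static void main(String[] args) {
        int registrySize = 20;                  // instances currently registered
        double renewalPercentThreshold = 0.85;  // default renewal-percent-threshold
        int registrySizeThreshold = (int) (registrySize * renewalPercentThreshold); // 17
        int evictionLimit = registrySize - registrySizeThreshold;                   // 3
        int expired = 5;                        // suppose 5 leases have expired
        int toEvict = Math.min(expired, evictionLimit);                             // 3 this round, 2 left for later runs
        System.out.println("evict " + toEvict + " of " + expired + " expired instances this round");
    }
}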

How to determine whether an instance is expired

As you can see above, eureka calls lease.isExpired(additionalLeaseMs) to determine whether an instance is expired. Looking at the isExpired method, if the eviction timestamp is set, or the current time is greater than (the instance's last update time + the lease duration (90 seconds) + the compensation time), the instance is considered expired. In other words, if an instance has not renewed within one lease duration, the client instance is considered faulty and has to be removed.

/**
 * Checks if the lease of a given {@link com.netflix.appinfo.InstanceInfo} has expired or not.
 *
 * Note that due to renew() doing the 'wrong" thing and setting lastUpdateTimestamp to +duration more than
 * what it should be, the expiry will actually be 2 * duration. This is a minor bug and should only affect
 * instances that ungracefully shutdown. Due to possible wide ranging impact to existing usage, this will
 * not be fixed.
 *
 * @param additionalLeaseMs any additional lease time to add to the lease evaluation in ms.
 */
public boolean isExpired(long additionalLeaseMs) {
    // The culling time has been set, or the current time > (instance last update time + renewal cycle (90 seconds) + compensation time)
    return (evictionTimestamp > 0 || System.currentTimeMillis() > (lastUpdateTimestamp + duration + additionalLeaseMs));
}

There is another issue worth noting here. Check out the comment on isExpired: the duration is effectively added twice, and Eureka admits this is actually a bug but won't fix it.

lastUpdateTimestamp is updated when the client renews: it is set to the current time + duration (default 90 seconds).

public void renew() {
    // Update the last update time to the current time plus the lease duration, 90 seconds by default
    lastUpdateTimestamp = System.currentTimeMillis() + duration;
}

The duration is set at registration time; let's see what it means. As you can see, if the client does not configure durationInSecs, it defaults to 90 seconds.

As can be seen from the description of getDurationInSecs, duration means how long to wait without a renewal before removing the client; the default is 90 seconds. For example, if a client renews every 30 seconds, it has to miss more than three heartbeats before the instance is considered faulty and removed.

public void register(final InstanceInfo info, final boolean isReplication) {
    int leaseDuration = Lease.DEFAULT_DURATION_IN_SECS;
    // If the instance has a lease duration configured, use it; otherwise the default of 90 seconds is used
    if (info.getLeaseInfo() != null && info.getLeaseInfo().getDurationInSecs() > 0) {
        leaseDuration = info.getLeaseInfo().getDurationInSecs();
    }
    // Register the instance
    super.register(info, leaseDuration, isReplication);
    // Replicate to the other server nodes in the cluster
    replicateToPeers(Action.Register, info.getAppName(), info.getId(), info, null, isReplication);
}

/**
 * Returns client specified setting for eviction (e.g. how long to wait w/o renewal event)
 *
 * @return time in milliseconds since epoch.
 */
public int getDurationInSecs() {
    return durationInSecs;
}

So the instance is not removed after 90 seconds: in isExpired, lastUpdateTimestamp already includes one duration and another duration is added on top, 180 seconds in total. This means a client instance has to go more than 180 seconds without renewing before it is considered faulty and removed.

The isExpired comment also states that renew() sets lastUpdateTimestamp incorrectly and the actual expiry is 2 * duration, but Eureka will not fix this bug because of the possible wide-ranging impact on existing usage.

So the conclusion here is that when a client goes down (abnormal offline), its instance in the registry is removed not after 90 seconds, but at least 180 seconds later.
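The timeline can be sketched as follows (assumed values; the compensation time is ignored for simplicity):

public class ExpiryTimelineSketch {
    public static void main(String[] args) {
        long duration = 90_000L;                              // lease duration, 90s by default
        long lastHeartbeat = System.currentTimeMillis();      // the client's last successful renew
        long lastUpdateTimestamp = lastHeartbeat + duration;  // renew() stores "now + duration"
        long expiryPoint = lastUpdateTimestamp + duration;    // isExpired() adds duration again
        System.out.println("evictable after ~" + (expiryPoint - lastHeartbeat) / 1000 + "s"); // ~180s
    }
}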

Self-protection mechanism

If the network jitters occasionally or is temporarily unavailable, the server cannot receive client renewals. To ensure availability, eureka-server therefore checks whether the number of heartbeats received in the last minute is below a specified threshold; if so, it triggers the self-protection mechanism, disables lease expiration, and stops removing instances, thus protecting the registration information.

The self-protection mechanism when removing instances

The evict method that removes instances calls isLeaseExpirationEnabled, which determines whether the self-protection mechanism is triggered. If it returns false, no instances are removed.

Let’s look at the isLeaseExpirationEnabled method:

  • First, if self-protection is not enabled, return true, meaning instances can be removed
  • If self-protection is enabled (the default), lease expiration is only enabled when the per-minute renewal threshold is greater than 0 and the number of renewals in the last minute is greater than that threshold
public boolean isLeaseExpirationEnabled() {
    // Check whether self-protection is enabled
    if (!isSelfPreservationModeEnabled()) {
        // The self preservation mode is disabled, hence allowing the instances to expire.
        return true;
    }
    // The per-minute renewal threshold is greater than 0, and the number of renewals in the last minute is greater than the threshold
    return numberOfRenewsPerMinThreshold > 0 && getNumOfRenewsInLastMin() > numberOfRenewsPerMinThreshold;
}

public boolean isSelfPreservationModeEnabled() {
    return serverConfig.shouldEnableSelfPreservation();
}

This per-minute renewal threshold numberOfRenewsPerMinThreshold has appeared in many places before: service registration, deregistration, openForTraffic, and a scheduled task that runs every 15 minutes; all of them call the following method to update numberOfRenewsPerMinThreshold.

protected void updateRenewsPerMinThreshold() {
    // Per-minute renewal threshold = expected number of renewing clients * (60 / renewal interval) * renewal percent threshold
    // For example, with 10 registered instances the expected number of renewing clients is 10; the default interval is 30 seconds,
    // i.e. each client should send a heartbeat every 30 seconds, and the default renewal percentage is 0.85
    // Per-minute renewal threshold = 10 * (60.0 / 30) * 0.85 = 17, which means at least 17 renewals should be received per minute
    this.numberOfRenewsPerMinThreshold = (int) (this.expectedNumberOfClientsSendingRenews
            * (60.0 / serverConfig.getExpectedClientRenewalIntervalSeconds())
            * serverConfig.getRenewalPercentThreshold());
}

expectedNumberOfClientsSendingRenews represents the expected number of renewing clients: it is incremented by 1 when an instance registers and decremented by 1 when an instance goes offline.

/////////////// Instance registration ///////////////
synchronized (lock) {
    if (this.expectedNumberOfClientsSendingRenews > 0) {
        // Expected number of renewing clients + 1
        this.expectedNumberOfClientsSendingRenews = this.expectedNumberOfClientsSendingRenews + 1;
        // Update the per-minute renewal threshold, which is used in many other places
        updateRenewsPerMinThreshold();
    }
}

/////////////// Instance offline (deregistration, fault eviction) ///////////////
synchronized (lock) {
    // Expected number of renewing clients - 1
    if (this.expectedNumberOfClientsSendingRenews > 0) {
        this.expectedNumberOfClientsSendingRenews = this.expectedNumberOfClientsSendingRenews - 1;
        // Update the per-minute renewal threshold
        updateRenewsPerMinThreshold();
    }
}

renewsLastMin counts the renewals in the last minute: renewsLastMin.increment() is called in the renew method every time the server receives a heartbeat, and renewsLastMin.getCount() returns the total number of renewals in the last minute.

public long getNumOfRenewsInLastMin() {
    return renewsLastMin.getCount();
}

Here's an example to illustrate how the self-protection mechanism works during instance fault removal:

  • Suppose 20 instances are registered, the default heartbeat renewal interval is 30 seconds, the renewal percent threshold is 0.85, and the self-protection mechanism is enabled.
  • Then the expected number of renewing clients expectedNumberOfClientsSendingRenews = 20, and the per-minute renewal threshold numberOfRenewsPerMinThreshold = 20 * (60 / 30) * 0.85 = 34.
  • Normally the 20 instances send renewsLastMin = 20 * (60 / 30) = 40 heartbeats per minute.
  • Then numberOfRenewsPerMinThreshold (34) > 0 && renewsLastMin (40) > numberOfRenewsPerMinThreshold (34) holds, so failed instances are allowed to be removed.
  • But if 3 instances have not sent renewals in the last minute, then renewsLastMin = 17 * (60 / 30) = 34, while numberOfRenewsPerMinThreshold is still 34 because no instance has been removed from the registry yet. The condition is no longer satisfied, so even instances that really have failed cannot be removed.

This is eureka-server's self-protection mechanism: if a relatively large proportion of instances (more than 15%) fail to send heartbeats within a short period of time, eureka-server assumes that a network problem is preventing clients from sending heartbeats and enters self-protection mode to avoid removing instances by mistake.
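The example can be condensed into a small sketch of the isLeaseExpirationEnabled condition (illustrative values only, not Eureka code):

public class SelfProtectionSketch {
    public static void main(String[] args) {
        int expectedClients = 20;               // expectedNumberOfClientsSendingRenews
        int renewalIntervalSecs = 30;           // expected heartbeat interval
        double renewalPercentThreshold = 0.85;
        int numberOfRenewsPerMinThreshold =
                (int) (expectedClients * (60.0 / renewalIntervalSecs) * renewalPercentThreshold); // 34
        long renewsLastMin = 17 * (60 / 30);    // only 17 instances still send heartbeats => 34
        boolean leaseExpirationEnabled =
                numberOfRenewsPerMinThreshold > 0 && renewsLastMin > numberOfRenewsPerMinThreshold;
        System.out.println("lease expiration enabled: " + leaseExpirationEnabled); // false => self-protection kicks in
    }
}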

The self-protection mechanism prevents instances from being removed

In the development environment, some services show as DOWN because they are frequently restarted, yet the service instances are not removed. This is because of eureka-server's self-protection mechanism.

1. With the self-protection mechanism enabled

First, configure eureka-server as follows to enable self-protection:

eureka:
  server:
    # Whether to enable self-protection
    enable-self-preservation: true

Start several client instances:

Then quickly stop demo-consumer (if it were shut down gracefully, cancel would be called to take the instance offline). You can see that demo-consumer is DOWN, but the instance has not been removed.

You can see that the number of renewals in the last minute was 4, while the per-minute renewal threshold was 6. Since the condition was not met, the self-protection mechanism was triggered and the instance could not be removed.

Note that the expected number of renewing clients was 4, whereas only 3 client instances were actually registered, because Spring Cloud sets the initial count to 1 when calling openForTraffic.

2. With the self-protection mechanism disabled

Disable the self-protection mechanism with the following configuration:

eureka:
  server:
    # Whether to enable self-protection
    enable-self-preservation: false

The registry console will tell us that self-protection is disabled:

Performing the same operation and quickly stopping the instance, we find the instance is still not removed:

That's because it takes 180 seconds for an instance to be considered expired, so you have to wait about three minutes before the instance goes offline.

public boolean isExpired(long additionalLeaseMs) {
    return (evictionTimestamp > 0 || System.currentTimeMillis() > (lastUpdateTimestamp + duration + additionalLeaseMs));
}

3. Quickly close multiple instances

After 2 minutes, only one instance had been taken offline, because eureka-server removes at most 15% of the instances at a time.

4. Where does DOWN come from

So where does the DOWN status come from? Since I started the client instances locally from IDEA, closing one triggers the status change listener, which in turn triggers a register call with status DOWN, so the instance's status immediately becomes DOWN.

If you kill -9 the process directly, the status change listener is not triggered and the instance in the registry does not become DOWN, but the instance is actually offline and unavailable.

5. Taking instances offline quickly

As the previous tests show, you can adjust the following parameters to take a failed instance offline more quickly.

eureka-server configuration:

eureka:
  server:
    # Whether to enable self-protection
    enable-self-preservation: false
    # Renewal percent threshold
    renewal-percent-threshold: 0
    # Interval of the scheduled task that removes instances (ms)
    eviction-interval-timer-in-ms: 10000

eureka-client configuration:

eureka:
  instance:
    # How long without a heartbeat before the instance is considered expired
    lease-expiration-duration-in-seconds: 60

Design of the last-minute counter

renewsLastMin counts the renewals in the last minute; its type is MeasuredRate. The design of this class is also worth learning from.

MeasuredRate uses two buckets to count, one for the previous interval and one for the current interval. A scheduled task periodically moves the current bucket's count into the previous bucket and resets the current bucket. Increments go into the current bucket, and reads of the previous interval's count come from the previous bucket.

public class MeasuredRate {
    private static final Logger logger = LoggerFactory.getLogger(MeasuredRate.class);
    // Two buckets are used for counting: one for the last minute, one for the current minute
    private final AtomicLong lastBucket = new AtomicLong(0);
    private final AtomicLong currentBucket = new AtomicLong(0);

    private final long sampleInterval;
    private final Timer timer;
    private volatile boolean isActive;

    /**
     * @param sampleInterval in milliseconds
     */
    public MeasuredRate(long sampleInterval) {
        // Sampling interval
        this.sampleInterval = sampleInterval;
        // Timer
        this.timer = new Timer("Eureka-MeasureRateTimer", true);
        this.isActive = false;
    }

    public synchronized void start() {
        if (!isActive) {
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                    try {
                        // Executed once per interval: move the current bucket's count into the last-minute bucket
                        lastBucket.set(currentBucket.getAndSet(0));
                    } catch (Throwable e) {
                        logger.error("Cannot reset the Measured Rate", e);
                    }
                }
            }, sampleInterval, sampleInterval);

            isActive = true;
        }
    }

    public synchronized void stop() {
        if (isActive) {
            timer.cancel();
            isActive = false;
        }
    }

    /**
     * Returns the count in the last sample interval.
     */
    public long getCount() {
        // Get the count from the last-minute bucket
        return lastBucket.get();
    }

    /**
     * Increments the count in the current sample interval.
     */
    public void increment() {
        // Add to the current bucket's count
        currentBucket.incrementAndGet();
    }
}
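For reference, here is a minimal usage sketch of how such a counter could be wired up (assumed wiring for illustration only; the registry's actual usage lives in its renew logic):

public class MeasuredRateUsageSketch {
    public static void main(String[] args) throws InterruptedException {
        MeasuredRate renewsLastMin = new MeasuredRate(60 * 1000);  // 1-minute sample interval
        renewsLastMin.start();

        // every time the server receives a heartbeat:
        renewsLastMin.increment();

        // after the timer rolls the buckets over, getCount() returns the previous interval's total
        Thread.sleep(61 * 1000);
        System.out.println("renews in last minute: " + renewsLastMin.getCount()); // 1

        renewsLastMin.stop();
    }
}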

Service fault removal and self-protection mechanism diagram

The following diagram summarizes service fault removal and self-protection mechanisms.