Microservice governance
The Spring Cloud tool suite provides comprehensive technical support for microservice governance. These governance tools mainly include service registration and discovery, load balancing management, dynamic routing, service degradation and failover, link tracing, and service monitoring. The main functional components of microservice governance are as follows:
- The registry component, Eureka, provides service registration and discovery.
- The load-balancing component, Ribbon, provides load-balancing scheduling and management.
- The edge proxy component, Zuul, provides gateway services and dynamic routing.
- The circuit breaker component, Hystrix, provides fault tolerance, service degradation, failover, and more.
- The service event-stream aggregation component, Turbine, can be used to monitor the health of services in a cluster.
- The log collection component, Sleuth, traces and manages calls between services through log collection.
- The Config component provides unified configuration management.
How these components work can be illustrated in a service-by-service invocation sequence diagram, as shown in Figure 5-1.
In this sequence diagram, Eureka manages each registered microservice instance and maintains a metadata list for it. When a service consumer calls a microservice, Ribbon performs load-balancing scheduling based on the list of microservice instances. By default, this scheduling uses a polling (round-robin) algorithm to pick an available instance from the list, and Zuul routes the request based on the instance's metadata. During routing, Hystrix checks the circuit breaker status of the microservice instance. If the circuit breaker is closed, the service is provided normally. If the circuit breaker is open, the service is faulty, and Hystrix performs failover and service degradation according to the instance's configuration.
In addition, other components can help with the governance of microservices. For example, Turbine can monitor the circuit breakers of all microservices, Config can be used to build a configuration manager that supports online updates, and Sleuth and Zipkin can be combined to build a tracing server. Using these components and services, microservice governance can be further enhanced.
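The default polling (round-robin) scheduling that Ribbon applies can be sketched in plain Java. This is my own illustration of the algorithm, not Ribbon's actual implementation; the instance addresses are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of round-robin instance selection, the default Ribbon strategy:
// each call hands out the next instance in the list, wrapping around at the end.
class RoundRobinChooser {
    private final List<String> instances;            // instance addresses from the registry
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinChooser(List<String> instances) {
        this.instances = instances;
    }

    String choose() {
        int index = Math.floorMod(position.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

Because the position counter is an `AtomicInteger`, concurrent callers still cycle through the instances evenly.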
Since Eureka is no longer updated in new versions of Spring Cloud, the more powerful third-party tool Consul is used here to create the registry. This registry also supports the related components in the Spring Cloud toolkit.
Create a registry using Consul
Consul is a very powerful and fairly stable registry that includes integrated configuration management functionality. In addition, it is easier to integrate when running and clustering in Docker.
Installing Consul is not complicated. Download the build for your operating system from the Consul website. After downloading and unzipping, you can start it in development mode with the following command:
```shell
consul agent -dev
```
After startup, you can open its console in a browser at the following address:

```
http://localhost:8500
```
If the page shown in Figure 5-2 is displayed, the registry is ready. Consul's default service port is 8500, which is used both for console management and for service access.
In order to save the configuration information to a disk file, startup parameters similar to those in production are used, as follows:
```shell
consul agent -server -bind=127.0.0.1 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/Users/apple/consul_data/application/data/ -node=server
```
The meanings of these configuration parameters are as follows.
- `-server` starts the agent in server mode.
- `-bind` specifies the address to bind to. (A server may have multiple NICs; use this parameter to choose one.)
- `-client` specifies the address clients use to access Consul's APIs (here, the client means the browser or other callers). `0.0.0.0` means the client address is not restricted.
- `-bootstrap-expect=3` sets the minimum number of server nodes in the cluster; below this value the cluster cannot work properly. (Note: ZooKeeper requires an odd number of nodes for elections, whereas Consul uses the Raft algorithm.) If no cluster is used, set this parameter to 1.
- `-data-dir` specifies the directory for storing data (the directory must exist).
- `-node` sets the node name displayed in the Web UI.
The `-data-dir` parameter sets the path where configuration information is saved; set it to an existing directory path on your machine.
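The reason `-bootstrap-expect=3` keeps a degraded cluster from working is Raft's majority rule: a cluster of n servers needs a quorum of floor(n/2) + 1 live members to elect a leader. The arithmetic can be sketched as follows (my own illustration, not Consul code):

```java
// Raft-style majority quorum: a cluster of n servers needs n/2 + 1 live
// members to elect a leader, so it tolerates n - quorum failed servers.
class Quorum {
    static int quorum(int servers) {
        return servers / 2 + 1;
    }

    static int tolerableFailures(int servers) {
        return servers - quorum(servers);
    }
}
```

This also shows why even cluster sizes add little: 3 servers and 4 servers both tolerate only one failure.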
Service registration and discovery
Once a microservice registers with Consul, it can be discovered by other services. Service registration requires the following steps.
1. Add dependencies
Reference the service discovery and configuration management dependencies associated with Consul, as follows:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-consul-config</artifactId>
</dependency>
```
Among them, the discovery component provides service registration and discovery, and the config component is a remote configuration management tool.
2. Set the connection
The registry connection is configured in the file bootstrap.yml, which is loaded before the system loads application.yml, as shown below:
```yaml
spring:
  cloud:
    consul:
      host: 127.0.0.1
      port: 8500
      discovery:
        serviceName: ${spring.application.name}
        healthCheckPath: /actuator/health
        healthCheckInterval: 15s
        tags: urlprefix-/${spring.application.name}
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
```
In the preceding configuration, set the host and port based on your environment; the other parameters do not need to be changed. serviceName is the name of the microservice, and the variable it uses needs to be set in the configuration file, as shown below:
```yaml
spring:
  application:
    name: catalogapi
```
That is, the name of the microservice is defined as catalogapi. When another service needs to call this microservice, it makes the call using this name. Therefore, the names of microservices must be unique within a registry.
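Why a registry keys everything on the service name can be illustrated with a minimal in-memory registry sketch. This is purely illustrative; Consul's real data model (health checks, tags, metadata) is much richer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a name-keyed registry: callers discover services by
// name only, so two different services sharing one name would collide.
class NameRegistry {
    private final Map<String, List<String>> services = new HashMap<>();

    void register(String serviceName, String instanceAddress) {
        services.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(instanceAddress);
    }

    List<String> discover(String serviceName) {
        return services.getOrDefault(serviceName, List.of());
    }
}
```

Multiple instances registered under one name are scaled-out copies of the same service, which is exactly the list the load balancer later schedules over.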
3. Activate registration
Add the @EnableDiscoveryClient annotation to the main class of the microservice application to activate service registration and discovery, as shown below:

```java
@SpringBootApplication
@EnableDiscoveryClient
public class SortsRestApiApplication {
    public static void main(String[] args) {
        SpringApplication.run(SortsRestApiApplication.class, args);
    }
}
```
When you have completed the preceding steps, start the microservice and the registered microservice is displayed on Consul’s console, as shown in Figure 5-3.
As you can see from Figure 5-3, there is a catalogapi service in addition to Consul itself; this is the successfully registered microservice. Click the entries to the right of catalogapi to see detailed data about the health status of the microservice.
Unified Configuration Management
Consul also provides configuration management, and it supports the YAML format, which makes configuration very flexible. In addition, the configuration information can be saved in a disk file.
To enable the configuration management function, you need to add the following Settings to the microservice configuration file bootstrap.yml:
```yaml
spring:
  cloud:
    consul:
      config:
        enabled: true          # default: true
        format: YAML           # format of the data on Consul: YAML, PROPERTIES, KEY-VALUE, or FILES
        data-key: data         # the key (or file name) on Consul; default is data
        defaultContext: ${spring.application.name}
```
This way, the configuration is read from Consul first when the microservice starts.
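"Read from Consul first" is a precedence rule: values from the remote configuration source override local defaults. A hedged plain-Java sketch of that lookup order (my own illustration, not Spring's actual Environment implementation):

```java
import java.util.Map;

// Sketch of layered configuration: the remote source (e.g. Consul) is
// consulted first, and the local file is used only as a fallback.
class LayeredConfig {
    private final Map<String, String> remote;
    private final Map<String, String> local;

    LayeredConfig(Map<String, String> remote, Map<String, String> local) {
        this.remote = remote;
        this.local = local;
    }

    String get(String key) {
        return remote.containsKey(key) ? remote.get(key) : local.get(key);
    }
}
```

This is why a value changed on Consul takes effect without touching the packaged application.yml.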
We can configure some independent parameters for each microservice, for example, data source configuration, etc. Figure 5-4 shows the data source configuration for the microservice CatalogAPI.
Finally, a complete configuration for connecting Consul looks like this:
```yaml
spring:
  application:
    name: catalogapi
  cloud:
    consul:
      host: 127.0.0.1
      port: 8500
      discovery:
        serviceName: ${spring.application.name}
        healthCheckPath: /actuator/health
        healthCheckInterval: 15s
        tags: urlprefix-/${spring.application.name}
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
      # config center
      config:
        enabled: true          # default: true
        format: YAML           # format of the data on Consul: YAML, PROPERTIES, KEY-VALUE, or FILES
        data-key: data         # the key (or file name) on Consul; default is data
        defaultContext: ${spring.application.name}
```
Using the circuit breaker properly
In order to improve the high availability of microservices, we sometimes enable the circuit breaker function in the intercall of microservices. A circuit breaker acts like a trip switch in a circuit, cutting off the circuit when the load is overloaded to degrade calls or perform failover operations. When the load is released, normal access is provided.
After many tests, we used the following configuration for applications with the circuit breaker enabled, striking a compromise between high availability and high performance:
```yaml
# Whether to enable the circuit breaker (default: false)
feign.hystrix.enabled: true
# Whether the circuit breaker timeout is enabled (default: true)
hystrix.command.default.execution.timeout.enabled: true
# The circuit breaker timeout must be greater than the sum of the Ribbon timeouts
# (> ConnectTimeout + ReadTimeout), otherwise retries will not be triggered
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 19000
# Maximum number of concurrent threads in the circuit breaker thread pool (default: 10)
hystrix.threadpool.default.coreSize: 500
# Ribbon connection and read timeouts
ribbon.ConnectTimeout: 3000
ribbon.ReadTimeout: 15000
# Retry requests for all operations
ribbon.OkToRetryOnAllOperations: true
# Number of retries after switching to another instance
ribbon.MaxAutoRetriesNextServer: 1
# Number of retries on the current instance
ribbon.MaxAutoRetries: 0
```
There are two things to note about this configuration:
(1) The timeout of the circuit breaker must be greater than the sum of the timeout times in the load configuration, for example, 19000> 3000 + 15000 in the above configuration.
(2) The default maximum number of concurrent threads is 10, which is far from enough, so it is set to 500 here. Readers can adjust it according to the server's CPU count and clock speed.
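The constraint in point (1) can be expressed as a small validation check (my own helper for illustration, not part of Hystrix or Ribbon):

```java
// Checks that the circuit breaker timeout exceeds the total Ribbon timeout,
// so that a Ribbon retry can fire before Hystrix aborts the whole call.
class TimeoutCheck {
    static boolean retriesCanTrigger(int hystrixTimeoutMs, int connectTimeoutMs, int readTimeoutMs) {
        return hystrixTimeoutMs > connectTimeoutMs + readTimeoutMs;
    }
}
```

With the values above, 19000 > 3000 + 15000 holds, so retries remain possible.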
Of course, for a microservice, the performance is optimal when the circuit breaker is not enabled.
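The closed/open behavior described in this section can be sketched as a tiny state machine in plain Java. The threshold and method names are my own illustration, not Hystrix internals (Hystrix additionally tracks a rolling error percentage and a half-open probe state):

```java
// Minimal circuit breaker: opens after a fixed number of consecutive
// failures; while open, requests are rejected (and would be degraded
// or failed over); a success resets the failure count.
class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    // Returns true if the call is allowed through (breaker closed).
    boolean allowRequest() {
        return !isOpen();
    }

    void recordSuccess() {
        consecutiveFailures = 0;
    }

    void recordFailure() {
        consecutiveFailures++;
    }
}
```

The caller wraps each remote invocation: check `allowRequest()`, then report the outcome back with `recordSuccess()` or `recordFailure()`; when the breaker is open, the caller runs its fallback instead.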
How to achieve effective monitoring
By using the functions provided by the Spring Cloud tool suite, combined with third-party tools, we can monitor the operation of all microservices more effectively, thus providing a more secure and reliable guarantee for microservices.
To use these tools, we only need to reference the relevant components and add a little simple design plus the relevant configuration to take advantage of their powerful functions.
Monitor service health status
Spring Boot Admin, an excellent third-party management tool, is used to monitor service health and raise alarms.
The contents of this section are in the base-admin module of the project. First, reference the dependency of its tool component, as shown below:
```xml
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-server</artifactId>
    <version><!-- use the release matching your Spring Boot version --></version>
</dependency>
```
The tool also provides administrative console access control capabilities and its web user interface (WebUI) design, so you can enable these capabilities by simply adding a security management configuration in conjunction with Spring’s security components. The core code for this configuration is as follows:
```java
@Override
protected void configure(HttpSecurity http) throws Exception {
    SavedRequestAwareAuthenticationSuccessHandler successHandler =
            new SavedRequestAwareAuthenticationSuccessHandler();
    successHandler.setTargetUrlParameter("redirectTo");
    http.authorizeRequests()
            .antMatchers("/assets/**").permitAll()
            .antMatchers("/actuator/**").permitAll()
            .antMatchers("/login").permitAll()
            .anyRequest().authenticated()
            .and()
            .formLogin().loginPage("/login").successHandler(successHandler)
            .and()
            .logout().logoutUrl("/logout")
            .and()
            .httpBasic()
            .and()
            .csrf().disable();
}
```
In the above code, the main task is to authorize a few links and to set the login page with loginPage. The login page uses the web user interface (WebUI) provided by Spring Boot Admin. Figure 5-5 shows the running effect.
The user name and password shown in Figure 5-5 are designed using simple policies and can be directly set in the configuration file.
Spring Boot Admin monitors microservices through the registry, so only Spring Boot Admin itself needs to connect to the registry; the monitored services require no special design. To provide complete status data, add the following configuration to each service's configuration file:
```yaml
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: ALWAYS
```
Log in to the Spring Boot Admin console and you can see the health of all the microservices registered in the registry, as well as related health data such as thread count and memory usage. Figure 5-6 shows the running status and related health data.
Major fault alarm
Spring Boot Admin can also provide an alarm function for the services it monitors. When a critical failure occurs, such as a service going down, Spring Boot Admin can send an email to operations personnel.
To do this, you must use the Spring Boot Mail component. Use the following configuration in the configuration file to enable Spring Boot Admin’s email notification function:
```yaml
spring:
  boot:
    admin:
      notify:
        mail:
          to: [email protected]
          from: [email protected]
```
The email addresses set above must be valid, and the Spring Boot Mail sending and receiving function must be configured. Then, when a microservice restarts or goes down, operations personnel receive an alarm notification email from Spring Boot Admin.
Circuit breaker panel
The base-hystrix module of the base-microservice project contains the circuit breaker dashboard design.
The circuit breaker dashboard is a component of the Spring Cloud tool suite. To use this feature, we need to reference the following toolkits:
```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-hystrix-dashboard</artifactId>
    </dependency>
</dependencies>
```
A standalone circuit breaker dashboard application can be created by adding the following code to the main program; it does not need to access the registry:
```java
@SpringBootApplication
@Controller
@EnableHystrixDashboard
public class HystrixApplication {

    @RequestMapping("/")
    public String home() {
        return "forward:/hystrix";
    }
    // ...
}
```
After starting the circuit breaker dashboard application, open your browser with the following link and see the console home page as shown in Figure 5-7:
```
http://localhost:7979
```
In the console, enter a service link address and port number as shown below, append the hystrix.stream suffix, and click the Monitor Stream button to monitor the related microservice:
```
http://localhost:8091/hystrix.stream
```
If the monitored service has a request, you can see the situation as shown in Figure 5-8.
This is just monitoring for a single microservice, so it’s not very useful in practice, just to provide some reference data for performance testing.
If the Turbine component is used, groups of services can be monitored. This aggregated-service circuit breaker dashboard is designed in the base-turbine module of the project. After referencing the Turbine component and adding the service to the registry, specify the services to monitor in the configuration file, as shown below:
```yaml
turbine:
  appConfig: catalogapi,catalogweb
  aggregator:
    clusterConfig: default
  clusterNameExpression: new String("default")
```
In this configuration, only the catalogapi and catalogweb microservices are monitored. After starting the app, enter the app's link address and port number in the dashboard home console, followed by /turbine.stream, to start the aggregated-service circuit breaker dashboard:
```
http://localhost:8989/turbine.stream
```
Figure 5-9 shows a monitoring instance of the aggregated-service circuit breaker dashboard.
Zipkin link tracking
Zipkin is used to implement link tracing for microservices. Zipkin is an open-source distributed tracing system: each service sends real-time data to Zipkin, and the Zipkin UI generates dependency diagrams based on the invocation relationships.
Zipkin supports in-memory storage as well as MySQL, Cassandra, Elasticsearch, and others.
Zipkin uses a Trace structure to Trace a request, while splitting each Trace into dependent spans. In microservice applications, a user request may be handled by several microservices in the background, and each microservice that processes the request can be understood as a Span.
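The Trace/Span relationship can be sketched with a minimal model. The field and method names here are illustrative, not Zipkin's actual schema (which also records parent span ids, timestamps, and annotations):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of distributed tracing: one Trace per user request,
// one Span per microservice hop, all sharing the same trace id.
class Trace {
    final String traceId;
    final List<Span> spans = new ArrayList<>();

    Trace(String traceId) {
        this.traceId = traceId;
    }

    Span addSpan(String service) {
        Span s = new Span(traceId, service);
        spans.add(s);
        return s;
    }
}

class Span {
    final String traceId;
    final String service;

    Span(String traceId, String service) {
        this.traceId = traceId;
        this.service = service;
    }
}
```

Because every hop carries the same trace id, the UI can stitch the spans from different services back into one request timeline.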
Download a working zipkin-server JAR package from the web to create the Zipkin service.
Once the download is successful, run it in a Java environment using the following command (JDK version 1.7 or later required):

```shell
java -jar zipkin-server-*.jar --logging.level.zipkin2=INFO
```
Zipkin uses port 9411 by default, and after the program is successfully started, its console can be opened in a browser using the following link:
```
http://localhost:9411/
```
Figure 5-10 shows the initial opening of the console.
In a microservice application, you can perform the following steps to add link tracing.
(1) Refer to the zipkin-supporting components in the Spring Cloud tool suite, as follows:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
```
(2) Add the following configuration items to the configuration file:
```yaml
# link tracing
spring:
  sleuth:
    sampler:
      probability: 1.0
  zipkin:
    sender:
      type: web
    base-url: http://localhost:9411/
```
After the above configuration, if there is a request in the service, the call records of related services, such as methods involved in the call process and dependencies between services, can be seen in the Zipkin console, as shown in Figure 5-11, Figure 5-12, and Figure 5-13.
We don't save the Zipkin trace data here, and the data transfer is simply web-based, so this setup is only suitable for development-time testing. In practice, you can store the trace data in Elasticsearch, and transmission can also be implemented with asynchronous message communication. When stored in Elasticsearch, the data is split by day by default, which causes Zipkin's dependency diagram to fail to display properly. In that case, you need another open-source toolkit, zipkin-dependencies, to perform the calculation; search for zipkin-dependencies on GitHub.
Because the toolkit shuts down automatically after a single calculation, you need to schedule it to run at regular intervals, depending on your situation.
ELK log analysis platform
In addition to monitoring and tracing the operation and mutual invocation of microservices, the logs output by the microservices are the most direct entry point and practical basis for fault analysis. However, checking the console log of each service is very inconvenient, especially since microservices are not only released with Docker but also distributed across many different servers. Therefore, a log analysis platform is used to collect the logs of all microservices, manage them centrally, and provide a unified platform for query and analysis.
Create a log analysis platform
The log analysis platform ELK consists of Elasticsearch, Logstash, and Kibana. Elasticsearch stores logs and provides search, Logstash collects logs, and Kibana provides a web query interface. All three are open source and can be installed using Docker.
Use a log analysis platform
To use the log collection function provided by the log analysis platform, add the following dependency references to the microservice project:
```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.10</version>
</dependency>
```
Add a logback.xml configuration file to the application, as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_HOME" value="/logs" />
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder charset="UTF-8">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.1.28:5000</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="stash" />
    </appender>
    <root level="info">
        <appender-ref ref="STDOUT" />
        <!-- Output to ELK -->
        <!-- <appender-ref ref="stash" /> -->
    </root>
</configuration>
```
In the above configuration, the "stash" appender is the setting that connects to the log analysis platform. It assumes the log collection server's IP address is 192.168.1.28; set the IP address as required.
After the application is started, you can open the Kibana log query console with the following link:
```
http://192.168.1.28:5601
```
On the log query console, you can query the log output of each application, as shown in Figure 5-14.
Summary
This chapter began with the creation of a registry and the registration and configuration of microservices. Then, based on the registry, it showed how to monitor microservices effectively by implementing health monitoring, service alarms, the circuit breaker dashboard, and link tracing. Combined with the use of a log analysis platform, all running microservice applications can be governed comprehensively and effectively.
Subsequent development and implementation of microservices will be based on this microservice governance environment, and references and configurations related to service governance will not be specified.
Source: www.toutiao.com/i6900081239…