I. Foreword
In previous articles, we covered the theoretical basis of distributed link tracing in detail.
As the figure above shows, under a microservice architecture the system's functionality is delivered by a large number of cooperating microservices. For example, the ordering flow of an e-commerce site invokes, step by step, the order service, inventory service, payment service, and SMS notification service. Each service may be developed by a different team and deployed across hundreds or thousands of servers. With such a complex chain of calls, we need a mechanism that, when a failure occurs, quickly locates the fault and identifies which service is responsible. This is why distributed link tracing exists: it records the invocation path between services at runtime and, through a visual UI, helps engineers pinpoint faults quickly. Distributed link tracing is the underlying operations and monitoring infrastructure of a microservice architecture; without it, engineers are like blind men touching an elephant, unable to understand how the services communicate with one another.
II. Application Architecture Diagram
This article explains how to implement microservice link tracing with Sleuth + Zipkin in a Spring Cloud architecture, focusing on HTTP invocations.
Before going into details, let's look at the application architecture of our example, which integrates Spring Cloud with Zipkin, as shown in the figure below. As the architecture diagram shows, all services register with Nacos; when a client request arrives, the information of the target service is looked up in Nacos and the request is routed (reverse-proxied) to the specified service instance.
The services and components involved are as follows:
- Nacos, locally installed and started;
- Zipkin, locally installed and started;
- Spring Boot service A;
- Spring Boot service B;
- Spring Boot service C.
III. A Quick Look at Sleuth
Sleuth is the distributed tracing module provided by Spring Cloud and is part of its standard ecosystem. It implements link tracing for microservices by extending the log output. In a standard microservice, logs are produced in the following format:
2021-09-21 02:03:20.166  INFO [a-service,,,] 40327 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: b-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
However, once the Spring Cloud Sleuth tracing component is introduced, the logs take the following format:
2021-09-21 02:06:41.410  INFO [a-service,632f57c51af8c7a4,632f57c51af8c7a4,true] 40415 --- [nio-7000-exec-1] c.netflix.config.ChainedDynamicProperty : Flipping property: b-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
The log output format is fixed and contains the following four parts:
[appname,traceId,spanId,exportable]
- appname: the name of the microservice that produced the log entry.
- traceId: the trace ID. A complete business flow is called a trace. For example, a login feature may require service A to call service B, and service B to call service C; this whole flow, from the front-end request to the final response, is one trace, and each complete business flow corresponds to exactly one traceId.
- spanId: the step ID. In the login example, the flow passes through three microservices from service A to service C, and each hop is given a different spanId. One traceId contains multiple spanIds, while a spanId belongs to only one traceId.
- exportable: whether the current log entry is exported. If true, the trace data can be collected and displayed by a link tracing visualization service.
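To make the format concrete, here is a purely illustrative sketch (this controller is not one of the example services and its names are assumptions): once Sleuth is on the classpath, any ordinary SLF4J log statement written while a traced request is being handled is prefixed with the same [appname,traceId,spanId,exportable] fields.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    private static final Logger log = LoggerFactory.getLogger(DemoController.class);

    @GetMapping("/demo")
    public String demo() {
        // printed as: ... [a-service,<traceId>,<spanId>,true] ... : handling /demo
        log.info("handling /demo");
        return "ok";
    }
}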
We simulated the invocation chain service A -> B -> C, and the three services produced the following logs:
2021-09-21 02:18:36.494 DEBUG [a-service,14aa6f21d700f377,14aa6f21d700f377,true] 40619 --- [nio-7000-exec-7] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2021-09-21 02:18:36.524 DEBUG [b-service,14aa6f21d700f377,828df12c1c851367,true] 40622 --- [nio-8000-exec-6] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2021-09-21 02:18:36.571 DEBUG [c-service,14aa6f21d700f377,ebd9892f8756801d,true] 40626 --- [nio-9000-exec-7] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
Looking at the timestamps, the calls run in order from A to C. Because they belong to one complete business flow, the traceId is the same while the spanIds differ. These logs have been exported by Sleuth and can be collected and displayed by Zipkin. Zipkin is an open-source distributed tracing system from Twitter that collects link tracing data from individual service instances and visualizes it: the logs produced on the A/B/C service consoles are rendered in the Zipkin UI as a trace diagram. This visual UI is an essential tool for fault analysis, giving an intuitive view of the dependencies, processing times, and processing status of the services involved in a request. With this preliminary understanding of microservice link tracing and the Sleuth + Zipkin combination, let's use an example to show how to add link tracing to a microservice architecture. The process has two parts:
- add Spring Cloud Sleuth to the services to generate link tracing logs;
- collect the link tracing logs with Zipkin and produce a visual UI.
IV. Preparation
1. Set up a standalone Zipkin environment
Here we use Docker to quickly launch the Zipkin demo.
# pull the image
sudo docker pull openzipkin/zipkin
# run the container
sudo docker run -d -p 9411:9411 --name zipkin openzipkin/zipkin
2. Set up a standalone Nacos environment
Here we use Docker to quickly launch a Nacos demo instance.
# pull the image
sudo docker pull nacos/nacos-server
# run the container
sudo docker run -d -p 8848:8848 --env MODE=standalone --name nacos nacos/nacos-server
V. Integrating Sleuth into the Microservices
Here we create three microservice projects, A, B, and C, with the following call relationship:
Remote calls along A -> B -> C are implemented via Feign.
1. Create the Spring Boot projects
First, create three Spring Boot projects: A, B, and C. The pom.xml of services A and B contains the following dependencies:
<!-- Spring Web application -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Nacos client -->
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<!-- OpenFeign -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-openfeign</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
<!-- Sleuth dependency -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
The pom.xml of service C contains the following dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<!-- Sleuth dependency -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
Since the call chain is service A -> B -> C, services A and B additionally depend on OpenFeign to communicate with their downstream services. Feign clients also need to be enabled on the application classes of A and B, as the sketch below shows.
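As a minimal sketch (the class and package names are illustrative assumptions, not taken from the example source), the main class of service A, and likewise service B, is annotated with @EnableFeignClients so that Spring scans and proxies the @FeignClient interfaces:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

// Hypothetical entry point for service A; service B would look the same.
@SpringBootApplication
@EnableDiscoveryClient   // register the service with Nacos
@EnableFeignClients      // scan @FeignClient interfaces such as BServiceFeignClient
public class AServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(AServiceApplication.class, args);
    }
}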
2. Configuration files
Next, we configure application.yml for each service. The three configuration files are identical except for the service name and port.
server:
  port: 7000
spring:
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848
        username: nacos
        password: nacos
  application:
    name: a-service
  sleuth:
    web: # Spring Cloud Sleuth configuration for web components such as Spring MVC
      enabled: true
logging:
  level:
    root: debug # debug level logging is enabled for demonstration purposes
3. Implement the core code
Implement the controller for service C first:
@RestController
public class SampleController {

    @GetMapping("/c")
    public String methodC() {
        String result = " -> Service C";
        return result;
    }
}
The methodC method returns the response string " -> Service C" and is mapped to the path "/c".
Next, in service B, CServiceFeignClient implements the communication client for service C via Feign; its method is named methodC.
@FeignClient("c-service")
public interface CServiceFeignClient {
@GetMapping("/c")
public String methodC(a);
}
Service B's controller calls methodC from its methodB method, prepends the string " -> Service B" to the response, and maps the method to "/b".
@Controller
public class SampleController {

    @Resource
    private CServiceFeignClient cService;

    @GetMapping("/b")
    @ResponseBody
    public String methodB() {
        String result = cService.methodC();
        result = " -> Service B" + result;
        return result;
    }
}
Finally, in service A, BServiceFeignClient implements the communication client for service B via Feign; its method is named methodB.
@FeignClient("b-service")
public interface BServiceFeignClient {
@GetMapping("/b")
public String methodB(a);
}
Service A's controller calls methodB from its methodA method, prepends the string "-> Service A" to the response, and maps the method to "/a".
@RestController
public class SampleController {

    @Resource
    private BServiceFeignClient bService;

    @GetMapping("/a")
    public String methodA() {
        String result = bService.methodB();
        result = "-> Service A" + result;
        return result;
    }
}
Now that the core call chain is implemented, let's verify it by requesting service A's interface with Postman. The response should read "-> Service A -> Service B -> Service C".
You can see that the three services A, B, and C produce their results in order, and the logs already contain link tracing data.
2021-09-21 02:18:36.494 DEBUG [a-service,14aa6f21d700f377,14aa6f21d700f377,true] 40619 --- [nio-7000-exec-7] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2021-09-21 02:18:36.524 DEBUG [b-service,14aa6f21d700f377,828df12c1c851367,true] 40622 --- [nio-8000-exec-6] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2021-09-21 02:18:36.571 DEBUG [c-service,14aa6f21d700f377,ebd9892f8756801d,true] 40626 --- [nio-9000-exec-7] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
Although link tracing logs are now generated, manually correlating them is impractical in a production environment, so we also deploy Zipkin, a distributed link tracing system, to simplify the job.
VI. Integrating Zipkin
Deploying a Zipkin server is very simple and can be done quickly by following the Zipkin website; we have already started a demo instance with Docker.
1. Add the Zipkin client
First, each service needs to integrate the Zipkin client dependency.
<!-- Zipkin client -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zipkin</artifactId>
<version>2.2.6.RELEASE</version>
</dependency>
2. Configuration files
Then, in each service's application.yml, we configure the Zipkin server address and the sampling rate.
server:
  port: 7000
spring:
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848
        username: nacos
        password: nacos
  application:
    name: a-service
  sleuth:
    sampler: # sampler settings
      probability: 1.0 # sampling rate, i.e. the proportion of traces collected; the default is 0.1
      rate: 10000 # upper limit on the number of traces collected per second
    web: # Spring Cloud Sleuth configuration for web components such as Spring MVC
      enabled: true
  zipkin: # Zipkin server address
    base-url: http://127.0.0.1:9411
logging:
  level:
    root: debug # debug level logging is enabled for demonstration purposes
Note the following configuration items:
spring.sleuth.sampler.probability
This is the sampling rate. Suppose a service generated 10 traces in the past second: with the default value of 0.1, only one of them is sent to the Zipkin server for analysis and aggregation; if it is set to 1.0, all 10 traces are sent to the server for processing.
spring.sleuth.sampler.rate
This is the maximum number of traces collected per second, i.e. a per-second cap on trace collection.
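As a side note, with the Brave-based Sleuth version used here the sampling decision can also be made in code by exposing a brave.sampler.Sampler bean instead of the YAML properties. The properties above are usually enough; the following is only a minimal sketch, assuming it lives in a @Configuration class of the service:
import brave.sampler.Sampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfig {

    // Sample every trace, equivalent to spring.sleuth.sampler.probability=1.0.
    // In production you would typically return a probability-based sampler instead
    // to keep the tracing overhead low.
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}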
3. Running the example
Once everything is configured, start the applications, call the service A interface again, and then open the Zipkin server UI at http://localhost:9411 to view the call chain. Click "Run Query" to list the traces. Clicking "Show" on a trace opens the corresponding call diagram and the details of each link. Clicking "Dependencies" reveals the service topology in more detail, and clicking a service node shows statistics for its invocations.
What happens if we deliberately shut down service C? Zipkin marks the trace as an error and shows the cause: no instance of service C is available, so the call fails.
VII. Summary
In this article we worked through a practical case, choosing Spring Cloud, a popular microservice framework, to demonstrate how to integrate Sleuth + Zipkin into microservices, and we also simulated a failure scenario.
Source code address:
- Github.com/zuozewei/bl…