This is the sixth day of my participation in the August Text Challenge. For details, see the August Text Challenge event page.

Reading reminder:

  1. This article is intended for readers with some Spring Boot background
  2. The Spring Cloud Hoxton release is used in this tutorial
  3. This article builds on the project from the previous article, so read that first for a seamless connection, or download the source directly: github.com/WinterChenS…

Previous articles

  • SpringCloud series (1): Getting started
  • SpringCloud series (2): Nacos | August Text Challenge
  • SpringCloud series (3): OpenFeign | August Text Challenge
  • SpringCloud series (4): Spring Cloud Gateway | August Text Challenge
  • Swagger Knife4j and login permission verification
  • SpringCloud uses Sentinel as a circuit breaker

Overview of this article

  • Sleuth quick start
  • Tracing principle
  • Integrating with Zipkin

Introduction

Through the previous articles you have already built a basic microservice architecture that can fulfill the business requirements. As the business grows, however, the system becomes larger and the service invocations between business modules become more and more complicated. A single request to the back end usually passes through a number of different business modules before the final result is produced, so in a complex microservice architecture almost every request forms a complex distributed call link, and a delay or exception in any service on that link can cause the entire request to fail. Full-link call tracing therefore becomes more and more important: by tracing request calls across the link we can quickly locate the root cause of an exception and the performance bottlenecks in the chain. There are many common solutions for distributed link tracing, such as Sleuth+Zipkin and SkyWalking; this article focuses on the Sleuth+Zipkin solution.

Sleuth quick start

Before starting the Sleuth integration, a friendly reminder: for a smooth experience it is recommended to read the previous article first, because this project builds on it.

Modify the projects

Add the following dependency to the Consumer, Provider, Gateway, and Auth projects:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
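The starter above does not declare a version because it is resolved from the Spring Cloud BOM. If you are not reusing the parent pom from the previous articles, here is a minimal sketch of the dependency management section, assuming the Hoxton release train mentioned in the reading reminder:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Hoxton.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>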

If Hystrix conflicts with Sentinel, you can remove feign.hystrix.enabled: true from the configuration: once Sentinel is connected, the Hystrix component is no longer needed for service circuit breaking, and keeping it only causes a component conflict.
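For reference, a minimal sketch of what the relevant Feign configuration might look like after the switch, assuming the Spring Cloud Alibaba Sentinel starter set up in the previous article (the commented lines mark what to drop):

feign:
  sentinel:
    enabled: true # let Sentinel provide the Feign circuit breaking / fallbacks
#  hystrix:
#    enabled: true # remove (or comment out) this to avoid the conflict with Sentinel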

Try requesting the interface: http://127.0.0.1:15010/consumer/nacos/echo/hello

You can see that the console log output has changed:

gateway:

2021-08-04 16:57:25.166  INFO [winter-gateway,88a4de6d2424cedf,88a4de6d2424cedf,false] 23972 --- [ctor-http-nio-2] c.w.gateway.filter.AuthorizeFilter       : AccessToken: [eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJhdXRoX3VzZXIiLCJleHAiOjE2MzAxMzUzNTQsIm5iZiI6MTYyNzU0MzM1NCwidXNlcklkIjoxMDAwMDAwMDF9.c09H4l_QW3v_ReNyec4nv-vqXtMDBlp4RRhh80RquPS1Ol_slH2k_dZ4vo_MYjCzJXKwWhZpt58UzgG6ZUfK8Q]
2021-08-04 16:57:25.456  INFO [winter-gateway,88a4de6d2424cedf,88a4de6d2424cedf,false] 23972 --- [ctor-http-nio-2] c.w.gateway.filter.AuthorizeFilter       : claims is:{sub=auth_user, exp=1630135354, nbf=1627543354, userId=100000001}
2021-08-04 16:57:25.457  INFO [winter-gateway,88a4de6d2424cedf,88a4de6d2424cedf,false] 23972 --- [ctor-http-nio-2] c.w.gateway.filter.AuthorizeFilter       : userId:100000001

consumer:

2021-08-04 16:57:26.566  INFO [winter-nacos-consumer,88a4de6d2424cedf,1170003686062d81,false] 26520 --- [io-16011-exec-1] com.alibaba.nacos.client.naming          : new ips(1) service: DEFAULT_GROUP@@winter-nacos-provider -> [{"clusterName":"DEFAULT","enabled":true,"ephemeral":true,"healthy":true,"instanceHeartBeatInterval":5000,"instanceHeartBeatTimeOut":15000,"instanceId":"10.1.18.76#16012#DEFAULT#DEFAULT_GROUP@@winter-nacos-provider","ip":"10.1.18.76","ipDeleteTimeout":30000,"metadata":{"preserved.register.source":"SPRING_CLOUD"},"port":16012,"serviceName":"DEFAULT_GROUP@@winter-nacos-provider","weight":1.0}]

From the console output above, we can see extra log information such as [winter-gateway,88a4de6d2424cedf,88a4de6d2424cedf,false]. These elements are an important part of implementing distributed service tracing; the meaning of each value is described below.

  • The first value: winter-gateway records the name of the application; it is the spring.application.name property configured in application.properties.
  • The second value: 88a4de6d2424cedf is an ID generated by Spring Cloud Sleuth, known as the Trace ID, which identifies a request link. A request link contains exactly one Trace ID and multiple Span IDs.
  • The third value: 88a4de6d2424cedf is another ID generated by Spring Cloud Sleuth, known as the Span ID, which represents a basic unit of work, such as sending an HTTP request.
  • The fourth value: false indicates whether the information should be exported to a collector such as Zipkin.

The Trace ID and Span ID among the four values above are the core of Spring Cloud Sleuth's implementation of distributed service tracing. During a single service request, the same Trace ID is preserved and passed along the whole call link, concatenating the trace information of a request that is distributed across different microservice processes. Taking the output above as an example, winter-gateway and winter-nacos-consumer belong to the same front-end request, so their Trace IDs are the same and they sit on the same request link.
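As a quick way to see these values in code (a minimal sketch, not part of the original project, assuming Sleuth's auto-configured brave.Tracer bean is available), the current Trace ID and Span ID can be read from the active span; they are the same hex values printed inside the log brackets:

import brave.Span;
import brave.Tracer;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical endpoint, for illustration only
@RestController
public class TraceIdController {

    private final Tracer tracer;

    public TraceIdController(Tracer tracer) {
        this.tracer = tracer;
    }

    @GetMapping("/trace-id")
    public String currentTraceId() {
        Span span = tracer.currentSpan();
        if (span == null) {
            return "no active span";
        }
        // traceIdString()/spanIdString() return the hex form shown in the logs
        return span.context().traceIdString() + " / " + span.context().spanIdString();
    }
}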

Tracing principle

Service tracking in a distributed system is not complicated in theory. It mainly consists of two key points.

  • To implement request tracing, when a request is sent to the entry point of a distributed system, the tracing framework only needs to create a unique trace identity for it and guarantee that this identity is passed along as the request flows through the system, until the response is returned to the requester. This unique identity is the Trace ID mentioned earlier; using it we can associate the logs of every service the request passed through.
  • To measure the latency of each processing unit, when a request reaches a service component, or its processing logic reaches a certain state, it is also marked with a unique identity, the Span ID, which marks the beginning, the specific process, and the end of that unit of work. Every Span has two nodes, a start and an end; by recording their timestamps the latency of the Span can be computed, and besides the timestamps a Span can also carry other metadata, such as an event name or request information (see the sketch after this list).
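A minimal sketch of wrapping a unit of work in a Span by hand, assuming the brave.Tracer bean that Sleuth auto-configures (the class and span names here are illustrative, not from the article):

import brave.Span;
import brave.Tracer;
import org.springframework.stereotype.Service;

@Service
public class TracedWorkService {

    private final Tracer tracer;

    public TracedWorkService(Tracer tracer) {
        this.tracer = tracer;
    }

    public void doWork() {
        // start() records the start timestamp of the span
        Span span = tracer.nextSpan().name("do-work").start();
        try (Tracer.SpanInScope scope = tracer.withSpanInScope(span)) {
            span.tag("work.type", "demo"); // extra metadata beyond the two timestamps
            // ... the actual processing of this unit of work ...
        } finally {
            span.finish(); // finish() records the end timestamp, which yields the span's latency
        }
    }
}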

Integrating with Zipkin

Zipkin installation

Standard installation:

curl -sSL https://zipkin.io/quickstart.sh | bash -s
java -jar zipkin.jar
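If you want the standalone jar to use MySQL for storage, the same settings shown in the docker-compose file below can be passed as environment variables; a sketch, assuming a local MySQL instance:

STORAGE_TYPE=mysql MYSQL_DB=zipkin MYSQL_USER=root MYSQL_PASS=root \
MYSQL_HOST=127.0.0.1 MYSQL_TCP_PORT=3306 \
java -jar zipkin.jar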

Start with docker-compose:

docker-compose.yml:

version: '2'

services:
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=mysql
      - MYSQL_DB=zipkin
      - MYSQL_USER=root
      - MYSQL_PASS=root
      - MYSQL_HOST=172.26.208.1
      - MYSQL_TCP_PORT=3306
    ports:
      - 9411:9411
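With the file above in the current directory, the container can be brought up in the background (this assumes Docker and docker-compose are installed and the MySQL instance referenced by MYSQL_HOST is reachable):

docker-compose up -d
docker-compose logs -f zipkin   # follow the startup log until Zipkin is listening on 9411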

Zipkin also supports a variety of storage back ends, such as Elasticsearch and MySQL; for more collection and storage options see: github.com/openzipkin/…

If MySQL is used as the storage, the tables need to be created first:

CREATE TABLE IF NOT EXISTS zipkin_spans (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL,
  `id` BIGINT NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  `remote_service_name` VARCHAR(255),
  `parent_id` BIGINT,
  `debug` BIT(1),
  `start_ts` BIGINT COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL',
  `duration` BIGINT COMMENT 'Span.duration(): micros used for minDuration and maxDuration query',
  PRIMARY KEY (`trace_id_high`, `trace_id`, `id`)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTracesByIds';
ALTER TABLE zipkin_spans ADD INDEX(`name`) COMMENT 'for getTraces and getSpanNames';
ALTER TABLE zipkin_spans ADD INDEX(`remote_service_name`) COMMENT 'for getTraces and getRemoteServiceNames';
ALTER TABLE zipkin_spans ADD INDEX(`start_ts`) COMMENT 'for getTraces ordering and range';

CREATE TABLE IF NOT EXISTS zipkin_annotations (
  `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
  `trace_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.trace_id',
  `span_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.id',
  `a_key` VARCHAR(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1',
  `a_value` BLOB COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB',
  `a_type` INT NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation',
  `a_timestamp` BIGINT COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp',
  `endpoint_ipv4` INT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_ipv6` BINARY(16) COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address',
  `endpoint_port` SMALLINT COMMENT 'Null when Binary/Annotation.endpoint is null',
  `endpoint_service_name` VARCHAR(255) COMMENT 'Null when Binary/Annotation.endpoint is null'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

ALTER TABLE zipkin_annotations ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `span_id`, `a_key`, `a_timestamp`) COMMENT 'Ignore insert on duplicate';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`, `span_id`) COMMENT 'for joining with zipkin_spans';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTraces/ByIds';
ALTER TABLE zipkin_annotations ADD INDEX(`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames';
ALTER TABLE zipkin_annotations ADD INDEX(`a_type`) COMMENT 'for getTraces and autocomplete values';
ALTER TABLE zipkin_annotations ADD INDEX(`a_key`) COMMENT 'for getTraces and autocomplete values';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id`, `span_id`, `a_key`) COMMENT 'for dependencies job';

CREATE TABLE IF NOT EXISTS zipkin_dependencies (
  `day` DATE NOT NULL,
  `parent` VARCHAR(255) NOT NULL,
  `child` VARCHAR(255) NOT NULL,
  `call_count` BIGINT,
  `error_count` BIGINT,
  PRIMARY KEY (`day`, `parent`, `child`)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;

Access the console: http://127.0.0.1:9411

Modify the projects

Add the following dependency to the Consumer, Provider, Gateway, and Auth projects:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

Modify the configuration of the four projects respectively:

spring:
  zipkin:
    sender:
      type: web
    base-url: http://localhost:9411/
    service:
      name: consumer # set this to the spring.application.name of each project
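Note that by default Sleuth only samples a fraction of requests (10% in this release train), so some requests may never be reported to Zipkin. For local testing you can raise the sampling rate; a minimal sketch of the extra configuration, not in the original article:

spring:
  sleuth:
    sampler:
      probability: 1.0 # report every request; keep this lower in production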

Test

Start the four projects (consumer, provider, gateway, auth) respectively, then visit in the browser: http://127.0.0.1:15010/consumer/nacos/feign-test/hello

Visit http://127.0.0.1:9411/ and click RUN QUERY to query the link trace data:

Click SHOW on any trace to view the link details:

You can see the process relationship of the entire request link, as well as the request time and other information.
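Besides the UI, the same trace data can be queried through Zipkin's HTTP API, which is convenient for quick checks from the command line; a sketch, assuming the consumer service name configured above:

curl "http://127.0.0.1:9411/api/v2/services"                               # service names Zipkin has seen
curl "http://127.0.0.1:9411/api/v2/traces?serviceName=consumer&limit=10"   # recent traces for the consumer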

Conclusion

In a complex microservice architecture, link tracing between services is critical for locating service exceptions and performance bottlenecks. Besides Sleuth+Zipkin, there are other solutions available as well, such as SkyWalking.

Source code

GitHub – WinterChenS/spring-cloud-hoxton-study: spring cloud hoxton release study

References

Forezp.blog.csdn.net/article/det…

www.cnblogs.com/wuzhenzhao/…