Spring Cloud Alibaba from Entry to Mastery, the most comprehensive explanation in history (Part 2)
- Jump link: previous address
9. Message-driven microservices -Spring Cloud Alibaba
9.1 Spring implementation of asynchronous methods
- AsyncRestTemplate: reference documentation
- @Async annotation: reference documentation
- WebClient (introduced in Spring 5.0): reference documentation
- MQ
9.2 Architecture Evolution After MQ is Introduced
- With the introduction of MQ, some synchronous calls can be handled asynchronously, with MQ acting as middleware connecting the two sides
9.3 MQ Application Scenarios
- Asynchronous processing
- Traffic peak shaving and valley filling
- Decouple microservices
9.4 Selection of MQ
- Types of MQ:
- Kafka
- RabbitMQ
- RocketMQ
- ActiveMQ
- Note: MQ selection
- RocketMQ success story: Reference
9.5 Building RocketMQ
- Setup notes: see reference
9.6 Setting up the RocketMQ Console
- Setup notes: see reference
9.7 RocketMQ terminology and concepts
- Terms/concepts: see notes
9.8 RocketMQ advanced
- To fully understand RocketMQ, we need to read the developer’s guide
9.9 Spring Message Programming Model 01
- Using RocketMQ in a project:
  - Introduce the dependency:
  - Write annotations: none needed
  - Write the configuration:
  - Inject RocketMQTemplate:
  - Send a message (if the producer group is not specified, an exception is thrown):
- Use of utility classes:
- RocketMQ: RocketMQTemplate
- ActiveMQ/Artemis: JmsTemplate
- RabbitMQ: AmqpTemplate
- Kafka: KafkaTemplate
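The producer group mentioned above can be declared in application.yml; a minimal sketch (the group name is a made-up example, and the name-server must match your own installation):

```yaml
rocketmq:
  name-server: 127.0.0.1:9876
  producer:
    # if this group is missing, sending fails with an exception
    group: test-group
```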
9.10 Spring Message Programming Model 02
- Now that we’ve written message producers, let’s write message consumers:
- The steps are as follows:
- Introducing dependencies:
```xml
<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-spring-boot-starter</artifactId>
    <version>2.0.3</version>
</dependency>
```
- Write configuration, written in application.yml:
```yaml
rocketmq:
  name-server: 127.0.0.1:9876
```
The name-server value should match the actual IP and port of your own RocketMQ installation
- Create the object that receives the message:
```java
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class UserAddBonusMsgDTO {
    /**
     * The user to award points to
     */
    private Integer userId;
    /**
     * How many points to add
     */
    private Integer bonus;
}
```
- Create a message listener class:
```java
@Service
@RocketMQMessageListener(consumerGroup = "consumer-group", topic = "add-bonus")
public class AddBonusListener implements RocketMQListener<UserAddBonusMsgDTO> {
    @Override
    public void onMessage(UserAddBonusMsgDTO message) {
        System.out.println(message.getUserId());
        System.out.println(message.getBonus());
    }
}
```
Note: on the producer side the group is written in the configuration file, while the consumer specifies consumerGroup here. The topic is set when the producer sends a message and specified on the listener class when receiving. Both must match;
- The official documentation provides a summary of the various MQ usage methods
- Consumer annotations for each MQ are briefly summarized:
- RocketMQ: RocketMQMessageListener
- ActiveMQ/Artemis: JmsListener
- RabbitMQ: RabbitListener
- Kafka: KafkaListener
9.11 Distributed Transaction 01
- @Transactional(rollbackFor = Exception.class): roll back when an Exception is thrown
- Existing problems:
- As shown below:
- Problem overview: when our business logic must both operate on the database and send a message, the two cannot be made atomic. In the figure, the message is sent first and data is written afterwards; if the write fails, the transaction rollback only affects the database, while the message has already been delivered to and processed by consumers.
- RocketMQ implements the flow of transactions:
Simply put, RocketMQ implements distributed transactions by not delivering the message immediately: it first enters a "ready to send" (half message) phase. Only after all local code has executed without exception is the message fully committed, at which point consumers can receive it.
- Concepts and Terms:
- Half(Prepare) Message
- A message that temporarily cannot be consumed. The producer sends it to the MQ server, but it is marked "undeliverable" and stored; consumers will not consume this message until it is committed
- Message Status Check
- Network disconnection or producer restart may result in a second acknowledgement of lost transaction messages. When MQ Server finds that a message has been in a semi-message state for a long time, it sends a request to the message producer asking about the final status (commit or rollback) of the message
- Message three states:
- Commit: Commits a transaction message that can be consumed by consumers
- Rollback: Rollback the transaction message. The broker will delete the message and the consumer cannot consume it
- UNKNOWN: The broker needs to check back to confirm the status of the message
9.12 Distributed Transaction 02- Encoding
- Add a new table to the database to log RocketMQ transactions:
```sql
create table rocketmq_transaction_log (
    id             int auto_increment comment 'id' primary key,
    transaction_id varchar(45) not null comment 'transaction id',
    log            varchar(45) not null comment 'log'
);
```
- Message producer: send the half message:
```java
// Execute this after the current business code has succeeded
// Send a half message
String transactionId = UUID.randomUUID().toString();
this.rocketMQTemplate.sendMessageInTransaction(
    "tx-add-bonus-group",
    "add-bonus",
    MessageBuilder.withPayload(
            UserAddBonusMsgDTO.builder()
                .userId(share.getUserId())
                .bonus(50)
                .build()
        )
        .setHeader(RocketMQHeaders.TRANSACTION_ID, transactionId)
        .setHeader("share_id", id)
        .build(),
    auditDTO
);
```
Here the group name "tx-add-bonus-group" and topic "add-bonus" are chosen by ourselves and can be changed as needed. auditDTO and share_id are data passed according to business needs: auditDTO can be used directly as the arg in the transaction listener, and share_id can be read from the message headers.
- Write the message transaction listener:
```java
@RocketMQTransactionListener(txProducerGroup = "tx-add-bonus-group")
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class AddBonusTransactionListener implements RocketMQLocalTransactionListener {

    private final ShareService shareService;

    @Override
    public RocketMQLocalTransactionState executeLocalTransaction(Message msg, Object arg) {
        MessageHeaders headers = msg.getHeaders();
        String transactionId = (String) headers.get(RocketMQHeaders.TRANSACTION_ID);
        Integer shareId = Integer.valueOf((String) headers.get("share_id"));
        try {
            this.shareService.auditByIdInDB(shareId, (ShareAuditDTO) arg);
            return RocketMQLocalTransactionState.COMMIT;
        } catch (Exception e) {
            return RocketMQLocalTransactionState.ROLLBACK;
        }
    }

    // Check-back method, called by the broker when the transaction state is unknown
    @Override
    public RocketMQLocalTransactionState checkLocalTransaction(Message msg) {
        return null;
    }
}
```
When execution succeeds we return RocketMQLocalTransactionState.COMMIT; on failure, ROLLBACK. But suppose a power failure occurs after the local logic executes and before the COMMIT is sent: the data has been saved, yet the commit never reached the broker. That is why we need a check-back method: checkLocalTransaction() determines whether the local transaction actually succeeded. Combined with the RocketMQ transaction log table we created, we can implement the check-back, as shown below:
- The content of the auditByIdInDB method is shown below:
- Create a new save method. The previous save method does not write transaction data to the log table; modify it so that when the business data is saved, a record is also written to the log table. The check-back method can then treat a missing record as a failed execution:
```java
@Autowired
private RocketmqTransactionLogMapper rocketmqTransactionLogMapper;

@Transactional(rollbackFor = Exception.class)
public void auditByIdWithRocketMqLog(Integer id, ShareAuditDTO auditDTO, String transactionId) {
    this.auditByIdInDB(id, auditDTO);
    this.rocketmqTransactionLogMapper.insertSelective(
        RocketmqTransactionLog.builder()
            .transactionId(transactionId)
            .log("Audit share")
            .build()
    );
}
```
- Rewrite the transaction listener:
```java
@RocketMQTransactionListener(txProducerGroup = "tx-add-bonus-group")
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class AddBonusTransactionListener implements RocketMQLocalTransactionListener {

    private final ShareService shareService;
    private final RocketmqTransactionLogMapper rocketmqTransactionLogMapper;

    @Override
    public RocketMQLocalTransactionState executeLocalTransaction(Message msg, Object arg) {
        MessageHeaders headers = msg.getHeaders();
        String transactionId = (String) headers.get(RocketMQHeaders.TRANSACTION_ID);
        Integer shareId = Integer.valueOf((String) headers.get("share_id"));
        try {
            this.shareService.auditByIdWithRocketMqLog(shareId, (ShareAuditDTO) arg, transactionId);
            return RocketMQLocalTransactionState.COMMIT;
        } catch (Exception e) {
            return RocketMQLocalTransactionState.ROLLBACK;
        }
    }

    // Check-back: decide commit/rollback from the transaction log table
    @Override
    public RocketMQLocalTransactionState checkLocalTransaction(Message msg) {
        MessageHeaders headers = msg.getHeaders();
        String transactionId = (String) headers.get(RocketMQHeaders.TRANSACTION_ID);
        // Query whether the transaction log was saved
        RocketmqTransactionLog transactionLog = this.rocketmqTransactionLogMapper.selectOne(
            RocketmqTransactionLog.builder()
                .transactionId(transactionId)
                .build()
        );
        // Commit only if the log record exists
        if (transactionLog != null) {
            return RocketMQLocalTransactionState.COMMIT;
        }
        return RocketMQLocalTransactionState.ROLLBACK;
    }
}
```
Use headers and arg to pass parameters
9.13 What is Spring Cloud Stream?
- Is a framework for building message-driven microservices
- Architecture:
9.14 Spring Cloud Stream programming model
- Concept:
- Destination Binder
- Components that communicate with messaging middleware
- Destination Bindings
- Binding is a bridge between applications and messaging middleware for the consumption and production of messages, created by binder
- Message
- Programming model diagram:
Normally, if a producer sends messages with Kafka, Kafka must also be used to receive them. With Spring Cloud Stream, the middleware is abstracted away: Stream encapsulates the message, so when receiving we don't need to care which messaging middleware the producer is using.
9.15 Spring Cloud Stream Coding: Message Producer
- Write producers:
- Add dependencies:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rocketmq</artifactId>
</dependency>
```
- Add annotations (add annotations to the startup class) :
- Add the @EnableBinding(Source.class) annotation, as shown:
- Write configuration (application.yml) :
```yaml
spring:
  cloud:
    stream:
      rocketmq:
        binder:
          name-server: 127.0.0.1:9876
      bindings:
        output:
          # used to specify the topic
          destination: stream-test-topic
```
- The producer sends a message:
```java
@GetMapping("test-stream")
public String testStream() {
    this.source.output()
        .send(
            MessageBuilder
                .withPayload("Message body")
                .build()
        );
    return "success";
}
```
- Check the server logs to confirm the message was sent successfully.
- In the console we can check if there are any messages that have been sent under this group:
- If the console is always printing logs, we can lower the log level:
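A sketch of lowering the log level in application.yml; the logger name here is an assumption and should be replaced with whichever logger is flooding your console:

```yaml
logging:
  level:
    # assumed logger name; adjust to the noisy logger in your output
    com.alibaba.cloud.stream.binder.rocketmq: error
```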
9.16 Spring Cloud Stream Coding: Message Consumer
- Write consumers:
- Add dependencies:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rocketmq</artifactId>
</dependency>
```
- Add an annotation on the startup class: @EnableBinding(Sink.class)
- Write configuration (application.yml) :
```yaml
spring:
  cloud:
    stream:
      rocketmq:
        binder:
          name-server: 127.0.0.1:9876
      bindings:
        input:
          destination: stream-test-topic
          # required for RocketMQ; if you're not using RocketMQ, it can be left blank
          group: binder-group
```
- Listening consumption:
```java
@Service
@Slf4j
public class TestStreamConsumer {
    @StreamListener(Sink.INPUT)
    public void receive(String messageBody) {
        log.info("Received message via stream: messageBody = {}", messageBody);
    }
}
```
9.17 Spring Cloud Stream Interface Customization: message producer
- Interface:
```java
public interface MySource {
    String MY_OUTPUT = "my-output";

    @Output(MY_OUTPUT)
    MessageChannel output();
}
```
- On the startup class, the @EnableBinding annotation introduces MySource.class, as shown:
- Add the configuration:
- Define an interface to send a message:
Using a custom interface we can send and receive messages;
9.18 Spring Cloud Stream Interface Customization: Message Consumer
- Create method:
- The startup class introduces:
- Configuration class modification:
- Using a custom interface: Message consumption listening, as shown:
9.19 Message Filtering
- Message filtering notes: see reference
9.20 Monitoring Spring Cloud Stream
- Spring Boot Actuator provides us with three endpoints to monitor streams:
- /actuator/bindings
- /actuator/channels
- /actuator/health
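To actually see these endpoints they must be exposed; a minimal application.yml sketch:

```yaml
management:
  endpoints:
    web:
      exposure:
        # expose all Actuator endpoints, including /actuator/bindings and /actuator/channels
        include: '*'
```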
9.21 Spring Cloud Stream Exception Handling
- Error handling notes
- Global exception handling is defined as follows:
```java
@StreamListener("errorChannel")
public void error(Message<?> message) {
    ErrorMessage errorMessage = (ErrorMessage) message;
    log.warn("Exception occurred, errorMessage = {}", errorMessage);
}
```
9.22 Spring Cloud Stream + RocketMQ Implements Distributed Transactions
- Spring Cloud Stream itself does not implement distributed transactions; combined with RocketMQ it uses RocketMQ's distributed transactions, and combined with other middleware it uses that middleware's transaction support.
- The distributed transaction transformation of Spring Cloud Stream is shown below:
- Change the sender from RocketMQTemplate to Source:
- Define the transaction configuration in the configuration file:
- When defining the group name for the message's transaction listener, make sure it matches the one in the configuration file, as shown in the figure below:
10. Spring Cloud Gateway
10.1 Why Use a Gateway
- Without a gateway, clients communicate with each microservice directly, so every service must implement login verification itself and keep login state in sync across services;
- With a gateway we expose a single domain name; no matter how many microservices are added, clients only need to point at the gateway, which uniformly handles login, verification, authorization, and other interception operations.
10.2 What is the Spring Cloud Gateway?
- It is Spring Cloud's second-generation gateway, intended to replace Zuul (first generation) in the future
- Built on Netty, Reactor, and WebFlux (so it starts up faster than other microservices generally)
- Its advantages:
- Strong performance: roughly 1.6 times that of the first-generation gateway Zuul 1.x (see performance comparison)
- Powerful: Built-in many practical functions, such as forwarding, monitoring, traffic limiting, etc
- The design is elegant and easy to expand
- Its disadvantages:
- Relying on Netty and Webflux, not the Servlet programming model, has some adaptation costs
- It does not work under Servlet containers and does not build into WAR packages
- Spring Boot 1.x is not supported
10.3 Writing the Spring Cloud Gateway
- Create the project gateway, which is omitted here
- pom.xml:
  - Parent project dependencies:
  - Define the Spring Cloud version and the Gateway dependency:
  - Introduce Spring Cloud Alibaba and others:
  - Add the Nacos and Actuator dependencies:
- application.yml configuration:
  - Port configuration plus the Nacos and Gateway configuration:
  - Configure as shown:
- Start the gateway:
  - The Gateway is built on Netty, so it starts very quickly and is ready to forward requests as soon as it is up; because spring.cloud.gateway.discovery.locator.enabled is set to true, requests are automatically forwarded to the corresponding services;
- Forwarding rule: a request to ${GATEWAY_URL}/{microservice X}/{path} is forwarded to /{path} of microservice X
10.4 Core Concepts
- Route:
- The basic element of the Spring Cloud Gateway can be understood simply as a forwarding rule. Contains the ID, destination URL, Predicate set, and Filter set.
- Predicate: a java.util.function.Predicate; Spring Cloud Gateway uses predicates as matching conditions to implement routing
- Filter: Modifies the request and response
- Example of route configuration:
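A hedged sketch of what such a route configuration can look like in application.yml (the route id, service name, and path are made-up examples):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: user_route               # route id (example)
          uri: lb://user-center        # lb:// means load-balanced via service discovery
          predicates:
            - Path=/users/**           # predicate: match by request path
          filters:
            - StripPrefix=1            # filter: drop the first path segment before forwarding
```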
10.5 Architecture Analysis
- Architecture diagram:
- Source:
10.6 Route Predicate Factories
- Factory illustration:
- Predicate factory notebook
10.7 Customizing the Route predicate Factory
- A custom route predicate factory class must have a name ending in RoutePredicateFactory
- General steps:
- Extend AbstractRoutePredicateFactory&lt;C&gt;, where C is a custom configuration object
- Add constructor
- Overriding abstract methods
- Add a configuration rule to the configuration
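Under the hood a route predicate factory simply produces a java.util.function.Predicate; this stdlib-only sketch (the paths and conditions are made up, and this is not Spring Cloud Gateway code) shows how such predicates compose, the way the Gateway ANDs multiple predicates on one route:

```java
import java.util.function.Predicate;

// Stdlib illustration of predicate composition, not Spring Cloud Gateway code.
public class RoutePredicateDemo {
    // two simple conditions, combined the way the Gateway ANDs a route's predicates
    static final Predicate<String> ROUTE =
            ((Predicate<String>) path -> path.startsWith("/user-center/"))
                    .and(path -> !path.contains("/admin/"));

    public static boolean matches(String path) {
        return ROUTE.test(path);
    }

    public static void main(String[] args) {
        System.out.println(matches("/user-center/shares/1")); // true
        System.out.println(matches("/user-center/admin/x"));  // false
    }
}
```

The real factory's apply(Config) method returns exactly such a predicate, just typed over ServerWebExchange instead of String.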
10.8 Built-in Filter factory details
- Note: A summary of all built-in filter factories
10.9 Customizing filter Factories
- Content Introduction:
- Filter life cycle
- How to customize filter factories
- Core API
- Write a filter factory
- Filter life cycle:
- Pre: indicates before the Gateway forwards a request
- Post: After the Gateway forwards the request
- Custom filter Factory – Method 1:
- Examples of inheritance and reference:
- Configuration mode:
- Custom filter Factory – Method 2:
- Custom filter factory – Core API
The core Api of these filter factories is simple, as the name implies;
- Write a filter factory:
- Create a class PreLogGatewayFilterFactory and annotate it with @Component:
- Add entries to the configuration and have the factory class print the newly configured data:
  The configuration file introduces two configuration values; in our factory they are available via config.getName() and config.getValue(). When a request passes through the filter factory, the log is printed;
10.10 Global Filter
- It applies to all routes and has the concept of execution order. The smaller the order, the earlier the order is executed.
- Record notes
10.11 Suspense: How to integrate Sentinel for Spring Cloud Gateway?
- For now the Gateway integrates Hystrix; Sentinel does not support the Gateway until version 1.6
10.12 Monitoring the SpringCloud Gateway
- SpringCloud Gateway monitoring
10.13 Troubleshooting and Debugging Techniques
- Spring Cloud Gateway troubleshooting summary
10.14 Advanced: Filter execution order
- Conclusion:
- The smaller the Order, the earlier the filter executes
- The Order of configured filter factories increases from 1, following their position in the configuration
- If default filters are configured: when the Order is the same, the default filter executes first
- Return an OrderedGatewayFilter if you want to control the Order yourself
- Source:
10.15 Spring Cloud Gateway Traffic Limiting
- SpringCloud Gateway Flow limiting manual
10.16 Summary of this chapter
- Routing, routing predicate factory, filter factory, global filter
- Gateway integration:
- Registered to Nacos
- Integrated Ribbon
- Fault tolerance (Hystrix by default, Sentinel also available)
11. User authentication and authorization of micro services
11.1 Authentication and Authorization – An inevitable topic
- Each application basically requires login to verify user permissions
11.2 Stateful vs. Stateless
- Stateful:
  - As shown below:
  - Advantages: the server side has strong control
  - Disadvantages: there is a central point (all eggs in one basket), migration is troublesome, and storing session data server-side increases server pressure
- Stateless:
  - As shown below:
  - Advantages: decentralized, no storage, simple, scales out and in freely
  - Disadvantages: weaker server-side control (for example, users cannot be forced offline at any time and the login duration cannot be changed)
11.3 Micro-Service Authentication Scheme 01 Security Everywhere
- A blog about security everywhere
- Common protocols are: OAuth2.0, series of articles
- Representative implementation:
- Spring Cloud Security, example code
- Jboss Keycloak, sample code
Note that Keycloak uses the Servlet model rather than the reactive stack, so it does not work with the Gateway.
- Advantages: high security; Disadvantages: high implementation cost, lower performance
11.4 Micro Service Authentication Scheme 02 – External stateless, internal stateful Scheme
- As shown below:
- It can be compatible with older architecture projects. Old projects may not have tokens, but they can get information from the Session.
It doesn’t have security or performance advantages, but it has the advantage of being compatible with older project services
11.5 Micro-service Authentication Scheme 03 – Gateway authentication and Authorization and Internal naked Running Scheme
- As shown below:
Low security and high performance
11.6 Micro Service Authentication Scheme 04 Internal Streaking Improvement Scheme
- As shown below:
- Each service can parse the Token, and each service will not run naked. However, each service knows the Token decryption method, which is easy to expose.
11.7 Microservice Authentication Scheme 05 Comparison
- Comparison diagram:
11.8 Access control Model
- The model is shown as follows:
- RBAC:
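A minimal stdlib sketch of the RBAC idea (all user, role, and permission names here are invented): users map to roles, roles map to permission sets, and an access check walks user, then roles, then permissions.

```java
import java.util.Map;
import java.util.Set;

// Toy RBAC model; a real system loads these mappings from a database.
public class RbacDemo {
    static final Map<String, Set<String>> ROLE_PERMS = Map.of(
            "admin", Set.of("share:audit", "share:read"),
            "user", Set.of("share:read"));
    static final Map<String, Set<String>> USER_ROLES = Map.of(
            "alice", Set.of("admin"),
            "bob", Set.of("user"));

    // user -> roles -> permissions
    public static boolean hasPermission(String user, String permission) {
        return USER_ROLES.getOrDefault(user, Set.of()).stream()
                .anyMatch(role -> ROLE_PERMS.getOrDefault(role, Set.of()).contains(permission));
    }

    public static void main(String[] args) {
        System.out.println(hasPermission("alice", "share:audit")); // true
        System.out.println(hasPermission("bob", "share:audit"));   // false
    }
}
```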
11.9 What is JWT?
- JWT, which stands for Json Web Token, is an open standard (RFC 7519) for securely transferring information between parties. JWT can be authenticated and trusted because it is digitally signed.
- JWT composition diagram:
- Formula:
- JWT’s handwriting
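To make the header.payload.signature formula concrete, here is a hand-rolled HS256 sketch using only the JDK (the secret and claims are made up; a real project would use a library such as jjwt rather than this illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustration only: token = base64url(header) + "." + base64url(payload)
//                          + "." + base64url(HMAC-SHA256(header + "." + payload, secret))
public class JwtDemo {
    private static final String HEADER_JSON = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";

    public static String sign(String payloadJson, String secret) {
        try {
            Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
            String header = enc.encodeToString(HEADER_JSON.getBytes(StandardCharsets.UTF_8));
            String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            String signature = enc.encodeToString(
                    mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
            return header + "." + payload + "." + signature;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Verify by re-signing the decoded payload and comparing whole tokens.
    public static boolean verify(String token, String secret) {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false;
        String payloadJson = new String(
                Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        return sign(payloadJson, secret).equals(token);
    }
}
```

Because the signature depends on the secret, a token signed with one secret fails verification against any other, which is exactly why JWT can be trusted without server-side session storage.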
11.10 AOP implements login status check
- How to implement login status check:
Spring AOP is recommended for the implementation: it is decoupled and flexible
- Manual implementation of section:
- Define notes:
- Define section:
- Define the exception catching class:
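The real aspect needs Spring AOP, but the idea can be sketched with a JDK dynamic proxy using only the standard library (the @CheckLogin annotation and ShareService interface below are invented for illustration): methods carrying the annotation are intercepted, and calls without a token are rejected.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Proxy;

public class CheckLoginDemo {
    // Hypothetical marker annotation, analogous to the one the aspect section defines.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface CheckLogin {}

    // Hypothetical business interface.
    public interface ShareService {
        @CheckLogin
        String audit(String token);
    }

    // Wrap a target so @CheckLogin methods reject calls without a token,
    // mimicking what the Spring AOP aspect would do around the real bean.
    public static ShareService secured(ShareService target) {
        return (ShareService) Proxy.newProxyInstance(
                ShareService.class.getClassLoader(),
                new Class<?>[]{ShareService.class},
                (proxy, method, args) -> {
                    if (method.isAnnotationPresent(CheckLogin.class)
                            && (args == null || args[0] == null)) {
                        throw new IllegalStateException("not logged in");
                    }
                    return method.invoke(target, args);
                });
    }

    public static String tryCall(String token) {
        ShareService service = secured(t -> "audited");
        try {
            return service.audit(token);
        } catch (IllegalStateException e) {
            return "rejected";
        }
    }
}
```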
11.11 Feign Implements Token transfer
- The controller layer can accept tokens:
- Tokens can be carried on individual Feign calls:
- The above approach requires configuring every interface, which is troublesome when Feign calls many interfaces; instead, we can use an interceptor to carry the token uniformly:
- When using the global configuration, add the following to the configuration file:
11.12 RestTemplate Implements Token transfer
- There are two ways: exchange() and ClientHttpRequestInterceptor
- Examples of exchange() code:
- Example using RestTemplate interceptor:
- Interceptor configuration:
- RestTemplate configuration:
11.13 Java Custom Annotations
- Java custom annotation notes
11.15 Summary of this chapter
12. Configuration Management
12.1 Why Is Configuration Management Implemented?
- Different configurations exist in different environments
- Configuration properties must be dynamically updated without restart
Nacos can be used as a configuration server for both of these functions;
12.2 Using Nacos to Manage Configuration
-
Add dependencies:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
```
- Write the configuration: the contents must match the configuration in Nacos; always keep this diagram in mind:
- Create bootstrap.yml:
  The Spring Cloud Nacos config settings must be placed in bootstrap.yml rather than application.yml; otherwise the configuration may not take effect and the application cannot read it.
- Reference the configuration in code:
- Create the configuration in Nacos:
  After filling in the content, click Publish.
- Restart the application and call the interface; the parameters can now be read. Done!
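A minimal bootstrap.yml sketch (the application name and Nacos address are examples and should match your own environment):

```yaml
spring:
  application:
    name: user-center            # used as part of the Nacos dataId
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
```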
12.3 Configuring Dynamic Attribute Refresh and Rollback
- Annotate classes that require dynamic attribute refresh with @RefreshScope to have the configuration refresh dynamically, as shown in the figure:
- When we change itmuch to itmuch.com, the interface picks up the change automatically:
- Rollback:
12.4 Application Configuration Sharing
- Configuration sharing:
  - Mode 1: shared-dataids:
  - Mode 2: ext-config:
  - Mode 3: automatic mode:
    In the automatic mode, the base file holds configuration common to all environments, and spring.profiles.active points to environment-specific files containing only the values that differ;
  - Priority:
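A sketch of the shared-dataids style in bootstrap.yml (the property names follow older spring-cloud-alibaba versions; the file names are examples):

```yaml
spring:
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        # configuration shared by multiple applications
        shared-dataids: common.yml
        # shared dataIds that should also support dynamic refresh
        refreshable-dataids: common.yml
```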
12.5 Boot Context
- The default remote configuration has a higher priority, which can be configured using the following code:
- Remote configuration priority must be configured in Nacos to take effect, as shown in the figure below:
12.6 Data Persistence
Service discovery data is stored in files on disk, while configuration data is stored in an embedded database (MySQL is recommended for the production environment).
12.7 Build a cluster of Nacos available for production
- Build a cluster of NACOS available for production
12.8 Configuration Best Practices
- Don’t put it remotely if you can put it locally
- Avoid priority and simplify configuration
- Specification, such as annotations for all configuration properties
- As few configuration managers as possible (Nacos security permission functions are not complete, for the sake of security and efficient management, as few as possible)
13. Call Chain Monitoring - Sleuth
13.1 Calling Chain Monitoring
- Using call chain monitoring, we can clearly see which methods the interface calls, which methods consume how much time, and if there is a problem, which method is the problem, we can quickly locate the problem
- The industry’s leading call chain monitoring tool
- Spring Cloud Sleuth+Zipkin
- Skywalking, Pinpoint
13.2 integrated Sleuth
- What is Sleuth?
- Sleuth is a distributed tracking solution for Spring Cloud
- Sleuth terms?
- Span: The basic unit of work of Sleuth, uniquely identified by a 64-bit ID. In addition to ids, spans also contain other data, such as descriptions, timestamp events, annotations (labels) for key-value pairs, SPAN IDS, SPAN parent ids, and so on
Each row of data can be interpreted as a span
- Trace: A tree consisting of a set of spans is called trace
- Annotation:
- CS (Client Sent): the client sends a request; this annotation marks the beginning of a span
- SR (Server Received): the server receives the request and is ready to process it
- SS (Server Sent): request processing is complete (the response is sent back to the client)
- CR (Client Received): marks the end of the span; the client has successfully received the server's response
- Integrate Sleuth for user center:
- Add the dependency:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
```
- Change the log level to print more logs (optional)
13.3 Zipkin construction and integration
- What’s a Zipkin?
- Zipkin is an open source distributed tracking system for Twitter that collects temporal data to track system invocation problems
- Simply put, Zipkin stores and displays the data collected by Sleuth; its visual interface gives us a friendlier, clearer view.
- Build Zipkin Server (version of this article: 2.12.9)
- Method 1: Use the official Zipkin Shell to download the latest version:
```shell
curl -sSL https://zipkin.io/quickstart.sh | bash -s
```
- Method 2: Go to the Maven central repository and visit the following address:
```
https://search.maven.org/remote_content?g=io.zipkin.java&a=zipkin-server&v=1
```
The downloaded file is named zipkin-server-2.12.9-exec.jar
- Method 3: Use the baidu disk address to download version 2.12.9:
```
https://pan.baidu.com/s/1HXjzNDpzin6fXGrZPyQeWQ
```
Password: aon2
- Start Zipkin and execute the command shown below:
```shell
java -jar zipkin-server-2.12.9-exec.jar
```
- Sleuth+Zipkin integration for user center:
- Add dependencies: Remove Sleuth and add zipkin dependencies (remove Sleuth because there is a Sleuth dependency in Zipkin)
- Gradle:
```groovy
compile group: 'org.springframework.cloud', name: 'spring-cloud-sleuth-zipkin', version: '2.2.3.RELEASE'
```
- Maven:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    <version>2.2.3.RELEASE</version>
</dependency>
```
- Add configuration: Set the Zipkin server address and sampling rate, as shown in the figure:
The sampling rate is a trade-off: the higher it is, the more accurate the analysis, but the heavier the performance cost;
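A sketch of the Zipkin address and sampling-rate configuration (the values are examples):

```yaml
spring:
  zipkin:
    base-url: http://localhost:9411
  sleuth:
    sampler:
      # sample 10% of requests; raise for accuracy, lower for performance
      probability: 0.1
```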
13.4 Solution of Nacos error after Zipkin integration
- Spring Cloud treats http://localhost:9411/ as a service name to be resolved via the service discovery component; as a result the Nacos client keeps asking the Nacos server for a service named localhost:9411, which does not exist, so exceptions are reported continuously;
- Solutions:
- Option 1: have Spring Cloud correctly recognize http://localhost:9411/ as a URL rather than a service name (the approach chosen in this project)
- Solution 2: Register Zipkin Server with Nacos
- To implement option 1, set discoveryClientEnabled: false, as shown in the figure:
  The camelCase property name here is due to a small bug; once fixed, it will use the hyphen-separated form
13.5 Integrate Zipkin for all microservices
- Refer to the integration method of 13.3;
13.6 Zipkin Data Persistence
- Persistence options:
- MySQL(not recommended, performance issues)
- Elasticsearch
- Cassandra
- Related documents: github.com/openzipkin/…
- Download Elasticsearch (versions 5, 6, 7 recommended):
- Download address: elastic.co/cn/downloads/past-releases#elasticsearch
- Unzip and go to directory:
- Switch to the bin directory and execute `./elasticsearch`, or run it in the background with `./elasticsearch -d`
- Check Zipkin's environment variables: set STORAGE_TYPE and ES_HOSTS, then start the Zipkin server:
- Other Zipkin environment variables: github.com/openzipkin/…
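Putting the two variables together, the launch can look like this (the Elasticsearch address is an example):

```shell
# store spans in Elasticsearch instead of in memory
STORAGE_TYPE=elasticsearch ES_HOSTS=http://localhost:9200 \
  java -jar zipkin-server-2.12.9-exec.jar
```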
13.7 Dependency diagram
- If you use Elasticsearch storage, a Spark job (zipkin-dependencies) is required to analyze the dependency diagram:
  It is a Zipkin subproject; download it first, then launch it
- Zipkin Dependencies use the ElasticSearch environment variable:
- Start the Zipkin Dependencies:
- Zipkin Dependencies additional environment variables: github.com/openzipkin/…
- Zipkin Dependencies specifies the analysis date:
Scripts can be written for daily execution;
14. Existing Code Optimization and Improvement
14.1 Simple Indicators: Statistic
- Principles for writing comments:
  - Each step of a major business process
  - Core methods
  - Before conditions, branches, and judgments
- Using the Statistic plugin:
It is recommended that the comment rate reach 35% before a service goes online
14.3 Alibaba Java Code Specification (P3c)
- github
- IDEA plugin support: search for Alibaba Java Coding Guidelines
14.4 SonarQube
- The tutorial
- Download: JDK8 only supports 6.x to 7.8.x versions
- Installation:
- View log command:
```shell
tail -f ../../logs/sonar.log
```
- Visit the home page: localhost:9000/about
- Both the account and password are admin
- Integrate it with the project according to the notes; using a token via the command line is suggested. After integration we can easily see how many bugs the application has, where the code is inelegant, and other information:
- It also has many plug-ins, such as Chinese plug-ins, other monitoring plug-ins and so on:
The embedded database is inconvenient to scale and not suitable for production; a database such as MySQL is recommended instead
15. Advanced: multi-dimensional micro-service monitoring
15.1 Summary of this chapter
- Spring Boot Actuator: monitors the health of microservice instances
- Sentinel Dashboard: QPS, traffic limiting, etc for monitoring instances
- Spring Cloud Sleuth+Zipkin: Monitors the invocation of services
15.2 Visualizing Spring Boot Actuator Monitoring Data
-
SpringBoot Admin:
- It is an easy-to-use monitoring data management tool tailored for Spring Boot
- GitHub address: github.com/codecentric…
- Official documentation: codecentric.github.io/spring-boot…
- Set up Spring Boot Admin:
- Add depends on:
- The integrated version, as shown below:
- Add SpringBootAdmin and Nacos to register Admin with Nacos, as shown in the following figure:
- Write the annotation:
  - Add the @EnableAdminServer annotation on the startup class, as shown:
- Add the configuration:
- Configure in application.yml:
- Steps for monitored services:
- Add depends on:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
- Add the configuration:
15.3 the JVM monitoring
- Spring Boot Actuator: Metrics, Heapdump, threaddump
- Java comes with JVM monitoring tools: JConsole, JVisualVM
- Built-in tools to open:
- JConsole: enter jconsole in IDEA's terminal or CMD, as shown:
- JVisualVM: similar to JConsole; enter jvisualvm, as shown in the figure:
JVisualVM is similar to JConsole but somewhat more powerful; both are client-side tools, though, and a powerful web-based monitoring tool would be preferable.
15.4 Visual Analysis of GC Logs, Thread Dump logs, and Heap Dump
- Step 1: set the startup parameters to print a detailed GC log:
- Step 2: select the project, right-click and choose Synchronize 'xxxx' to generate gc.log, as shown in the figure:
- Locate the output log file, right-click and choose Reveal in Finder to export it:
- Open gceasy.io, click Select file, and upload the exported file to generate statistical charts:
- The generated charts are shown in the figure:
- Although this tool is powerful, it is not an open source product. We can use the product shown in the picture instead, but it may be less functional:
15.5 Log Monitoring
- The ELK architecture is shown below:
A log monitoring tool is not mandatory; anything appropriate will do, and it does not have to be ELK
15.6 Other Monitoring
- Monitoring should be comprehensive: if Docker is used, monitor Docker; on a Linux server, monitor the server's performance; if RabbitMQ is used, monitor RabbitMQ as well.
- Only when the monitoring is perfect can we analyze the problem more comprehensively.
17. Advanced: Perfect Integration of Heterogeneous Microservices
17.1 How to Perfectly Integrate Heterogeneous Microservices
- Non-springcloud services are called heterogeneous microservices
- Perfect integration:
- SpringCloud microservices perfectly invoke heterogeneous microservices
- Heterogeneous microservices perfectly use SpringCloud microservices
- Perfect calls: Each needs to meet the following requirements:
- Service discovery
- Load balancing
- Fault-tolerant processing
17.2 Perfect Integration with Spring Cloud Wii
- Spring Cloud Wii GitHub address
- Part of the configuration is shown in the figure, refer to the above tutorial configuration
It may become a sub-project of Spring Cloud Alibaba in the future
18. Course Summary:
- SpringCloud is a set of tools and SpringCloudAlibaba is a one-stop solution
- Knowing how without knowing why is undesirable; grasp the core principles, and then Eureka, Nacos, or any other service discovery component can be learned quickly.