Microservices Architecture

Evolution of Internet application architecture

With the growth of the Internet, user bases have expanded and website traffic has increased exponentially. A conventional monolithic architecture can no longer handle the request pressure or keep up with rapid business iteration, so an architectural change becomes inevitable. Let's take the architecture evolution of Lagou as an example and walk step by step from the initial monolithic architecture to the current microservices architecture.

(For comparison, early Taobao ran on the LAMP stack: Linux, Apache, MySQL, PHP.)

1) Single application architecture

When Lagou was first launched, its user base and data scale were small. All function modules lived in a single project codebase, compiled, packaged, and deployed into one Tomcat container. This architectural model is the monolithic application architecture: simple and practical, easy to maintain, and low cost, it was the mainstream approach of its era.

Advantages:

  • Efficient development: the pace of development in the early stage of the project is fast, and a small team can iterate quickly
  • Simple architecture: an MVC architecture that needs only an IDE for development and debugging
  • Easy to test: unit tests or a browser suffice
  • Easy to deploy: packaged as a single executable JAR or WAR and started in a container

Monolithic applications are easy to deploy and test, and run well in the early stages of a project. However, as requirements increase and more people join the development team, the codebase expands rapidly. The monolith becomes more and more bulky, with lower maintainability and flexibility, and higher maintenance costs.

Disadvantages:

  • Poor reliability: a single bug, such as an infinite loop or memory overflow, may crash the entire application
  • High complexity: take a million-line monolith as an example: the project contains many modules with blurred boundaries, unclear dependencies, and uneven code quality, all piled together in disorder, making the whole project very complicated
  • Limited scalability: a monolith can only be scaled as a whole, not per service module. For example, some modules in the application are compute-intensive and need a powerful CPU, while others are IO-intensive and need more memory. Because these modules are deployed together, hardware choices have to be compromised

As traffic rose further, the monolithic architecture was enriched: cluster deployment of the application, Nginx for load balancing, added cache servers and file servers, database clustering and read/write splitting, and so on. These measures improved its ability to cope with high concurrency and some complex business scenarios, but it still belongs to the monolithic application architecture.

2) Vertical application architecture

To avoid the problems mentioned above, the system began to be divided vertically into modules based on existing business characteristics. The first core goal was to keep businesses from affecting each other; the second was to improve efficiency and reduce dependencies between components after the R&D team expanded.

Advantages:

  • System splitting implements traffic sharing and solves concurrency problems
  • It can be optimized for different modules
  • Convenient horizontal expansion, load balancing, improved fault tolerance rate
  • The systems are independent and do not affect each other, making new business iterations more efficient

Disadvantages:

  • Services call each other; if a service's port or IP address changes, the calling system must be changed manually
  • After clusters are set up, load balancing becomes complicated (for example, intranet load balancing); when machines are migrated, callers' routes are affected, leading to online faults
  • Service invocation methods are not uniform: some are based on HttpClient, some on WebService, and interface protocols are inconsistent
  • Poor service monitoring: other than port and process monitoring, there are no monitoring indicators such as call success rate, failure rate, or total elapsed time

3) SOA application architecture

After the vertical division, the number of modules increased, maintenance costs kept rising, and common business logic was duplicated across more and more modules. To solve the problems mentioned above (inconsistent interface protocols, no service monitoring, no load balancing), Alibaba's open-source Dubbo was introduced: a high-performance, lightweight open-source Java RPC framework that integrates seamlessly with the Spring framework. It provides three core capabilities: interface-oriented remote method invocation, intelligent fault tolerance and load balancing, and automatic service registration and discovery.

Service-Oriented Architecture (SOA) divides the system, according to the actual business, into appropriate, independently deployed modules that are independent of each other and communicate through technologies such as WebService or Dubbo.

Advantages: Distributed, loosely coupled, flexible extension, reusable.

Disadvantages: service extraction granularity is large, and the coupling between service caller and provider is high (interface coupling).

4) Microservices application architecture

Microservices architecture can be seen as an extension of SOA: services are finer-grained and more independent. The application is broken down into tiny services, which may use different development languages and storage, and typically communicate through lightweight protocols such as RESTful HTTP. Microservices architecture is all about small services, independence, and lightweight communication.

Microservices is a refinement of SOA with finer granularity. A key point of microservices architecture is that the business must be thoroughly componentized and service-oriented.

Microservice architectures are similar to and different from SOA architectures

One obvious difference between microservices and SOA is the granularity of service separation, although the SOA stage of the Lagou architecture described above was already relatively fine-grained. Moving from Lagou's SOA to Lagou's microservices, the service separation itself did not change much; what changed was the introduction of a fairly complete new-generation microservice technology stack, Spring Cloud. Of course, what we see above are the results of the evolution of the Lagou architecture; each stage actually went through many changes, and the service separation went from coarse to fine rather than happening in one step.

Take an example from Lagou to illustrate the difference in granularity between SOA and microservices:

In the early SOA stage, both the “resume delivery module” and the “talent search module” needed to present resume content, with only slight differences. At first, each module maintained its own set of resume query and presentation code. Later, this was split out into a finer-grained resume base service that different modules can call.

Ideas and advantages and disadvantages of microservices architecture

The core idea of microservice architecture design is “micro”: the granularity of separation is small, so each service has a single responsibility and low development coupling, small functions can be deployed and scaled independently, flexibility is strong, and the impact scope of an upgrade is small.

Advantages of microservice architecture:

  • Microservices are small and easy to focus on specific business functions
  • Microservices are small, and each microservice can be independently implemented by a small team (development, testing, deployment, operation and maintenance). The team cooperation is decoupled to a certain extent, which facilitates the implementation of agile development
  • Microservices are small, facilitating reuse and assembly between modules
  • Microservices are very independent, so different microservices can be developed in different languages and loosely coupled
  • With a microservices architecture, it’s easier to introduce new technologies

Disadvantages of microservices architecture

  • In microservice architecture, distributed complexity is difficult to manage. When the number of services increases, management will become more complex.
  • Distributed link tracking is difficult under microservice architecture.

Core concepts in microservices architecture

  • Service registration and service discovery

For example: job search service -> resume service

  • Service provider: resume service
  • Service consumer: job search service
  • Service registration: the service provider registers information about the services it provides (server IP and port, service access protocol, etc.) with the registry
  • Service discovery: the service consumer obtains a fairly real-time list of services from the registry, then selects a service to access based on certain policies
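The register/discover flow can be sketched with a minimal in-memory registry. This is a conceptual illustration only; the class and method names are hypothetical and not a real Spring Cloud API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: an in-memory registry mapping a service name to its
// provider addresses, illustrating only "register" and "discover".
public class SimpleRegistry {
    private final Map<String, List<String>> services = new HashMap<>();

    // A service provider registers its address (e.g. "192.168.0.1:9000").
    public void register(String serviceName, String address) {
        services.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(address);
    }

    // A service consumer pulls the current provider list by service name.
    public List<String> discover(String serviceName) {
        return services.getOrDefault(serviceName, List.of());
    }
}
```

In a real registry such as Eureka or Nacos, registration also carries metadata (protocol, health URL) and the consumer caches the list locally.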

  • Load balancing

Load balancing distributes the request load to multiple servers (application servers, database servers, etc.) to improve service performance and reliability
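A minimal sketch of client-side load balancing with a round-robin policy (hypothetical class name, not Ribbon's actual API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of round-robin load balancing: requests are spread
// evenly over a fixed list of server addresses.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server in rotation; thread-safe via the atomic counter.
    public String choose() {
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```

Real load balancers add weighting, health awareness, and other policies (random, least-connections) on top of this basic rotation.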

  • Circuit breaking

Circuit breaking is circuit-breaker protection. In a microservice architecture, if a downstream service responds slowly or fails due to excessive access pressure, the upstream service can temporarily cut off calls to it to protect the overall availability of the system. This measure of sacrificing a part to preserve the whole is called circuit breaking.
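The circuit-breaking idea can be sketched as a tiny state machine (hypothetical names; real implementations such as Hystrix or Sentinel add half-open states, time windows, and fallback logic):

```java
// Hypothetical sketch of circuit breaking: after `threshold` consecutive
// failures the breaker opens and calls are rejected immediately, protecting
// the caller from a slow or failing downstream service.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    // Closed while failures stay below the threshold; open once it is reached.
    public boolean allowRequest() {
        return consecutiveFailures < threshold;
    }

    public void recordSuccess() { consecutiveFailures = 0; }

    public void recordFailure() { consecutiveFailures++; }
}
```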

  • Link tracing

Microservice architectures are becoming more and more popular, and a project is often split into many services, so a single request passes through many of them. Different microservices may be developed by different teams and implemented in different programming languages, and the entire project may be deployed on many servers (even hundreds or thousands) across multiple data centers. Link tracing logs and monitors the performance of the many service links involved in a request.
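The core mechanism can be sketched as follows: a trace id generated at the entry point is carried through every downstream call so that all spans of one request can be correlated afterwards (hypothetical names; real systems like Sleuth/Zipkin propagate ids via HTTP headers):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of link tracing: every hop of a request records a span
// tagged with the shared trace id, so the whole call chain can be reassembled.
public class Tracer {
    private final List<String> spans = new ArrayList<>();

    // Record one hop of the request: same traceId, different service names.
    public void recordSpan(String traceId, String service, long elapsedMs) {
        spans.add(traceId + " " + service + " " + elapsedMs + "ms");
    }

    // All spans belonging to one request share the same trace id.
    public long spanCount(String traceId) {
        return spans.stream().filter(s -> s.startsWith(traceId + " ")).count();
    }
}
```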

  • API gateway

Under the microservice architecture, different microservices tend to have different access addresses. The client may need to call the interfaces of multiple services to complete a business requirement. If the client is allowed to communicate directly with each microservice, it may appear as follows:

  • 1) The client needs to call different URL addresses, which makes calls harder to maintain
  • 2) In certain scenarios there is also the cross-domain request problem (front/back-end separation runs into cross-domain requests; this could be solved with CORS at the back end, but with a gateway it can be handled at the gateway layer)
  • 3) Each microservice requires separate identity authentication

An API gateway can deal with the above problems in a unified way: API requests go through the unified API gateway layer, and the gateway forwards them. The API gateway focuses on security, routing, traffic, and so on (so the microservice teams can focus on business logic). Its features include:

  • 1) Unified access (routing)
  • 2) Security protection (unified authentication; the gateway is responsible for access authentication, communicating with the “access authentication center”, while the actual authentication logic is handled by that center)
  • 3) Blacklist and whitelist (controlling access to the gateway by IP address)
  • 4) Protocol adaptation (communication protocol verification, adaptation, and conversion)
  • 5) Flow control (rate limiting)
  • 6) Long and short connection support
  • 7) Fault tolerance (load balancing)
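Two of these duties, blacklist checking and prefix-based routing, can be sketched in a few lines (hypothetical names and addresses; a real gateway such as Spring Cloud Gateway does this with route predicates and filters):

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a gateway: reject blacklisted client IPs, then
// route by URL prefix to the owning microservice.
public class SimpleGateway {
    private final Map<String, String> routes;   // path prefix -> service address
    private final Set<String> blacklist;        // blocked client IPs

    public SimpleGateway(Map<String, String> routes, Set<String> blacklist) {
        this.routes = routes;
        this.blacklist = blacklist;
    }

    // Returns the forward target, or null when the call is rejected/unroutable.
    public String forward(String clientIp, String path) {
        if (blacklist.contains(clientIp)) {
            return null; // blocked at the gateway, never reaches a service
        }
        for (Map.Entry<String, String> route : routes.entrySet()) {
            if (path.startsWith(route.getKey())) {
                return route.getValue() + path;
            }
        }
        return null;
    }
}
```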

Spring Cloud review

What is Spring Cloud

Spring Cloud is an ordered collection of frameworks. It takes advantage of the development convenience of Spring Boot to subtly simplify the development of distributed system infrastructure, such as the service discovery registry, configuration center, message bus, load balancing, circuit breakers, and data monitoring, all of which can be started and deployed with one click in the Spring Boot style. Spring Cloud does not reinvent the wheel: it combines mature, battle-tested service frameworks developed by various companies and encapsulates them in the Spring Boot style, masking the complex configuration and implementation principles. The result is a distributed system development kit that is simple to understand, easy to deploy, and easy to maintain.

Spring Cloud is an ordered collection of frameworks (Spring Cloud is a specification)

Develop service discovery registries, configuration centers, message buses, load balancers, circuit breakers, data monitoring, and more

Simplifies microservice architecture development (auto-configuration) by leveraging the development convenience of Spring Boot

It’s important to note here that Spring Cloud is really a set of specifications, a set of specifications for building microservices architecture, not a framework that you can use immediately (a specification is what functional components should be, how they fit together, and what they do together). Under this specification, some components are developed by Netflix, a third party, and some frameworks/components are officially developed by Spring, including a set of frameworks/components developed by Alibaba, a third party, Spring Cloud Alibaba. These are the implementation of Spring Cloud specification.

Netflix built a set of components known as Spring Cloud Netflix (SCN)

Spring Cloud has absorbed Netflix’s offerings and built several components of its own

Alibaba built on that with a set of microservice components, Spring Cloud Alibaba (SCA)

What problems does SpringCloud solve

The Spring Cloud specification and its implementations intend to solve problems that arise when implementing a microservice architecture, such as service registration and discovery, network problems (e.g. circuit-breaking scenarios), unified authentication and security authorization, load balancing, link tracing, and so on.

  • Distributed/versioned Configuration
  • Service Registration and Discovery
  • Routing (Intelligent Routing)
  • Service-to-service calls
  • Load balancing
  • Circuit Breakers
  • Global locks
  • Leadership Election and Cluster State
  • Distributed Messaging

Spring Cloud architecture

As mentioned above, Spring Cloud is a microservices-related specification, which intends to provide one-stop service for building microservice architecture. It adopts component (framework) mechanism to define a series of components, and various components are targeted to deal with specific problems in microservices. Together, these components constitute the Spring Cloud microservice technology stack

Spring Cloud core components

Components in the Spring Cloud ecosystem can be divided into first-generation Spring Cloud components and second-generation Spring Cloud components according to their development.

First generation Spring Cloud (Netflix, SCN) vs. second generation Spring Cloud (mainly Spring Cloud Alibaba, SCA):

  • Registry — first generation: Netflix Eureka; second generation: Alibaba Nacos
  • Client load balancing — first generation: Netflix Ribbon; second generation: Alibaba Dubbo LB, Spring Cloud LoadBalancer
  • Circuit breaker — first generation: Netflix Hystrix; second generation: Alibaba Sentinel
  • Gateway — first generation: Netflix Zuul (mediocre performance, will exit the Spring Cloud ecosystem in the future); second generation: official Spring Cloud Gateway
  • Configuration center — first generation: official Spring Cloud Config; second generation: Alibaba Nacos, Ctrip Apollo
  • Service call — first generation: Netflix Feign; second generation: Alibaba Dubbo RPC
  • Message-driven — official Spring Cloud Stream
  • Link tracing — official Spring Cloud Sleuth/Zipkin
  • Distributed transactions — second generation: Alibaba Seata

Spring Cloud Architecture (component collaboration)

Components in Spring Cloud work together to support a complete microservice architecture. Such as

  • The registry is responsible for registering and discovering services, which is a good way to connect services together
  • The API gateway is responsible for forwarding all incoming requests
  • The circuit breaker monitors calls between services and protects against repeated failures.
  • The configuration center provides a unified configuration information management service to inform each service to obtain the latest configuration information in real time

Spring Cloud vs. Dubbo

Dubbo is an open-source service framework with excellent performance, based on RPC invocation. Spring Cloud Netflix, which has a high usage rate, is based on HTTP, so its efficiency is not as high as Dubbo's. However, the Dubbo ecosystem's components are incomplete and cannot provide a one-stop solution: for example, service registration and discovery must be implemented separately with Zookeeper. Spring Cloud Netflix, with the Spring family behind it, really does provide a one-stop service-oriented solution.

Dubbo usage has been higher than Spring Cloud in previous years, but Spring Cloud is already showing great growth in service-oriented/microservice solutions.

Relationship between Spring Cloud and Spring Boot

Spring Cloud makes use of the characteristics of Spring Boot so that we can quickly develop microservice components. Without it, when using Spring Cloud we would have to import the JAR packages of each component ourselves, write the configuration ourselves, and worry about compatibility and so on. So Spring Boot is the vehicle through which we quickly adopt the Spring Cloud microservice technology.

Case preparing

Case description

In this section we will simulate a call between microservices in a normal way, and then we will step by step transform the case using Spring Cloud components.

Requirements:

Complete business flow chart:

Case database environment preparation

Commodity Information Sheet:

CREATE TABLE products(
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(50),         # commodity name
    price DOUBLE,             # price
    flag VARCHAR(2),          # status flag
    goods_desc VARCHAR(100),  # product description
    images VARCHAR(400),      # product images
    goods_stock INT,          # stock
    goods_type VARCHAR(20)    # product type
);

The case of engineering

We constructed the engineering environment based on Spring Boot, and our engineering module relationship is as follows:

The parent project lagou – the parent

Create a module in Idea and name it lagou-parent

pom.xml

<!-- Parent project packaging method -->
<packaging>pom</packaging>
<!-- Spring Boot parent starter dependency -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.6.RELEASE</version>
</parent>

<dependencies>
    <!-- Web dependency -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Logging dependency -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-logging</artifactId>
    </dependency>
    <!-- Test dependency -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <!-- Lombok tools -->
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.4</version>
        <scope>provided</scope>
    </dependency>
    <!-- Actuator helps you monitor and manage Spring Boot apps -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <!-- Hot deployment -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>11</source>
                <target>11</target>
                <encoding>utf-8</encoding>
            </configuration>
        </plugin>
        <!-- Packaging plugin -->
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Common component microservices

1) Introduce the database driver and Mybatis-Plus in the common component microservice's pom.xml


      
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>lagou-parent</artifactId>
        <groupId>com.lagou</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>lagou-service-common</artifactId>

    <dependencies>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.3.2</version>
        </dependency>
        <!-- JPA annotations for POJO persistence mapping -->
        <dependency>
            <groupId>javax.persistence</groupId>
            <artifactId>javax.persistence-api</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
</project>

2) Generate the database entity class: com.lagou.common.pojo.Products

package com.lagou.common.pojo;

import lombok.Data;

import javax.persistence.Id;
import javax.persistence.Table;

@Data 
@Table(name="products")
public class Products{
    @Id 
    private Integer id;
    private String name;
    private double price;
    private String flag;
    private String goodsDesc; 
    private String images;
    private long goodsStock;
    private String goodsType;
}

Commodity microservices

Product microservices are service providers and page static microservices are service consumers

Create lagou-service-product, inherit from Lagou-parent

1) Introduce common component coordinates in the POM file of commodity microservices

<dependencies>
    <dependency>
        <groupId>com.lagou</groupId>
        <artifactId>lagou-service-common</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>

2) Configure the port, application name, and database connection information in the YML file

server:
  port: 9000
spring:
  application:
    name: lagou-service-product
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/lagou?useUnicode=true&characterEncoding=utf8&serverTimezone=UTC
    username: root
    password: wu7787879

3)Mapper interface development

package com.lagou.product.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.lagou.common.pojo.Products;

/**
 * Uses the Mybatis-Plus component, an enhanced version of Mybatis that
 * integrates with Spring Boot very smoothly. Compared with plain Mybatis it
 * only changes convenience, not concrete functionality. Usage: have the
 * concrete Mapper interface extend BaseMapper.
 */
public interface ProductMapper extends BaseMapper<Products> {
}

4) Service layer development

package com.lagou.product.service.impl;

import com.lagou.common.pojo.Products;
import com.lagou.product.mapper.ProductMapper;
import com.lagou.product.service.ProductService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ProductServiceImpl implements ProductService {

    @Autowired
    private ProductMapper productMapper;
    /** Query the commodity object by commodity id */
    @Override
    public Products findById(Integer productId) {
        return productMapper.selectById(productId);
    }
}
5) Controller layer development
package com.lagou.product.controller;

import com.lagou.common.pojo.Products;
import com.lagou.product.service.ProductService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/product")
public class ProductController {

    @Autowired
    private ProductService productService;

    @RequestMapping("/query/{id}")
    public Products query(@PathVariable Integer id){
        return productService.findById(id);
    }
}
6) Startup class
package com.lagou.product;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("com.lagou.product.mapper")
public class ProductApplication {
    public static void main(String[] args) {
        SpringApplication.run(ProductApplication.class, args);
    }
}

Page static microservices

1) Introduce common component dependencies in POM files


      
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>lagou-parent</artifactId>
        <groupId>com.lagou</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>lagou-service-page</artifactId>

    <dependencies>
        <dependency>
            <groupId>com.lagou</groupId>
            <artifactId>lagou-service-common</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>

2) Configure the port, application name, and database connection information in the YML file

spring:
  application:
    name: lagou-service-page
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/lagou?useUnicode=true&characterEncoding=utf8&serverTimezone=UTC
    username: root
    password: wu7787879
3) Write the PageController, which calls the corresponding URL of the commodity microservice
package com.lagou.page.controller;

import com.lagou.common.pojo.Products;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/page")
public class PageController {

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/getData/{id}")
    public Products findDataById(@PathVariable Integer id){
        Products products = restTemplate.getForObject("http://localhost:9000/product/query/" + id, Products.class);
        System.out.println("Get product object from lagou-service-product: " + products);
        return products;
    }
}
4) Write a startup class that registers a RestTemplate bean
package com.lagou.page;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class PageApplication {

    public static void main(String[] args) {
        SpringApplication.run(PageApplication.class, args);
    }

    @Bean
    public RestTemplate restTemplate(){
        return new RestTemplate();
    }
}

Case code problem analysis

We use the RestTemplate in the page static microservice to invoke the query interface of the product microservice. What problems arise in a distributed microservice cluster environment? How can they be solved?

Existing problems:

  • 1) In the service consumer, we hard-coded the URL address into the code, which is inconvenient for later maintenance.
  • 2) The service provider is a single instance; even if providers form a cluster, the consumer would still need to implement load balancing itself.
  • 3) The service consumer has no visibility into the status of the service provider.
  • 4) When the consumer invokes the provider and a fault occurs, can it be detected in time without throwing an exception page to the user?
  • 5) Is there room to optimize the RestTemplate usage? Could it work like Dubbo?
  • 6) How can unified authentication be realized across so many microservices?
  • 7) Modifying multiple configuration files for every change is troublesome!
  • 8) ...

The problems mentioned above are some of the inevitable problems in microservices architecture:

  • 1) Service management: automatic registration and discovery, status supervision
  • 2) Service load balancing
  • 3) Circuit breaking
  • 4) Remote procedure call
  • 5) Gateway interception and route forwarding
  • 6) Unified certification
  • 7) Centralized configuration management, real-time automatic update of configuration information

The Spring Cloud system has solutions for each of these problems, which we’ll look at one by one.

The first generation of SpringCloud core components

Note: as mentioned above, the gateway component Zuul will exit the Spring Cloud ecosystem in the future, so we will cover Gateway directly. In the course chapter planning, Gateway is grouped with the first-generation Spring Cloud core components.

The overall structure of each component is as follows:

Roughly speaking, Feign is the combination of three things: Feign = RestTemplate + Ribbon + Hystrix.

Eureka Service Registry

Common service registries: Eureka, Nacos, Zookeeper, Consul

About the Service registry

Note: Service registries are essentially decoupled service providers and service consumers.

Service consumer –> service provider

Service consumer –> Service Registry –> Service provider

For any microservice, there should in principle be multiple providers (for example, multiple deployed instances of the commodity microservice), due to the distributed nature of microservices.

Furthermore, to support elastic scaling, the number and distribution of a microservice's providers are often dynamic and cannot be determined in advance. The static load-balancing mechanism common in the monolithic stage is therefore no longer suitable, and an additional component is needed to manage the registration and discovery of microservice providers: this component is the service registry.

Registry implementation principles

In distributed microservice architecture, the service registry is used to store the address information of service providers and the attribute information related to service publishing. Consumers can obtain the address information of service providers through active query and passive notification, instead of obtaining the address information of providers through hard coding. The consumer only needs to know which services are published in the current system, not exactly where the services exist. This is transparent routing.

  • 1) The service provider starts up
  • 2) The service provider registers its service information with the registry
  • 3) Service consumers obtain the service registration information:
    • Pull pattern: the service consumer actively pulls the list of available service providers
    • Push pattern: the service consumer subscribes to the service (when providers change, the registry actively pushes the updated service list to consumers)
  • 4) Service consumers invoke service providers directly
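To make the registration and discovery steps concrete, here is a minimal, illustrative in-memory registry (a sketch only; real registries like Eureka add leases, replication, and health checks, and all class and method names here are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical in-memory registry illustrating register / pull / deregister.
class SimpleRegistry {
    private final Map<String, List<String>> services = new ConcurrentHashMap<>();

    // Step 2: a provider registers its address under a service name
    void register(String serviceName, String address) {
        services.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Step 3 (pull pattern): a consumer fetches the current provider list
    List<String> lookup(String serviceName) {
        return services.getOrDefault(serviceName, List.of());
    }

    // Health monitoring: a failed provider is removed promptly
    void deregister(String serviceName, String address) {
        List<String> addresses = services.get(serviceName);
        if (addresses != null) {
            addresses.remove(address);
        }
    }
}
```

The consumer never hard-codes a provider address; it only asks the registry by service name, which is exactly the "transparent routing" described below.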

In addition, the registry must also monitor the health of service providers, removing failed providers promptly when they are detected.

Comparison of mainstream service registries

  • Zookeeper
    • Dubbo + Zookeeper
    • Zookeeper is a distributed service framework and a subproject of Apache Hadoop. It is used to solve data management problems frequently encountered in distributed applications, such as unified naming service, status synchronization service, cluster management, and configuration item management of distributed applications.
    • To put it simply, the essence of ZooKeeper = storage + listening for notifications.
    • ZooKeeper used to be a popular choice for service registries, mainly because of its node-change notifications: as long as a client watches the relevant service nodes, any change to a node is pushed to the watching client in a timely fashion. A caller only needs the ZooKeeper client to get service-node subscription and change notification, which is very convenient. ZooKeeper's availability is also acceptable: the cluster stays available as long as more than half of the voting nodes are alive, so a minimum of three nodes is required.
  • Eureka
    • Open-sourced by Netflix and integrated into the Spring Cloud stack by Pivotal, Eureka is a service registration and discovery component built in the RESTful API style.
  • Consul
    • Consul is a distributed, highly available, multi-datacenter service registration and discovery tool developed by HashiCorp in Go. It uses the Raft algorithm to guarantee consistency and supports health checks.
  • Nacos
    • Nacos is a dynamic service discovery, configuration management, and service management platform that makes it easier to build cloud-native applications. Simply put, Nacos is the combination of registry and configuration center, which helps us solve the problems of service registration and discovery, service configuration and service management that will be involved in micro-service development. Nacos is one of the core components of Spring Cloud Alibaba and is responsible for service registration and discovery, as well as configuration.
| Component | Language | CAP | Exposed interface |
| --- | --- | --- | --- |
| Eureka | Java | AP (self-protection mechanism guarantees availability) | HTTP |
| Consul | Go | CP | HTTP/DNS |
| Zookeeper | Java | CP | Client (TCP) |
| Nacos | Java | Supports switching between AP and CP | HTTP |

The CAP theorem, also known as the CAP principle, states that a distributed system can satisfy at most two of the three properties Consistency, Availability, and Partition tolerance at the same time.

P: Partition tolerance: when nodes fail or the network partitions, the system can still provide service satisfying either consistency or availability (this property must be met)

C: Consistency: all nodes see the same data at the same time

A: Availability: reads and writes always succeed

Since all three cannot hold at once, a practical system chooses either AP or CP.

Service registry component Eureka

Having covered the general principles of service registries and compared the mainstream solutions, we now focus on Eureka.

  • Eureka infrastructure

  • Eureka interaction process and principle

Eureka consists of two components: Eureka Server and Eureka Client. Eureka Client is a Java Client used to simplify the interaction with Eureka Server. Eureka Server provides the ability of service discovery. When each micro-service is started, it will register its own information (such as network information) with Eureka Server through EurekaClient, and Eureka Server will store the information of the service.

  • 1) In the figure, US-EAST-1C, US-EAST-1D and US-EAST-1E represent different zones, i.e., different data centers
  • 2) Each Eureka Server in the figure is a cluster.
  • 3) In the figure, Application Service registers the Service with Eureka Server as the Service provider. After receiving the registration event, Eureka Server will perform data synchronization in the cluster and partition. The Application Client, as a consumer (service consumer), can obtain the service registration information from EurekaServer for service invocation.
  • 4) After a microservice starts, it periodically sends heartbeats to Eureka Server to renew its registration (the default interval is 30 seconds; by default Eureka Server evicts instances that have not renewed within 90 seconds)
  • 5) If the Eureka Server does not receive the heartbeat of a microservice node within a certain period of time, the Eureka Server will cancel the microservice node (default: 90 seconds).
  • 6) Each Eureka Server is also a Eureka Client, and the service registration list is synchronized between multiple Eureka servers through replication
  • 7) Eureka Client will cache the information in Eureka Server. Even if all Eureka Server nodes go down, the service consumer can still find the service provider using the information in the cache

Eureka improves system flexibility, scalability and high availability through mechanisms such as heartbeat detection, health check and client caching.

Details of Eureka

Eureka metadata details

Eureka metadata comes in two types: standard metadata and custom metadata.

Standard metadata: host names, IP addresses, port numbers, and other information are published in the service registry for calls between services.

User-defined metadata: You can use eureka.instance.metadata-map to customize metadata in the KEY/VALUE format. This metadata can be accessed from a remote client.

For example:

eureka:
  instance:
    prefer-ip-address: true # register with the IP address; otherwise the host name is registered
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}:@project.version@
    metadata-map:
      ip: 192.168.200.128
      port: 10000
      user: YuanJing
      pwd: 123456

We can use DiscoveryClient to get all metadata information for a given microservice in the program

package com.lagou.page.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.Map;
import java.util.Set;

@RestController
@RequestMapping("/metadata")
public class MetadataController {

    @Autowired
    private DiscoveryClient discoveryClient;

    @RequestMapping("show")
    public String showMetadata(){
        String result = "";
        List<ServiceInstance> instances = discoveryClient.getInstances("lagou-service-page");
        for (ServiceInstance instance : instances) {
            // Get the service metadata
            Map<String, String> metadata = instance.getMetadata();
            Set<Map.Entry<String, String>> entries = metadata.entrySet();
            for (Map.Entry<String, String> entry : entries) {
                String key = entry.getKey();
                String value = entry.getValue();
                result += "key:" + key + ",value:" + value;
            }
        }
        return result;
    }
}

Eureka client details

The service provider (also the Eureka client) registers the service with EurekaServer and completes the service renewal

Service Registration Details (Service Provider)

  • 1) Import the eureka-client dependency and configure the Eureka registry address
  • 2) On startup, the service sends a registration request to the registry, carrying its service metadata
  • 3) The Eureka registry stores the service information in a Map

Service Renewal Details (Service Provider)

The service sends a renewal (heartbeat) to the registry every 30 seconds. If renewals stop, the lease expires after 90 seconds and the service instance is considered invalid. The 30-second renewal is what we call heartbeat detection.

  • Eureka Client: sends a renewal to Eureka Server every 30 seconds (configured on the client)
  • Eureka Server: if no renewal arrives within 90 seconds, the microservice instance is removed from the service registry (the Map) (configured on the client side)
  • Eureka Client: pulls the latest service registry every 30 seconds and caches it locally (client configuration)

Often we don’t need to adjust these two configurations

eureka:
  instance:
    # heartbeat (renewal) interval, default 30 seconds
    lease-renewal-interval-in-seconds: 30
    # if no heartbeat arrives within 90 seconds, EurekaServer removes the instance from the list
    lease-expiration-duration-in-seconds: 90

Access to the List of Services (Service Registry) details (Service Consumers)

The service pulls the service list from the registry every 30 seconds; this can be changed via configuration, though usually we don't need to adjust it:

eureka:
  client:
    registry-fetch-interval-seconds: 30

1) When a service consumer starts, it fetches a read-only backup of the EurekaServer service list and caches it locally. 2) Every 30 seconds thereafter it re-fetches the list and refreshes the local cache. 3) The 30-second interval can be changed via eureka.client.registry-fetch-interval-seconds

Eureka service eviction and self-protection

Service offline: 1) When a service shuts down normally, it sends a REST request to EurekaServer to go offline. 2) After receiving the request, the registry takes the service offline

Failure eviction: Eureka Server runs a periodic check (eureka.server.eviction-interval-timer-in-ms, default 60 seconds). If an instance has not sent a heartbeat within the lease period (eureka.instance.lease-expiration-duration-in-seconds, 90 seconds by default), the instance is deregistered.

Self-protection mechanism: self-protection mode is a safety measure against abnormal network fluctuations; it lets the Eureka cluster run more robustly and stably.

The self-protection mechanism works as follows: if within 15 minutes the proportion of client nodes with a normal heartbeat drops below 85%, Eureka assumes the network between the clients and the registry is faulty, and Eureka Server automatically enters self-protection mode, with the following effects:

  1. Eureka Server no longer removes services from the registry that should expire because they have not received heartbeats for a long time.
  2. Eureka Server can still accept registration and query requests for new services, but will not be synchronized to other nodes to ensure that the current node is still available.
  3. When the network becomes stable, the new registration information of the current Eureka Server is synchronized to other nodes.

Therefore, Eureka Server can cope well with losing some nodes to network failures, unlike ZooKeeper, where the whole cluster becomes unavailable once half or more of the nodes are down.

Why is there a self-protection mechanism?

By default, if Eureka Server does not receive a heartbeat from a microservice instance within a certain period (90 seconds by default), it removes the instance. But when a network partition occurs, a microservice may be unable to communicate with Eureka Server while itself running normally; such an instance should not be removed, hence the self-protection mechanism.

The following information is displayed on the service center page

In standalone testing it is easy for the heartbeat renewal rate to fall below 85% within 15 minutes, triggering Eureka's protection mechanism. Once protection is on (it is enabled by default), the instance list maintained by the registry may no longer be accurate. In that case we can disable the mechanism in the Eureka Server configuration, so that unavailable instances are evicted promptly (not recommended):

eureka:
  server:
    enable-self-preservation: false

Experience: keep the self-protection mechanism enabled in production.

Ribbon Load Balancing

About Load Balancing

Load balancing is generally divided into server load balancing and client load balancing

The so-called server-side load balancer, such as Nginx and F5, routes the request to the target server according to certain algorithms after it arrives at the server.

In client-side load balancing, taking the Ribbon as an example, the service consumer holds the list of server addresses and, before making a request, picks one server with a load-balancing algorithm.

Ribbon is a load balancer released by Netflix that works together with Eureka: the Ribbon reads service information from Eureka and load-balances requests across the service providers according to a given algorithm.

Ribbon Load balancing policy

| Load balancing policy | Description |
| --- | --- |
| RoundRobinRule (round-robin) | Polls the servers in order; if the server obtained is still unavailable after 10 attempts, returns an empty server |
| RandomRule (random) | Picks a server at random; if it is null or unavailable, keeps picking in a while loop |
| RetryRule (retry) | Retries within a deadline; by default wraps RoundRobinRule, and custom rules can be injected. After each selection it checks that the server is non-null and alive, retrying within 500 ms. RoundRobinRule gives up after 10 failures, whereas RandomRule keeps trying as long as the server list is non-empty |
| BestAvailableRule (lowest concurrency) | Iterates over the server list and picks the available server with the fewest concurrent connections, using the LoadBalancerStats member that tracks each server's health and connection count; if the selected server is null, falls back to RoundRobinRule |
| AvailabilityFilteringRule (availability filtering) | Extends the polling policy: picks a server via round-robin, then checks that it has not tripped due to timeouts and that its current connection count is below the limit |
| ZoneAvoidanceRule (zone avoidance, the default) | Extends the polling policy with two predicates, ZoneAvoidancePredicate and AvailabilityPredicate: besides filtering out timed-out and overloaded servers, it filters out every node in a zone that does not meet requirements, then polls the service instances within the remaining zone. Filter first, then poll |
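As an illustration of the polling family of policies, here is a minimal round-robin chooser (a sketch only, not Ribbon's RoundRobinRule; Ribbon additionally checks server aliveness and retries):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin server chooser; thread-safe via an atomic counter.
class RoundRobinChooser {
    private final AtomicInteger position = new AtomicInteger(0);

    String choose(List<String> servers) {
        if (servers == null || servers.isEmpty()) {
            return null; // Ribbon would likewise return an empty server here
        }
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```

Each call advances the counter, so successive requests cycle evenly through the server list.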

Modifying a load balancing policy:

# Keyed by the name of the called service; without the key the setting is global
lagou-service-product:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule # random policy

lagou-service-product:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RoundRobinRule # round-robin policy

Ribbon core source code analysis

How the Ribbon works

  • As usual, Spring Cloud builds on Spring Boot's auto-configuration: look for the spring.factories configuration file
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.netflix.ribbon.RibbonAutoConfiguration

  • LoadBalancerAutoConfiguration class configuration

Assembly verification:

Automatic injection:

Inject the restTemplate customizer:

Set the loadBalancerInterceptor on the restTemplate object

At this point we know that the annotated RestTemplate object gets a LoadBalancerInterceptor added to it, which intercepts subsequent requests for load-balanced processing.

Hystrix circuit breaker

Hystrix is a fault-tolerance mechanism.

Avalanche effects in microservices

When the cohesive force inside the snow on a hillside cannot resist the pull of gravity, it slides downward, causing a mass of snow to collapse, a natural phenomenon known as an avalanche.

In microservices, fulfilling a single request may require calling multiple microservice interfaces, which forms complex invocation links.

**Service avalanche effect:** the phenomenon in which "service provider unavailability" (the cause) leads to "service caller unavailability" (the result), and the unavailability is amplified step by step along the call chain.

Fan-in: the number of callers that invoke this microservice; a large fan-in indicates good reuse of the module. Fan-out: the number of other microservices this microservice invokes.

In a microservice architecture, an application may consist of multiple microservices whose data interaction happens via remote procedure calls. Suppose microservice A calls microservices B and C, and B and C in turn call other microservices: this is "fan-out". If some microservice on the fan-out link responds too slowly or becomes unavailable, calls into microservice A pile up and occupy more and more system resources, eventually crashing the system: the so-called "avalanche effect".

As shown in the figure, the response time of the most downstream commodity microservice is too long, a large number of requests are blocked, and a large number of threads will not be released, which will lead to the exhaustion of server resources and eventually lead to the paralysis of upstream service or even the whole system.

Formation reasons:

The process of serving avalanche can be divided into three stages:

  1. The service provider is unavailable
  2. Retry increases request traffic
  3. The service caller is not available

Each stage of the service avalanche can be caused by a different cause:

Avalanche effect solution

Below we introduce three techniques for dealing with the avalanche effect in microservices, all of which aim, from the perspective of availability and reliability, to prevent the system as a whole from slowing down or even collapsing.

Service circuit breaking

The circuit-breaker mechanism is a link-protection mechanism in microservices for coping with the avalanche effect. We meet the term in many contexts: in a high-voltage circuit, if the voltage somewhere is too high, the fuse blows and protects the circuit; in stock trading, if an index moves too sharply, trading is suspended by a circuit breaker. Circuit breakers play a similar role in a microservice architecture: when a microservice on the fan-out link is unavailable or responds too slowly, the system cuts off calls to that node, degrades the service, and quickly returns an error response; when it detects that the node's responses are back to normal, the call link is restored.

Note: 1) Service circuit breaking focuses on "breaking": cutting off calls to downstream services. 2) Circuit breaking and service degradation are often used together, as is the case with Hystrix.

Service degradation

Generally speaking, when overall resources are running short, some services are temporarily shut off first (when called, they return a reserved default, also known as "bottoming" data) to ride out the peak, and are re-opened afterwards.

Service degradation is generally considered as a whole, that is, when a service is down, the server will no longer be called. At this point, the client can prepare a local fallback callback and return a default value. In this way, although the service level is reduced, it is still available and better than hanging directly.

Service traffic limiting

Service degradation means temporarily shielding a service when it fails or hurts the performance of core flows, and re-enabling it after the peak or once the problem is resolved. But some scenarios cannot be solved by degradation, such as core functions like flash sales; there, the concurrency/request volume of these scenarios can be capped with service traffic limiting.

There are many ways to limit traffic, for example:

  • Limit total concurrency (e.g. database connection pools, thread pools)
  • Limit the number of instantaneous concurrent connections (e.g. Nginx limits the number of instantaneous concurrent connections)
  • Limit the average rate within the time window (e.g. Guava’s RateLimiter, Nginx’s limit_req module, limit the average rate per second)
  • Limit the remote interface call rate, limit the consumption rate of MQ, etc
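A sketch of the simplest of these ideas: a fixed-window counter that caps the number of requests per time window (illustrative only; Guava's RateLimiter and Nginx's limit_req use more sophisticated token-bucket/leaky-bucket algorithms):

```java
// Fixed-window rate limiter: allows at most maxRequests per windowMillis.
class FixedWindowLimiter {
    private final int maxRequests;
    private final long windowMillis;
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    FixedWindowLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;  // a new window begins: reset the counter
            count = 0;
        }
        if (count < maxRequests) {
            count++;
            return true;
        }
        return false;           // over the limit: reject the request
    }
}
```

A caller would invoke tryAcquire() before serving a request and return an error (or queue the request) when it yields false.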

Introduction to Hystrix

Hystrix (porcupine), whose motto is "Defend your application", is an open-source latency and fault-tolerance library from Netflix. It isolates access to remote systems, services, and third-party libraries to stop cascading failures, thereby improving the system's availability and fault tolerance. Hystrix achieves this mainly through the following points.

  • Wrap requests: use HystrixCommand to wrap the invocation logic of a dependency, e.g. the page-static microservice method (annotated with @HystrixCommand for Hystrix control)
  • Trip mechanism: When the error rate of a service exceeds a certain threshold, Hystrix can trip and stop requesting the service for a period of time.
  • Resource isolation: Hystrix maintains a small thread pool for each dependency (bulkhead pattern). When a pool is full, requests to that dependency are rejected immediately instead of queuing, which speeds up failure determination.

  • Monitoring: Hystrix monitors operational metrics and configuration changes, such as successes, failures, timeouts, and rejected requests, in near real time.
  • Rollback mechanism: Rollback logic is performed when a request fails, times out, is rejected, or when a circuit breaker is turned on. The fallback logic is provided by the developer, such as returning a default value.
  • Self-repair: After the circuit breaker is open for a period of time, it will automatically enter the “half-open” state (detect whether the service is available, if it is still not available, return to the open state again).
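The rollback (fallback) point can be sketched as a tiny wrapper: run the primary call and switch to developer-supplied fallback logic on failure (illustrative only; @HystrixCommand(fallbackMethod = ...) automates this, plus timeouts and tripping):

```java
import java.util.function.Supplier;

// Runs the primary call; on failure returns the fallback value instead.
class FallbackCommand<T> {
    private final Supplier<T> primary;
    private final Supplier<T> fallback;

    FallbackCommand(Supplier<T> primary, Supplier<T> fallback) {
        this.primary = primary;
        this.fallback = fallback;
    }

    T execute() {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            return fallback.get(); // degrade gracefully instead of propagating the failure
        }
    }
}
```

The fallback supplier plays the role of the "bottoming" data mentioned earlier: a default value returned when the real call cannot complete.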

Hystrix application

Fusing process

Purpose: when the product microservice does not respond for a long time, the service consumer (the page-static microservice) should fail fast and prompt the user.

  • Introduce the dependency: add the Hystrix dependency coordinates to the service consumer project (the page-static microservice) (they can also go in the parent project)
<!-- Circuit breaker: Hystrix -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
  • Enable the breaker: add @EnableCircuitBreaker to the startup class of the service consumer project (the page-static microservice)
/**
 * Note: @SpringCloudApplication = @SpringBootApplication + @EnableDiscoveryClient + @EnableCircuitBreaker
 */

@SpringBootApplication
@EnableDiscoveryClient //@EnableEurekaClient
@EnableCircuitBreaker // Turn on the fuse
public class PageApplication {

    public static void main(String[] args) {
        SpringApplication.run(PageApplication.class,args);
    }
    @Bean
    @LoadBalanced//Ribbon load balancing
    public RestTemplate restTemplate(){
        return new RestTemplate();
    }
}
  • Define the service degradation handling method: the business method is linked to it via the fallbackMethod attribute of @HystrixCommand
/** The provider simulates a processing timeout; the calling method adds Hystrix control */
// Use the @HystrixCommand annotation for circuit-breaker control
@HystrixCommand(
        // thread pool id; keep it unique so the pool is not shared
        threadPoolKey = "getProductServerPort2",
        threadPoolProperties = {
                @HystrixProperty(name = "coreSize", value = "1"),
                @HystrixProperty(name = "maxQueueSize", value = "20")
        },
        // commandProperties configure the command; each entry is a HystrixProperty
        commandProperties = {
                @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000")
        }
)
@RequestMapping("/getPort2")
public String getProductServerPort2(){
    String url = "http://lagou-service-product/server/query";
    return restTemplate.getForObject(url, String.class);
}
  • Commodity microservices simulate timeout operations
@RestController
@RequestMapping("/server")
public class ServerConfigController {

    @Value("${server.port}")
    private String serverPort;

    @RequestMapping("/query")
    public String findServerPort(){
        try {
            // simulate a slow response
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return serverPort;
    }
}

Degradation handling

Configure the @HystrixCommand annotation and define the degradation handling method

@HystrixCommand(
        // thread pool id; keep it unique so the pool is not shared
        threadPoolKey = "getProductServerPort3TimeoutFallback",
        threadPoolProperties = {
                @HystrixProperty(name = "coreSize", value = "2"),
                @HystrixProperty(name = "maxQueueSize", value = "20")
        },
        commandProperties = {
                @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000"),
                // advanced Hystrix configuration: customize the breaker's workflow
                // length of the statistics window
                @HystrixProperty(name = "metrics.rollingStats.timeInMilliseconds", value = "8000"),
                // minimum number of requests in the statistics window
                @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "2"),
                // error-percentage threshold within the statistics window
                @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
                // length of the self-repair (sleep) window
                @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "3000")
        },
        fallbackMethod = "myFallBack" // fallback method
)
@RequestMapping("/getPort3")
public String getProductServerPort3() {
    String url = "http://lagou-service-product/server/query";
    return restTemplate.getForObject(url, String.class);
}

/**
 * Define a fallback method that returns a default value.
 * Note: the fallback must have the same parameters and return type as the original method.
 */
public String myFallBack() {
    return "1"; // fallback ("bottoming") data
}

Hystrix bulkhead mode

Namely: thread pool isolation policy

Suppose nothing is configured and all circuit-breaker methods share one Hystrix thread pool (10 threads). That causes a problem which has nothing to do with the fan-out microservices being unavailable, only with our threading model: if method A's requests occupy all 10 threads, requests for method B cannot be processed at all, not because service B is down, but because no threads are available.

Instead of increasing the number of threads, Hystrix creates a separate thread pool for each control method. This mode is called “bulkhead mode” and is also a means of thread isolation.
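The bulkhead idea can be sketched as one bounded pool per downstream dependency, so exhausting the pool for dependency A cannot starve calls routed through the pool for dependency B (illustrative only; class names are hypothetical, and Hystrix's real pools are configured via threadPoolKey):

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// One bounded thread pool per downstream dependency (bulkhead pattern).
class Bulkheads {
    private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    ExecutorService poolFor(String dependency) {
        return pools.computeIfAbsent(dependency, name ->
                new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS,
                        new ArrayBlockingQueue<>(20),            // bounded queue, like maxQueueSize
                        new ThreadPoolExecutor.AbortPolicy()));  // reject instead of queueing forever
    }

    <T> Future<T> submit(String dependency, Callable<T> task) {
        return poolFor(dependency).submit(task);
    }
}
```

Because each dependency gets its own bounded pool with a rejecting policy, a flood of slow calls to one service fails fast in its own pool and leaves the other pools untouched.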

Hystrix workflow with advanced applications

  • 1) Open a time window (10s) when the call fails
  • 2) In this time window, whether the number of statistical calls reached the minimum number of requests?
    • If not, reset the statistics and go back to Step 1
    • If yes, check whether the percentage of failed requests to all requests reaches the threshold.
      • If so, trip (no longer request corresponding service)
      • If not, reset the statistics and go back to Step 1
  • 3) Once tripped, a self-repair (sleep) window opens (5 s by default). After each window, Hystrix lets one request through to the problematic service: if that call succeeds, the breaker resets and we return to step 1; if it fails, we stay in step 3
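The workflow above can be sketched as a small closed/open/half-open state machine (a simplification: real Hystrix counts failures over a rolling statistics window and uses an error percentage, not a bare consecutive-failure count):

```java
// Minimal circuit breaker: trips after consecutive failures, probes after a sleep window.
class SimpleCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long sleepWindowMillis;
    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;

    SimpleCircuitBreaker(int failureThreshold, long sleepWindowMillis) {
        this.failureThreshold = failureThreshold;
        this.sleepWindowMillis = sleepWindowMillis;
    }

    synchronized boolean allowRequest() {
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt >= sleepWindowMillis) {
            state = State.HALF_OPEN;  // sleep window elapsed: let one probe through
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {  // probe succeeded: reset the breaker
        failures = 0;
        state = State.CLOSED;
    }

    synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;       // trip (or re-trip after a failed probe)
            openedAt = System.currentTimeMillis();
        }
    }
}
```

A caller checks allowRequest() before invoking the dependency and reports the outcome via recordSuccess()/recordFailure(), mirroring the trip and self-repair steps described above.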
/** If the window sees at least 2 requests within 8 seconds and the failure rate exceeds 50%, trip the breaker; the sleep window is 3 s */
@HystrixCommand(commandProperties = {
        // statistics window length
        @HystrixProperty(name = "metrics.rollingStats.timeInMilliseconds", value = "8000"),
        // minimum number of requests in the statistics window
        @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "2"),
        // error-percentage threshold (50%) within the statistics window
        @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
        // self-repair (sleep) window length
        @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "3000")
})

The configuration we did with annotations above can also be configured in configuration files:

hystrix:
  command:
    default:
      circuitBreaker:
        # force the breaker open; if true, all requests are rejected (default false)
        forceOpen: false
        # error-percentage threshold that trips the breaker, default 50
        errorThresholdPercentage: 50
        # sleep window after tripping, default 5 seconds
        sleepWindowInMilliseconds: 3000
        # minimum number of requests in the window, default 20
        requestVolumeThreshold: 2
      execution:
        isolation:
          thread:
            # execution timeout
            timeoutInMilliseconds: 2000

Observing the breaker (trip) state via Spring Boot health checks (the microservice must expose health-check details)

management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: always

Access to health check: http://localhost:9100/actuator/health

Hystrix thread pool queue configuration example:

Once in production, a large number of repayment orders suddenly got stuck. Investigating, we found Hystrix call exceptions occurring during internal system calls. During development the core thread count had been set fairly high, so the exception never appeared; in the test environment it happened occasionally.

My first thought was to check the maxQueueSize property, but it was already set; adjusting maxQueueSize did bring some improvement after running in production for a while.

While wondering why maxQueueSize seemed not to work, I found in the official documentation that Hystrix also has a queueSizeRejectionThreshold property. It controls the queue's rejection threshold, and its default is only 5, so no matter how large we set maxQueueSize it had no effect: the two properties must be configured together.

hystrix:
  threadpool:
    default:
      coreSize: 10
      # maximum size of the BlockingQueue, default -1
      maxQueueSize: 1000
      # even before maxQueueSize is reached, requests are rejected once this threshold is exceeded; default 5
      queueSizeRejectionThreshold: 800

Correct configuration cases:

  • Keep the core thread count modest, and set the maximum queue size and the queue rejection threshold to larger values:
hystrix:
    threadpool:
        default:
            coreSize: 10
            maxQueueSize: 1500
            queueSizeRejectionThreshold: 1000

Feign remotely invokes the component

Introduction of Feign

Feign is a lightweight RESTful HTTP service client developed by Netflix for making remote invocation requests. With Feign you invoke HTTP APIs by annotating a Java interface instead of hand-writing the HTTP request code. Feign is widely used in Spring Cloud solutions.

Similar to Dubbo, the service consumer takes the service provider’s interface and invokes it as if it were a local interface method, actually making a remote request.

  • Feign makes it easy and elegant to call an HTTP API: instead of concatenating URLs and calling the restTemplate API, using Feign in Spring Cloud is very simple: create an interface (on the consumer/caller side), add some annotations to it, and the code is done
  • SpringCloud has enhanced Feign to support SpringMVC annotations (OpenFeign)

Essence: Feign encapsulates the HTTP call flow in a way that better fits interface-oriented programming habits, similar to a Dubbo service call

Feign Configures the application

Create an interface (add annotations) in the service caller project (consumption)

Effect Feign = RestTemplate+Ribbon+Hystrix

  • Introduce the Feign dependency in the service consumer project (the page static microservice), or in its parent project
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
  • The service consumer project (page static microservice) startup class adds Feign support using the @EnableFeignClients annotation
@SpringBootApplication
@EnableDiscoveryClient // Enable service discovery
@EnableFeignClients // Enable Feign
public class PageApplication {

    public static void main(String[] args) {
        SpringApplication.run(PageApplication.class, args);
    }
}

Note: remove the Hystrix circuit breaker annotation @EnableCircuitBreaker at this point, along with its imported dependency, because Feign imports it automatically

  • Create the Feign interface in the consumer microservice
@FeignClient(name = "lagou-service-product")
public interface ProductFeign {

    /**
     * Get product information by id
     * @param id
     * @return
     */
    @RequestMapping("/product/query/{id}")
    public Products query(@PathVariable Integer id);

    /**
     * Get the port number
     * @return
     */
    @RequestMapping("/server/query")
    public String findServerPort();

}

Note:

1) The name attribute of the @FeignClient annotation specifies the name of the service provider to invoke, consistent with spring.application.name in the service provider's YML file

2) The methods in the interface mirror the handler methods in the remote service provider's Controller (they act only as local calls), so parameter binding can use @PathVariable, @RequestParam, @RequestHeader, etc. This is OpenFeign's support for SpringMVC annotations, but note that the value attribute must be set, otherwise an exception will be thrown

3) With @FeignClient(name = "lagou-service-product"), a given name can only appear once in the consumer microservice (in earlier versions, repeating the same name in multiple @FeignClient annotations did not raise an error), so it is best to define all calls to one microservice in a single Feign interface.
  • Modify PageController original call method
@RestController
@RequestMapping("/page")
public class PageController {
    @Autowired
    private ProductFeign productFeign;
    @RequestMapping("/getData/{id}")
    public Products findDataById(@PathVariable Integer id) {
        return productFeign.query(id);
    }
    @RequestMapping("/getPort")
    public String getProductServerPort() {
        return productFeign.findServerPort();
    }
}

Feign support for load balancing

Feign already integrates the Ribbon dependency and auto-configuration, so there is no need to introduce additional dependencies. You can configure the Ribbon globally via ribbon.xx, or per service via <service name>.ribbon.xx.
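A hedged sketch of the two styles (the timeout values are illustrative; lagou-service-product is the provider name used elsewhere in this document):

```yaml
# Global Ribbon configuration: applies to every service called through the Ribbon
ribbon:
    ReadTimeout: 3000

# Per-service configuration: overrides the global settings for this service only
lagou-service-product:
    ribbon:
        ReadTimeout: 5000
```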

Feign's default request processing timeout is 1s. Sometimes our services need a certain amount of time to execute, so the request processing timeout needs to be adjusted.

lagou-service-product:
    ribbon:
        # Request processing timeout
        ReadTimeout: 5000
        # Retry all operations
        OkToRetryOnAllOperations: true
        #### With the configuration above, when a request fails, the current instance is
        #### retried first (the number of retries is set by MaxAutoRetries); if it still
        #### fails, another instance is tried (the number of instance switches is set by
        #### MaxAutoRetriesNextServer); if it still fails, a failure message is returned.
        # Retries on the currently selected instance, not counting the first call
        MaxAutoRetries: 0
        # Retries when switching instances
        MaxAutoRetriesNextServer: 0
        # Load balancing strategy
        NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RoundRobinRule

Feign support for fuses

1) Enable Feign support for fuses in the Feign client project profile (application.yml)

feign:
    hystrix:
        enabled: true

The Feign timeout setting is actually the Ribbon timeout setting described above

Hystrix timeout settings (configured the same way as the earlier Hystrix settings)

Note:

1) After Hystrix is enabled, all methods in Feign are managed by it, and any problem is handled by the corresponding fallback logic

2) Regarding timeouts, there are currently two timeout settings (Feign/Ribbon and Hystrix); the circuit breaker acts on the smaller of the two, that is, the fallback degradation logic is entered once processing time exceeds the smaller timeout

hystrix:
    command:
        default:
            circuitBreaker:
                # Force the circuit breaker open; if true, all requests are rejected. Default false
                forceOpen: false
                # Error ratio threshold that trips the breaker; default 50%
                errorThresholdPercentage: 50
                # Sleep time after the breaker trips; default 5 seconds
                sleepWindowInMilliseconds: 5000
                # Minimum request volume in the rolling window; default 20
                requestVolumeThreshold: 2
            execution:
                isolation:
                    thread:
                        # Execution timeout
                        timeoutInMilliseconds: 5000

2) Custom fallback processing class (it must implement the FeignClient interface)

package com.lagou.page.feign.fallback;

import com.lagou.common.pojo.Products;
import com.lagou.page.feign.ProductFeign;
import org.springframework.stereotype.Component;

@Component
public class ProductFeignFallBack implements ProductFeign {
    @Override
    public Products query(Integer id) {
        return null;
    }

    @Override
    public String findServerPort() {
        return "1";
    }
}
@FeignClient(name = "lagou-service-product",fallback = ProductFeignFallBack.class)
public interface ProductFeign {
    /**
     * Query the commodity object by commodity ID
     * @param id
     * @return
     */
    @GetMapping("/product/query/{id}")
    public Products queryById(@PathVariable Integer id);

    @GetMapping("/service/port")
    public String getPort();
}

Feign’s support for request compression and response compression

Feign supports GZIP compression of requests and responses to reduce performance losses during communication. To enable request and response compression, use the following parameters:

feign:
    hystrix:
        enabled: true
    # Enable request and response compression
    compression:
        request:
            enabled: true
            # Data types to compress; these are the default values
            mime-types: text/html,application/xml,application/json
            # Lower size limit that triggers compression; this is also the default
            min-request-size: 2048
        response:
            enabled: true # this function is disabled by default

GateWay GateWay component

Gateway: An important part of the microservices architecture

The LAN has the concept of gateway. The LAN receives or sends data through this gateway. For example, when Vmware virtual machine software is used to build virtual machine clusters, we often need to select an IP address in the IP segment as the gateway address.

The GateWay we will learn is Spring Cloud GateWay (just one of many gateway solutions)

Introduction of GateWay

Spring Cloud GateWay is a new Spring Cloud project that aims to replace Netflix Zuul. It is built on Spring 5.0 + Spring Boot 2.0 + WebFlux (a Reactor-model, asynchronous non-blocking stack based on the high-performance reactive communication framework Netty), and its performance is higher than Zuul's: in official tests, GateWay achieved 1.6 times the throughput of Zuul. It aims to provide a simple and effective unified API route management method for microservice architectures.

Spring Cloud GateWay not only provides a unified routing mode (reverse proxy), but also provides basic GateWay functions based on the Filter chain, such as authentication, traffic control, fusing, path rewriting, log monitoring, etc.

The location of the gateway in the architecture

GateWay Core Concepts

Spring Cloud GateWay is asynchronous non-blocking by nature, based on the Reactor model (synchronous non-blocking I/O multiplexing mechanism)

A request arrives -> the gateway matches it against certain conditions -> then forwards the request to the specified service address; in the process, we can apply some fine-grained control (rate limiting, logging, whitelisting)

  • Route: the most basic part of the gateway and its basic unit of work. A route consists of an ID, a destination URL (the address the route finally points to), a set of assertions (matching criteria), and filters (fine-grained control). If the assertions are true, the route is matched.
  • Predicates: modeled on Java 8's java.util.function.Predicate. Developers can match any content in the HTTP request (request headers, request parameters, etc., similar to nginx location matching); if the assertions match, the request is routed.
  • Filter: a standard Spring WebFilter; filters can execute business logic before or after a request.
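The Predicate concept can be seen in plain Java. This is a minimal sketch, unrelated to GateWay's internals; the map standing in for a request is an assumption for illustration. Each condition is a java.util.function.Predicate, and a route matches when the composed predicate returns true:

```java
import java.util.Map;
import java.util.function.Predicate;

public class RoutePredicateDemo {
    public static void main(String[] args) {
        // Each predicate inspects part of the "request" (a map stands in for headers/path here);
        // a route matches when ALL of its predicates return true, i.e. plain Predicate composition.
        Predicate<Map<String, String>> methodIsGet =
                req -> "GET".equals(req.get("method"));
        Predicate<Map<String, String>> pathIsProduct =
                req -> req.getOrDefault("path", "").startsWith("/product/");

        Predicate<Map<String, String>> route = methodIsGet.and(pathIsProduct);

        System.out.println(route.test(Map.of("method", "GET", "path", "/product/1")));  // true
        System.out.println(route.test(Map.of("method", "POST", "path", "/product/1"))); // false
    }
}
```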

How the GateWay works

Spring official introduction:

The client sends a request to Spring Cloud GateWay, which finds the route matching the request in the GateWay Handler Mapping and sends it to the GateWay Web Handler. The Handler then sends the request through the specified filter chain to the actual service to execute the business logic, and then returns. The filters are drawn separated by dashed lines because a filter may execute business logic before ("pre") or after ("post") the proxy request is sent.

Pre filters can perform parameter verification, permission verification, traffic monitoring, log output, and protocol conversion; post filters can modify response content and response headers, output logs, and monitor traffic.

GateWay Description of routing rules

Spring Cloud GateWay builds in many Predicates for us, implementing various route matching rules (by header, request parameter, etc.) to match the corresponding route.

Match after point in time

spring:
    cloud:
        gateway:
            routes:
            - id: after_route
              uri: https://example.org
              predicates:
              - After=2017-01-20T17:42:47.789-07:00[America/Denver]

Match before time point

spring:
    cloud:
        gateway:
            routes:
            - id: before_route
              uri: https://example.org
              predicates:
              - Before=2017-01-20T17:42:47.789-07:00[America/Denver]

Time interval matching

spring:
    cloud:
        gateway:
            routes:
            - id: between_route
              uri: https://example.org
              predicates:
              - Between=2017-01-20T17:42:47.789-07:00[America/Denver], 2017-01-21T17:42:47.789-07:00[America/Denver]

The specified Cookie matches the specified regular expression

spring:
    cloud:
        gateway:
            routes:
            - id: cookie_route
              uri: https://example.org
              predicates:
              - Cookie=chocolate, ch.p

The specified Header matches the specified regular expression

spring:
    cloud:
        gateway:
            routes:
            - id: header_route
              uri: https://example.org
              predicates:
              - Header=X-Request-Id, \d+

Request Host to match the specified value

spring:
    cloud:
        gateway:
            routes:
            - id: host_route
              uri: https://example.org
              predicates:
              - Host=**.somehost.org,**.anotherhost.org

Request Method Matches the specified request Method

spring:
    cloud:
        gateway:
            routes:
            - id: method_route
              uri: https://example.org
              predicates:
              - Method=GET,POST

Request a path regex match

spring:
    cloud:
        gateway:
            routes:
            - id: path_route
              uri: https://example.org
              predicates:
              - Path=/red/{segment},/blue/{segment}

The request contains a parameter

spring:
    cloud:
        gateway:
            routes:
            - id: query_route
              uri: https://example.org
              predicates:
              - Query=green

The request contains a parameter whose value matches the regular expression

spring:
    cloud:
        gateway:
            routes:
            - id: query_route
              uri: https://example.org
              predicates:
              - Query=red, gree.

Remote address matching

spring:
    cloud:
        gateway:
            routes:
            - id: remoteaddr_route
              uri: https://example.org
              predicates:
              - RemoteAddr=192.168.1.1/24

GateWay Dynamic route description

GateWay supports automatic retrieval and access of a list of services from a registry, known as dynamic routing

The implementation steps are as follows

1) Add registry client dependencies to POM.xml (because eureka client has been introduced to get registry service list)

2) Dynamic routing configuration

Note: when setting up dynamic routes, the URI starts with lb:// (lb means the service is looked up from the registry), followed by the name of the service to forward to
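A hedged example of such a dynamic route (the route id and path are illustrative, reusing the product service name from earlier sections):

```yaml
spring:
    cloud:
        gateway:
            routes:
            - id: product-service-route
              # lb:// + the service name registered in the registry;
              # the gateway resolves a concrete instance from the service list
              uri: lb://lagou-service-product
              predicates:
              - Path=/product/**
```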

GateWay filter

GateWay Filter Introduction

In terms of the filter life cycle (the point at which it takes effect), there are two main kinds, pre and post:

Timing point  Role
pre   This filter is invoked before the request is routed. We can use it to authenticate, select the target microservice in the cluster, log debugging information, and so on.
post  This filter is executed after routing to the microservice. Such filters can add standard HTTP headers to responses, collect statistics and metrics, send the microservice's response to the client, and so on.

From the perspective of filter types, Spring Cloud GateWay filters are divided into GateWayFilter and GlobalFilter

Filter type scope
GateWayFilter Apply to a single route
GlobalFilter Apply to all routes

For example, a GateWayFilter can strip a prefix segment from the URL before the route is forwarded

predicates:
- Path=/product/**
filters:
- StripPrefix=1

Note: GlobalFilter is the type of filter programmers use most often, and we'll focus on it

Customize global Filter to implement IP access restriction (blacklist and whitelist)

When a request comes in, determine the IP address of the client that sent it; if it is in the blacklist, deny access. By implementing the GlobalFilter interface in a custom GateWay global filter, we can implement features such as blacklists, whitelists, and rate limiting.

package com.lagou.gateway;

import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.HttpStatus;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

import java.util.ArrayList;
import java.util.List;

/**
 * Defines a global filter that takes effect for all routes
 */
@Slf4j
@Component // Let the container scan to register
public class BlackListFilter implements GlobalFilter.Ordered {

    // Simulate blacklist (actually can go to database or redis query)
    private static List<String> blackList = new ArrayList<>();

    static {
        blackList.add("0:0:0:0:0:0:0:1"); // Simulate the local address
        blackList.add("127.0.0.1");
    }

    /**
     * Filter core method
     * @param exchange encapsulates the context of the Request and Response objects
     * @param chain gateway filter chain (including global filters and single-route filters)
     * @return
     */
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        System.out.println("....BlackListFilter....");
        // Check whether the client is in the blacklist. If yes, deny access. If no, permit access
        // Retrieve the Request and Response objects from the context
        ServerHttpRequest request = exchange.getRequest();
        ServerHttpResponse response = exchange.getResponse();

        // Get the client IP from the Request object
        String clientIp = request.getRemoteAddress().getHostString();
        // Take clientIp to the blacklist query, if there is no access
        if(blackList.contains(clientIp)) {
            // Refuse to visit, return
            response.setStatusCode(HttpStatus.UNAUTHORIZED); // status code
            log.info("=====>IP:" + clientIp + "In the blacklist, will be denied access!");
            String data = "Request be denied!";
            DataBuffer wrap = response.bufferFactory().wrap(data.getBytes());
            return response.writeWith(Mono.just(wrap));
        }

        // Make a valid request, release, and perform subsequent filters
        return chain.filter(exchange);
    }

    /**
     * The return value indicates the order (priority) of the current filter; the smaller the value, the higher the priority
     * @return
     */
    @Override
    public int getOrder() {
        return 0;
    }
}

GateWay high availability

GateWay is a very core component; if it fails, no requests can be routed, so we need to make the GateWay highly available.

The high availability of GateWay is simple: multiple GateWay instances can be started to achieve high availability, and load balancing devices such as Nginx can be used upstream of GateWay for load forwarding to achieve high availability.

Start multiple GateWay instances (let’s say two, one port 9002 and one port 9003), and all that is left is to use Nginx and others to complete the load proxy. The following is an example:

upstream gateway {
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}
location / {
    proxy_pass http://gateway;
}

SpringCloudConfig Distributed configuration center

Application scenario of the distributed configuration center

Often, we use configuration files to manage some configuration information, such as application.yml

In a monolithic application architecture, managing and maintaining configuration information is not particularly troublesome; manual operation is fine, because there is only one project.

In a microservice architecture, there may be many microservices in our distributed cluster environment, and it is impossible to modify configurations one by one and then restart each service. In certain scenarios we also need to adjust configuration dynamically at runtime, for example dynamically adjusting the data source connection pool size based on each microservice's load, and we want microservices to update automatically when configuration changes.

The scenario is summarized as follows:

1) Centralized configuration management: a microservice architecture may have hundreds or thousands of services, so centralized configuration management is important (modify once, take effect everywhere)
2) Different environments, different configurations: for example, the data source configuration differs across environments (development dev, test test, production prod)
3) Dynamic adjustment at runtime: for example, the data source connection pool size can be adjusted dynamically according to each microservice's load
4) Automatic updates: when configuration content changes, the microservice can update its configuration automatically

Then we need centralized management of configuration files, which is the role of a distributed configuration center.

SpringCloudConfig

Introduction of the Config

Spring Cloud Config is a distributed configuration management solution consisting of a Server side and a Client side.

  • Server: provides storage of configuration files, exposes configuration file content through an interface, and is easily embedded in a Spring Boot application with the @EnableConfigServer annotation
  • Client: Obtains configuration data through the interface and initializes the application

Second generation of SpringCloud core components (SCA)

SpringCloud is a collection of several frameworks, including spring-Cloud-Config, Spring-Cloud-Bus and nearly 20 sub-projects, It provides solutions in the fields of service governance, service gateway, intelligent routing, load balancing, circuit breaker, monitoring and tracking, distributed message queue, configuration management and so on. Through the Spring Boot style encapsulation, Spring Cloud shields the complex configuration and implementation principles, and finally gives developers a simple and easy to understand and deploy distributed system development kit. In general, Spring Cloud contains the following components, which are primarily Netflix open source.

Like Spring Cloud, Spring Cloud Alibaba is a microservices solution that contains the necessary components for developing distributed application microservices, making it easy for developers to use these components to develop distributed application services through the Spring Cloud programming model. Relying on Spring Cloud Alibaba, you only need to add some annotations and a little configuration, you can plug Spring Cloud application into Ali Micro service solution, and quickly build distributed application system through Ali middleware.

Ali Open Source Components

Nacos: A dynamic service discovery, configuration management, and service management platform that makes it easier to build cloud-native applications.

Sentinel: Take traffic as the entry point to protect the stability of services from multiple dimensions, such as flow control, fuse downgrading and system load protection.

RocketMQ: An open source distributed messaging system based on highly available distributed clustering technology that provides low-latency, highly reliable message publishing and subscription services.

Dubbo: needless to say, a high-performance Java RPC framework very widely used in China.

Seata: Alibaba open source product, an easy-to-use high-performance microservices distributed transaction solution.

Arthas: An open source Java dynamic tracking tool based on bytecode enhancement technology.

Ali Commercial Components

As a commercial company, Alibaba launched Spring Cloud Alibaba largely to help promote its own cloud offerings by engaging the developer ecosystem. Commercial motives aside, the overall ease of use and stability of Ali's commercial components is still very high.

Alibaba Cloud ACM: an application configuration center that centrally manages and pushes application configurations in a distributed architecture environment.

Alibaba Cloud OSS: Alibaba Cloud Object Storage Service (OSS) is a Cloud Storage Service provided by Alibaba Cloud.

Alibaba Cloud SchedulerX: a distributed task scheduling product developed by Alibaba Middleware team, which provides second-level, precise timing (based on Cron expression) task scheduling service.

Integrate SpringCloud components

Spring Cloud Alibaba is a complete set of microservice solution components. Relying only on Alibaba's current open source components is not enough; it also integrates existing community components, so Spring Cloud Alibaba can integrate gateway components such as Zuul and GateWay, as well as Ribbon, OpenFeign, and other components.

Nacos service registry and configuration center

Nacos introduction

Nacos (Dynamic Naming and Configuration Service) is an open source platform of Alibaba for Service discovery, Configuration management, and Service management in micro-service architecture.

Nacos is the combination of registry + configuration center (Nacos=Eureka + Config + Bus)

Official site: nacos.io. Download address: github.com/alibaba/Nac…

Nacos features

  • Service discovery and health check
  • Dynamic configuration Management
  • Dynamic DNS Service
  • Service and metadata management (from a management platform perspective, Nacos also has a UI page where you can see registered services and their instance information (metadata), etc.); dynamic service weighting and graceful service degradation can also be done

Nacos singleton service deployment

  • Download and unzip the package, then run the startup command (we used the recent stable version nacos-server-1.1.0.tar.gz)
Linux/Mac: sh startup.sh -m standalone
Windows: cmd startup.cmd -m standalone
  • Visit nacos console: http://127.0.0.1:8848/nacos/#/login or http://127.0.0.1:8848/nacos/index.html (the default port 8848, account and password nacos/nacos)

Microservices are registered with Nacos

  1. Introduce SCA dependencies in the parent POM
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Greenwich.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <!-- SCA -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-dependencies</artifactId>
            <version>2.1.0.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
  2. When the Nacos client dependency is introduced in each microservice project, the eureka-client dependency must be removed
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
  3. Modify application.yml to add the Nacos configuration information

Delete the configurations related to calling config and Eureka from the YML file; otherwise, the startup fails

spring:
    cloud:
        nacos:
            discovery:
                server-addr: 127.0.0.1:8848 # nacos server address
  4. Launch the goods microservice and observe the nacos console

Protection threshold: can be set to a floating point number between 0 and 1, which is actually a scale value (current health instances of the service/total current instances of the service)

Scenario:

In the normal flow, Nacos is a service registry, and service consumers obtain the available instance information of a service from Nacos. Service instances can be healthy or unhealthy, and Nacos returns only healthy instances to consumers. In some high-concurrency, heavy-traffic scenarios this causes problems.

Suppose service A has 100 instances, 98 of which are unhealthy and only 2 healthy. If Nacos returns only the 2 healthy instances, all subsequent consumer requests will be distributed to those 2 instances. Under heavy traffic the 2 healthy instances cannot cope either, so the whole of service A fails; upstream microservices may also crash, forming an avalanche effect.

The meaning of the protection threshold is

When the ratio of healthy instances to total instances of service A falls below the protection threshold, it means there are really too few healthy instances, and the protection threshold is triggered (state true)

Nacos will provide all the instance information (healthy + unhealthy) of the service to the consumer, who may access an unhealthy instance and fail, but this is better than causing an avalanche, sacrificing some requests and ensuring that the entire system is available.

Note: Ali often adjusts this protection threshold parameter when using NACOS internally.
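The selection rule can be sketched in a few lines of Java. This is an illustrative model of the behavior described above, not Nacos source code; the addresses and the 0.5 threshold are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class ProtectionThresholdDemo {

    // Sketch of the rule: if healthy/total < threshold, return ALL instances
    // (healthy + unhealthy) to spread the load; otherwise return only healthy ones.
    static List<String> selectInstances(List<String> healthy,
                                        List<String> unhealthy,
                                        double protectionThreshold) {
        int total = healthy.size() + unhealthy.size();
        double healthyRatio = total == 0 ? 0 : (double) healthy.size() / total;
        if (healthyRatio < protectionThreshold) {
            // Threshold triggered: sacrifice some requests, avoid an avalanche
            List<String> all = new ArrayList<>(healthy);
            all.addAll(unhealthy);
            return all;
        }
        return healthy;
    }

    public static void main(String[] args) {
        List<String> healthy = List.of("10.0.0.1", "10.0.0.2");      // 2 healthy
        List<String> unhealthy = new ArrayList<>();
        for (int i = 3; i <= 100; i++) unhealthy.add("10.0.0." + i); // 98 unhealthy

        // 2/100 = 0.02 < 0.5, so the protection threshold triggers
        System.out.println(selectInstances(healthy, unhealthy, 0.5).size()); // prints 100
    }
}
```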

Load balancing

When the Nacos client is introduced, it brings in the Ribbon dependency package; when we use OpenFeign, we also introduce the Ribbon dependency. Ribbon and Hystrix can be configured in the original way.

Here we register a second commodity microservice instance on port 9001 with Nacos to test load balancing, which we can observe from the console.

Activation:

lagou-server-page

lagou-server-product-9000

lagou-server-product-9001

Testing:

http://localhost:9100/page/getPort

Nacos Data Model (Domain Model)

Namespace, Group, and cluster are used to categorize and manage services and configuration files. After categorizing services and configuration files, certain effects, such as isolation, can be achieved

For example, in the case of services, services in different namespaces cannot call each other

Namespace: A Namespace that isolates different environments, such as development, test, and production

Group: grouping of several services or configuration sets, usually one system into a Group (pull recruitment, pull headhunting, pull education)

Service: a Service, such as a commodity microservice

DataId: A configuration set or you can think of it as a configuration file

Namespace + Group + Service is similar to the GAV coordinates in Maven, where the GAV coordinates are used to lock the Jar, and the Namespace + Group + Service is used to lock the Service

Namespace + Group + DataId is similar to the GAV coordinates in Maven, where the GAV coordinates lock a Jar and Namespace + Group + DataId locks a configuration file

Best practices

Nacos abstracts the concepts of Namespace, Group, Service, DataId, etc. What exactly stands for depends on how it is used (very flexible)

Concept  Description
Namespace Represents different environments, such as development dev, test Test, production environment PROD
Group It stands for a project, like the Lagooyun project
Service Specific XXX service in a project
DataId A specific XXX configuration file in a project

Nacos configuration center

Previous: Spring Cloud Config + Bus (automatic update of configuration)

1) Add a configuration file to Github

2) Create the Config Server configuration center – > download the configuration information from Github

3) Configure Config Client in the microservice (where the configuration information is used) – > ConfigServer to obtain the configuration information

With Nacos, distributed configuration is much easier

Github is no longer needed (configuration information is directly configured in Nacos Server), Bus is no longer needed (dynamic refresh can still be done)

The steps are as follows

1. Go to the Nacos Server and add the configuration information

2. Transform the specific microservice into a Nacos Config Client, which can obtain configuration information from the Nacos Server

Add the configuration in Nacos Server

Enable Nacos configuration management in microservices

1) Add dependencies

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>

2) How to lock configuration files in Nacos Server (dataId) in microservices

Lock configuration files with Namespace + Group + dataId. If Namespace is not specified, the default is public; if Group is not specified, the default is DEFAULT_GROUP

The full format of the dataId is as follows

${prefix}-${spring.profile.active}.${file-extension}
  • prefix defaults to the value of spring.application.name, but can be configured via spring.cloud.nacos.config.prefix.
  • spring.profiles.active is the profile of the current environment. Note: when spring.profiles.active is empty, the hyphen - is dropped as well, and the dataId format becomes ${prefix}.${file-extension}.
  • file-extension is the data format of the configuration content and can be configured via spring.cloud.nacos.config.file-extension. Currently only properties and yaml types are supported.
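The concatenation rule above can be sketched as a small helper. This is an illustration only: `buildDataId` is a hypothetical name, not part of any Nacos API.

```java
public class DataIdBuilder {
    /**
     * Build the Nacos dataId from its parts, mirroring the rule
     * ${prefix}-${spring.profiles.active}.${file-extension}.
     * When the active profile is empty, the hyphen is dropped as well.
     */
    static String buildDataId(String prefix, String activeProfile, String fileExtension) {
        if (activeProfile == null || activeProfile.isEmpty()) {
            return prefix + "." + fileExtension;
        }
        return prefix + "-" + activeProfile + "." + fileExtension;
    }

    public static void main(String[] args) {
        // prefix defaults to spring.application.name
        System.out.println(buildDataId("lagou-service-page", "dev", "yaml"));  // lagou-service-page-dev.yaml
        System.out.println(buildDataId("lagou-service-page", null, "yaml"));   // lagou-service-page.yaml
    }
}
```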
spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        namespace: 26AE1708-28DE-4f63-8005-480c48ED6510 # namespace ID
        group: DEFAULT_GROUP # can be omitted when the default group is used
        file-extension: yaml

3) Automatic configuration update via the Spring Cloud native annotation @RefreshScope

package com.lagou.page.controller;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/config")
@RefreshScope // Automatically refresh
public class ConfigClientController {

    @Value("${lagou.message}")
    private String message;

    @Value("${database.type}")
    private String dbType;

    @RequestMapping("/remote")
    public String findRemoteConfig(a) {
        return message + ""+ dbType; }}Copy the code

4) Thinking: what if a microservice wants to obtain the configuration of multiple dataIds from the Nacos Server configuration center? This is possible: configure extended dataIds.

spring:
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        namespace: 07137f0a-BF66-424b-b910-20ece612395A # namespace ID
        group: DEFAULT_GROUP
        file-extension: yaml
        ext-config:
          - data-id: abc.yaml
            group: DEFAULT_GROUP
            refresh: true # enable dynamic refresh of this extended dataId
          - data-id: def.yaml
            group: DEFAULT_GROUP
            refresh: true # enable dynamic refresh of this extended dataId

SCA Sentinel: traffic defense for distributed systems

Sentinel introduces

Sentinel is a flow-control and circuit-breaking (degradation) component for cloud-native microservices.

It replaces Hystrix to address the same problems: service avalanche, service degradation, circuit breaking, and rate limiting

Hystrix:

Service consumer (static microservice) -> service provider (commodity microservice)

Introduce Hystrix in the caller

1) You had to build the monitoring dashboard yourself

2) There is no UI for configuring circuit breaking, service degradation, and so on (settings are made with @HystrixCommand, which is code-intrusive)

Sentinel:

  • 1) A standalone deployable Dashboard/console component (basically a jar file that can be run directly)
  • 2) Less code to develop: fine-grained control is achieved through UI configuration

Sentinel is divided into two parts:

  • Core library: the Java client does not depend on any framework/library, runs in all Java runtime environments, and has good support for frameworks such as Dubbo and Spring Cloud.
  • Console (Dashboard): based on Spring Boot; the packaged jar runs directly and needs no additional application container such as Tomcat.

Sentinel has the following characteristics:

  • Rich application scenarios: Sentinel has supported the core scenarios of Alibaba's Double Eleven promotions for nearly 10 years, such as seckill/flash sales (i.e. controlling burst traffic within system capacity), message peak shaving and valley filling, cluster flow control, and real-time circuit breaking of unavailable downstream applications.
  • Complete real-time monitoring: Sentinel also provides real-time monitoring capabilities. From the console you can see second-level metrics for a single machine, or even an aggregated view of a cluster of fewer than 500 machines.
  • Extensive Open source ecosystem: Sentinel provides out-of-the-box integration modules with other open source frameworks/libraries, such as SpringCloud and Dubbo. You can quickly access Sentinel by introducing the appropriate dependencies and simple configuration.
  • Sophisticated SPI extension points: Sentinel provides an easy-to-use, sophisticated SPI extension interface. You can quickly customize the logic by implementing an extension interface. For example, customize rule management and adapt dynamic data sources.

Key features of Sentinel:

Sentinel’s open source ecology:

Sentinel deployment

Download: github.com/alibaba/Sen… (we use v1.7.1)

Start: java -jar sentinel-dashboard-1.7.1.jar &

Username/password: sentinel/sentinel

Service transformation

In our existing business scenario, the static microservice calls the commodity microservice, and we apply circuit breaking, degradation, and other controls in the static microservice. So we transform the static microservice and introduce the Sentinel core package.

In order not to pollute the previous code, copy the static microservice as lagou-service-page-901-sentinel

  • pom.xml: introduce the dependency
<!-- Sentinel core dependency -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>
  • application.yml changes (configure the Sentinel dashboard; the exposed endpoints still need to be there; delete the original Hystrix configuration and the original OpenFeign degrade configuration)
server:
  port: 9101
spring:
  application:
    name: lagou-service-page
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/lagou?useUnicode=true&characterEncoding=utf8&serverTimezone=UTC
    username: root
    password: wu7787879
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        namespace: 26AE1708-28DE-4f63-8005-480c48ED6510 # namespace ID
        group: DEFAULT_GROUP
        file-extension: yaml
    sentinel:
      transport:
        dashboard: 127.0.0.1:8080 # Sentinel dashboard address
        port: 8719 # local port for communicating with the dashboard
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: always

After the above configuration, start the static microservice and use Sentinel to monitor it

At this point we find that nothing has changed on the console. Because of lazy loading, we just need to issue a request to trigger monitoring.

Sentinel key Concepts

  • Resource: can be anything in a Java application, for example a service provided by the application, another application it invokes, or even a piece of code. The API endpoints we request are resources.
  • Rule: rules set around the real-time state of a resource, including flow control rules, circuit-breaking/degrade rules, and system protection rules. All rules can be adjusted dynamically in real time.

Sentinel traffic rule module

The concurrent capacity of a system is limited. For example, if system A can handle only a certain QPS and too many requests come in, then A should apply flow control

Resource name: the default request path

For source: Sentinel can limit traffic by caller. Fill in the microservice name; the default value default means no distinction by source

Threshold Type/Single-machine threshold

QPS (number of requests per second): traffic limiting occurs when the QPS of calls to this resource reaches the threshold

Threads: traffic limiting occurs when the number of threads invoking this resource reaches the threshold. (Threads handle requests; if the business logic takes a long time to execute and peak traffic arrives, many thread resources are consumed and pile up, which can eventually make the service unavailable, then upstream services unavailable, and ultimately cause a service avalanche.)

Cluster or not: Specifies whether to restrict traffic in a cluster

Flow control mode:

Direct: When the resource invocation reaches the flow limiting condition, the resource is directly restricted

Association: when calls to the associated resource reach the threshold, the resource itself is limited

Link: Only traffic on a specified link is recorded

Flow control effect:

Fast failure: Fails directly and throws an exception

Warm Up: based on the cold load factor (default 3), traffic starts at threshold / coldFactor and climbs to the full QPS threshold over the warm-up period

Queue waiting: requests queue up and pass at a uniform rate. The threshold type must be QPS, otherwise this setting is invalid

Associated limiting of flow control mode

When calls to the associated resource reach the threshold, the resource itself is limited. For example, the user registration interface needs to invoke the ID verification interface; if requests to the ID verification interface reach the threshold, association limiting is used to throttle the user registration interface.

package com.lagou.edu.controller;

import com.lagou.edu.controller.service.ResumeServiceFeignClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/user")
public class UserController {

    /**
     * User registration interface
     * @return
     */
    @GetMapping("/register")
    public String register() {
        System.out.println("Register success!");
        return "Register success!";
    }

    /**
     * Verify registered ID card interface (needs to call public security household registration resources)
     * @return
     */
    @GetMapping("/validateID")
    public String validateID() {
        System.out.println("validateID");
        return "ValidateID success!";
    }
}

Simulating intensive requests to the /user/validateID interface, we can see that the /user/register interface is also limited

Link current limiting in flow control mode

Link refers to the request link (call chains: A -> B -> C, D -> E -> C)

In link mode, the system controls the traffic of the invocation link where the resource resides. You need to configure the entry resource in the rule, which is the context name of the link entry for this invocation.

A typical call tree is shown below:

The requests from Entrance1 and Entrance2 in the figure above both invoke the resource NodeA. Sentinel allows limiting the flow of resources based only on the statistics of a call entry. For example, in link mode, setting the entry resource to Entrance1 means that only calls from Entrance1 are recorded in NodeA’s traffic limiting statistics, regardless of calls coming through Entrance2.

Warm up of flow control effect

When the system has been idle for a long time and traffic suddenly increases, pulling the system directly up to a high watermark may instantly overwhelm it, e.g. the seckill module of an e-commerce site.

Through the Warm Up mode, the traffic slowly increases and reaches the set value of the request processing rate after the preset warmup time.

By default, the Warm Up mode starts at 1/3 of the set QPS threshold and slowly increases to the QPS set value.
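Under the stated defaults (cold load factor 3), the warm-up starting point can be computed with a quick sketch. `warmUpStartQps` is an illustrative helper, not Sentinel's actual implementation.

```java
public class WarmUpSketch {
    /**
     * Starting QPS of Warm Up mode: the configured threshold divided by
     * the cold load factor (default 3). Traffic is then allowed to climb
     * from this value to the full threshold over the warm-up period.
     */
    static double warmUpStartQps(double qpsThreshold, int coldFactor) {
        return qpsThreshold / coldFactor;
    }

    public static void main(String[] args) {
        // With a threshold of 300 QPS and the default cold factor of 3,
        // the system initially admits about 100 QPS.
        System.out.println(warmUpStartQps(300, 3)); // 100.0
    }
}
```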

Queuing for flow control effect

In queuing mode, the interval for passing requests is strictly controlled. That is, the requests are passed at a uniform speed and part of the requests are queued. This mode is usually applied to scenarios such as peak clipping and valley filling in message queues. You need to set the timeout period. If the waiting time exceeds the timeout period, the request is rejected.

When a large amount of traffic arrives, requests are not rejected outright; instead they are queued and passed through (processed) at a uniform rate. Requests that can wait are processed; those that cannot (waiting time > timeout) are rejected.

For example, if QPS is set to 5, a request is processed every 200 ms, and additional requests are queued. The timeout period represents the maximum queuing time. Requests exceeding the maximum queuing time will be rejected. In queuing mode, the QPS value should not exceed 1000 (request interval: 1 ms).
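The arithmetic in this paragraph can be checked with a tiny sketch; the helper names here are illustrative only, not part of Sentinel's API.

```java
public class UniformRateSketch {
    /** Interval between admitted requests in milliseconds for a given QPS. */
    static long intervalMs(int qps) {
        return 1000L / qps;
    }

    /** In queuing mode, a request is rejected when its expected wait exceeds the timeout. */
    static boolean admitted(long expectedWaitMs, long timeoutMs) {
        return expectedWaitMs <= timeoutMs;
    }

    public static void main(String[] args) {
        System.out.println(intervalMs(5));      // 200: one request every 200 ms at QPS = 5
        System.out.println(intervalMs(1000));   // 1: at QPS = 1000 the interval is already 1 ms
        System.out.println(admitted(300, 500)); // true: a 300 ms wait fits a 500 ms timeout
        System.out.println(admitted(800, 500)); // false: rejected, wait exceeds timeout
    }
}
```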

Sentinel degradation rule module

Flow control is to control the large flow from the outside, and the perspective of fuse downgrading is to deal with the internal problems.

Sentinel degradation restricts calls to a resource in the call link when it is in an unstable state (e.g., call timeouts or a rising exception ratio), so that requests fail fast and cascading errors affecting other resources are avoided. When a resource is degraded, all calls to it are automatically fused within the next degrade time window. This is in effect the Fuse (circuit breaker) strategy in Hystrix.

Unlike Hystrix, Sentinel does not let occasional requests through to probe for self-recovery: once the fuse is triggered, all requests are rejected within the time window, and the resource recovers only after the window ends.

  • RT (Average response time)

If the average response time exceeds the threshold (in ms), all calls to this method are automatically fused (throwing a DegradeException) within the next time window (in s). Note that Sentinel's default upper limit for a recorded RT is 4900 ms; anything above that is counted as 4900 ms. To change this limit, configure the startup parameter -Dcsp.sentinel.statistic.max.rt=xxx.
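The RT cap described above amounts to a simple clamp; `recordedRt` is an illustrative helper for the behavior, not Sentinel's internal code.

```java
public class RtCapSketch {
    /** Sentinel caps any recorded response time at a statistics maximum (4900 ms by default). */
    static long recordedRt(long actualRtMs, long maxRtMs) {
        return Math.min(actualRtMs, maxRtMs);
    }

    public static void main(String[] args) {
        System.out.println(recordedRt(1200, 4900)); // 1200: below the cap, recorded as-is
        System.out.println(recordedRt(8000, 4900)); // 4900: capped at the default maximum
    }
}
```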

  • Exception ratio

When requests per second for the resource >= 5 and the ratio of total exceptions per second to passes exceeds the threshold, the resource enters the degraded state, that is, within the next time window (in s), all calls to this method are automatically returned. The threshold range for the abnormal ratio is [0.0, 1.0], representing 0-100%.
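The trigger condition above can be expressed as a small decision sketch. This is illustrative logic under the stated rule, not Sentinel's internal implementation.

```java
public class ExceptionRatioSketch {
    /**
     * Degrade decision for the exception-ratio rule: the resource degrades
     * only when it sees at least 5 requests per second AND the exception
     * ratio (exceptions / requests) exceeds the configured threshold.
     */
    static boolean shouldDegrade(int requestsPerSecond, int exceptionsPerSecond, double ratioThreshold) {
        if (requestsPerSecond < 5) {
            return false; // too little traffic to judge
        }
        double ratio = (double) exceptionsPerSecond / requestsPerSecond;
        return ratio > ratioThreshold;
    }

    public static void main(String[] args) {
        System.out.println(shouldDegrade(10, 6, 0.5)); // true: 60% exceptions > 50% threshold
        System.out.println(shouldDegrade(4, 4, 0.5));  // false: fewer than 5 requests per second
        System.out.println(shouldDegrade(10, 3, 0.5)); // false: 30% is below the threshold
    }
}
```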

  • Exception count

When the number of exceptions on the resource in the last 1 minute exceeds the threshold, the circuit breaks. Note that the statistical window is one minute: if the timeWindow is less than 60 s, the resource may re-enter the circuit-breaking state right after the breaker closes, so the timeWindow should be >= 60 s.

SCA summary

SCA has actually evolved along three lines

  • First line: open-source some of the components
  • Second line: a branch maintained internally at Alibaba, used by its own business lines
  • Third line: a set deployed on the Alibaba Cloud platform, available for paid use

Strategically, SCA exists to serve Alibaba Cloud. At present the adoption and popularity of these open-source components are not high, and community activity is limited; their stability and experience still need further improvement. In actual use, Sentinel's stability and experience are better than Nacos's.