— All the mind maps are made by myself; please do not reuse them without permission — The goal is to deeply grasp the responsibility and usage of each component and to write plenty of practice code


Learning requires tranquility, and talent requires learning; without learning there is no way to broaden one's talent, and without resolve there is no way to complete one's learning


I. Microservice architecture

(Review here in conjunction with previous lessons)

  • | 【 Simple monolithic application 】
  • |-> Business grows – need cluster deployment, load balancing, cache servers, file servers, database clusters (read/write separation)
  • |--> 【 Complex monolithic application 】
  • |---> To improve efficiency and decouple – split up the business so that changes no longer affect each other
  • |----> "Vertical application architecture"
  • |-----> Inconsistent interface protocols + difficult service monitoring – use distributed RPC (Dubbo, etc.)
  • |------> "Service-Oriented Architecture (SOA)"
  • |-------> Service extraction granularity is still large + coupling between service callers and providers is still high – split applications into mini-services
  • |--------> "Microservices"

Micro service keywords & advantages

  1. Small business granularity
    • Easy function focusing
    • Each microservice can be implemented individually (Dev->Test->Deploy->Ops) to facilitate agile development
    • Easy to reuse between services + easy to assemble
  2. Service independent
    • Different microservices can be developed in different languages and are loosely coupled
  3. Lightweight communication
    • Restful

disadvantages

  1. Distributed management is difficult, and distributed tracking links are complex

Relevant concepts

  1. Service registry The service provider registers information about the services it provides (server IP and port, service access protocol, and so on) with the registry
  2. Service discovery Service consumers can obtain a relatively real-time list of services from the registry and then select a service to access based on certain policies
  3. Load balancing Spreads the request load across multiple servers to improve performance + reliability
  4. Fusing (circuit breaker protection) When a downstream service becomes slow or fails because it is overloaded, the upstream service can temporarily cut off calls to it
  5. Link tracing Logs and monitors the performance of the chain of services involved in a single request
  6. API gateway API requests all go through a unified API gateway layer, and the gateway forwards them to the backend services. Typical responsibilities:

    • routing
    • Security Protection (Identity Authentication)
    • Blacklist and whitelist (Access Control)
    • Protocol adaptation (Communication protocol adaptation + protocol verification)
    • Traffic Control (Traffic limiting)
    • Long and short link support
    • Fault tolerance (load balancing)

II. Overview of Spring Cloud

Spring Cloud takes advantage of Spring Boot's development convenience to ingeniously simplify the development of distributed-system infrastructure. Mature, practical service frameworks developed by various companies are combined and repackaged in the Spring Boot style, shielding developers from complex configuration and implementation details, and finally leaving them a distributed-system development kit that is simple to understand, easy to deploy and easy to maintain

======> Spring Cloud is a “specification” (an ordered collection of frameworks)

Part 1 – Component description

  • The first generation mainly wraps Netflix components
  • The second generation mainly wraps Alibaba components
  • Nacos can serve not only as the registry but also as the configuration center

Part 2 – Architecture

Part 3 – Comparing Spring Cloud and Dubbo

  • SCN (Spring Cloud Netflix) is currently widely used, but it is based on HTTP and is not as efficient as Dubbo
  • Dubbo's ecosystem components are not complete and it cannot provide a one-stop solution – for example, its service registration/discovery depends on Zookeeper
  • => Now the Spring Cloud ecosystem is getting better and better

III. The case ☆☆☆

Reference source

Part 1 – Requirements description

Using the functional requirement points provided by Lagou:

Two-way matching between the recruiter (R) and the applicant (C): 1. Start a scheduled task for R that pushes a certain number of Cs into R's candidate pool every day according to the "hiring criteria"; 2. During the push, check C's resume status (open/hidden) and only push open resumes. A. The [Auto-Deliver microservice] provides the "automatic delivery" function; B. The [Resume microservice] provides the "resume query" function. When A calls B, A is the service consumer and B is the service provider

Part 2 – Database preparation

MySQL 8.0 +

Resume basic information table r_resume

【 Core fields 】

  1. userId Resume owner
  2. isDefault Whether the resume is the default one
  3. isOpenResume Resume open/hidden status
DROP TABLE IF EXISTS `r_resume`;
CREATE TABLE `r_resume` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `sex` varchar(10) DEFAULT NULL COMMENT 'gender',
  `birthday` varchar(30) DEFAULT NULL COMMENT 'Date of birth',
  `work_year` varchar(100) DEFAULT NULL COMMENT 'Years of service',
  `phone` varchar(20) DEFAULT NULL COMMENT 'Mobile number',
  `email` varchar(100) DEFAULT NULL COMMENT 'email',
  `status` varchar(80) DEFAULT NULL COMMENT 'Current status',
  `resumeName` varchar(500) DEFAULT NULL COMMENT 'Resume name',
  `name` varchar(40) DEFAULT NULL,
  `createTime` datetime DEFAULT NULL COMMENT 'Creation date',
  `headPic` varchar(100) DEFAULT NULL COMMENT 'avatar',
  `isDel` int(2) DEFAULT NULL COMMENT 'Delete default value 0- Not deleted 1- Deleted',
  `updateTime` datetime DEFAULT NULL COMMENT 'Resume Update Time',
  `userId` int(11) DEFAULT NULL COMMENT 'user ID',
  `isDefault` int(2) DEFAULT NULL COMMENT 'Default resume 0- default 1- Non-default',
  `highestEducation` varchar(20) DEFAULT ' ' COMMENT 'Highest degree',
  `deliverNearByConfirm` int(2) DEFAULT '0' COMMENT 'Send attached resume confirmation 0- Confirmation required 1- No confirmation required',
  `refuseCount` int(11) NOT NULL DEFAULT '0' COMMENT 'Number of resumes rejected',
  `markCanInterviewCount` int(11) NOT NULL DEFAULT '0' COMMENT 'Marked as number of interviews available',
  `haveNoticeInterCount` int(11) NOT NULL DEFAULT '0' COMMENT 'Number of interviews notified',
  `oneWord` varchar(100) DEFAULT ' ' COMMENT 'Introduce yourself in a sentence',
  `liveCity` varchar(100) DEFAULT ' ' COMMENT 'Residential City',
  `resumeScore` int(3) DEFAULT NULL COMMENT 'Resume score',
  `userIdentity` int(1) DEFAULT '0' COMMENT 'User Identity 1- Student 2- Worker',
  `isOpenResume` int(1) DEFAULT '3' COMMENT 'Talent search - Open resume 0- Closed, 1- Open, 2- Resume does not meet posting criteria, passively closed 3- Never set open resume',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2195388 DEFAULT CHARSET=utf8;

Please refer to Appendix 1 of notes for specific data

Part 3 – Engineering environment preparation & Effect realization

The engineering environment is constructed based on Spring Boot, and the diagram is as follows:

It is transformed into the following engineering structure

“Parent module”

  1. Import the required packages into the POM

      
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.archie</groupId>
    <artifactId>resume-demo-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>service-common</module>
        <module>service-resume</module>
        <module>service-autodeliver</module>
    </modules>

    <!-- Parent project packaging is pom -->
    <packaging>pom</packaging>

    <!-- Spring Boot parent boot dependency -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.6.RELEASE</version>
    </parent>

    <dependencies>
        <!-- Web dependency -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- Log dependency -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </dependency>
        <!-- Test dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Actuator helps you monitor and manage Spring Boot apps -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <!-- Hot deployment -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <optional>true</optional>
        </dependency>
    </dependencies>


    <build>
        <plugins>
            <!-- Compiler plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>11</source>
                    <target>11</target>
                    <encoding>utf-8</encoding>
                </configuration>
            </plugin>
            <!-- Package plugin -->
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

“Submodule”

  1. The POM of the Common module introduces JPA coordinates (simple projects save coding)
<!-- Spring Data JPA -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
  1. The common module creates the POJO
public class Resume {
    
    private Long id; // primary key
    private String sex; // gender
    private String birthday; // birthday
    private String work_year; // years of work
    private String phone; // mobile number
    private String email; // email
    private String status; // current status
    private String resumeName; // resume name
    private String name; // name
    private String createTime; // creation time
    private String headPic; // avatar
    private Integer isDel; // deleted flag 0- not deleted 1- deleted
    private String updateTime; // resume update time
    private Long userId; // user ID
    private Integer isDefault; // whether it is the default resume 0- default 1- non-default
    private String highestEducation; // highest degree
    private Integer deliverNearByConfirm; // send attached resume confirmation 0- confirmation required 1- no confirmation required
    private Integer refuseCount; // number of resumes rejected
    private Integer markCanInterviewCount; // number marked as available for interview
    private Integer haveNoticeInterCount; // number of interviews notified
    private String oneWord; // introduce yourself in one sentence
    private String liveCity; // city of residence
    private Integer resumeScore; // resume score
    private Integer userIdentity; // user identity 1- student 2- worker
    private Integer isOpenResume; // talent search - open resume 0- closed, 1- open, 2- resume does not meet posting criteria, passively closed 3- never set open resume

    // getters/setters omitted

}
  1. The Resume module creates the Dao
public interface ResumeDao extends JpaRepository<Resume, Long> {}
  1. The Resume module creates the Service
public interface ResumeService {
    Resume findDefaultResumeByUserId(Long userId);
}

// ---------------------------------

@Service
public class ResumeServiceImpl implements ResumeService {
    
    @Autowired
    private ResumeDao resumeDao;
    
    @Override
    public Resume findDefaultResumeByUserId(Long userId) {
        Resume resume = new Resume();
        resume.setUserId(userId);
        // Query only the default resume
        resume.setIsDefault(1);
        Example<? extends Resume> example = Example.of(resume);
        return resumeDao.findOne(example).get();
    }
}
  1. The resume module creates the Controller
@RestController
@RequestMapping("/resume")
public class ResumeController {
    
    @Autowired
    private ResumeService resumeService;
    
    @GetMapping("/openstate/{userId}")
    public Integer findDefaultResumeState(@PathVariable Long userId) {
        Resume resume = resumeService.findDefaultResumeByUserId(userId);
        return resume.getIsOpenResume();
    }
}
  1. The Resume module creates the Main startup entry
@SpringBootApplication
@EntityScan("com.archie.dao")
public class ResumeApplication {
    
    public static void main(String[] args) {
        SpringApplication.run(ResumeApplication.class, args);
    }
}
  1. The Resume module creates the YML configuration
server:
  port: 8080
spring:
  application:
    name: lagou-service-resume
  # Database connection information
  datasource:
    username: root
    password: root
    url: jdbc:mysql://localhost:3306/springDB?useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    database: MySQL
    show-sql: true
    hibernate:
      naming:
        # Avoid converting camel-case names to snake_case
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl

  1. Autodeliver module application.yml
server:
  port: 8090
spring:
  application:
    name: lagou-service-autodeliver
  # Database connection information
  datasource:
    username: root
    password: root
    url: jdbc:mysql://localhost:3306/springdb?useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    database: MySQL
    show-sql: true
    hibernate:
      naming:
        # Avoid converting camel-case names to snake_case
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
  1. Autodeliver module Controller
@RestController
@RequestMapping("/autodeliver")
public class AutodeliverController {
    
    @Autowired
    private RestTemplate restTemplate;
    
    /**
     * http://localhost:8090/autodeliver/checkState/{userId}
     * @param userId
     * @return
     */
    @GetMapping("/checkState/{userId}")
    public Integer findResumeOpenState(@PathVariable Long userId) {
        // Invoke the remote service --> RESUME microservice interface RestTemplate --same as--> JdbcTempate
        Integer forObject = restTemplate.getForObject("http://localhost:8080/resume/openstate/" + userId, Integer.class);
        
        return forObject;
    }
}
  1. Autodeliver module Main entry
@SpringBootApplication
public class AutodeliverApplication {
    
    public static void main(String[] args) {
        SpringApplication.run(AutodeliverApplication.class, args);
    }
    
    // Use the RestTemplate template object to make a remote call
    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }
}
  1. The AutoDeliver module calls the Resume service remotely

Part 4 – Problem analysis

[Existing problems in distributed cluster environment] :

  1. In service consumers, we hard-coded the URL address into the code, which is not convenient for later maintenance
  2. The service provider has only one service, and even if the service provider forms a cluster, the service consumer needs to implement load balancing on its own
  3. Among the service consumers, the status of the service provider is unclear
  4. When a service consumer invokes a service provider, can it discover a fault in time and not throw an exception page to the user?
  5. Is there room to optimize the RestTemplate request invocation? Can you play like Dubbo?
  6. How to implement unified authentication for so many microservices?
  7. Isn’t it tedious to change the configuration file every time…

Next, we will tackle these problems one by one!

IV. First Look at the Spring Cloud Core Components (I)

Full component reference source code

Part 1 – Description of service Center

Current mainstream service centers:

  • Zookeeper

    A sub-project of Apache Hadoop that mainly solves data management problems in distributed applications: unified naming service, status synchronization service, cluster management, distributed application configuration...

    It provides node-change notification, so clients that monitor a node can be notified in time;

    Zookeeper is also highly available: as long as more than half of the nodes survive, the whole cluster is available;
  • Eureka

    An open-source Netflix subproject, integrated into Spring Cloud by Pivotal; it is a service registration and discovery component based on a RESTful API
  • Consul

    Highly available service publishing and registration software developed by HashiCorp in Go; supports multi-data-center deployment

    Uses the Raft algorithm to ensure consistency and supports health checks
  • Nacos One of the core components of Spring Cloud Alibaba, approximately equal to registry + configuration center.

“They are”

Anatomy of Zookeeper # Nuggets article #

“A Brief introduction to Eureka”

  • Eureka has two main components
    • ES (Eureka Server) provides the service discovery function
    • EC (Eureka Client) is a Java client that simplifies interaction with the Eureka Server
    • When a microservice starts, its EC registers the service's information with the ES, which stores it.
  • Eureka interaction flow

    • us-east-1c, us-east-1d, and us-east-1e indicate different zones, i.e. different server rooms
    • Each Eureka Server is a cluster
      • Application Service Registers the Service with the Eureka Server
      • Eureka Server receives registration event -> Data synchronization in cluster and partition
      • The Application Service thus has the capability to invoke the Service from Eureka Server
    • After the microservice is started, it periodically sends heartbeat (default interval is 30 seconds) to the Eureka Server to renew its information
    • Eureka Server is also Eureka Client. The service registration list is synchronized between multiple ES through replication
    • The Eureka Client caches information in the Eureka Server.

      ( Even if all Eureka Server nodes go down, the service consumer can still find the service provider using the information in the cache )

Eureka improves the system's flexibility, scalability and availability through heartbeat detection, health checks, client caching and other mechanisms

“Eureka Application and High Availability Cluster” ☆

Change from the previous demo

  1. Spring Cloud dependency is introduced in lagou-parent’s POM
    <!-- Spring Cloud is a comprehensive project with many sub-projects; import its dependency management here -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>Greenwich.RELEASE</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

  1. Dependencies are introduced in the current project POM.xml
<dependencies>
 <!-- Eureka Server dependencies -->
   <dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
   </dependency>
 
 <!-- Eureka Server needs to introduce JAXB -->
    <dependency>
        <groupId>com.sun.xml.bind</groupId>
        <artifactId>jaxb-core</artifactId>
        <version>2.2.11</version>
    </dependency>
    <dependency>
        <groupId>javax.xml.bind</groupId>
        <artifactId>jaxb-api</artifactId>
    </dependency>
    <dependency>
        <groupId>com.sun.xml.bind</groupId>
        <artifactId>jaxb-impl</artifactId>
        <version>2.2.11</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jaxb</groupId>
        <artifactId>jaxb-runtime</artifactId>
        <version>2.2.10-b140310.1920</version>
    </dependency>
    <dependency>
        <groupId>javax.activation</groupId>
        <artifactId>activation</artifactId>
        <version>1.1.1</version>
    </dependency>
    <!-- Introducing JAXB, end -->
</dependencies>

[Note] JDK 9+ no longer ships the JAXB modules, so you need to add them to the POM manually; otherwise the Eureka Server service cannot start.

  1. application.yml
# Eureka service port number
server:
  port: 8761
spring:
  application:
    name: cloud-eureka-server The application name will be used as the service name in Eureka
# Eureka client configuration (interacting with the Server)
eureka:
  instance:
    hostname: localhost The host name of the current Eureka Server
  client:
    service-url: EurekaServer (EurekaServer, EurekaServer)
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
    register-with-eureka: false You do not need to register as a Server
    fetch-registry: false You don't need to get other services from the service center
Copy the code
  1. The Spring Boot startup class uses @EnableEurekaServer to declare the current project as a Eureka Server
@SpringBootApplication
// Declare that the current project is Eureka service
@EnableEurekaServer
public class LagouEurekaServerApp8761 {

    public static void main(String[] args) {
        SpringApplication.run(LagouEurekaServerApp8761.class, args);
    }
}
  1. Launch the Main entry and visit http://127.0.0.1:8761



Set up Eureka Server HA cluster

In a production environment, we configure an Eureka Server cluster for high availability. Nodes in the Eureka Server cluster share the service registry through peer-to-peer (P2P) communication

  1. Modify the local hosts file
127.0.0.1 myEurekaA
127.0.0.1 myEurekaB
  1. Modify the two YML configuration files in the lagou-cloud-eureka-server project
# Eureka Server service port
server:
  port: 8761
spring:
  application:
    name: lagou-cloud-eureka-server # The application name will be used as the service name in Eureka

# The Eureka Server is also a Client
eureka:
  instance:
    hostname: myEurekaA  # Host name of the current Eureka instance
  client:
    service-url:
      # The Eureka Server is also a Client of the other servers.
      # In cluster mode, defaultZone points to the other Eureka Servers; if there are more server instances, concatenate them with commas
      defaultZone: http://myEurekaB:8762/eureka/
    register-with-eureka: true  # changed to true in cluster mode
    fetch-registry: true # changed to true in cluster mode
  dashboard:
    enabled: true

#############

# Eureka Server service port
server:
  port: 8762
spring:
  application:
    name: lagou-cloud-eureka-server # The application name will be used as the service name in Eureka

# The Eureka Server is also a Client
eureka:
  instance:
    hostname: myEurekaB  # Host name of the current Eureka instance
  client:
    service-url: # Configure the address of the Eureka Server that the client interacts with
      defaultZone: http://myEurekaA:8761/eureka/
    register-with-eureka: true
    fetch-registry: true
  • In each instance's configuration, the other instances are configured as its mirror (peer) nodes in the cluster
  • register-with-eureka and fetch-registry were set to false in single-node mode because there was only one Eureka Server and it did not need to register itself; now that there is a cluster, each server can register itself with the other nodes of the cluster
  1. Start the two Main entrances separately

  1. Access the management console pages of both Eureka Servers, http://myEurekaA:8761/ and http://myEurekaB:8762/. You will find that the registry entry LAGOU-CLOUD-EUREKA-SERVER already has two nodes, and the registered-replicas (neighboring cluster replica nodes) of each server already contain the other

— Microservice provider — > Register with Eureka Server cluster —
  • Register resume microservices (8080, 8081)
  1. Spring-cloud-commons dependencies are introduced in the parent project
        <!-- Spring Cloud Commons -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-commons</artifactId>
        </dependency>
  1. The POM file introduces coordinates, adding the relevant coordinates of eureka client
        <!-- Eureka Client dependency -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
  1. Configure application.yml
# Register with the Eureka service center
eureka:
  client:
    service-url:
      # register to the cluster; separate the addresses with [,]
      defaultZone: http://myEurekaA:8761/eureka/,http://myEurekaB:8762/eureka/
  instance:
    # display the IP instead of the hostname in the service instance (for compatibility with older versions)
    prefer-ip-address: true
    # The instance name can be customized
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:@project.version@
  1. Add annotations to the startup class
// @EnableEurekaClient // enable the Eureka registry client (Eureka only)
@EnableDiscoveryClient // enable the registry client (a general annotation; also works with Nacos later) --> recommended

[Note] Starting from the Spring Cloud Edgware release, @EnableDiscoveryClient or @EnableEurekaClient can be omitted; adding the related dependency and configuration is enough to register the microservice with the service discovery component. @EnableDiscoveryClient and @EnableEurekaClient have the same function, but if Eureka is the chosen registry @EnableEurekaClient is recommended, and for other registries @EnableDiscoveryClient is recommended. For generality, @EnableDiscoveryClient can be used from now on

  1. Start the Main entry

— Microservices consumer — > Register with Eureka Server cluster —
  1. The POM file introduces coordinates, adding the relevant coordinates of eureka client
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-commons</artifactId>
</dependency> 
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
  1. Configure the application.yml file
server:
  port: 8090
spring:
  application:
    name: lagou-service-autodeliver
  # Database connection information
  datasource:
    username: root
    password: root
    url: jdbc:mysql://localhost:3306/springdb?useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    database: MySQL
    show-sql: true
    hibernate:
      naming:
        # Avoid converting camel-case names to snake_case
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
# Register with the Eureka service center
eureka:
  client:
    service-url:
      # register to the cluster; separate the addresses with [,]
      defaultZone: http://myEurekaA:8761/eureka/,http://myEurekaB:8762/eureka/
  instance:
    prefer-ip-address: true # display the IP instead of the hostname in the service instance (for compatibility with older versions)
    # The instance name can be customized
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:@project.version@
  1. Add the @EnableDiscoveryClient annotation to the startup class to enable service discovery
@EnableDiscoveryClient

  1. Transform the consumer interface
    /**
     * Upgraded version
     * http://localhost:8090/autodeliver/checkState2/{userId}
     * @param userId
     * @return
     */
    @GetMapping("/checkState2/{userId}")
    public String findResumeOpenState2(@PathVariable Long userId) {
        // Get the service instance information and interface information of Eureka Server
        /* 1. Obtain the instance information of service-resume from Eureka Server */
        List<ServiceInstance> instances = discoveryClient.getInstances("lagou-service-resume");
        /* 2. If there are multiple instances, select one to use (load balancing process) */
        ServiceInstance instance = instances.get(0);
        /* 3. Obtain the host and port from the metadata information */
        String host = instance.getHost();
        int port = instance.getPort();
    
        String URL = "http://" + host + ":" + port + "/resume/openstate/" + userId;
    
        // Invoke the remote service --> RESUME microservice interface RestTemplate --same as--> JdbcTempate
        Integer forObject = restTemplate.getForObject( URL, Integer.class);
        return forObject + "\t Service acquisition from Eureka cluster";
    }
  1. The results

“Eureka Custom Metadata”

  • Eureka has two types of metadata:
    1. Standard metadata Host name, IP address, port number and other information (used for calls between services)
    2. Custom metadata Configured with eureka.instance.metadata-map in Key/Value format (can be read by remote clients; a read-side sketch follows the YAML below), for example:

instance:
  prefer-ip-address: true
  metadata-map:
    node1: "hello"
    node2: "world"
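
On the caller side, this custom metadata can be read through the standard Spring Cloud DiscoveryClient API. A minimal sketch (the controller name and path below are hypothetical, for illustration only):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MetadataDemoController {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Returns the metadata map of every lagou-service-resume instance
    @GetMapping("/demo/resumeMetadata")
    public List<Map<String, String>> resumeMetadata() {
        List<Map<String, String>> result = new ArrayList<>();
        for (ServiceInstance instance : discoveryClient.getInstances("lagou-service-resume")) {
            result.add(instance.getMetadata()); // standard + custom metadata as key/value pairs
        }
        return result;
    }
}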

[Eureka’s Self-protection] :

  • By default, if Eureka Server does not receive a heartbeat from a microservice instance within a certain period (90 seconds by default), Eureka Server will remove the instance. However, when a network partition occurs, the microservice cannot communicate with Eureka Server normally even though it is itself running normally; in that case the instance should not be removed, so the self-protection mechanism was introduced
  • When in self-protection mode
    1. No service instances are culled
    2. Registration and query requests for new services can still be accepted, but will not be synchronized to other nodes to ensure that the current node is still available
    3. Self-protection can be disabled via eureka.server.enable-self-preservation; it is enabled by default

Eureka Core Source code

To follow up

Part 2 – Ribbon Load Balancing

General load balancing classification

  1. Server load balancing:

    Requests first arrive at the load-balancing server, which routes each request to a target server according to the specified algorithm;

  2. Client load balancing:

    The client holds a list of server addresses and, before making a call, selects one server to access according to a specified algorithm.

The Ribbon is a load balancer published by Netflix. It usually works with Eureka: it reads the service information from Eureka, applies a load-balancing algorithm to pick an instance, and then invokes the service
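
To make the client-side idea concrete, here is a minimal round-robin sketch in plain Java (illustrative only, not Ribbon's actual implementation): the caller keeps a local address list and cycles through it.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: picks servers from a local list in round-robin order
public class RoundRobinChooser {

    private final List<String> servers;              // e.g. ["host1:8080", "host2:8081"]
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinChooser(List<String> servers) {
        this.servers = servers;
    }

    public String choose() {
        // cycle through the list; Math.floorMod keeps the index non-negative even if the counter overflows
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}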

“Ribbon Apps”

There is no need to introduce additional Jar coordinates because we introduced Eureka-Client in the service consumer, which introduces Ribbon related jars

To use, simply add annotations to the RestTemplate

@Bean
// Ribbon load balancing
@LoadBalanced
public RestTemplate getRestTemplate() {
   return new RestTemplate();
}

Modify the service provider's API to also return the port number of the current instance, so the load distribution is easy to observe (one possible way is sketched below)
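
One possible way to do this, as a sketch only (reading server.port via @Value is an assumption, not necessarily how the original demo did it; note that the consumer would then read the response as a String instead of an Integer):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/resume")
public class ResumeController {

    @Autowired
    private ResumeService resumeService;

    // each instance reports the port it was started with
    @Value("${server.port}")
    private Integer port;

    @GetMapping("/openstate/{userId}")
    public String findDefaultResumeState(@PathVariable Long userId) {
        Resume resume = resumeService.findDefaultResumeByUserId(userId);
        // append the port so the consumer can see which instance answered
        return resume.getIsOpenResume() + " (from port " + port + ")";
    }
}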

The test case

The results

“Ribbon Load Balancing Strategy”

The top-level interface is com.netflix.loadbalancer.IRule

  1. RoundRobinRule: round-robin; returns an empty server if none is found after more than 10 rounds
  2. RandomRule: picks a random server; if the chosen one is null or unavailable it keeps picking in a while loop
  3. RetryRule: retries within a specified period. By default it wraps RoundRobinRule, and a custom rule can also be injected. After each selection it checks whether the chosen server is null or alive, and keeps selecting within 500 ms
  4. BestAvailableRule: traverses the serverList and selects the available server with the smallest number of connections. If the selected server is null, RoundRobinRule is called to select a server again.
  5. AvailabilityFilteringRule: availability filtering; extends the round-robin strategy. It first selects a server via round-robin, then checks whether that server has tripped due to timeouts or whether its current connection count exceeds the limit, and only returns it if the checks succeed.
  6. ZoneAvoidanceRule: zone avoidance (the default); extends the round-robin strategy and combines two predicates: ZoneAvoidancePredicate and AvailabilityPredicate. It filters out servers that time out or have too many connections, and filters out all nodes in any zone that does not meet the availability requirement

Modifying a policy:

# keyed by the microservice name of the called party; without the key it takes effect globally
lagou-service-resume:
  ribbon:
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RandomRule # load policy adjustment
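
Alternatively, the rule can be declared as an IRule bean in Java config. This is the standard Ribbon customization approach rather than something shown in the original notes, so treat it as a sketch:

import com.netflix.loadbalancer.IRule;
import com.netflix.loadbalancer.RandomRule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// If this class is picked up by the main application's component scan, the rule applies to all clients;
// to scope it to one service, reference it from @RibbonClient(name = "lagou-service-resume", configuration = RibbonRuleConfig.class)
@Configuration
public class RibbonRuleConfig {

    @Bean
    public IRule ribbonRule() {
        return new RandomRule(); // same effect as the NFLoadBalancerRuleClassName setting above
    }
}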

“Ribbon Core Source”

To follow up

Part 3 – Hystrix fuse

“Avalanche effect”

  • Fan-in: the number of times this microservice is called by others; a large fan-in indicates good reusability of the module
  • Fan-out: the number of other microservices this microservice calls; a large fan-out indicates complex business logic ↓

If a microservice on the fan-out link responds too slowly or becomes unavailable, the calls made by microservice A will occupy more and more system resources, eventually crashing the system. This is the so-called “avalanche effect”.

[Solution] :

  1. Service fusing
    • A microservice link protection mechanism for avalanche effects

      When a microservice on the fan out link is unavailable or the response time is too long, the system interrupts the invocation of the microservice on the node to degrade the service and quickly returns an incorrect response. When the microservice invocation response of this node is detected to be normal, the call link is restored.
      • (Service circuit breaker focuses on “disconnection”, cutting off the invocation of downstream services)
      • (Service circuit breakers and service downgrades are often used together, as with Hystrix.)
  2. Service degradation
    • When overall resources are insufficient, non-essential services are shut down first (when called, they simply return a reserved fallback value: the bottom-line data), and the link is opened again after resources recover.
  3. Service current limiting
    • When some services cannot be degraded, you can apply traffic limiting (see the sketch after this list):
      • Limit total concurrency (thread pool)
      • Limit the number of instantaneous concurrent connections (nginx’s instantaneous concurrent connections)
      • Limit the average rate within the time window
      • Limit the call rate of the remote interface
      • Limit the consumption rate of MQ…
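
A minimal sketch of the "limit total concurrency" idea using a plain JDK Semaphore (illustrative only; the limit of 10 is an arbitrary example, and real systems usually do this in a gateway or with a dedicated component):

import java.util.concurrent.Semaphore;

// Illustrative only: at most 10 callers may execute the protected section at the same time
public class ConcurrencyLimiter {

    private final Semaphore permits = new Semaphore(10);

    public String callProtected() {
        if (!permits.tryAcquire()) {
            // over the limit: fail fast instead of queuing up
            return "too many concurrent requests, please retry later";
        }
        try {
            return doRealCall();
        } finally {
            permits.release();
        }
    }

    private String doRealCall() {
        return "ok"; // placeholder for the real downstream call
    }
}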

“Hystrix Brief Description”

Hystrix (porcupine --> quills)

A latency and fault-tolerance library, open-sourced by Netflix, used to isolate access to remote systems, services, or third-party libraries and prevent cascading failures, thereby improving the system's availability and fault tolerance

[Features] :

  1. Request wrapping Uses HystrixCommand to wrap the invocation logic of a dependency
  2. Tripping mechanism When the error rate of a service exceeds a certain threshold, Hystrix can trip (open the circuit) and stop requesting the service for a period of time
  3. Resource isolation Hystrix maintains a small thread pool (bulkhead mode) (or semaphore) for each dependency. If the thread pool is full, requests to the dependency are rejected immediately rather than queued up, speeding up failure determination.
  4. Monitoring allows near-real-time monitoring of operational metrics and configuration changes, such as successes, failures, timeouts, and rejected requests
  5. Fallback mechanism Fallback logic is executed when a request fails, times out, is rejected, or the circuit breaker is open, for example returning a default value
  6. Self-healing The circuit breaker automatically enters the half-open state after it has been open for a period of time

“Hystrix Applications”

  1. Introduce the Hystrix dependency coordinates in the service-consumer project (auto-deliver microservice)
<!-- Circuit breaker Hystrix -->
<dependency>
 <groupId>org.springframework.cloud</groupId>
 <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
  1. Add the circuit-breaker annotation @EnableCircuitBreaker to the startup class of the service-consumer project (auto-deliver microservice)
// @EnableHystrix // enable Hystrix (Hystrix only)
@EnableCircuitBreaker // enable the circuit breaker

Spring Cloud currently provides a very convenient composite annotation for Eureka/Ribbon/Hystrix: @SpringCloudApplication is equivalent to @SpringBootApplication + @EnableDiscoveryClient + @EnableCircuitBreaker


  1. Apply the circuit breaker to the business method
  • All the underlying property definitions can be found in HystrixCommandProperties.java
    /**
     * Hystrix simple modified version
     * http://localhost:8090/autodeliver/checkState4/{userId}
     * @param userId
     * @return
     */
    @HystrixCommand(
            // fusing properties: if there is no response within 2 seconds, time out and fuse
            commandProperties = {
                    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000")
            }
    )
    @GetMapping("/checkState4/{userId}")
    public String findResumeOpenState4(@PathVariable Long userId) {
        // Using the Ribbon, we no longer need to choose a service instance ourselves
        // It resolves lagou-service-resume in the URL to a concrete host and port
        String URL = "http://lagou-service-resume/resume/openstate/" + userId;
        // Invoke the remote service --> resume microservice interface; RestTemplate --same as--> JdbcTemplate
        Integer forObject = restTemplate.getForObject(URL, Integer.class);
        return forObject + "\t test timeout fuse";
    }
    
    /**
     * Hystrix service-degradation modified version
     * Many non-core businesses should not throw an exception directly; they should return bottom-line data to keep the function friendly
     * http://localhost:8090/autodeliver/checkState5/{userId}
     * @param userId
     * @return
     */
    @HystrixCommand(
            // fusing properties: if there is no response within 2 seconds, time out and fuse
            commandProperties = {
                    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000")
            },
            fallbackMethod = "myFallback"
    )
    @GetMapping("/checkState5/{userId}")
    public String findResumeOpenState5(@PathVariable Long userId) {
        // Using the Ribbon, we no longer need to choose a service instance ourselves
        // It resolves lagou-service-resume in the URL to a concrete host and port
        String URL = "http://lagou-service-resume/resume/openstate/" + userId;
        // Invoke the remote service --> resume microservice interface; RestTemplate --same as--> JdbcTemplate
        Integer forObject = restTemplate.getForObject(URL, Integer.class);
        return forObject + "\t test timeout fuse (with bottom-line data)";
    }
    
    /**
     * Fallback method, returns the bottom-line data when the 2s timeout is hit
     * (parameters/return value are the same as the original method)
     * @return
     */
    public String myFallback(Long userId) {
        return "I'm so sorry! The server is not responding properly. Please refresh and try again.";
    }
  1. Implementation effect

“Hystrix Bulkhead Mode (Thread Pool Isolation Policy)”

If nothing is configured, all fused methods share one Hystrix thread pool (10 threads), which can cause a problem: the downstream microservice on the fan-out link may be perfectly available, yet our own threading gets in the way. If requests to method A use up all 10 threads, requests to method B cannot be executed at all because no thread is available, even though service B itself is fine.

Instead of simply increasing the number of threads, Hystrix creates a separate thread pool for each wrapped method. This is called the "bulkhead pattern" and is also a means of thread isolation:

You can use the jps command together with jstack to inspect the thread state of the JVM
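
A sketch of giving one command its own pool via threadPoolKey (the property names are standard Hystrix thread-pool properties; the pool sizes and the checkStateBulkhead path are arbitrary examples, assumed to live in the same controller as checkState5):

    @HystrixCommand(
            threadPoolKey = "findResumeOpenStateBulkheadPool", // a dedicated thread pool for this command
            threadPoolProperties = {
                    @HystrixProperty(name = "coreSize", value = "10"),    // threads in this pool
                    @HystrixProperty(name = "maxQueueSize", value = "20") // length of the waiting queue
            },
            fallbackMethod = "myFallback"
    )
    @GetMapping("/checkStateBulkhead/{userId}")
    public String findResumeOpenStateBulkhead(@PathVariable Long userId) {
        String URL = "http://lagou-service-resume/resume/openstate/" + userId;
        // even if this pool is exhausted, other commands keep their own threads
        return restTemplate.getForObject(URL, Integer.class) + "\t isolated in its own thread pool";
    }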

“Hystrix Workflow and Advanced Applications”

  1. Hystrix tripping (decided within the statistics time window)
  2. Hystrix self-healing (decided by the sleep window)

Hystrix process customization and status observation

We can observe the tripping status through Spring Boot's health-check mechanism

# Spring Boot exposes actuator endpoints such as the health check
management:
  endpoints:
    web:
      exposure:
        include: "*"
  # Expose the details of the health endpoint
  endpoint:
    health:
      show-details: always

We can also customize the Hystrix tripping process

    @HystrixCommand(
            // fusing properties
            commandProperties = {
                    // if there is no response within 2 seconds, time out and fuse
                    @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000"),
                    /*
                     * Hystrix advanced configuration: customize the tripping process
                     * Within an 8-second window, if the number of requests reaches 2 and the failure rate exceeds 50%, trip;
                     * the sleep window after tripping is set to 3 s
                     */
                    // length of the statistics time window
                    @HystrixProperty(name = "metrics.rollingStats.timeInMilliseconds", value = "8000"),
                    // minimum number of requests within the statistics window
                    @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "2"),
                    // error percentage threshold within the statistics window
                    @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
                    // length of the sleep (self-healing) window
                    @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "3000")
            },
            fallbackMethod = "myFallback"
    )
    @GetMapping("/checkState5/{userId}")
    public String findResumeOpenState5(@PathVariable Long userId) {
        // Using the Ribbon, we no longer need to choose a service instance ourselves
        // It resolves lagou-service-resume in the URL to a concrete host and port
        String URL = "http://lagou-service-resume/resume/openstate/" + userId;
        // Invoke the remote service --> resume microservice interface; RestTemplate --same as--> JdbcTemplate
        Integer forObject = restTemplate.getForObject(URL, Integer.class);
        return forObject + "\t test timeout fuse (with bottom-line data)";
    }
    
    /**
     * Fallback method, returns the bottom-line data when the 2s timeout is hit
     * (parameters/return value are the same as the original method)
     * @return
     */
    public String myFallback(Long userId) {
        return "I'm so sorry! The server is not responding properly. Please refresh and try again.";
    }

Health check endpoint: http://localhost:8090/actuator/health

“Hystrix Dashboard circuit-breaker monitoring dashboard”

Create a new Module, cloud-hystrix-dashboard-9000, to observe the details of each circuit breaker

  1. Ensure that the parent project has imported SpringBoot actuator (Health Monitoring)
<!-- Actuator helps you monitor and manage Spring Boot apps -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
  1. Add the @EnableHystrixDashboard annotation to the entry class
@SpringBootApplication
@EnableHystrixDashboard // Enable the Hystrix dashboard
public class HystrixDashboardApplication9000 {
    
    public static void main(String[] args) {
        SpringApplication.run(HystrixDashboardApplication9000.class);
    }
}
  1. Configure application.yml for the new module
server:
  port: 9000
spring:
  application:
    name: cloud-hystrix-dashboard
# Register with the Eureka service center
eureka:
  client:
    service-url:
      # register to the cluster; separate the addresses with [,]
      defaultZone: http://myEurekaA:8761/eureka/,http://myEurekaB:8762/eureka/
  instance:
    prefer-ip-address: true # display the IP instead of the hostname in the service instance (for compatibility with older versions)
    # The instance name can be customized
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}:@project.version@
  1. Register a Servlet with the consumer
  /**
   * Register a Servlet in the monitored microservice; later we access this servlet to obtain the service's Hystrix monitoring data
   * @return
   */
  @Bean
  public ServletRegistrationBean getServlet(){
      HystrixMetricsStreamServlet streamServlet = new HystrixMetricsStreamServlet();
      ServletRegistrationBean registrationBean = new ServletRegistrationBean(streamServlet);
      registrationBean.setLoadOnStartup(1);
      registrationBean.addUrlMappings("/actuator/hystrix.stream");
      registrationBean.setName("HystrixMetricsStreamServlet");
      return registrationBean;
  }
  1. Simple effect

  1. Friendly display of dashboard

http://localhost:9000/hystrix

“Hystrix Turbine Aggregation Monitoring”

Create a new Module, cloud-hystrix-turbine-9001

  1. Introduce the POM dependencies
<dependencies>
  <!-- Hystrix Turbine -->
  <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-netflix-turbine</artifactId>
  </dependency>

  <!-- There are two reasons for introducing the Eureka client: 1. under a microservice architecture, services should be registered with the service center as far as possible for unified management; 2. we want to aggregate the hystrix data streams of each instance of the lagou-service-autodeliver service, so we configure the service name in application.yml, and Turbine needs instance information such as IP and port to fetch the data stream of each concrete instance under that service. How does Turbine obtain that information from the service name? From the Eureka service registry, of course -->
  <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
  </dependency>

</dependencies>
  1. application.yml
server:
  port: 9001

spring:
  application:
    name: cloud-hystrix-turbine

# Register with the Eureka service center
eureka:
  client:
    service-url:
      # register to the cluster; separate the addresses with [,]
      defaultZone: http://myEurekaA:8761/eureka/,http://myEurekaB:8762/eureka/
  instance:
    prefer-ip-address: true # display the IP instead of the hostname in the service instance (for compatibility with older versions)
    # The instance name can be customized
    instance-id: ${spring.cloud.client.ip-address}:${spring.application.name}:${server.port}:@project.version@

# turbine configuration
turbine:
  # appConfig configures the service names to aggregate, here the hystrix monitoring data of the auto-deliver microservice
  # to aggregate monitoring data of multiple microservices, concatenate them with commas, e.g. a,b,c
  appConfig: service-autodeliver
  clusterNameExpression: "'default'"   # the default cluster name
  1. Create the entry class
@SpringBootApplication
@EnableTurbine // enable Turbine aggregation
public class HystrixTurbineApplication9001 {

    public static void main(String[] args) {
        SpringApplication.run(HystrixTurbineApplication9001.class, args);
    }
}

Hystrix Core Source Code

To follow up

Part 4 – Feign Remote call component

“Feign Notes”

Feign is a lightweight RESTful HTTP service client open-sourced by Netflix. It lets you make HTTP calls by annotating a Java interface instead of encapsulating HTTP request messages yourself, and it is widely used in Spring Cloud solutions.

Essence: it encapsulates the HTTP call flow so that it better matches interface-oriented programming habits, similar to how Dubbo services are invoked

“Feign App”

  1. Introduce Feign dependencies
  <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-openfeign</artifactId>
  </dependency>
Copy the code
  1. Add support by annotating the startup class with @EnableFeignClients
@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients  // Enable Feign client function
public class AutodeliverApplication8091 {
    
    public static void main(String[] args) {
        SpringApplication.run(AutodeliverApplication8091.class, args);
    }
}

Note: at this point the Hystrix circuit-breaker annotation @EnableCircuitBreaker and its dependency can be removed, because Feign imports them automatically

  1. Create the Feign interface
// original: http://service-resume/resume/openstate/ + userId;
// @feignClient indicates that the current class is a Feign client,
// value(Alias->name) specifies the name of the service to be requested by the client (the service name of the service provider registered in the registry)
@FeignClient(value = "lagou-service-resume",fallback = ResumeFallback.class,path = "/resume")
public interface ResumeServiceFeignClient {
    
    // All Feign has to do is assemble the URL to initiate the request
    // We call this method to call the local interface method, so we are actually doing a remote request
    // value must be set, otherwise an exception will be thrown
    @GetMapping("/openstate/{userId}")
    public Integer findDefaultResumeState(@PathVariable("userId") Long userId);

}

【 Attention 】 :

  • The name (value) attribute of the @FeignClient annotation specifies the name of the service provider to invoke and must be consistent with spring.application.name in the service provider's YML file
  • The methods in the interface correspond to the handler methods in the remote service provider's Controller (they only look like local calls). Parameter binding can use @PathVariable, @RequestParam and @RequestHeader; this is OpenFeign's support for Spring MVC annotations, but note that the value attribute must be set, otherwise an exception will be thrown
  1. Start the application and the remote call now goes through the Feign client (see the sketch below)
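
For example, a sketch of the consumer side (the controller name and the checkStateFeign path are hypothetical): the controller simply injects the Feign interface and calls it like a local method.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/autodeliver")
public class AutodeliverFeignController {

    @Autowired
    private ResumeServiceFeignClient resumeServiceFeignClient;

    @GetMapping("/checkStateFeign/{userId}")
    public Integer findResumeOpenState(@PathVariable Long userId) {
        // Feign assembles and sends the HTTP request to lagou-service-resume behind this call
        return resumeServiceFeignClient.findDefaultResumeState(userId);
    }
}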

“Feign support for load Balancing and fusing”

The Ribbon is integrated internally and can be configured directly for consumers

# keyed by the microservice name of the called party; without the key it takes effect globally
lagou-service-resume:
  ribbon:
    # Request connection timeout
    ConnectTimeout: 2000
    # Request processing timeout
    ########################################## Feign timeout value settings
    ReadTimeout: 15000
    # Retry all operations
    OkToRetryOnAllOperations: true
    #### Based on the configuration above, when a request fails, it first retries the current instance (the number of times is specified by MaxAutoRetries);
    #### if that still fails, it switches to another instance (the number of switches is configured by MaxAutoRetriesNextServer);
    #### if it still fails, a failure message is returned.
    MaxAutoRetries: 0 # number of retries on the currently selected instance, not including the first call
    MaxAutoRetriesNextServer: 0 # number of instance switches to retry
    NFLoadBalancerRuleClassName: com.netflix.loadbalancer.RoundRobinRule # load policy adjustment

With this configuration, even if a request is slow, it will still succeed as long as the provider responds within 15 seconds

# Feign support for Hystrix
feign:
  hystrix:
    enabled: true
# Timeout configuration for native Hystrix
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Hystrix timeout value Settings
            timeoutInMilliseconds: 15000

The fallback logic requires defining a class that implements the FeignClient interface and implements the methods in the interface

@Component  // Don't forget this annotation, it should also be scanned
public class ResumeFallback implements ResumeServiceFeignClient {
    @Override
    public Integer findDefaultResumeState(Long userId) {
        return -9;
    }
}

Indicate this class in the fallback property of @FeignClient

@FeignClient(value = "lagou-service-resume",fallback = ResumeFallback.class,path = "/resume")
public interface ResumeServiceFeignClient {
    // ...
}

Feign’s support for Request compression and Response Compression

Feign supports GZIP compression of requests and responses to reduce performance losses during communication

feign:
  compression:
    request:
      enabled: true  # enable request compression
      mime-types: text/html,application/xml,application/json  # compressed data types (these are also the defaults)
      min-request-size: 2048  # minimum request size that triggers compression
    response:
      enabled: true  # enable response compression

Feign Log Level Configuration

Feign is an HTTP request client, similar to a browser; it can print detailed log information (response headers, status codes, etc.) when sending requests and receiving responses. If we want to see Feign's request logs, we need to configure them; by default Feign logging is not enabled.

  1. Enable the Feign log function and level
// Feign log levels (information about the Feign request process)
// NONE: the default, no logs ---- best performance
// BASIC: only records the request method, URL, response status code, and execution time ---- production problem tracing
// HEADERS: on top of BASIC, records the headers of the request and response
// FULL: records the headers, body, and metadata of the request and response ---- suitable for development and test environments
@Configuration
public class FeignConfig {
   @Bean
   Logger.Level feignLevel() {
       return Logger.Level.FULL;
   }
}
  1. Set the log level to DEBUG
logging:
  level:
     # Feign logging only responds to the debug log level
     com.lagou.edu.controller.service.ResumeServiceFeignClient: debug

Feign Core Source code

To follow up

[Appendix 1] – Resume data

BEGIN;
INSERT INTO `r_resume` VALUES (2195320, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 13:40:14', 'images/myresume/default_headpic.png', 0, '2015-04-24 13:40:14', 1545132, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 15, 0, 3);
INSERT INTO `r_resume` VALUES (2195321, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 14:17:54', 'images/myresume/default_headpic.png', 0, '2015-04-24 14:20:35', 1545133, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195322, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 14:42:45', 'images/myresume/default_headpic.png', 0, '2015-04-24 14:43:34', 1545135, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195323, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 14:48:19', 'images/myresume/default_headpic.png', 0, '2015-04-24 14:50:34', 1545136, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195331, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 18:43:35', 'images/myresume/default_headpic.png', 0, '2015-04-24 18:44:08', 1545145, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195333, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-24 19:01:13', 'images/myresume/default_headpic.png', 0, '2015-04-24 19:01:14', 1545148, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195336, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-27 14:13:02', 'images/myresume/default_headpic.png', 0, '2015-04-27 14:13:02', 1545155, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195337, 'woman', '1990', 'two years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'The Resume of the Rice Husk', 'wps', '2015-04-27 14:36:55', 'images/myresume/default_headpic.png', 0, '2015-04-27 14:36:55', 1545158, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195369, 'woman', '1990', 'More than 10 years', '199999999', '[email protected]', 'I am currently out of service and can be brought up quickly', 'Rice husk', 'wps', '2015-05-15 18:08:19', 'images/myresume/default_headpic.png', 0, '2015-05-15 18:08:19', 1545346, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195374, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 17:53:37', 'images/myresume/default_headpic.png', 0, '2015-06-04 17:53:39', 1545523, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195375, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:11:06', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:11:07', 1545524, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195376, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:12:19', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:12:19', 1545525, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195377, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:13:28', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:13:28', 1545526, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195378, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:15:16', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:15:16', 1545527, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195379, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:23:06', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:23:06', 1545528, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195380, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:23:38', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:23:39', 1545529, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195381, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:27:33', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:27:33', 1545530, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195382, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:31:36', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:31:39', 1545531, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195383, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 18:36:48', 'images/myresume/default_headpic.png', 0, '2015-06-04 18:36:48', 1545532, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195384, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 19:15:15', 'images/myresume/default_headpic.png', 0, '2015-06-04 19:15:16', 1545533, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195385, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 19:28:53', 'images/myresume/default_headpic.png', 0, '2015-06-04 19:28:53', 1545534, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195386, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 19:46:42', 'images/myresume/default_headpic.png', 0, '2015-06-04 19:46:45', 1545535, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
INSERT INTO `r_resume` VALUES (2195387, 'woman', '1990', '1 years', '199999999', '[email protected]', 'I\'m currently employed and considering a new environment', 'Rice husk', 'wps', '2015-06-04 19:48:16', 'images/myresume/default_headpic.png', 0, '2015-06-04 19:48:16', 1545536, 1, 'bachelor', 0, 0, 0, 0, ' ', 'guangzhou', 65, 0, 3);
COMMIT;

SET FOREIGN_KEY_CHECKS = 1;