• Summary:

    1. Purpose: record behavior information and error details so that historical records can be consulted and online problems investigated.

    2. Core problem: recording too much information makes the log files too large, the printing itself costs performance, and it becomes hard to locate problems quickly; recording too little means the root cause cannot be found and the problem cannot be traced.

    3. Trade off performance, storage, information volume, and ease of querying and locating problems according to the requirements.

  • Output modes:

    1. Output to the console, e.g. log.info(), System.out.println(); typically used in development environments. Console log printing should be turned off in the production environment.

    2. Output to a file, such as log4j or logback writing to a log file, or JVM startup parameters configured to print GC logs.

    3. Store in a database table; usually a log customized for the project, such as recording data inserts and updates.

    4. Output to middleware, such as ELK with logs stored in Elasticsearch.

  • Solutions for each log type:

    1. General stack logs: for example, the instance-initialization and connection-establishment logs printed during project startup. These are generally implemented with Logback or Log4j2; Log4j performs poorly and is generally not recommended. The logging facade is usually Slf4j, which delegates output to the underlying implementation; a link on the relationship between Slf4j, log4j and Logback is pasted here for those who want the details. For ordinary projects, Slf4j plus Logback is used. For SpringBoot projects, log levels can additionally be viewed and modified dynamically through SpringBoot Actuator.
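
    A minimal sketch of the Slf4j + Logback combination described above (the class and messages are illustrative, not taken from any linked project):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {

        // Code against the Slf4j facade; Logback on the classpath is picked up as the implementation.
        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        public void createOrder(String orderId) {
            // Parameterized messages avoid string concatenation when the level is disabled.
            log.info("creating order, orderId={}", orderId);
            try {
                // ... business logic ...
            } catch (Exception e) {
                // Passing the exception as the last argument prints the stack trace.
                log.error("failed to create order, orderId={}", orderId, e);
            }
        }
    }

    In a SpringBoot project, the Actuator loggers endpoint can then be used to view and change the level of such loggers at runtime.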

    2. Logs stored in a database: the simplest implementation is for the interface to insert a log row into the database directly. A typical table-creation SQL statement is pasted below.

    CREATE TABLE `system_log` (
      `id` varchar(32) NOT NULL DEFAULT ' ' COMMENT 'id',
      `module` varchar(255) DEFAULT NULL COMMENT 'modules',
      `user_name` varchar(32) DEFAULT NULL COMMENT 'Operator nickname',
      `user_id` varchar(32) DEFAULT NULL COMMENT 'Operator ID',
      `method` varchar(255) DEFAULT NULL COMMENT 'methods',
      `operation` varchar(255) DEFAULT NULL COMMENT 'operation',
      `args` varchar(5000) DEFAULT NULL COMMENT 'parameters',
      `time` int(11) unsigned DEFAULT '0' COMMENT 'Execution time (ms)',
      `ip` varchar(32) DEFAULT NULL COMMENT 'IP',
      `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Creation time',
      PRIMARY KEY (`id`) USING BTREE
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Operation Log table';
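
    A minimal sketch of the "insert directly from the interface" approach, assuming Spring's JdbcTemplate and the system_log table above (only a subset of the columns is shown, and the DAO name is hypothetical):

    import java.util.UUID;

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Repository;

    @Repository
    public class SystemLogDao {

        private final JdbcTemplate jdbcTemplate;

        public SystemLogDao(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // Insert one row into the system_log table defined above.
        public void save(String module, String userName, String method,
                         String operation, String args, long timeMs, String ip) {
            jdbcTemplate.update(
                "INSERT INTO system_log (id, module, user_name, method, operation, args, time, ip) "
                    + "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
                UUID.randomUUID().toString().replace("-", ""),  // 32-char id fits varchar(32)
                module, userName, method, operation, args, timeMs, ip);
        }
    }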

    However, in order to decouple the code, reduce intrusiveness into business logic, and reduce the performance impact of logging on user request time, Spring AOP with a custom annotation can be used to handle the log writing, as shown in the sketch below. For those not familiar with Spring AOP, see this article.
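
    A minimal sketch of the custom-annotation + Spring AOP approach (the annotation and aspect names are illustrative, and spring-boot-starter-aop is assumed to be on the classpath):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;

    // Marker annotation placed on methods whose calls should be logged.
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface OperationLog {
        String value() default "";  // description of the operation
    }

    @Aspect
    @Component
    class OperationLogAspect {

        private static final Logger log = LoggerFactory.getLogger(OperationLogAspect.class);

        // Intercept every method annotated with @OperationLog.
        @Around("@annotation(operationLog)")
        public Object logAround(ProceedingJoinPoint pjp, OperationLog operationLog) throws Throwable {
            long start = System.currentTimeMillis();
            try {
                return pjp.proceed();  // run the business method untouched
            } finally {
                long cost = System.currentTimeMillis() - start;
                // The signature, arguments, description and cost could instead be assembled
                // into a system_log row and persisted (e.g. via the SystemLogDao sketch above).
                log.info("{} [{}] took {} ms", pjp.getSignature(), operationLog.value(), cost);
            }
        }
    }

    A controller or service method can then simply be annotated, e.g. @OperationLog("update user"), and the aspect records the call without touching the business code.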

    A thread pool can also be used for asynchronous writing by adding the @Async annotation to the log-writing method (@EnableAsync + @Async). A message queue can likewise be used for asynchronous logging, but that requires message-queue middleware, so it only makes sense if the project already has a message queue.
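
    A minimal sketch of the @EnableAsync + @Async approach (the service name is illustrative; the persistence call is assumed to be something like the SystemLogDao sketch above):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.Async;
    import org.springframework.scheduling.annotation.EnableAsync;
    import org.springframework.stereotype.Service;

    // Enables Spring's asynchronous method execution, backed by a task executor (thread pool).
    @Configuration
    @EnableAsync
    class AsyncConfig {
    }

    @Service
    class AsyncLogService {

        // The caller returns immediately; the log row is written on a pool thread,
        // so logging does not add latency to the user request.
        @Async
        public void saveLog(String module, String operation, String args, long timeMs) {
            // e.g. systemLogDao.save(module, userName, method, operation, args, timeMs, ip);
            // or publish to a message queue if the project already uses one.
        }
    }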

    3. Distributed, large-volume log solution: ELK (recommended). Logstash collects, filters, and forwards logs; Elasticsearch stores them; and Kibana queries them, providing centralized management of the large volumes of logs produced by distributed services.

    A brief introduction to ELK:

    ELK stands for Elasticsearch, Logstash, and Kibana, all of which are open-source software. A newer addition is Filebeat, a lightweight log collection and shipping tool (agent). Filebeat consumes few resources and is well suited to collecting logs on individual servers and forwarding them to Logstash; it is also the officially recommended tool for this.

    Elasticsearch is an open-source distributed search engine that collects, analyzes, and stores data. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

    Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data-acquisition methods. The client side is installed on the hosts whose logs need to be collected; the server side filters and modifies the received logs and forwards them to Elasticsearch.

    Kibana is also an open-source, free tool. It provides a log-analysis-friendly web interface for Logstash and Elasticsearch and helps aggregate, analyze, and search important log data.

    This is only a brief introduction to the basic concepts and roles of ELK. Several learning links I collected are pasted here: the ELK official documentation (read this if your English is good), a concrete implementation tutorial on integrating SpringBoot with ELK, and separate tutorials for each component for those who want to understand the underlying principles: an Elasticsearch tutorial, a Logstash tutorial, and a Kibana tutorial.
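
    On the application side, one common practice when shipping logs to ELK is to put request-scoped fields into Slf4j's MDC so that they arrive in Elasticsearch as searchable fields in Kibana. A minimal sketch follows; the field names, and the assumption that the Logback configuration exports MDC entries to Logstash/Filebeat, are mine, not from the article:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class MdcLoggingExample {

        private static final Logger log = LoggerFactory.getLogger(MdcLoggingExample.class);

        public void handleRequest(String traceId, String userId) {
            // MDC values are attached to every log statement on this thread until cleared,
            // so a Logstash/Filebeat pipeline can index them as separate fields.
            MDC.put("traceId", traceId);
            MDC.put("userId", userId);
            try {
                log.info("handling request");
                // ... business logic ...
            } finally {
                MDC.clear();  // avoid leaking context to pooled threads
            }
        }
    }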

    Here is the URL of my personal GitHub project, Springboot-dubbo-zookeeper. It contains a regular Logback log-printing configuration file, thread-pool-based log printing, and a RabbitMQ implementation. Feel free to give it a star.

    Summary: this article mainly analyzed logging solutions for various types of projects and pasted some of the better articles I came across while learning. It is shared mainly for learning purposes; if anything infringes, please contact the author to have it removed.