I had some free time recently and came across PlumeLog, a lightweight distributed log system hosted on Gitee. I studied it, wrote a demo, and took notes along the way.

1. PlumeLog overview

  • A non-invasive distributed log system: it collects logs through log4j, log4j2, and logback appenders, and tags each request with a link (trace) ID so related logs are easy to query together
  • Uses Elasticsearch as the query engine
  • High throughput and efficient queries
  • The whole pipeline uses no local disk space on the application side and is maintenance-free; it is transparent to the project and does not affect how the project runs
  • No changes are needed in existing projects; add the dependency and it works, with support for Dubbo and Spring Cloud

2. Preparation

Server Installation

  • PlumeLog works with either Redis or Kafka as its queue. Redis is enough for most projects, so I use Redis here
  • You also need Elasticsearch; the official download page is www.elastic.co/cn/download…
  • Download the plumelog-server package from gitee.com/frankchenlo…

Starting the services

  • Start Redis and make sure it can be reached from wherever plumelog-server will run (open the port in the server security group and bind Redis to an accessible IP); redis-cli ping should answer PONG
  • Start Elasticsearch, which listens on port 9200 by default; curl http://127.0.0.1:9200 should return the cluster info

3. Modify the configuration

The following files come from the decompressed plumelog-server package.

Modify application.properties. My configuration is pasted below; the main changes are the Redis and ES settings.

spring.application.name=plumelog_server
server.port=8891
spring.thymeleaf.mode=LEGACYHTML5
spring.mvc.view.prefix=classpath:/templates/
spring.mvc.view.suffix=.html
spring.mvc.static-path-pattern=/plumelog/**
# Four possible values: redis, kafka, rest, restServer
# redis means use redis as the queue
# kafka means use kafka as the queue
# rest means pull logs from a rest interface
# restServer means start as a rest interface server
plumelog.model=redis
# If kafka is used, enable the following configuration
#plumelog.kafka.kafkaHosts=172.16.247.143:9092,172.16.247.60:9092,172.16.247.64:9092
#plumelog.kafka.kafkaGroupName=logConsumer
# redis configuration; since version 3.0 the redis address must be configured because it is needed for monitoring alarms
plumelog.redis.redisHost=127.0.0.1:6379
# If redis has a password, enable the following configuration
plumelog.redis.redisPassWord=123456
plumelog.redis.redisDb=0
# If rest is used, enable the following configuration
#plumelog.rest.restUrl=http://127.0.0.1:8891/getlog
#plumelog.rest.restUserName=plumelog
#plumelog.rest.restPassWord=123456
# elasticsearch related configuration
plumelog.es.esHosts=127.0.0.1:9200
# ES 7.* has dropped index types, so leave this commented out on ES 7
#plumelog.es.indexType=plumelog
# Recommended value: daily log volume / ES machine memory size; a reasonable shard size keeps ES writes and queries efficient
plumelog.es.shards=5
plumelog.es.replicas=0
plumelog.es.refresh.interval=30s
# How log indexes are built: day means one index per day, hour means one per hour
plumelog.es.indexType.model=day
# If ES has a password, enable the configuration below
#plumelog.es.userName=elastic
#plumelog.es.passWord=FLMOaqUGamMNkZ2mkJiY
# Maximum batch size pulled from the queue at a time
plumelog.maxSendSize=5000
# UI address
plumelog.ui.url=http://127.0.0.1:8891
# Management password
admin.password=123456
# Log retention in days; 0 or unset means keep forever
admin.log.keepDays=30
# Login configuration
#login.username=admin
#login.password=admin

Do not start plumelog-server yet; it is started last, after everything else is set up.

To improve performance, the following configurations are recommended, depending on daily log volume and disk type.

Daily log volume under 50 GB, on SSDs:

plumelog.es.shards=5

plumelog.es.replicas=0

plumelog.es.refresh.interval=30s

plumelog.es.indexType.model=day

Daily log volume over 50 GB, on mechanical disks:

plumelog.es.shards=5

plumelog.es.replicas=0

plumelog.es.refresh.interval=30s

plumelog.es.indexType.model=hour

Daily log volume over 100 GB, on mechanical disks:

plumelog.es.shards=10

plumelog.es.replicas=0

plumelog.es.refresh.interval=30s

plumelog.es.indexType.model=hour

Daily log volume over 1000 GB, on SSDs (this configuration handles more than 10 TB a day without problems):

plumelog.es.shards=10

plumelog.es.replicas=1

plumelog.es.refresh.interval=30s

plumelog.es.indexType.model=hour

If you increase plumelog.es.shards or run in hour mode, also raise the maximum number of shards allowed per node in the ES cluster:

PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "max_shards_per_node":100000
    }
  }
}

4. Create a SpringBoot project

Project configuration

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.4.3</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo-mybatis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo-mybatis</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.1.4</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- plumelog -->
        <dependency>
            <groupId>com.plumelog</groupId>
            <artifactId>plumelog-logback</artifactId>
            <version>3.3</version>
        </dependency>
        <dependency>
            <groupId>com.plumelog</groupId>
            <artifactId>plumelog-trace</artifactId>
            <version>3.3</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-aop</artifactId>
            <version>2.1.11.RELEASE</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>cn.hutool</groupId>
            <artifactId>hutool-core</artifactId>
            <version>5.5.8</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Two configuration files go under resources:

application.properties

server.port=8888
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true&useSSL=false
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=123456
spring.datasource.hikari.maximum-pool-size=10
# mybatis configuration
mybatis.type-aliases-package=com.example.demomybatis.model
mybatis.configuration.map-underscore-to-camel-case=true
mybatis.configuration.default-fetch-size=100
mybatis.configuration.default-statement-timeout=30
# Jackson date/time formatting (yyyy, not YYYY: YYYY is the week-based year)
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss
spring.jackson.time-zone=GMT+8
spring.jackson.serialization.write-dates-as-timestamps=false
# SQL log printing
logging.level.com.example.demomybatis.dao=debug
# log file location
logging.file.path=/log

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="consoleApp" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </layout>
    </appender>

    <appender name="fileInfoApp" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>ACCEPT</onMismatch>
        </filter>
        <encoder>
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- path -->
            <fileNamePattern>log/info.%d.log</fileNamePattern>
        </rollingPolicy>
    </appender>

    <appender name="fileErrorApp" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <encoder>
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- path -->
            <fileNamePattern>log/error.%d.log</fileNamePattern>
            <!-- Maximum number of archived files to keep; older ones are deleted.
                 For example, with monthly rolling and maxHistory=1, only the last month's files are kept. -->
            <maxHistory>1</maxHistory>
        </rollingPolicy>
    </appender>

    <appender name="plumelog" class="com.plumelog.logback.appender.RedisAppender">
        <appName>mybatisDemo</appName>
        <redisHost>127.0.0.1</redisHost>
        <redisAuth>123456</redisAuth>
        <redisPort>6379</redisPort>
        <runModel>2</runModel>
    </appender>

    <!-- root must come last -->
    <root level="INFO">
        <appender-ref ref="consoleApp"/>
        <appender-ref ref="fileInfoApp"/>
        <appender-ref ref="fileErrorApp"/>
        <appender-ref ref="plumelog"/>
    </root>
</configuration>

The logback configuration adds a plumelog appender and references it in root; this is the piece that pushes logs to the Redis queue.
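
With the appender in place, ordinary slf4j logging calls are all it takes; business code needs nothing plumelog-specific. A minimal sketch (this test controller is my own illustration, not part of the demo):

package com.example.demomybatis.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical test controller: plain slf4j calls go through logback,
// and the plumelog RedisAppender ships them to Redis automatically.
@RestController
public class LogTestController {

    private static final Logger log = LoggerFactory.getLogger(LogTestController.class);

    @GetMapping("/hello")
    public String hello() {
        log.info("hello from mybatisDemo");               // shows up in the plumelog UI
        log.error("sample error for the error appender"); // also matched by fileErrorApp
        return "ok";
    }
}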

TraceId and link tracing global configuration

TraceId interceptor configuration

package com.example.demomybatis.common.interceptor;

import com.plumelog.core.TraceId;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.ModelAndView;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.UUID;

/**
 * Created by wenbo on 2021/2/25.
 */
@Component
public class Interceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // Assign a fresh traceId to every request so all logs it produces can be queried together
        TraceId.logTraceID.set(UUID.randomUUID().toString().replaceAll("-", ""));
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
                           ModelAndView modelAndView) {
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler,
                                Exception ex) {
    }
}

And the configuration class that registers it:

package com.example.demomybatis.common.interceptor;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

/**
 * Created by wenbo on 2021/2/25.
 */
@Configuration
public class InterceptorConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the traceId interceptor to every request path
        registry.addInterceptor(new Interceptor()).addPathPatterns("/**");
    }
}
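
The interceptor only covers HTTP requests. Code that runs outside a request, such as a scheduled job, never passes through it, so its logs get no traceId unless you set one by hand with the same TraceId API. A sketch under that assumption (the task class is my own illustration, and @EnableScheduling must be enabled somewhere in the project):

package com.example.demomybatis.common.task;

import com.plumelog.core.TraceId;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.UUID;

// Hypothetical example: scheduled jobs bypass the MVC interceptor,
// so a traceId is set manually before any logging happens.
@Component
public class SampleTask {

    private static final Logger log = LoggerFactory.getLogger(SampleTask.class);

    @Scheduled(fixedDelay = 60000)
    public void run() {
        // Same call the interceptor makes for web requests
        TraceId.logTraceID.set(UUID.randomUUID().toString().replaceAll("-", ""));
        log.info("scheduled task executed"); // carries the traceId set above
    }
}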

Global link tracing configuration

package com.example.demomybatis.common.config;

import com.plumelog.trace.aspect.AbstractAspect;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

/**
 * Created by wenbo on 2021/2/25.
 */
@Aspect
@Component
public class AspectConfig extends AbstractAspect {
    @Around("within(com.example..*))")
    public Object around(JoinPoint joinPoint) throws Throwable {
        return aroundExecute(joinPoint);
    }
}

Replace the pointcut path here (com.example..*) with your own package path.

Also add com.plumelog to the component scan on the project's startup class:

@ComponentScan({"com.plumelog","com.example.demomybatis"})
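
For reference, a sketch of where that annotation sits on the startup class (the class name is assumed from the artifact name, not taken from the article):

package com.example.demomybatis;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
// com.plumelog must be scanned so plumelog-trace's supporting beans are registered;
// listing your own base package as well keeps your own beans in the scan.
@ComponentScan({"com.plumelog", "com.example.demomybatis"})
public class DemoMybatisApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoMybatisApplication.class, args);
    }
}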

The full demo is available at gitee.com/wen_bo/dem…

5. Start the project

Start plumelog-server with java -jar plumelog-server-3.2.jar

Go to http://127.0.0.1:8891

The default username and password are both admin.

Finally, I’ll show you the renderings


Author: wenbo

Original link: www.haowenbo.com/articles/20…