Preface

Welcome to our GitHub repository, worth a star: github.com/bin39232820… The best time to plant a tree was ten years ago; the second-best time is now.


Today the company hooked a project up to Graylog, so Xiao Liu wrote this article to record the process for future reference. I've noticed that for a lot of things, we Baidu them once, forget them after a while, and then Baidu them all over again. So why not take some time to write down the process the first time I look it up? If the notes are a bit thorough, then whenever I need this again it will be much easier to find.

Graylog introduction

Graylog is a production-grade log collection system that integrates MongoDB and Elasticsearch: MongoDB stores Graylog's metadata and configuration, while Elasticsearch stores the log data itself.

The architecture diagram is as follows:

The production environment configuration diagram is as follows:

Install Graylog

Here we install Graylog, MongoDB, and Elasticsearch together via docker-compose.

The docker-compose.yml file contains the following:

```yaml
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:3
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 512m
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:3.0
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    links:
      - mongodb:mongo
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
```
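With the file saved as docker-compose.yml, the stack can be brought up in the usual way (a sketch; it assumes Docker and docker-compose are already installed on the host):

```shell
# Start MongoDB, Elasticsearch and Graylog in the background
docker-compose up -d

# Follow Graylog's startup log; the web UI is ready once the server reports it is up
docker-compose logs -f graylog
```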

For other installation methods, see the official documentation: docs.graylog.org/en/3.0/page…

Configure Graylog

Open http://ip:9000 in the browser, as shown in the figure:

The default username and password are both admin, as shown in the figure.

Open System > Inputs to add an input source, as shown in the figure.

Here we take GELF UDP as an example: select GELF UDP in the position shown in the figure, then click Launch New Input, as shown in the figure.
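To sanity-check the new input, one can hand-build a minimal GELF message and fire it at the GELF UDP port. This is a throwaway sketch, not the author's setup: the host/port assume the compose file above, and the class/field names are made up for illustration (GELF 1.1 requires at least version, host, and short_message).

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Sends one hand-built GELF message over UDP so the new input can be smoke-tested. */
public class GelfSmokeTest {

    /** Builds a minimal GELF 1.1 payload; version, host and short_message are mandatory. */
    static String buildGelf(String host, String shortMessage, int level) {
        return String.format(
                "{\"version\":\"1.1\",\"host\":\"%s\",\"short_message\":\"%s\",\"level\":%d}",
                host, shortMessage, level);
    }

    public static void main(String[] args) throws Exception {
        String payload = buildGelf("demo-host", "hello graylog", 6); // 6 = INFO (syslog level)
        byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);
        // 127.0.0.1:12201 assumes the GELF UDP port mapped in the compose file above
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(bytes, bytes.length,
                    InetAddress.getByName("127.0.0.1"), 12201));
        }
        System.out.println(payload);
    }
}
```

If the input is running, the message shows up in the Graylog search view within a few seconds.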

So what is a stream, then? A stream routes messages into different channels according to rules you define.

Output Spring Boot logs to Graylog

First, let's add a traceId.

Since a service may run as multiple replicas, how do we pull together all the log lines that belong to one request chain? We add a traceId to every log line.
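For illustration, the traceId used here is nothing more than a random UUID with its dashes stripped, giving a 32-character hex string. A minimal stand-alone sketch of that helper (the class name is made up for the demo):

```java
import java.util.UUID;

/** Stand-alone sketch of the traceId format used by the filter below. */
public class TraceIdDemo {

    /** A 32-character hex string: a random UUID with the dashes removed. */
    static String newTraceId() {
        return UUID.randomUUID().toString().replace("-", "");
    }

    public static void main(String[] args) {
        String id = newTraceId();
        System.out.println(id.length()); // always 32
    }
}
```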

Add a LogMdcFilter

```java
package cn.xbz.common.filter;

import org.slf4j.MDC;

import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import java.io.IOException;
import java.util.UUID;

/**
 * Adds a traceId to the MDC for every request
 *
 * @author Xingbz
 * @createdate 2019-4-12
 */
@WebFilter(urlPatterns = "/*", filterName = "logMdcFilter")
public class LogMdcFilter implements Filter {
    private static final String UNIQUE_ID = "traceId";

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        boolean bInsertMDC = insertMDC();
        try {
            chain.doFilter(request, response);
        } finally {
            if (bInsertMDC) {
                MDC.remove(UNIQUE_ID);
            }
        }
    }

    @Override
    public void destroy() {
    }

    /** Puts a dash-free random UUID into the MDC under the traceId key. */
    private boolean insertMDC() {
        UUID uuid = UUID.randomUUID();
        String uniqueId = uuid.toString().replace("-", "");
        MDC.put(UNIQUE_ID, uniqueId);
        return true;
    }
}
```

Configure the Logback log format

```xml
...
<pattern>%d{HH:mm:ss} [%thread][%X{traceId}] %-5level %logger{36} - %msg%n</pattern>
...
```

A supplement for asynchronous tasks

After the first two steps, requests coming in from the front end get a traceId in their log output. However, scheduled or asynchronous tasks never pass through the servlet filter, so we need to add one more class.

```java
package cn.xbz.common.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import java.util.UUID;

/**
 * Adds a traceId to the MDC for @Async tasks
 *
 * @author Xingbz
 * @createdate 2019-4-16
 */
@Aspect
@Component
public class LogMdcAspect {
    private static final String UNIQUE_ID = "traceId";

    @Pointcut("@annotation(org.springframework.scheduling.annotation.Async)")
    public void logPointCut() {
    }

    @Around("logPointCut()")
    public Object around(ProceedingJoinPoint point) throws Throwable {
        MDC.put(UNIQUE_ID, UUID.randomUUID().toString().replace("-", ""));
        try {
            return point.proceed();
        } finally {
            // clean up even when the task throws, so the traceId does not leak
            MDC.remove(UNIQUE_ID);
        }
    }
}
```

Done. Now logs can be filtered by traceId, which is very convenient.

Now let's also look at our full Logback configuration file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <!-- values from application.yml -->
    <springProperty scope="context" name="graylogHost" source="logback.grayLog.host"/>
    <springProperty scope="context" name="graylogPort" source="logback.grayLog.port"/>
    <springProperty scope="context" name="originHost" source="logback.grayLog.originHost"/>
    <springProperty scope="context" name="appName" source="logback.grayLog.appName"/>
    <springProperty scope="context" name="appVersion" source="app.version"/>

    <property name="log.path" value="logs"/>
    <property name="log.maxHistory" value="15"/>
    <property name="maxFileSize" value="100MB"/>
    <property name="totalSizeCap" value="20GB"/>
    <property name="log.colorPattern"
              value="%magenta(%d{yyyy-MM-dd HH:mm:sss}) %highlight(%-5level) %boldCyan(${springAppName:-}) %yellow(%thread) %red([%X{traceId}]) %green(%logger) %msg%n"/>
    <property name="log.pattern"
              value="%d{yyyy-MM-dd HH:mm:sss} %-5level ${springAppName:-} %thread %red([%X{traceId}]) %logger %msg%n"/>

    <!-- output to the console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.colorPattern}</pattern>
        </encoder>
    </appender>

    <!-- output to the log platform (Graylog, GELF over UDP) -->
    <appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
        <graylogHost>${graylogHost}</graylogHost>
        <graylogPort>${graylogPort}</graylogPort>
        <encoder class="de.siegmar.logbackgelf.GelfEncoder">
            <originHost>${originHost}</originHost>
            <includeRawMessage>false</includeRawMessage>
            <includeMarker>true</includeMarker>
            <includeMdcData>true</includeMdcData>
            <includeCallerData>false</includeCallerData>
            <includeRootCauseData>false</includeRootCauseData>
            <includeLevelName>true</includeLevelName>
            <shortPatternLayout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%m%nopex</pattern>
            </shortPatternLayout>
            <fullPatternLayout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%m%n</pattern>
            </fullPatternLayout>
            <staticField>app_name:${appName}</staticField>
            <staticField>app_version:${appVersion}</staticField>
            <staticField>os_arch:${os.arch}</staticField>
            <staticField>os_name:${os.name}</staticField>
            <staticField>os_version:${os.version}</staticField>
            <staticField>uri:%X{uri}</staticField>
            <staticField>uid:%X{uid}</staticField>
            <staticField>ip:%X{ip}</staticField>
            <staticField>traceId:%X{traceId}</staticField>
        </encoder>
    </appender>

    <!-- output to file -->
    <appender name="file_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/info/info.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!-- if a log file exceeds 100MB, roll over starting at index 0,
                 e.g. adapterError-1992-11-06.0.log -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>${maxFileSize}</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <MaxHistory>${log.maxHistory}</MaxHistory>
            <totalSizeCap>${totalSizeCap}</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>${log.colorPattern}</pattern>
        </encoder>
        <!-- <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter> -->
    </appender>

    <appender name="file_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/error/error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!-- if a log file exceeds 100MB, roll over starting at index 0,
                 e.g. adapterError-1992-11-06.0.log -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>${maxFileSize}</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <MaxHistory>${log.maxHistory}</MaxHistory>
            <totalSizeCap>${totalSizeCap}</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>${log.colorPattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- output to a logstash appender -->
    <!-- <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${logstash.url}:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender> -->

    <root level="debug">
        <appender-ref ref="console"/>
    </root>
    <root level="info">
        <appender-ref ref="GELF"/>
        <appender-ref ref="file_info"/>
        <appender-ref ref="file_error"/>
        <!-- <appender-ref ref="logstash"/> -->
    </root>
</configuration>
```
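The GELF appender in this configuration comes from the third-party logback-gelf library (groupId de.siegmar), which provides de.siegmar.logbackgelf.GelfUdpAppender and GelfEncoder. A Maven dependency along these lines is needed; the version below is only illustrative, so check Maven Central for one compatible with your Logback:

```xml
<dependency>
    <groupId>de.siegmar</groupId>
    <artifactId>logback-gelf</artifactId>
    <!-- version is illustrative; pick one matching your logback version -->
    <version>2.0.0</version>
</dependency>
```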

The graylogHost configured here is the address of the server running Graylog (the host machine's IP), and logs are sent to it over UDP.
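For completeness, the springProperty sources referenced in the Logback file above would map to application.yml entries along these lines (the values are placeholders; substitute your own server's IP and service name):

```yaml
spring:
  application:
    name: demo-service

app:
  version: 1.0.0

logback:
  grayLog:
    host: 192.168.1.100  # IP of the host running the Graylog container
    port: 12201          # GELF UDP input port
    originHost: demo-service-1
    appName: demo-service
```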

In closing

OK, that's all. This is really just a record of a simple setup, kept here so it's easy to look up later.

The daily plea for likes

OK everyone, that's all for this article. If you've read this far, you're a true fan.

Creating content isn't easy; your support and recognition are the greatest motivation for my writing. See you in the next article!

This is an original article from the "Six Meridians Sword" series. If there are any errors in this post, please point them out; I'd be much obliged!