Why do you need a distributed logging component?


Before getting into the article proper, I'd like to describe a system I used to be responsible for. Its architecture looked like this:

Every time I investigated an issue, I could roughly locate the problem at the logic level, but to explain it to the business side I needed hard evidence, and the log entries were that proof.

A given request must have been handled by one of those eight machines, but I had no way of knowing which one. So I had to grep the logs on each machine in turn to find the entry that backed up my analysis.

Sometimes the access layer had to be pulled in as well, and by the time a single problem was tracked down I was worn out (scrolling through logs simply takes too long).

Later I saw how a colleague handled it: he wrote an iTerm2 script that logged in to the bastion host automatically (no need to type an account or password), split the window into panes matching the number of application servers, and switched each pane to the corresponding log directory. Essentially one click to log in to all the application servers, which was much faster than before.

More recently, the company's ops team rolled out a web page for logging in to application servers (it goes through the bastion automatically and supports batch operations), which did away with the scripts. In terms of experience, though, it never felt as smooth as iTerm2; it always seemed laggy.

There was still a problem: most of the time we didn't know whether the entry we wanted lived in the info, warn, or error file, so we usually had to check them one by one. You can grep across all of them with a wildcard, but when the logs are large, the resulting lag is also very annoying.

Once the system started drawing business-side questions, I was checking logs far too often. So in one quarter's planning I proposed writing the log data into a search engine and picking up some search-engine knowledge along the way. Then someone on the team saw the plan and commented: why not just use Graylog?

It turned out the team already had a logging platform in place; I just didn't know about it… So I hooked my system up to Graylog, my productivity shot up, and that quarter's goal was ticked off in the process.

I haven't logged in to an application server since, and at one point I almost forgot how to write a grep command.

01. Lightweight ELK (Graylog)

Speaking of ELK: even if you haven't used it, you've almost certainly heard of it, because it really is popular on the back end. This time Austin uses a lighter alternative to the ELK stack: Graylog.

From a user's point of view this framework is very easy to use. (I suspect it's also fairly simple to operate: many Graylog users send logs over UDP directly to the server instead of installing an agent on each machine to collect them.)

A picture is worth a thousand words:

Official documentation: docs.graylog.org/docs

I know quite a few companies use it for viewing logs and for business monitoring alerts, so I'll give you a taste of it in this article.

02. Deploy Graylog

As usual, I deploy it with docker-compose. If you've been following this series, you should be familiar with docker-compose by now. Here is the content of docker-compose.yml:

version: '3'
services:
  mongo:
    image: mongo:4.2
    networks:
      - graylog
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Dlog4j2.formatMsgNoLookups=true -Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  graylog:
    image: graylog/graylog:4.2
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://ip:9009/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    networks:
      - graylog
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - 9009:9000
      - 1514:1514
      - 1514:1514/udp
      - 12201:12201
      - 12201:12201/udp
networks:
  graylog:
    driver: bridge

The only thing you need to change in this file is the ip (the original port is 9000, but port 9000 was already taken on my machine, so I mapped it to 9009 instead; feel free to use whatever suits you).

Once the file is saved, a single docker-compose up -d starts everything.

After startup you can open the Graylog console at ip:port. The default account and password are admin/admin.

Next, configure an input: go to Inputs, find GELF UDP, click Launch new input, fill in the Title field, and save.

Well, at this point our Graylog setup is done.
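Before wiring up the application, you can sanity-check that the GELF UDP input is actually receiving data by sending it a hand-built GELF message. Below is a minimal sketch (not part of the original setup) that assumes the 12201/udp port from the compose file above; your-graylog-ip is a placeholder you would replace:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class GelfUdpSmokeTest {
    public static void main(String[] args) throws Exception {
        // Minimal GELF 1.1 payload: version, host and short_message are the required fields.
        String gelf = "{\"version\":\"1.1\",\"host\":\"smoke-test\","
                + "\"short_message\":\"hello graylog\",\"level\":6}";
        byte[] data = gelf.getBytes(StandardCharsets.UTF_8);

        // "your-graylog-ip" is a placeholder; 12201 is the UDP port exposed in docker-compose.yml.
        InetAddress address = InetAddress.getByName("your-graylog-ip");
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length, address, 12201));
        }
        System.out.println("GELF message sent; check the input's received messages in the console");
    }
}

If the message shows up under the input, the networking side is fine and whatever the application sends later should arrive the same way.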

03. Using Graylog with Spring Boot

Remember the logging framework we used in the Austin project? That’s right, logback. Writing log data to Graylog is easy in two steps:

1. Introduce dependencies:

<dependency>
  <groupId>de.siegmar</groupId>
  <artifactId>logback-gelf</artifactId>
  <version>3.0.0</version>
</dependency>

2. Configure a GELF appender in logback.xml:

<appender name="GELF" class="de.siegmar.logbackgelf.GelfUdpAppender">
  <!-- Graylog server address -->
  <graylogHost>ip</graylogHost>
  <!-- UDP input port -->
  <graylogPort>12201</graylogPort>
  <!-- Maximum GELF chunk size in bytes; 508 is the recommended minimum and 65467 the maximum -->
  <maxChunkSize>508</maxChunkSize>
  <!-- Whether to compress the payload -->
  <useCompression>true</useCompression>
  <encoder class="de.siegmar.logbackgelf.GelfEncoder">
    <!-- Whether to send the raw log message -->
    <includeRawMessage>false</includeRawMessage>
    <includeMarker>true</includeMarker>
    <includeMdcData>true</includeMdcData>
    <includeCallerData>false</includeCallerData>
    <includeRootCauseData>false</includeRootCauseData>
    <!-- Whether to send the log level name; otherwise the level is sent as a number -->
    <includeLevelName>true</includeLevelName>
    <shortPatternLayout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%m%nopex</pattern>
    </shortPatternLayout>
    <fullPatternLayout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d - [%thread] %-5level %logger{35} - %msg%n</pattern>
    </fullPatternLayout>
    <!-- Custom static field sent with every message -->
    <staticField>app_name:austin</staticField>
  </encoder>
</appender>

The only thing to change in this configuration is the ip. Remember that the appender also has to be referenced by a logger (for example an <appender-ref ref="GELF"/> inside <root>) for it to take effect. With that in place the integration is done; open the Graylog console again and you can see the log messages coming in.

04. Graylog query syntax

On Graylog's query syntax: I only use a handful of queries day to day, so let me show you the ones I actually rely on. If they aren't enough, the official documentation covers the rest: docs.graylog.org/docs/query-…

full_message:"13788888888"

level_name:"ERROR"

level_name:"INFO" AND full_message:"13788888888"
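Because the appender configuration above sets includeMdcData to true, anything you put into the SLF4J MDC is shipped to Graylog as a separate field and can be queried just like the fields above. A minimal sketch; the traceId field name and the values here are only illustrative, not something defined by the project:

import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcLoggingExample {

    private static final Logger log = LoggerFactory.getLogger(MdcLoggingExample.class);

    public void handleRequest(String phone) {
        // Put a per-request identifier into the MDC; with includeMdcData=true the
        // GELF encoder sends it as its own field on every message logged afterwards.
        MDC.put("traceId", UUID.randomUUID().toString());
        try {
            log.info("send sms to {}", phone);
        } finally {
            // Clean up so the value does not leak into other requests handled by this thread.
            MDC.remove("traceId");
        }
    }
}

In the console you could then search traceId:"<the id>", or combine it with the queries above, e.g. level_name:"INFO" AND traceId:"<the id>".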

If you were paying attention, you may have noticed that the input we selected is a GELF input, and that GELF also shows up in the Maven dependency we imported. So what exactly is GELF?

There is also a corresponding explanation on the official website: The Graylog Extended Log Format (GELF) is a log format that avoids the shortcomings of classic plain syslog

Details: docs.graylog.org/docs/gelf

In short, GELF is a log format that avoids some of the problems of traditional syslog, and the Maven dependency we introduced formats each log event as GELF and appends it to Graylog.

05. A few impressions of Swagger

A few days ago someone sent me a pull request on GitHub related to Swagger. I merged it yesterday, and it upgraded the Swagger version as well.

I had never used a documentation tool like Swagger before, so this pull request was also my chance to try it out.

At first glance it feels pretty good: it gathers the documentation for all of the project's interfaces on a single page, and you can fire off requests directly with sample parameters. The documentation is driven by annotations in the code, so there's no worrying about changing the code and forgetting to update a separate document.
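For readers who haven't seen this style, the sketch below shows roughly what annotation-driven documentation looks like. It assumes the springfox-style io.swagger annotations; the controller, path, and descriptions are made up for illustration and are not part of the Austin project:

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// The annotations double as documentation: swagger-ui reads them and renders
// a page listing the interface, its description, and its parameters.
@Api(tags = "message sending")
@RestController
public class DemoSendController {

    @ApiOperation("send a message with the given content")
    @PostMapping("/demo/send")
    public String send(@ApiParam("request body, shape is illustrative only") @RequestBody String request) {
        return "ok";
    }
}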

However, once I had configured the parameter documentation and tried it out in swagger-ui, I found the UI genuinely ugly; looking at it, I decided to stop there.

Swagger has several competitors, and I think their UIs are better than Swagger's. But the Austin project currently has only one main interface, and as a seasoned Markdown engineer I can document that easily enough, so I didn't go on to try the alternatives.