ELK is an acronym for Elasticsearch, Logstash, and Kibana. The three components divide the work as follows: Elasticsearch stores, analyzes, and searches logs; Logstash collects, transforms, and forwards logs; Kibana provides a user-friendly interface for visualizing and analyzing the data.
This article uses Docker to build a relatively simple ELK logging system, and assumes basic familiarity with the following:
- SpringBoot
- Logback
- Docker (Docker Compose)
First of all, a Spring Boot project is required to generate some logs. The earlier project (AppBubbleBackend) is reused directly here; if you are interested, see the previous article "SpringBoot Integrated gRPC Microservice Project Construction Practice". Project setup and Docker knowledge are not covered again here; all of the ELK configuration mentioned in this article lives in the AppBubbleBackend/scripts/ELK directory.
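The actual Compose file lives in that directory; as a rough orientation only, a minimal sketch of an ELK stack using the service names assumed throughout this article (elasticsearch, logstash, kibana) might look like this. The image version tag and the pipeline mount path are assumptions, not taken from the project:

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    environment:
      - discovery.type=single-node        # single-node cluster, fine for local testing
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:6.6.1
    volumes:
      # pipeline config shown in the "Collecting logs" section below
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "4560:4560"                       # TCP input used by LogstashTcpSocketAppender
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"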
Generating logs (Logback)
The example integrates the Logback logging library into the Spring Boot project to generate logs. First, add the required dependencies:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- logstash-logback-encoder is not managed by the Spring Boot BOM,
         so an explicit <version> is normally required here -->
</dependency>
logback-classic and logback-core are the libraries provided by Logback itself. logstash-logback-encoder provides components for processing logs, mainly for formatting Logback events and sending them to the Logstash server.
Once the Logback dependencies are configured, Spring Boot automatically uses Logback as the default logging system, and only logstash-logback-encoder needs to be configured for logs to be sent correctly. Following Spring Boot's log configuration conventions, create a logback-spring.xml configuration file in the resources directory with the following content:
<configuration>
    <springProperty scope="local" name="appname" source="spring.application.name"
                    defaultValue="undefined"/>
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>logstash:4560</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"appname":"${appname}"}</customFields>
        </encoder>
    </appender>
    <root level="TRACE">
        <appender-ref ref="logstash"/>
    </root>
    <logger name="com.bubble" level="TRACE"/>
    <logger name="com.bubble.user"/>
    <logger name="com.bubble.common"/>
</configuration>
The configuration above uses net.logstash.logback.appender.LogstashTcpSocketAppender to define an appender, sets its destination to the Logstash server address, and adds a custom field, appname, whose use is shown later.
With the dependencies and Logback configured, log events are handed to the LogstashTcpSocketAppender, which encodes them and sends them to the Logstash server.
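With that in place, logs are produced through the normal SLF4J API; nothing Logstash-specific appears in application code. A minimal sketch (the class name and message are illustrative, not taken from the project):

package com.bubble.user;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class UserServiceApplication {

    private static final Logger log = LoggerFactory.getLogger(UserServiceApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
        // Encoded as JSON by LogstashEncoder and shipped to logstash:4560
        // by the appender configured in logback-spring.xml above.
        log.info("User service started");
    }
}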
Collecting logs (Logstash)
As mentioned above, logs are handled by the LogstashTcpSocketAppender and sent to the server, so a Logstash instance must be configured to receive them. The logstash.yml settings file needs little attention; the defaults are sufficient. The pipeline configuration is divided into input, filter, and output sections, whose main functions are as follows: input receives logs; filter modifies them; output sends them to the destination.
The logback-spring.xml configuration file above uses LogstashTcpSocketAppender to send logs, with the destination configured as logstash:4560, so the pipeline first needs an input to receive them:
input {
  tcp {
    port => 4560
    codec => json_lines
    id => "main"
  }
}
Then add an output to send the log to Elasticsearch:
output {
  #stdout { codec => rubydebug { metadata => true } }
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-services-%{+YYYY.MM.dd}"
  }
}
The index option sets the name of the Elasticsearch index that logs are written to, and the %{+YYYY.MM.dd} placeholder expands to the event's date, so on 2019-03-18 the index name will be logstash-services-2019.03.18.
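No filter is used in this pipeline, but one could be added between input and output to modify events before they are indexed. A hedged sketch using the standard mutate plugin (the field name and value are purely illustrative):

filter {
  mutate {
    # attach an extra field to every event (illustrative)
    add_field => { "env" => "dev" }
  }
}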
The Logstash configuration can be found in the official documentation:
- plugins-inputs-tcp
- plugins-outputs-elasticsearch
Storing logs (Elasticsearch)
The configuration of Elasticsearch is a bit more involved than that of the previous two components, but since the goal here is a simple test setup, the default configuration is used. Once Elasticsearch is started, if the Logstash configuration is correct, the Get Index API can be used to retrieve the index information.
Check the index information for 2019.03.16:
curl http://elasticsearch:9200/logstash-services-2019.03.16
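If the exact date is not known, Elasticsearch's _cat indices API lists all indices, which is a convenient way to confirm that the daily logstash-services-* indices are being created:

curl http://elasticsearch:9200/_cat/indices?v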
Since a custom appname field was added in the logback-spring.xml configuration, logs can also be queried by appname:
curl http://localhost:9200/logstash-*/_search?pretty -X GET -H 'Content-Type: application/json' -d '{"query":{"match":{"appname":{"query":"AppBubbleUserService"}}}}'
If information similar to the following is returned, the logs have been successfully sent to Elasticsearch:
{
  "_index" : "logstash-services-2019.03.18",
  "_type" : "doc",
  "_id" : "6ksRj2kBJ7x4nqLIOHp6",
  "_score" : 0.0011799411,
  "_source" : {
    "thread_name" : "main",
    "@version" : "1",
    "level" : "TRACE",
    "appname" : "AppBubbleUserService",
    "host" : "appbubblebackend_user_1.appbubblebackend_default",
    "port" : 56952,
    "level_value" : 5000,
    "logger_name" : "org.springframework.boot.context.config.ConfigFileApplicationListener",
    "@timestamp" : "2019-03-18T04:31:02.983Z",
    "message" : "Skipped missing config classpath:/application-default.xml for profile default"
  }
}
Analyzing logs (Kibana)
Now that logs are flowing into Elasticsearch, they can be searched and viewed with Kibana. Add a simple kibana.yml configuration file:
elasticsearch:
  hosts:
    - "http://elasticsearch:9200"
server:
  name: "kibana"
  host: "0.0.0.0"
This configures the Elasticsearch server address. Then start Kibana and add an index pattern: go to Management >> Index Patterns >> Create Index Pattern, and use a pattern such as logstash-services-* to match the daily indices.
Then open the Discover page and select the newly created index pattern from the drop-down list to browse the collected logs.
Kibana has many more powerful features, which are left for future exploration.
Conclusion
Compared with traditional log handling, ELK can greatly improve log retrieval efficiency. Using ELK in a production environment requires further study and a more detailed configuration plan. In follow-up work I will continue to study Elasticsearch and Logstash in more depth. Meanwhile, for tools that do not support sending logs remotely, Beats will be used to feed the log system; for example, Envoy or Nginx access logs can be shipped to Logstash with Filebeat.
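As a rough illustration of that direction only, a minimal Filebeat configuration that tails an Nginx access log and forwards it to Logstash might look like the following; the log path and the Logstash beats port are assumptions, and the Logstash side would also need a corresponding beats input:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log     # Nginx access log (path is an assumption)
output.logstash:
  hosts: ["logstash:5044"]            # 5044 is the conventional beats port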
References
- logstash-logback-encoder
- boot-features-custom-log-configuration
- The logback manual