1 Overview
In a microservices architecture, many applications are deployed: infrastructure applications such as gateways and service discovery, plus plenty of business applications. Collecting their logs effectively, making them easy to query, and visualizing them in a friendly way is a great help in taming the complexity of microservices; in a highly complex system, logs are vital for locating problems in production. ELK (ElasticSearch + Logstash + Kibana) is the most popular solution for building a log platform, favored by developers mainly because it solves the various pain points of log collection in large-scale systems.
2 ELK Stack
ELK (ElasticSearch + Logstash + Kibana) consists of three components:
- ElasticSearch
- Logstash
- Kibana
2.1 ElasticSearch
ElasticSearch is an open-source distributed search engine based on Apache Lucene, and the most important component in the ELK Stack. It stores data and exposes flexible, useful REST APIs, so upper-layer applications can query, use, and analyze the data as needed. In the log platform, all log data is stored in ElasticSearch, and its powerful search capability lets you query logs flexibly.
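For example, once logs are indexed, they can be searched directly through the REST API. A minimal sketch, assuming ElasticSearch is listening on the default port 9200 and using the logback-* index names configured later in this article:

curl -X GET "localhost:9200/logback-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
        "query": { "match": { "message": "error" } }
      }'

This returns, as JSON, the log events whose message field matches "error".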
2.2 Logstash
Logstash is used to collect data and store it in ElasticSearch.
Logstash is rich in plugins and easy to extend, so after collecting data with Logstash you can do a lot of processing before exporting it to ElasticSearch. In the log platform, Logstash is mainly responsible for collecting application logs.
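For instance, a filter plugin can sit between input and output to enrich or reshape events before they reach ElasticSearch. A minimal sketch using the mutate filter (the env field is illustrative, not part of the setup in this article):

filter {
  mutate {
    # Tag every event with the environment it came from.
    add_field => { "env" => "dev" }
  }
}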
2.3 Kibana
Kibana is responsible for reading data from ElasticSearch and visualizing it. It also ships with a console tool (Dev Tools) that makes it easy to call ElasticSearch's REST API. In the log platform, we view the logs through Kibana.
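In that console, requests can be written in a shorthand form. A small example, assuming the logback-* indices created later in this article:

GET /logback-*/_search
{
  "query": { "match_all": {} }
}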
3 Architecture
A log platform architecture built with ELK:

This is a simplified version of the log collection architecture, from which many ELK-based logging architectures have evolved. Its core idea is that log data ends up stored in ElasticSearch, while the collection pipeline in front of it can vary. For example, you can first collect logs into Kafka and then export them to ElasticSearch with Logstash; putting Kafka in the middle opens up many more possibilities for using the data.
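As a sketch of that Kafka variant, the file input used later in this article could be swapped for a Kafka consumer. The broker address and topic name below are hypothetical, not part of the setup in section 4:

input {
  kafka {
    bootstrap_servers => "localhost:9092"  # hypothetical broker address
    topics => ["app-logs"]                 # hypothetical topic
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "logback-%{+YYYY.MM.dd}"
  }
}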
4 Set up a log platform
System: Ubuntu 16.04, 64-bit
Download ElasticSearch, Logstash, and Kibana from the official website, making sure the versions match one another (this article uses the 6.0.x series). For demonstration purposes, install all the ELK programs and the Java application on one machine, laid out as follows:
noone@ubuntu:/opt$ tree -L 1
.
├── elasticsearch-6.0.0
├── gs-spring-boot
├── kibana-6.0.1-linux-x86_64
├── logs
└── logstash-6.0.1
4.1 Configuration
For easy management, use Systemd to manage ElasticSearch and Kibana.
/etc/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch

[Service]
Environment=ES_HOME=/opt/elasticsearch-6.0.0
Environment=ES_PATH_CONF=/opt/elasticsearch-6.0.0/config
Environment=PID_DIR=/var/run/elasticsearch
WorkingDirectory=/opt/elasticsearch-6.0.0
User=noone
Group=noone
ExecStart=/opt/elasticsearch-6.0.0/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
/etc/systemd/system/kibana.service
[Unit]
Description=Kibana

[Service]
Type=simple
User=noone
Environment=CONFIG_PATH=/opt/kibana-6.0.1-linux-x86_64/config/kibana.yml
Environment=NODE_ENV=dev
ExecStart=/opt/kibana-6.0.1-linux-x86_64/bin/kibana

[Install]
WantedBy=multi-user.target
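After creating the two unit files, reload systemd so it picks them up, and optionally enable the services to start at boot:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl enable kibana.service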
Create a Spring Boot application and add the logstash-logback-encoder dependency to pom.xml:
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>5.1</version>
</dependency>
Then configure logging in logback-spring.xml:
<configuration scan="true">
<include resource="org/springframework/boot/logging/logback/base.xml" />
<appender name="STASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/opt/logs/logback/app.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>/opt/logs/logback/app.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>15</maxHistory>
</rollingPolicy>
<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
<appender-ref ref="STASH" />
<appender-ref ref="CONSOLE" />
</root>
</configuration>
Note the log file name and path, which must match the Logstash configuration later. The point is to write logs to the specified file in a structured (JSON) format, so that Logstash can watch the file and pick up log data in real time.
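To have something to collect, any ordinary SLF4J logging in the application will do; the LogstashEncoder serializes each event as one JSON object per line in app.log. A minimal sketch (this controller is hypothetical, added purely for demonstration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoController {

    private static final Logger log = LoggerFactory.getLogger(DemoController.class);

    // Each request writes a log event, which the LogstashEncoder
    // serializes as one JSON line in /opt/logs/logback/app.log.
    @GetMapping("/hello")
    public String hello() {
        log.info("hello endpoint was called");
        return "hello";
    }
}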
Then configure Logstash in /opt/logstash-6.0.1/config/logstash.conf:
input {
  file {
    path => "/opt/logs/logback/*.log"
    codec => "json"
    type => "logback"
  }
}
output {
  if [type] == "logback" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "logback-%{+YYYY.MM.dd}"
    }
  }
}
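For reference, each line the LogstashEncoder writes to app.log is a single JSON object, which is why the file input above uses codec => "json". A line looks roughly like this (fields abbreviated; exact output depends on the encoder version):

{"@timestamp":"2018-01-01T12:00:00.000+00:00","@version":"1","message":"hello endpoint was called","logger_name":"com.example.DemoController","thread_name":"http-nio-8080-exec-1","level":"INFO","level_value":20000}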
4.2 Usage
Start ElasticSearch and Kibana:
sudo systemctl start elasticsearch.service
sudo systemctl start kibana.service
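Before going further, you can verify that ElasticSearch is up; it answers on port 9200 by default:

curl localhost:9200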
Start the Spring Boot application, and then start Logstash from its installation directory:
sudo bin/logstash -f config/logstash.conf
Open Kibana, and with a little configuration (defining an index pattern such as logback-* under Management), you can query the application logs.
5 Summary
This article introduced how to build an ELK-based log platform. This is only the most basic architecture, and of course it is not limited to Spring Cloud-based microservices. As business volume grows, the system can evolve to fit more service architectures, process massive volumes of logs, and, driven by business requirements, extract more information from the log data.