Preface

This post is a quick-start and hands-on introduction to Logstash, a component of the ELK log system.

Introduction to ELK

ELK stands for Elasticsearch, Logstash, and Kibana, all of which are open source software. A newer addition is FileBeat, a lightweight log collection and processing tool (agent). FileBeat consumes few resources and is well suited to collecting logs from many servers and forwarding them to Logstash; it is also the officially recommended tool for this.

  • Elasticsearch is an open source distributed search engine that collects, analyzes, and stores data. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.

  • Logstash is a tool for collecting, analyzing, and filtering logs, and it supports a large number of data acquisition methods. The client is installed on the hosts whose logs need to be collected; the server filters and modifies the received logs and forwards them to Elasticsearch.

  • Kibana is also an open source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

  • Filebeat is a lightweight log collector that integrates easily with Kibana. Once Filebeat is started, you can view the details of the collected log files directly in Kibana.

Introduction to Logstash

Logstash is a data stream engine:

It is an open source streaming ETL engine for data logistics that lets you build a data flow pipeline in minutes. It is horizontally scalable and resilient, with adaptive buffering; it is agnostic to data sources; it has a plug-in ecosystem with over 200 integrations and processors; and it uses the Elastic Stack to monitor and manage deployments.

Logstash consists of three main parts: inputs, filters, and outputs. Inputs define the rules for receiving data, for example collecting the contents of a file. Filters process and filter the data in transit, for example splitting fields with grok rules. Outputs send the processed data to a specified destination, such as ElasticSearch.

(Figure: the Logstash pipeline of inputs, filters, and outputs.)
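
A minimal sketch of this three-part structure (stdin and stdout are chosen here purely for illustration):

input {
  stdin { }          # read events typed on the console
}
filter {
  # parsing and transformation rules would go here
}
output {
  stdout { }         # print events back to the console
}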

Installing and using Logstash

First, choosing the environment

Logstash is written in JRuby and runs on the JVM, so you need to install a JDK before installing it. For the 6.x series, the JDK must be at least version 8; for the 7.x series, it must be at least version 11. If your Elasticsearch cluster is version 7.x, you can use the JDK bundled with Elasticsearch.

For the Logstash download, the Tsinghua University or Huawei open source mirror sites are recommended.

Download address:

mirrors.huaweicloud.com/logstash
mirrors.tuna.tsinghua.edu.cn/ELK

ELK 7.3.2 Baidu netdisk address: pan.baidu.com/s/1tq3Czywj… Extraction code: CXNG

Second, JDK installation

Note: the JDK version depends on the version of your Elasticsearch cluster.

1. File preparation

Decompress the downloaded JDK archive (tar -xvf jdk-8u144-linux-x64.tar.gz), move it to the /opt/java folder (create the folder if it does not exist), and rename it to jdk1.8:

mv jdk1.8.0_144 /opt/java
cd /opt/java
mv jdk1.8.0_144 jdk1.8

2. Environment configuration

First type java -version to check whether a JDK is already installed. If one is installed but the version is not appropriate, uninstall it.

Enter:

rpm -qa | grep java 

to check the installed package information.

Then type:

rpm -e --nodeps <the JDK package to uninstall>

For example:

rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64

Once you are sure it has been removed, decompress the downloaded JDK:

tar  -xvf   jdk-8u144-linux-x64.tar.gz

Move it to the /opt/java folder (create the folder if it does not exist) and rename it to jdk1.8:

mv jdk1.8.0_144 /opt/java
cd /opt/java
mv jdk1.8.0_144 jdk1.8

Then edit the profile file and add the following configuration. Enter: vim /etc/profile

export JAVA_HOME=/opt/java/jdk1.8
export JRE_HOME=/opt/java/jdk1.8/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=.:${JAVA_HOME}/bin:$PATH

After adding these lines, type:

source /etc/profile

to make the configuration take effect. Then check the version information by entering:

java  -version 

Third, Logstash installation

1. File preparation

Decompress the logstash-7.3.2.tar.gz archive on Linux. Enter:

tar -xvf logstash-7.3.2.tar.gz

Then move the decompressed folder (already named logstash-7.3.2) to /opt/elk:

mv logstash-7.3.2 /opt/elk

2. Modify the configuration

The inputs, filters, and outputs sections are briefly introduced here.

inputs

The file input supports the following common options:

  • Path: specifies the paths of the files to read, using glob matching syntax.

  • Exclude: optional, array type; excludes unwanted files, also using glob matching syntax.

  • Sincedb_path: optional; the path of the sincedb data file, which records where each monitored file was last read.

  • Start_position: optional; can be set to beginning or end, controlling whether the file is read from the start. The default is end.

  • Stat_interval: optional, in seconds; how often to check whether monitored files have been updated.

  • Discover_interval: optional, in seconds; how often to check whether there are new files to read.

  • Ignore_older: optional, in seconds. When scanning the file list, any file whose last modification is older than the specified period is not processed, but it is still monitored for new content. Disabled by default.

  • Close_older: optional, in seconds. If a monitored file has no new content within the specified time, its file handle is closed and the resources are released.

  • Tags: optional; tags that can be added or removed by specific plug-ins during processing.

  • Type: optional; marks the type of the events being processed, for example nginxlog.

A simple input example:

input {
  file {
    path => "/home/logs/mylog.log"
  }
}

This configuration collects the log /home/logs/mylog.log. If the whole directory is to be collected, the * wildcard can be used for matching:

 path => "/home/logs/*.log"

This collects all .log files in the directory. A fuller example combining several of the options above is sketched below.
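
For instance, a sketch that combines several of the options listed above (the paths, tag, and type values are made up for illustration):

input {
  file {
    path => ["/home/logs/*.log"]
    exclude => ["*.gz"]       # skip compressed files
    stat_interval => 1        # check for file updates every second
    tags => ["mylogs"]        # tag added to every event from this input
    type => "nginxlog"        # event type, as described above
  }
}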

When you import local log files through the logstash-input-file plug-in, Logstash keeps track of the current position in each file through a separate file called sincedb. This makes it possible to stop and restart Logstash and have it continue where it left off, without missing the lines that were added to the files while Logstash was stopped. When debugging, however, we may want to disable the sincedb recording so that the files are read from the beginning each time. In that case we can configure it like this:

input {
  file {
    path => "/home/logs/mylog.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

If you want to use HTTP as the input, change the plug-in type to http and adjust the parameters accordingly. The same goes for tcp, udp, syslog, beats, and so on. Example:

input {
  http {
    port => 8080    # the port to listen on (example value)
  }
}
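
Likewise, a sketch of a beats input that would accept events from Filebeat (5044 is the commonly used port, assumed here):

input {
  beats {
    port => 5044    # port Filebeat ships events to
  }
}
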
filter

The filter section is mainly used to implement filtering, for example using grok to split log content into fields.

For example, a grok filter for an Apache log line:

127.0.0.1 - - [13/Apr/2015:17:22:03 +0800] "GET / HTTP/1.1" 404 285 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2"

grok:

%{COMBINEDAPACHELOG}

To analyze a line like this, you can use Kibana's Grok Debugger, found under Dev Tools. Match debugging can also be done at grokdebug.herokuapp.com/.

Example:

filter {
    grok {
      match => ["message", "%{COMBINEDAPACHELOG}"]
    }
}
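
Grok is often paired with the date filter so that the event timestamp comes from the log line itself rather than from the moment of ingestion. A sketch building on the example above (COMBINEDAPACHELOG puts the time into a field named timestamp):

filter {
  grok {
    match => ["message", "%{COMBINEDAPACHELOG}"]
  }
  date {
    # parse the Apache access-log time format into @timestamp
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}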

(Figure: grok match debugging.)

If there is no need for filtering, the filter section can simply be left out.

output

Output is used to export data to a destination such as a file or ElasticSearch.

Here the data is output to ElasticSearch. If it is a cluster, configure multiple nodes separated by commas, as in the sketch after this example.

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}
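
For instance, a sketch with three nodes (the addresses are placeholders):

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200", "127.0.0.2:9200", "127.0.0.3:9200"]
  }
}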

If you also want to see the output on the console, add an stdout section. If you want to customize the output index, add the corresponding index name; if that index does not exist, it is created automatically based on the data. By putting the date into the index name, a new index can also be created automatically every day.

The following is an example:

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "mylogs-%{+YYYY.MM.dd}"
  }
}

More Logstash configuration options: www.elastic.co/guide/en/lo…

3. Usage

demo

Add a log file under the /home/logs/ directory, then create a logstash-test.conf file in the logstash folder and add the following configuration to it:

input {
  file {
    path => "/home/logs/mylog.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["192.168.9.238:9200"]
  }
}

Then start Logstash from the logstash directory with the following command:

./bin/logstash -f logstash-test.conf

Background startup:

nohup ./bin/logstash -f logstash-test.conf >/dev/null 2>&1 &

Startup with hot configuration reloading:

nohup ./bin/logstash -f logstash-test.conf --config.reload.automatic >/dev/null 2>&1 &

After a successful start, if Logstash was not started in the background, you can watch the data transfer in the console; if it was started in the background, check the Logstash log directory instead.

Viewing the data in Kibana

Open Kibana and create an index pattern, as shown in the figure below.

(Figure: creating an index pattern in Kibana.)

Because no index name was specified, the data went into the default logstash-* index, which can be selected here.

Then create a dashboard, select the index pattern you just created, and view the data.

Other

Reference: elasticstack.blog.csdn.net/article/det…

ElasticSearch Combat Series

  • Kibana for ElasticSearch
  • ElasticSearch DSL statement for ElasticSearch
  • ElasticSearch: JAVA API for ElasticSearch
  • ElasticSearch: ElasticSearch
  • ElasticSearch: ElasticSearch
  • Metric Aggregations for ElasticSearch


Original content is not easy to write. If you think this article is good, please give it a recommendation! Your support is the biggest motivation for my writing!
Copyright: www.cnblogs.com/xuwujing
CSDN: blog.csdn.net/qazwsxpcm…
Personal blog: www.panchengming.com