Mind mapping

Star: github.com/yehongzhi/l…

1. Overview

As we all know, exceptions and error messages turn up frequently in a production environment, and you have to read the logs to troubleshoot them. Most systems today are fairly complex; a single service may be backed by a whole cluster of machines, and logging into each machine one by one to read its logs is laborious and impractical.

Wouldn't it be great if all the logs could be collected on one platform and searched by keyword, the way you search Baidu or Google? Hence the centralized logging system, and ELK is one of the most widely used open source options.

1.1 What is ELK

ELK is an acronym for Elasticsearch, Logstash and Kibana, all of which are open source.

Elasticsearch (ES for short) is a real-time distributed search and analytics engine that can be used for full-text search, structured search, and analytics.

Logstash, a data collection engine, is used to collect, parse, and send data to ES. Supported data sources include local files, ElasticSearch, MySQL, Kafka, and more.

Kibana provides analytics and Web visualizations for Elasticsearch, and generates various dimensional tables and graphs.

2. Build ELK

Environment dependencies: CentOS 7.5, JDK 1.8, Elasticsearch 7.9.3, Logstash 7.9.3, Kibana 7.9.3.

2.1 Installing Elasticsearch

First, download the installation package from the official website and run the tar -zxvf command to decompress it.

Find elasticsearch.yml in the config directory and modify the configuration:

cluster.name: es-application
node.name: node-1
# open to all IPs
network.host: 0.0.0.0
# HTTP port number
http.port: 9200
# Elasticsearch data directory
path.data: /usr/elasticsearch-7.9.3/data
# Elasticsearch log directory
path.logs: /usr/elasticsearch-7.9.3/logs

After configuring, create a new user, because Elasticsearch cannot be started as root:

# create the user, then give it ownership of the installation directory
useradd yehongzhi
chown -R yehongzhi:yehongzhi /usr/elasticsearch-7.9.3/

Then switch users and start:

./bin/elasticsearch -d

Run the netstat -nltp command to check the port number:

Visit http://192.168.0.109:9200/; if you see the following information, the installation was successful.

2.2 Installing Logstash

Download the installation package from the official website, decompress it, find logstash-sample.conf in the config directory, and modify the configuration:

input {
  file {
    path => ["/usr/local/user/*.log"]
    type => "user_log"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.0.109:9200"]
    index => "user-%{+YYYY.MM.dd}"
  }
}

input defines the data source and output defines the destination; an optional filter section can also be added between them to parse and transform events.
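For illustration, a filter using the grok plugin might look like this (a sketch, not part of the original setup; the log layout it assumes is hypothetical):

```
filter {
  grok {
    # assumed log layout: "2020-10-31 12:00:00,000 INFO some message"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```

This would split each raw line into timestamp, level, and msg fields before it reaches Elasticsearch.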

Next, prepare a jar application that writes logs. Start it in the background, redirecting its output to the log file user.log:

nohup java -jar user.jar >/usr/local/user/user.log &

Then start Logstash in the background:

nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &

After starting, run the jps command and you can see the two processes running:

2.3 Installing Kibana

First, download the package from the official website and unzip it, then find kibana.yml in the config directory and modify the configuration:

server.port: 5601
server.host: "192.168.0.111"
elasticsearch.hosts: ["http://192.168.0.109:9200"]

As with Elasticsearch, Kibana cannot be started as root, so you need to create a user:

# create the user, then give it ownership of the installation directory
useradd kibana
chown -R kibana:kibana /usr/kibana/

Then use the command to start:

nohup ./bin/kibana &

After it starts, open http://192.168.0.111:5601 in the browser and you can see the Kibana web interface:

2.4 Viewing the results

With everything started successfully, the overall flow looks like this. Let's take a look:

Open http://192.168.0.111:5601 in the browser, go to the Management interface, and click "Index Management"; you can see that there is a user-2020.10.31 index.
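The date suffix comes from the index => "user-%{+YYYY.MM.dd}" setting in the Logstash output: each day's events go to a fresh index. Mimicking that substitution with the shell's date command (a sketch; on 2020-10-31 it prints user-2020.10.31):

```shell
# Logstash substitutes the event date into the index name pattern;
# the equivalent for today's date, using the shell:
echo "user-$(date +%Y.%m.%d)"
```

Daily indices like this make it easy to delete or archive old logs one day at a time.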

Then click the "Index Patterns" menu and create an index pattern named user-*.

Finally, go to the Discover tab, select the user-* index pattern, and search for a keyword; you will find the relevant logs!
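The same keyword search can also be sent straight to Elasticsearch through Kibana's Dev Tools console; a sketch of such a request (the field name message and the keyword Exception here are assumptions for illustration):

```
GET /user-*/_search
{
  "query": {
    "match": { "message": "Exception" }
  }
}
```

Searching across user-* hits every daily index at once, which is exactly what Discover does behind the scenes.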

3. Improvement and optimization

The ELK stack above, built with just the three core components, actually has a flaw: Logstash must run on every application server, so if a plugin needs to be added, it has to be added on every server, which scales poorly. Logstash is also resource-hungry. Hence FileBeat: it consumes few resources and does nothing but collect logs, making it lightweight, while Logstash is pulled out to a central place to do filtering and other processing.

FileBeat is also officially recommended as a log collector. Download the Linux installation package first:

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.3-linux-x86_64.tar.gz

When the download is complete, unzip it. Then modify the filebeat.yml configuration file:

# input source
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/user/*.log
# output to the Logstash server
output.logstash:
  hosts: ["192.168.0.110:5044"]
# alternatively, to output directly to Elasticsearch:
#output.elasticsearch:
  #hosts: ["localhost:9200"]
  #protocol: "https"
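One practical note: Java applications often log multi-line stack traces, and by default each line becomes a separate event. Filebeat's log input can be given a multiline rule to keep a whole stack trace together (a sketch, assuming log lines start with a date):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/user/*.log
  # lines that do NOT start with a date are appended to the previous event
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
```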

The Logstash configuration file logstash-sample.conf also needs to be modified, so that its input reads from Beats instead of a local file:

input {
  beats {
    port => 5044
    codec => "json"
  }
}

Then start FileBeat:

# start in the background
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &

Then start the Logstash:

# start in the background
nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &

To confirm it worked, check the log files under the logs directory of the Logstash installation:

In closing

At present, many Internet companies use ELK as their centralized logging system, and the reasons are simple: it is open source, rich in plugins, easy to extend, supports many data sources, has an active community, and works out of the box. I have seen one company add a Kafka cluster to the architecture above, mainly because of its high volume of log data; but the three basic components are always Elasticsearch, Logstash, and Kibana.
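For reference, when such a Kafka tier is added it usually sits between Filebeat and Logstash as a buffer; on the Logstash side the events might be consumed like this (a sketch; the broker address and topic name are assumptions, not from any real setup):

```
input {
  kafka {
    bootstrap_servers => "192.168.0.120:9092"
    topics => ["app-logs"]
    codec => "json"
  }
}
```

The buffer absorbs log spikes so that a slow Elasticsearch cluster does not cause events to be dropped.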

I hope this article can help you have some preliminary understanding of ELK. Thank you for reading it.

If you found it useful, please give it a thumbs-up. Your likes are the biggest motivation for my writing!

Refusing to just coast along, I'm a programmer trying to be remembered. See you next time!!

My ability is limited, so if there is any mistake or inaccuracy, please point it out, and let's learn together!