Mind map

Overview

As we all know, exceptions and error messages come up all the time in a production environment, and you have to dig through log files to troubleshoot them. Most systems today are fairly complex, and a single service may be backed by a whole cluster of machines, so logging in to each machine to read its logs is laborious and impractical.

Wouldn't it be great if you could collect all the logs onto one platform and then search them by keyword, the way you search Baidu or Google? Hence the centralized logging system, and ELK is one of the most widely used open source options.

1. What is ELK

ELK is an acronym for Elasticsearch, Logstash and Kibana, all of which are open source.

Elasticsearch (ES for short) is a real-time distributed search and analytics engine that can be used for full-text search, structured search, and analytics.

Logstash, a data collection engine, is used to collect, parse, and send data to ES. Supported data sources include local files, ElasticSearch, MySQL, Kafka, and more.

Kibana provides analytics and web visualizations for Elasticsearch, generating tables and charts across various dimensions.

2. Build ELK

Environment dependencies: CentOS 7.5, JDK 1.8, Elasticsearch 7.9.3, Logstash 7.9.3, Kibana 7.9.3.

2.1 Installing Elasticsearch

Download the installation package from the official website and decompress it with tar -zxvf.
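For reference, a minimal sketch of those two commands, assuming the 7.9.3 tarball from the official download site and /usr as the install directory used in the rest of this article:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.9.3-linux-x86_64.tar.gz -C /usr/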

Find elasticsearch.yml in the config directory and modify the configuration:

cluster.name: es-application
node.name: node-1
# Open to all IP addresses
network.host: 0.0.0.0
# HTTP port
http.port: 9200
# Data file path
path.data: /usr/elasticsearch-7.9.3/data
# Log file path
path.logs: /usr/elasticsearch-7.9.3/logs

After configuring, create a dedicated user, because Elasticsearch cannot be started as root.
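The user-creation command itself isn't shown in the original; a minimal sketch, assuming the yehongzhi user referenced by the chown command below:

useradd yehongzhi
passwd yehongzhi

Then grant the new user ownership of the installation directory: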

chown -R yehongzhi:yehongzhi /usr/elasticsearch-7.9.3/

Then switch to the new user and start Elasticsearch:

./bin/elasticsearch -d

Run the netstat -nltp command to check the port number:
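What to look for (a sketch; the PID column will differ on your machine): Elasticsearch listens on port 9200 for HTTP and port 9300 for inter-node transport.

tcp6       0      0 :::9200      :::*      LISTEN      3542/java
tcp6       0      0 :::9300      :::*      LISTEN      3542/java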

Access http://192.168.0.109:9200/ and you can see the following information, which means the installation succeeded.
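You can run the same check from the command line; a sketch of the expected response shape (exact values depend on your install):

curl http://192.168.0.109:9200/
{
  "name" : "node-1",
  "cluster_name" : "es-application",
  "version" : { "number" : "7.9.3", ... },
  "tagline" : "You Know, for Search"
}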

2.2 Installing Logstash

Download the installation package from the official website, decompress it, find the logstash-sample.conf file in the config directory, and modify the configuration:

input {
  file {
    path => ['/usr/local/user/*.log']
    type => 'user_log'
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.0.109:9200"]
    index => "user-%{+YYYY.MM.dd}"
  }
}

The input block defines the input source and the output block defines the destination; a filter block can also be configured for parsing and transformation, as sketched below.
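The original doesn't include a filter example, so here is a hypothetical sketch: a grok filter that parses a plain-text log line such as "2020-10-31 12:00:00 INFO user login" into structured fields.

filter {
  grok {
    # TIMESTAMP_ISO8601, LOGLEVEL and GREEDYDATA are standard grok patterns
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}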

Next, package a Java application as user.jar, start it in the background, and redirect its output to the log file user.log:

nohup java -jar user.jar >/usr/local/user/user.log &

Then start Logstash in the background as follows:

nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &

After starting, run the jps command and you can see the two processes running:
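A rough sketch of the jps output (PIDs are hypothetical): one entry for the application jar and one for Logstash.

4655 user.jar
4400 Logstash
7058 Jps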

2.3 Installing Kibana

First download the zip package from the official website and decompress it, then find the kibana.yml file in the config directory and modify the configuration:

server.port: 5601
server.host: "192.168.0.111"
elasticsearch.hosts: ["http://192.168.0.109:9200"]

As with Elasticsearch, you cannot start Kibana as root; you need to create a user:
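The user-creation step is again omitted; a minimal sketch, assuming the kibana user referenced by the chown command below:

useradd kibana
passwd kibana

Then grant it ownership of the Kibana directory: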

chown -R kibana:kibana /usr/kibana/

Then start it with the following command:

nohup ./bin/kibana &

After startup, open http://192.168.0.111:5601 in the browser and you can see the Kibana web interface:

2.4 Effect Display

After everything has started successfully, the whole process runs end to end. Let's take a look:

Open http://192.168.0.111:5601 in the browser and go to the management interface. Click "Index Management" and you can see that there is an index named user-2020.10.31.

Then click the "Index Patterns" menu and create an index pattern named user-*.

Finally, go to the Discover tab, select the user-* index pattern, and search by keyword; you will find the relevant logs!
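For example, the Discover search bar accepts KQL queries; the field names below are hypothetical and depend on what your logs contain:

message : "Exception"
level : "ERROR" and message : "user"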

3. Improvement and optimization

The ELK stack above, built from just the three core components, actually has flaws. If Logstash needs a new plugin, the plugin has to be installed on every server's Logstash, which scales poorly; Logstash also consumes a lot of resources. Hence FileBeat: it only collects logs and does nothing else, which keeps it lightweight, while filtering and the like are left to Logstash.
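With FileBeat in place, the data flow described in this article looks roughly like this (addresses taken from the configurations below):

user.log -> FileBeat (on each app server) -> Logstash (192.168.0.110:5044) -> Elasticsearch (192.168.0.109:9200) -> Kibana (192.168.0.111:5601)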

FileBeat is also officially recommended as a log collector. Download the Linux installation package first:

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.3-linux-x86_64.tar.gz

When the download is complete, unzip it. Then modify the filebeat.yml configuration file:

# Input source
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/user/*.log

# Output to the Logstash server address
output.logstash:
  hosts: ["192.168.0.110:5044"]

# The default Elasticsearch output stays commented out
#output.elasticsearch:
#  hosts: ["localhost:9200"]
#  protocol: "https"

Then the Logstash configuration file, logstash-sample.conf, also needs to be modified:

input {
  beats {
    port => 5044
    codec => "json"
  }
}

Then start FileBeat:

nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &

Then start Logstash:

nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &

To confirm that it started successfully, check the log file under the Logstash logs/ directory:
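As a sketch, a healthy startup normally prints a line like this near the end of the log (exact wording varies by version):

[INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}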

Final thoughts

At present, many Internet companies use ELK to build their centralized logging systems, and the reasons are simple: it is open source, has a rich plugin ecosystem, is easy to extend, supports many data sources, has an active community, and works out of the box. I have seen one company add a Kafka cluster to the architecture above, mainly because of the sheer volume of log data, but the three basic components are still Elasticsearch, Logstash, and Kibana.