WeChat official account: Operation and Maintenance Development Story. Author: Double Dong
ELK in Practice
With the setup section complete, you should have a full ELK log analysis system in place. Now let's see how to put it to work.
Configure log visualization on the Kibana Web interface
At the end of the setup, we ran `logstash -f /etc/logstash/conf.d/elk.conf` to collect the system and security logs, which created system and secure indices in ES, stored by type; you can confirm this with the elasticsearch-head plugin. Kibana can turn that data into powerful visualizations that are not only cool but genuinely useful. Click Settings in the left menu bar, then the Index button under Kibana, then click in the top left and create the `nagios-system-*` and `nagios-secure-*` index patterns as shown.
Then select the time filter field to complete the creation.
Once the index patterns are created, click Discover in the left menu. The indices we just created appear on the left; you can add fields to display as columns below, and filter on them as well. The final effect is shown in the figure: all of the log information just collected is there.
ELK application log collection
Next, let's collect the Nginx, Apache, messages, and secure logs and display them in the front end.
Edit the nginx configuration file and add the following (under the http module):
```
[root@elk-master ~]# vim /usr/local/nginx/conf/nginx.conf

log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';

# Switch access_log to the json format defined above
access_log logs/elk.access.log json;
```
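After changing the config, it's worth validating and reloading nginx, then checking that the log really is valid JSON. A quick check, assuming the nginx prefix `/usr/local/nginx` used above:

```
# Test the configuration and reload nginx
[root@elk-master ~]# /usr/local/nginx/sbin/nginx -t
[root@elk-master ~]# /usr/local/nginx/sbin/nginx -s reload

# Generate a request, then confirm the last log line parses as JSON
[root@elk-master ~]# curl -s http://localhost/ > /dev/null
[root@elk-master ~]# tail -1 /usr/local/nginx/logs/elk.access.log | python -m json.tool
```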
Continue to modify the Apache configuration file
```
[root@elk-master ~]# vim /etc/httpd/conf/httpd.conf

LogFormat "{ \
    \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
    \"@version\": \"1\", \
    \"tags\":[\"apache\"], \
    \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
    \"clientip\": \"%a\", \
    \"duration\": %D, \
    \"status\": %>s, \
    \"request\": \"%U%q\", \
    \"urlpath\": \"%U\", \
    \"urlquery\": \"%q\", \
    \"bytes\": %B, \
    \"method\": \"%m\", \
    \"site\": \"%{Host}i\", \
    \"referer\": \"%{Referer}i\", \
    \"useragent\": \"%{User-agent}i\" \
}" ls_apache_json

# Switch CustomLog to the JSON format defined above
CustomLog logs/access_log ls_apache_json
```
Edit the logstash configuration file to collect the logs:

```
[root@elk-master ~]# vim /etc/logstash/conf.d/elk.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
```
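Before starting it in the background, you can have Logstash syntax-check the file first (the `--config.test_and_exit` flag, available since Logstash 5.x):

```
[root@elk-master ~]# logstash -f /etc/logstash/conf.d/elk.conf --config.test_and_exit
```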
Run it and see what happens
```
[root@elk-master ~]# nohup logstash -f /etc/logstash/conf.d/elk.conf &
```
In elasticsearch-head you can see that all the log indices have been created. You can then go to Kibana and create the corresponding index patterns.
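If you prefer the command line to the head plugin, the same check works against the ES REST API (assuming ES listens on 192.168.73.133:9200 as above):

```
[root@elk-master ~]# curl 'http://192.168.73.133:9200/_cat/indices?v'
```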
Introducing Redis as a cache queue
Advantages:

1. Compared with the architecture above, when there are many servers and a large log volume, the queue reduces the pressure on ES, acts as a buffer, and protects against data loss to some extent (when Logstash receives data faster than ES can process it, the queue evens out the transfer).
2. Logs are collected on the shippers and processed centrally in the indexer.

If the log volume is very large, Kafka can be used as the buffer queue instead; it suits high throughput better than Redis.
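The resulting data flow looks roughly like this (a simplified sketch of the topology used in the rest of this section):

```
app servers: logstash shipper (reads messages, secure, nginx/apache logs)
        |
        v
Redis list (buffer queue)        <- or Kafka for very high throughput
        |
        v
logstash indexer (reads from Redis, parses and routes by type)
        |
        v
Elasticsearch  ->  Kibana
```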
Install and configure Redis:

```
# Install Redis
[root@elk-master ~]# yum install -y redis

# Edit the Redis configuration file
[root@elk-master ~]# vim /etc/redis.conf
bind 192.168.73.133
daemonize yes
# Disable RDB snapshots by commenting out the save directives
#save 900 1
#save 300 10
#save 60 10000
# Set the authentication password
requirepass root123

# Start the Redis service
[root@elk-master ~]# systemctl restart redis

# Test whether the Redis service came up successfully
# (-a authenticates, since requirepass is set above)
[root@elk-master ~]# redis-cli -h 192.168.73.133 -a root123
192.168.73.133:6379> info
# Server
redis_version:3.2.12
...(output omitted)
```

Run logstash with the redis-out.conf configuration file:

```
# logstash -f /etc/logstash/conf.d/redis-out.conf
```
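The contents of redis-out.conf are not shown in the text (they were likely an image in the original post). A minimal sketch consistent with the test that follows, reading from stdin and pushing into the same Redis instance, database, and key that redis-in.conf uses below, might look like this:

```
# /etc/logstash/conf.d/redis-out.conf  (hypothetical sketch)
input {
    stdin {}                       # whatever you type is shipped as an event
}
output {
    redis {
        host => "192.168.73.133"   # the Redis instance configured above
        password => "root123"
        port => "6379"
        db => "1"                  # must match redis-in.conf below
        data_type => "list"        # push events onto a Redis list
        key => "elk-test"          # must match redis-in.conf below
    }
}
```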
Once it's running, type something at the prompt (you can then connect with a Redis client from your own machine to see the effect).
After the test succeeds, edit the redis-in.conf configuration file to export the data stored in Redis to Elasticsearch.
```
# vim /etc/logstash/conf.d/redis-in.conf
input {
    redis {
        host => "192.168.73.133"
        port => "6379"
        password => "root123"
        db => "1"
        data_type => "list"
        key => "elk-test"
        # batch_count defaults to 125; if Redis holds fewer than 125 entries
        # an error is reported, so set it to 1
        batch_count => 1
    }
}
output {
    elasticsearch {
        hosts => ["192.168.73.133:9200"]
        index => "redis-test-%{+YYYY.MM.dd}"
    }
}
```
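Run it the same way and confirm the index appears in ES (same hosts as above):

```
# nohup logstash -f /etc/logstash/conf.d/redis-in.conf &
# curl 'http://192.168.73.133:9200/_cat/indices?v' | grep redis-test
```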
Now save all the monitored log sources to Redis, then export them from Redis to Elasticsearch.
```
# vim /etc/logstash/conf.d/elk.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "http" {
        redis {
            host => "192.168.73.133"
            password => "root123"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "nagios_http"
        }
    }
    if [type] == "nginx" {
        redis {
            host => "192.168.73.133"
            password => "root123"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "nagios_nginx"
        }
    }
    if [type] == "secure" {
        redis {
            host => "192.168.73.133"
            password => "root123"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "nagios_secure"
        }
    }
    if [type] == "system" {
        redis {
            host => "192.168.73.133"
            password => "root123"
            port => "6379"
            db => "2"
            data_type => "list"
            key => "nagios_system"
        }
    }
}
```

Run logstash with the elk.conf configuration file:

```
# logstash -f /etc/logstash/conf.d/elk.conf
```
Check in Redis whether the log data has been written (sometimes the monitored log file simply hasn't produced new entries yet, so nothing shows up).
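A quick way to check, using standard redis-cli commands against the db and keys configured above:

```
# redis-cli -h 192.168.73.133 -a root123
192.168.73.133:6379> select 2
OK
192.168.73.133:6379[2]> keys nagios_*
192.168.73.133:6379[2]> llen nagios_system
```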
Read the data from Redis and write it to Elasticsearch (this needs another host for the experiment; here we use 192.168.73.135).
```
# vim /etc/logstash/conf.d/redis-out.conf
input {
    redis {
        type => "system"
        host => "192.168.73.133"
        password => "root123"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nagios_system"
        batch_count => 1
    }
    redis {
        type => "http"
        host => "192.168.73.133"
        password => "root123"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nagios_http"
        batch_count => 1
    }
    redis {
        type => "nginx"
        host => "192.168.73.133"
        password => "root123"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nagios_nginx"
        batch_count => 1
    }
    redis {
        type => "secure"
        host => "192.168.73.133"
        password => "root123"
        port => "6379"
        db => "2"
        data_type => "list"
        key => "nagios_secure"
        batch_count => 1
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
```
Note: the input here is the log output collected on the clients and stored in Redis on 192.168.73.133. If you want to save it on the current host instead, change the hosts in the output section to localhost; and if you want to view it in Kibana, Kibana must also be deployed on this machine. The point of this design is loose coupling: logs are collected on the client side and written to the server's Redis (or a local one), and the output side connects Redis to the ES server.
Run the command to see the effect
```
[root@elk-master ~]# nohup logstash -f /etc/logstash/conf.d/redis-out.conf &
```
The effect is the same as outputting directly to the ES server, except that the logs are first saved to the Redis database and then fetched from it.
ELK in production
1. Log classification (a multiline sketch for the error-log case follows after this list)

   - System logs: rsyslog, collected with the logstash syslog plugin
   - Access logs: nginx, collected with the logstash json codec
   - Error logs: file, collected with logstash multiline
   - Run logs: file, collected with the logstash json codec
   - Syslog: collected with the logstash syslog plugin
   - Debug logs: file, collected with logstash json or multiline

2. Log standardization

   - Fixed paths
   - JSON format wherever possible

3. Collect system logs in order of priority: startup > error > run > access
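For the multiline case, here is a minimal sketch of collecting a stack-trace-style error log; the path and pattern are illustrative assumptions, not from the original article:

```
input {
    file {
        path => "/var/log/app/error.log"   # hypothetical error log path
        type => "error"
        start_position => "beginning"
        codec => multiline {
            # any line that does not start with "[" is glued to the previous
            # event, so a whole stack trace becomes a single event
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
```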
Finally, because ES keeps logs indefinitely, old indices need to be deleted periodically. The following command deletes indices older than a given number of days ($n):
```
curl -X DELETE "http://xx.xx.com:9200/logstash-*-$(date +%Y-%m-%d -d "-$n days")"
```
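To make this routine, you could run it from cron (a hypothetical crontab entry; 30-day retention is just an example, and note that % must be escaped as \% in crontab):

```
# Delete logstash indices older than 30 days, every day at 02:00
0 2 * * * n=30; curl -X DELETE "http://xx.xx.com:9200/logstash-*-$(date +\%Y-\%m-\%d -d "-$n days")"
```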