Elasticsearch, Logstash, and Kibana (together known as the "ELK" Stack) can ingest, process, and visualize data from any source.
Docker is an open-source engine that makes it easy to package any application into a lightweight, portable, self-contained container. Containers that developers build and test on their laptops can be deployed unchanged to production environments, including virtual machines (VMs), bare-metal servers, OpenStack clusters, and other platforms.
1. Architecture
In this post, we set up a logging platform with the ELK Stack running in Docker, working together with Filebeat and Log4j to collect system logs, Nginx access logs, and Java logs generated by Log4j. The overall architecture is as follows:
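In text form, the flow of log data is roughly:

```
Filebeat (system & Nginx logs, per host) ──┐
                                           ├──> Logstash ──> Elasticsearch ──> Kibana
Log4j (Java application logs, via socket) ─┘
```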
Elasticsearch serves as the log storage and indexing engine. Logstash, backed by a wide range of powerful plug-ins, acts as the log processing platform. Kibana reads data from Elasticsearch for visualization and custom reports. Filebeat collects logs from specific locations on each host and ships them to Logstash. Log4j connects directly to Logstash and sends logs to it over the network (of course, you could also collect Tomcat logs with Filebeat instead).
2. Elasticsearch configuration
Elasticsearch’s docker-compose configuration file is as follows:
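A minimal sketch of what this file might look like (the image tag and host data path below are assumptions, not the author's exact file):

```yaml
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest        # image tag is an assumption
    ports:
      - "9200:9200"                    # HTTP API
      - "9300:9300"                    # TCP transport
    volumes:
      - /data/elasticsearch:/usr/share/elasticsearch/data   # host path is an assumption
    restart: always
```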
Port 9200 is exposed on the host as the HTTP port for external services, and port 9300 as the TCP transport port. The directory storing the log index files is mounted to a directory on the host so that historical log data is not lost when the container restarts.
After the docker-compose.yml file is configured, run `docker-compose up -d` to start the container as a daemon, then use `docker ps` to check its status.
3. Logstash configuration
As a log processing platform, Logstash has many plug-ins that can be configured. Its Docker-compose configuration file is as follows:
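A sketch of such a file, under the same assumptions about image tags and host paths:

```yaml
version: '2'
services:
  logstash:
    image: logstash:latest                  # image tag is an assumption
    ports:
      - "5044:5044"                         # beats input (Filebeat)
      - "4569:4569"                         # log4j input
      - "8080:8080"                         # http input (logstash-input-http)
    volumes:
      - ./conf:/etc/logstash/conf.d         # custom pipeline configs; container path is an assumption
      - ./patterns:/opt/logstash/patterns   # custom Grok patterns; container path is an assumption
    external_links:
      - elasticsearch                       # link to the Elasticsearch container
    restart: always
```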
The exposed port 5044 receives log data collected by Filebeat; port 4569 receives log data sent directly from Log4j; and port 8080 receives log data via the logstash-input-http plug-in.
The conf directory is mounted so we can add our own pipeline configuration files, and the patterns directory so we can add custom Grok rule files. The container is also linked to the Elasticsearch container so processed log data can be forwarded there.
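For reference, a pipeline configuration dropped into the conf directory could look like the following sketch (the index name and the elasticsearch hostname are assumptions):

```
input {
  beats {
    port => 5044          # logs shipped by Filebeat
  }
  log4j {
    mode => "server"
    port => 4569          # logs sent directly by Log4j's SocketAppender
  }
  http {
    port => 8080          # logs posted via the http input plug-in
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"   # index name is an assumption
  }
}
```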
After the docker-compose.yml file is configured, run `docker-compose up -d` to start the container as a daemon, then use `docker ps` to check its status.
4. Filebeat configuration
Filebeat is installed on every host that needs to collect logs. Its Docker-compose configuration file is as follows:
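A sketch of this file (the image name and the Logstash IP address are assumptions):

```yaml
version: '2'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:latest   # image and tag are assumptions
    extra_hosts:
      - "logstash:192.168.1.100"     # Logstash host address is an assumption
    volumes:
      - ./conf:/etc/filebeat         # Filebeat configuration; container path is an assumption
      - /data/logs:/data/logs:ro     # local application logs
      - /var/log:/var/log:ro         # local system logs
      - ./registry:/etc/registry     # registry of read offsets; container path is an assumption
    restart: always
```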
The Logstash address is specified in extra_hosts. The conf configuration directory, the local application log directory /data/logs, and the system log directory /var/log are all mounted into the container. The registry directory is mounted as well so that Filebeat remembers how far it has read in each file; without it, Filebeat would re-read the log files from the beginning after a restart or crash, producing duplicate log data.
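A minimal filebeat.yml matching this setup might look like the following (the input syntax varies between Filebeat versions; this uses the newer `filebeat.inputs` form):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /data/logs/*.log       # application logs
      - /var/log/*.log         # system logs

output.logstash:
  hosts: ["logstash:5044"]     # resolved via extra_hosts in docker-compose
```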
5. Kibana configuration
Kibana is the simplest to configure, serving only as a data visualization platform accessed over HTTP. Its docker-compose configuration file is as follows:
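A sketch of this file (the image tag is an assumption; ELASTICSEARCH_URL is how the 5.x/6.x Kibana images locate Elasticsearch):

```yaml
version: '2'
services:
  kibana:
    image: kibana:latest     # image tag is an assumption
    ports:
      - "5601:5601"          # Kibana web interface
    external_links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    restart: always
```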
It links to the Elasticsearch container to fetch data and exposes port 5601 for access to the Kibana web interface.
After the docker-compose.yml file is configured, run `docker-compose up -d` to start the container as a daemon, then use `docker ps` to check its status.
Open http://your-host:5601 and you can see the Kibana interface, which looks like the picture below:
6. Log source configuration
- The Nginx access log format is configured in the Nginx configuration file as follows:
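One possible log_format definition, written so that each field can be matched by a Grok pattern on the Logstash side (the format name and field selection are assumptions):

```nginx
http {
    log_format logstash '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time';

    # write access logs where Filebeat is watching (/data/logs)
    access_log /data/logs/nginx-access.log logstash;
}
```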
- Log4j output to Logstash is configured in the log4j.properties file as follows:
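A minimal log4j.properties that sends events to the Logstash log4j input over TCP might look like this (the remote host name is an assumption; the port matches the 4569 exposed above):

```properties
log4j.rootLogger=INFO, logstash

# SocketAppender ships serialized log events straight to Logstash
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.RemoteHost=your-logstash-host
log4j.appender.logstash.Port=4569
log4j.appender.logstash.ReconnectionDelay=10000
log4j.appender.logstash.LocationInfo=true
```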
7. Appendix
All the configuration files have been posted to GitHub; anyone who needs them can download them from github.com/zmpandzmp/E… and use them.