Elasticsearch, Logstash and Kibana (together known as the "ELK" Stack) can collect, process and visualize data from virtually any source.

Docker is an open source engine that makes it easy to create lightweight, portable, self-contained containers for any application. Containers that developers build and test on their laptops can be deployed unchanged to production environments, including VMs (virtual machines), bare metal, OpenStack clusters, and other infrastructure platforms.

1. Architecture

In this post, we set up a logging platform with the ELK Stack running in Docker, working with Filebeat and Log4j to collect system logs, Nginx access logs, and Java application logs produced by Log4j. The overall architecture is as follows:


Elasticsearch serves as the log storage and indexing platform. Logstash, backed by a wide range of powerful plug-ins, acts as the log processing pipeline. Kibana pulls data from Elasticsearch for visualization and custom data reports. Filebeat collects logs from specific locations on each host and ships them to Logstash. Log4j connects to Logstash directly and sends application logs to it over a socket (alternatively, you could collect the Tomcat logs with Filebeat as well).

2. Elasticsearch configuration

Elasticsearch’s docker-compose configuration file is as follows:

    version: '2'
    services:
      elasticsearch:
        image: elasticsearch:2.2.0
        container_name: elasticsearch
        restart: always
        network_mode: "bridge"
        ports:
          - "9200:9200"
          - "9300:9300"
        volumes:
          - ./data:/usr/share/elasticsearch/data

Port 9200, exposed on the host, serves as the HTTP port for external access; 9300 is the TCP transport port for inter-node communication. The directory storing the log index files is mounted to the host so that historical log data is not lost when the container restarts.

After the docker-compose.yml file is configured, run docker-compose up -d to start it as a daemon, and use the docker ps command to check the startup status.
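As a quick sanity check (assuming you are on the Docker host), querying the exposed HTTP port should return a small JSON document with the cluster name and version:

    # Assumes Elasticsearch is running on the local host
    curl http://localhost:9200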

3. Logstash configuration

As the log processing platform, Logstash has many plug-ins that can be configured. Its docker-compose configuration file is as follows:

    version: '2'
    services:
      logstash:
        image: logstash:2.0-1
        container_name: logstash
        restart: always
        network_mode: "bridge"
        ports:
          - "5044:5044"
          - "4560:4560"
          - "8080:8080"
        volumes:
          - ./conf:/config-dir
          - ./patterns:/opt/logstash/patterns
        external_links:
          - elasticsearch:elasticsearch
        command: logstash -f /config-dir

The exposed port 5044 receives log data collected by Filebeat; 4560 receives log data from Log4j; and 8080 receives log data sent through the logstash-input-http plug-in.

The conf directory is mounted for adding our own custom configuration files, and the patterns directory for adding our own custom Grok rule files.

The container is also linked to the elasticsearch container so that processed data can be shipped to it, as in the pipeline sketch below.
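The post does not include the pipeline files themselves, so here is a minimal sketch of what the mounted conf directory could contain. The file name, the nginx-access type check, and the NGINXACCESS pattern (a sketch of which appears in section 6) are assumptions, not part of the original setup:

    # conf/logstash.conf -- hypothetical pipeline file placed in the mounted conf directory
    input {
      beats {
        port => 5044        # events shipped by Filebeat
      }
      log4j {
        mode => "server"
        port => 4560        # serialized events from the Log4j SocketAppender
      }
      http {
        port => 8080        # raw events via logstash-input-http
      }
    }

    filter {
      if [type] == "nginx-access" {
        grok {
          patterns_dir => ["/opt/logstash/patterns"]
          match => { "message" => "%{NGINXACCESS}" }
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
      }
    }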

After the docker-compose.yml file is configured, run docker-compose up -d to start it as a daemon, and use the docker ps command to check the startup status.

4. Filebeat configuration

Filebeat is installed on every host whose logs need to be collected. Its docker-compose configuration file is as follows:

    version: '2'
    services:
      filebeat:
        image: olinicola/filebeat:1.0.1
        container_name: filebeat
        restart: always
        network_mode: "bridge"
        extra_hosts:
          - "logstash:127.0.0.1"
        volumes:
          - ./conf/filebeat.yml:/etc/filebeat/filebeat.yml
          - /data/logs:/data/logs
          - /var/log:/var/host/log
          - ./registry:/etc/registry

The Logstash address is specified via extra_hosts. The conf directory is mounted to provide the configuration file; the local application log directory /data/logs and the local system log directory /var/log are mounted so Filebeat can read them; and the registry directory is mounted to persist the positions up to which Filebeat has read each file. Without it, a restarted or crashed Filebeat would read the log files from the beginning again, producing duplicate log data.
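A minimal filebeat.yml matching these mounts might look like the following sketch (Filebeat 1.x syntax; the document_type values and the registry file name are assumptions):

    # conf/filebeat.yml -- a minimal sketch for Filebeat 1.x
    filebeat:
      prospectors:
        -
          paths:
            - /data/logs/nginx/access.log    # application logs mounted from the host
          document_type: nginx-access        # becomes [type] in Logstash
        -
          paths:
            - /var/host/log/*.log            # system logs mounted from the host
          document_type: syslog
      registry_file: /etc/registry/mark      # persisted read offsets

    output:
      logstash:
        hosts: ["logstash:5044"]             # resolves via the extra_hosts entry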

5. Kibana configuration

Kibana is the simplest to configure; it serves only as the data visualization platform and provides HTTP access. Its docker-compose configuration file is as follows:

    version: '2'
    services:
      kibana:
        image: kibana:4.4.0
        container_name: kibana
        restart: always
        network_mode: "bridge"
        ports:
          - "5601:5601"
        external_links:
          - elasticsearch:elasticsearch

It links to the elasticsearch container to fetch data and exposes port 5601 for access to the Kibana web interface.

After the docker-compose.yml file is configured, run docker-compose up -d to start it as a daemon, and use the docker ps command to check the startup status.

Open http://your-host:5601 and you should see the Kibana interface.
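On first load, Kibana 4 asks for an index pattern; the default logstash-* matches the daily indices that Logstash's elasticsearch output creates by default. If Kibana reports no matching indices, you can check what has actually been written (assuming you are on the Docker host):

    # List the indices in Elasticsearch; logstash-YYYY.MM.DD entries should appear
    curl http://localhost:9200/_cat/indices?v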


6. Configuring the log sources

  • Nginx access log format: add the following to the Nginx configuration file (a matching Grok pattern sketch follows this list):
    log_format  logstash  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  '"$http_host" "$upstream_addr" '
                  '$request_time $upstream_response_time';

    access_log  /data/logs/nginx/access.log  logstash;
  • Log4j access: configure the following in the log4j.properties file:
    log4j.rootLogger=INFO,logstash

    # Push to logstash
    # Values like RemoteHost can change; configure them for your environment
    log4j.appender.logstash=org.apache.log4j.net.SocketAppender
    log4j.appender.logstash.Port=4560
    log4j.appender.logstash.RemoteHost=your-logstash-host
    log4j.appender.logstash.ReconnectionDelay=60000
    log4j.appender.logstash.LocationInfo=true
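The Grok rule that parses the custom Nginx format above is not included in the post; one possible pattern, saved as a file in the mounted patterns directory, could look like this (the NGINXACCESS name and the field names are my own choices):

    # patterns/nginx -- hypothetical Grok pattern file for the log_format above
    NGINXACCESS %{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] "%{DATA:request}" %{NUMBER:status} %{NUMBER:body_bytes_sent} "%{DATA:http_referer}" "%{DATA:http_user_agent}" "%{DATA:http_x_forwarded_for}" "%{DATA:http_host}" "%{DATA:upstream_addr}" %{NUMBER:request_time} %{GREEDYDATA:upstream_response_time}

The document_type set in the Filebeat sketch (nginx-access) is what the Logstash filter in section 3 keys on before applying this pattern.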

7. Attachments

All the configuration files have been posted to GitHub; those who need them can download them from github.com/zmpandzmp/E… and use them directly.
