An overview of the framework
- ElasticSearch is a schemaless document database with powerful search capabilities: it scales out easily, indexes every field, and can aggregate over grouped data (see the curl sketch after this list).
- Logstash is written in Ruby and can pipe data in from, and out to, almost any location: an ETL pipeline that grabs, converts, and stores events in ElasticSearch. The packaged version runs on JRuby and uses dozens of threads for parallel data processing, taking advantage of the JVM's threading capabilities.
- Kibana is a web-based data analysis and dashboard tool for ElasticSearch. It takes full advantage of ElasticSearch's search capabilities to visualize data in seconds, and supports Lucene query string syntax as well as ElasticSearch filters.
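To make the search and aggregation capabilities above concrete, here is a minimal sketch using curl against a local ElasticSearch instance (the `logs` index and its fields are made-up examples, not part of this article's setup):

```bash
# Index a sample document into a hypothetical "logs" index
curl -XPOST 'http://localhost:9200/logs/event' -d '{
  "level": "ERROR",
  "application": "ssmm",
  "message": "connection refused"
}'

# Full-text search using Lucene query string syntax (the same syntax Kibana accepts)
curl 'http://localhost:9200/logs/_search?q=level:ERROR&pretty'

# Aggregate event counts grouped by log level
curl -XPOST 'http://localhost:9200/logs/_search?size=0&pretty' -d '{
  "aggs": { "by_level": { "terms": { "field": "level" } } }
}'
```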
Prerequisites
- The architecture in this article is built on Docker. You need to understand Docker's basic concepts and operations, as well as the custom overlay networking introduced after Docker 1.9. This article only covers the simplest possible setup (the network and volume commands are sketched below). A few notes:
  1. ElasticSearch stores its data in a directory that should be volume-mapped in Docker, and elasticsearch.yml must be configured as required. See: https://hub.docker.com/_/elasticsearch/
  2. The official images on Docker Hub are built on a variety of base images, which is wasteful for network transfer. It is better to build an internal image on your organization's own base image.
  3. Kafka can be used as a message queue in front of ElasticSearch to avoid putting excessive pressure on the cluster.
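For reference, a sketch of the network and volume setup implied by these notes (overlay networking in this era of Docker also requires an external key-value store such as Consul, which is assumed to be configured already; the host path is an example):

```bash
# Create the custom overlay network used throughout this article
docker network create -d overlay multihost

# Step 1's command extended with a data volume: map a host directory onto
# ElasticSearch's data directory so data survives container restarts
# (elasticsearch.yml can be mounted in a similar way)
docker run -d --name myes --net=multihost \
    -v /data/elasticsearch:/usr/share/elasticsearch/data \
    elasticsearch:2.3
```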
Building an ELK log collection, storage, and analysis system for a Java web application with Docker
Step 1: Start ElasticSearch
```
docker run -d --name myes --net=multihost --ip=192.168.2.51 \
    elasticsearch:2.3
```
- Uses the custom Docker overlay network multihost and sets the container IP to 192.168.2.51.
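To verify that ElasticSearch is up, you can query it from a throwaway container on the same overlay network (the 192.168.2.x address lives on the overlay network, so it is not reachable directly from the host; centos:7 is just an example image that ships with curl):

```bash
# Query the cluster from a temporary container on the multihost network
docker run --rm --net=multihost centos:7 \
    curl -s http://192.168.2.51:9200
```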
Step 2: Launch Kibana
```
docker run -d --name mykibana \
    -e ELASTICSEARCH_URL=http://192.168.2.51:9200 \
    --net=multihost \
    -p 5601:5601 \
    kibana:4.5
```
- Uses the custom network multihost; the container IP is assigned automatically.
- Kibana is started on the host with container port 5601 mapped to host port 5601, so Kibana can be accessed at http://<host IP>:5601.
- The ELASTICSEARCH_URL environment variable points to the ElasticSearch instance started in step 1.
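A quick way to confirm the port mapping before opening the browser:

```bash
# Show how container port 5601 is published on the host
docker port mykibana 5601
# expected output along the lines of: 0.0.0.0:5601
```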
Step 3: Logstash configuration file
- Create logstash.conf (you can name the file anything you want):
```
input {
  log4j {
    mode => "server"
    host => "0.0.0.0"
    port => 3456
    type => "log4j"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.2.51"]
  }
}
```
- The log4j input runs in server mode and listens on port 3456 of the container; that is, the data source must send its logs to port 3456 of this container.
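You can ask Logstash to validate the file before wiring everything up (using the --configtest flag of Logstash 2.x; the volume mapping matches step 4):

```bash
# Validate logstash.conf without starting the pipeline
docker run --rm -v "$PWD":/config-dir logstash:2.3 \
    logstash -f /config-dir/logstash.conf --configtest
```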
Step 4: Start Logstash
```
# note: on Docker versions of this era, --rm cannot be combined with -d,
# so the container runs detached and must be removed manually later
docker run -d -v "$PWD":/config-dir -p 3456:3456 --net=multihost \
    logstash:2.3 logstash -f /config-dir/logstash.conf
```
- Uses the custom network multihost; the container IP is assigned automatically.
- Logstash is started on the host, with container port 3456 mapped to host port 3456. (This assumes your web application does not run in Docker and therefore has no IP on the custom multihost network. If the web application is dockerized and shares the custom network with Logstash, the port does not need to be mapped externally.)
- The container path /config-dir is mapped to the host's current directory, so logstash.conf must be in "$PWD" when you start the container. (This directory can be adjusted.)
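To confirm that Logstash started cleanly and the log4j input is listening (the container ID placeholder is whatever `docker ps` reports):

```bash
# Check the container is up and follow its logs
docker ps
docker logs -f <container-id>

# From another machine, check that the host accepts connections on 3456
nc -z <host-ip> 3456 && echo "logstash is listening"
```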
Step 5: Web application log4j log TCP output
- Add TCP output to log4j.properties as follows:
```
log4j.rootLogger=DEBUG,tcp
log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.Port=3456
log4j.appender.tcp.RemoteHost=192.168.1.158
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.Application=ssmm
```
- RemoteHost is the IP of the host running Logstash. If your web application is also dockerized, this can be the Logstash container's IP.
- Logs are sent to port 3456. The most important thing not to forget: start your web application, so that it actually sends logs!
- Once the raw logs are flowing in, you can use Kibana's powerful configuration to present your log analysis.
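Before turning to Kibana, you can confirm that the logs reached ElasticSearch (a sketch; logstash-YYYY.MM.DD is Logstash's default index name pattern, and the queries run from a container on the overlay network, as in step 1's verification):

```bash
# List indices; a logstash-YYYY.MM.DD index should appear once logs arrive
docker run --rm --net=multihost centos:7 \
    curl -s 'http://192.168.2.51:9200/_cat/indices?v'

# Peek at a few raw log events
docker run --rm --net=multihost centos:7 \
    curl -s 'http://192.168.2.51:9200/logstash-*/_search?size=3&pretty'
```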
That is all it takes: five steps with Docker to build an ELK log collection and analysis system.