Preface

The importance of logs to an application needs no elaboration. This post is a practical walkthrough of how the full-stack open-source microservice project youlai-mall integrates the current mainstream logging solution, ELK + Filebeat. Without further ado, let's first look at a screenshot of the final result, so as not to waste everyone's time.

Elastic Stack technology stack

1. Why was Filebeat introduced?

ELK has been talked about so much as a distributed logging solution that it feels familiar even to those who have never practiced it. When Beats joined the Elastic family as a data collector, ELK was renamed the Elastic Stack. In the ELK era, data collection was done by Logstash, whose filtering is far more powerful than Filebeat's, which makes one wonder why Filebeat was introduced at all. Is it redundant? Take a look at the official description of Beats:

Lightweight, collects from the source, simple and clear.

Beats collects data that conforms to the Elastic Common Schema (ECS). If you need more processing power, Beats can forward the data to Logstash for transformation and parsing.

The key phrases here are "collects from the source" and "lightweight".

Logstash has more features than Filebeat, but more power means more overhead: as a collection tool, Logstash consumes far more system resources than Filebeat. One data point: Logstash's default heap size is 1 GB, while Filebeat's is only around 10 MB.

2. ELK + Filebeat logging solution workflow

Filebeat continuously watches the specified log files. When a log file changes, Filebeat pushes the data to Logstash; after Logstash filters it, the desired log data is stored in ElasticSearch and finally presented through Kibana.

3. Environment preparation

Here I use the youlai-mall online cloud servers. Because server resources are limited, the heap memory settings are conservative; you can test with virtual machines instead.

| Server | Configuration | Open ports | Description | Memory |
| --- | --- | --- | --- | --- |
| e.youlai.tech | 1 core, 2 GB | 5044, 5601, 9200 | ELK deployment server | ELK + the IK analyzer actually use about 1.4 GB |
| f.youlai.tech | 1 core, 2 GB | SpringBoot application port | Filebeat + SpringBoot application deployment server | about 300 MB |

4. Custom network

Ensure that containers (ElasticSearch, Logstash, and Kibana) on the same network can access each other.

  • Create a custom network elk

    docker network create elk
  • Viewing an Existing Network

    docker network ls

    Docker comes with three built-in network modes: bridge, host, and none.

    Docker custom network mode

  • Deleting an Existing Network

    docker network rm elk

5. ELK deployment

1. ElasticSearch deployment

  1. Create a directory

    mkdir -p /opt/elasticsearch/{config,data}
    chmod 777 /opt/elasticsearch/{config,data}
  2. Pull the image

    Check the latest ElasticSearch version in the Docker Hub repository.

    docker pull elasticsearch:7.14.1
  3. Configuration file

    Create the elasticsearch.yml configuration file

    vi /opt/elasticsearch/config/elasticsearch.yml

    Add the following configuration:

    # Allow access from any IP address on the host
    http.host: 0.0.0.0
    # Whether to allow cross-origin requests; default is false
    http.cors.enabled: true
    http.cors.allow-origin: "*"
  4. Create and start the container

    docker run -d --name elasticsearch --net elk --restart always \
      -p 9200:9200 -p 9300:9300 \
      -e "ES_JAVA_OPTS=-Xms256m -Xmx256m" \
      -e "discovery.type=single-node" \
      -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
      -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
      elasticsearch:7.14.1

    With the JVM heap set to 128 MB, installing the IK analyzer failed, so a heap size of at least 256 MB is recommended.

  5. Install the IK analyzer

    Visit github.com/medcl/elast… , find the IK analyzer version matching your ElasticSearch version, and copy its full download address.

    docker exec -it elasticsearch /bin/sh
    cd bin/
    elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.14.1/elasticsearch-analysis-ik-7.14.1.zip

    View the installed IK analyzer

    elasticsearch-plugin list

    Restart ElasticSearch

    docker restart elasticsearch

    Remove the IK analyzer (if needed)

    elasticsearch-plugin remove analysis-ik

2. Kibana deployment

  1. Pull the image

    Check the latest Kibana version in the Docker Hub image repository.

    docker pull kibana:7.14.1
  2. Create and start the container

    docker run -d --name kibana --net elk --restart always \
      -p 5601:5601 \
      kibana:7.14.1

    Visit e.youlai.tech:5601 to see the Kibana interface

3. Logstash deployment

  1. Pull the image

    Check the latest Logstash version in the Docker Hub image repository.

    docker pull logstash:7.14.1
  2. Create a directory

    mkdir -p /opt/logstash/{config,data,pipeline}
    chmod 777 /opt/logstash/{config,data,pipeline} 
  3. Configuration files

    • Set the JVM heap memory size

      vi /opt/logstash/config/jvm.options

      Add the following configuration:

      -Xmx128m
      -Xms128m
    • Logstash configuration

      vi /opt/logstash/config/logstash.yml

      Add the following configuration:

      # Allow access from any IP address on the host
      http.host: "0.0.0.0"
      # Specify the pipeline IDs to use
      xpack.management.pipeline.id: ["main"]
    • Pipeline ID and configuration file path mapping

      vi /opt/logstash/config/pipelines.yml 

      Add the mapping between the pipeline ID and the pipeline configuration file directory. Note that the "-" must be followed by a space (a huge pitfall).

       - pipeline.id: main
         path.config: "/usr/share/logstash/pipeline"

    • Pipeline configuration

      Add the pipeline configuration for the mall service application logs. The pipeline configuration directory specified above inside the container is /usr/share/logstash/pipeline; when Logstash is started later, that directory is mounted to the host directory /opt/logstash/pipeline, so the pipeline configuration file youlai-log.config added to the host directory is loaded automatically by Logstash.

      vi /opt/logstash/pipeline/youlai-log.config

      Add the complete content below

      input {
       beats {
          port => 5044
          client_inactivity_timeout => 36000
        }
      }
      filter {
         mutate {
              remove_field => ["@version"]
              remove_field => ["tags"]
         }
      }
      output {
        if [appname] == "youlai-admin" {
           elasticsearch {
             hosts => "http://elasticsearch:9200"
             index => "youlai-admin-log"
           }
        } else if [appname] == "youlai-auth" {
           elasticsearch {
             hosts => "http://elasticsearch:9200"
             index => "youlai-auth-log"
           }
        }
        stdout {}
      }

      In the output you can see that different index libraries are generated based on appname. appname is a custom Filebeat field used to distinguish logs from multiple applications; it is defined below when Filebeat is deployed.

  4. Create and start the container

    docker run -d --name logstash --net elk --restart always \
      -p 5044:5044 -p 9600:9600 \
      -v /opt/logstash/config:/usr/share/logstash/config \
      -v /opt/logstash/data:/usr/share/logstash/data \
      -v /opt/logstash/pipeline:/usr/share/logstash/pipeline \
      logstash:7.14.1

6. Filebeat deployment

  1. Pull the image

    Check the latest Filebeat version in the Docker Hub image repository.

    docker pull elastic/filebeat:7.14.1
  2. Create a directory

    mkdir -p /opt/filebeat/config
    chmod 777 /opt/filebeat/config
  3. Configuration file

    Add the filebeat.yml configuration file

    vi /opt/filebeat/config/filebeat.yml

    Add the following configuration:

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
      - /logs/youlai-admin/log.log
      fields:
        appname: youlai-admin  # Custom field passed to Logstash to distinguish the log source
      fields_under_root: true  # Enable custom fields
    - type: log
      enabled: true
      paths:
      - /logs/youlai-auth/log.log
      fields:
        appname: youlai-auth
      fields_under_root: true
    
    processors:
      - drop_fields:
          fields: ["log","input","host","agent","ecs"]  # Drop fields that are not needed
    
    output.logstash:
      hosts: ['47.104.214.213:5044']
    • /logs/youlai-admin/log.log is the path of the log file output by youlai-admin
    • 47.104.214.213 is the IP address of the Logstash server
  4. Create and start the container

    docker run -d --name filebeat --restart=always \
      --log-driver json-file \
      --log-opt max-size=100m \
      --log-opt max-file=2 \
      -v /logs:/logs \
      -v /opt/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml \
      elastic/filebeat:7.14.1

7. SpringBoot application deployment

The article "IDEA one-click remote deployment of SpringBoot applications with the integrated Docker plugin" already described in detail how the youlai-admin service is deployed to the online cloud environment. Next, the log configuration will be added.

  • The log configuration

    Configure the log output path in logback-spring.xml under the common-log logging module of youlai-mall

    Specify that the production environment outputs logs to a file
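As a rough sketch, a production-only file appender in logback-spring.xml might look like the following. The appender name, rolling policy, pattern, and retention are illustrative assumptions; only the output path /logs/youlai-admin/log.log is taken from this setup (it is the path Filebeat watches below):

```xml
<!-- Sketch only: names and rolling policy are assumptions -->
<springProfile name="prod">
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/logs/youlai-admin/log.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/logs/youlai-admin/log.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</springProfile>
```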

  • Log print

    To make testing easy, fetch the user information and print a log line after a user logs in successfully

  • Mount the log directory

    Building on "IDEA integration with the Docker plugin for one-click remote deployment of SpringBoot applications", add the container log directory mount in the Dockerfile's Run/Debug Configurations, mounting the container's /logs/youlai-admin to the host.

    The directory mount configuration is as follows: host /logs/youlai-admin ←→ container /logs/youlai-admin; click OK to save

  • SpringBoot application deployment

    Select the Dockerfile configuration, click Run, and wait for the application to be published to the cloud server
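The log-print step above can be sketched with a minimal, self-contained example. The class name, method name, and username here are hypothetical; the real service logs through SLF4J/Logback, while java.util.logging is used here only to keep the sketch dependency-free:

```java
import java.util.logging.Logger;

// Hypothetical stand-in for the post-login hook in youlai-admin.
public class LoginLogDemo {
    private static final Logger log = Logger.getLogger(LoginLogDemo.class.getName());

    // Build and log a message after a successful login; returning the
    // message makes the behavior easy to verify.
    static String onLoginSuccess(String username) {
        String message = "user [" + username + "] logged in successfully";
        log.info(message); // in production this line ends up in the file Filebeat watches
        return message;
    }

    public static void main(String[] args) {
        System.out.println(onLoginSuccess("admin"));
    }
}
```

Once deployed, each successful login writes one such line to the log file that Filebeat collects.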

8. Query ElasticSearch logs

After the application is published, log in at www.youlai.tech, then view the application server's log file /logs/youlai-admin/log.log

Check the Logstash container logs:

docker logs logstash

Visit e.youlai.tech:5601 to enter the Kibana console. First add an index pattern; after that you can use Discover, provided the index library already contains data.

  1. Management → Stack Management → Kibana → Index Patterns → Create Index Pattern

  2. Enter a pattern that matches the existing indexes (for example, youlai-admin-log) and click Next Step

  3. Select the time field and click Create Index Pattern

  4. Then click Analytics → Discover in the left column for data search

9. Summary

This article described how to build an ELK + Filebeat environment with Docker. The lightweight log collection tool Filebeat gathers microservice application logs and pushes the data to Logstash; after Logstash filtering, the data is stored in ElasticSearch, and the log data is finally presented through Kibana. The ELK + Filebeat solution is in fact sufficient for most application scenarios. However, given Logstash's throughput bottleneck, when multiple Filebeat instances send too many logs in a short period, logs can pile up and data can be lost. The most common remedy is to introduce a message queue (Kafka or Redis) between Filebeat and Logstash for peak shaving, so that Logstash processes logs at a steady, even pace. For lack of time, message queues are not covered in this article; they will be supplemented in a later article. If you need this now, you can search for relevant material online and integrate it yourself; with the hands-on foundation from this article it should be easy to achieve. I hope everyone has gained something.

Appendix

1. Open source projects

| Project | Gitee | Github |
| --- | --- | --- |
| Microservice backend | youlai-mall | youlai-mall |
| Admin frontend | youlai-mall-admin | youlai-mall-admin |
| WeChat mini program | youlai-mall-weapp | youlai-mall-weapp |

2. Contact information

The WeChat discussion group is invitation-only. If you run into problems with the project or want to study and discuss in the group, add a developer on WeChat to be invited, and put "youlai" in the note.