For the ELK setup itself, see the previous blog post.

  • Previously, Logstash collected the project logs and wrote them directly into Elasticsearch for display in Kibana; now the application writes its logs to Kafka first, and Logstash consumes them from Kafka before outputting them to Elasticsearch for display in Kibana.
  • The reason is that Kafka acts as a buffer and sustains high throughput when the project's log volume grows large.
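
Nothing needs to change in the application code for this switch: logging still goes through the normal SLF4J API, and the appender configuration decides whether events go straight to Logstash or to Kafka first. A minimal sketch (the controller name and endpoint are made-up examples):

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.RestController;

        // Hypothetical controller: the application just logs via SLF4J;
        // the kafkaAppender configured below ships these events to Kafka.
        @RestController
        public class DemoController {

            private static final Logger log = LoggerFactory.getLogger(DemoController.class);

            @GetMapping("/hello")
            public String hello() {
                log.info("hello endpoint called"); // ends up in the "applog" Kafka topic
                return "ok";
            }
        }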

Specific steps

  • Introduce the Kafka-related dependencies into the project

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <!-- logback-kafka-appender dependency -->
        <dependency>
            <groupId>com.github.danielwegener</groupId>
            <artifactId>logback-kafka-appender</artifactId>
            <version>0.2.0-RC2</version>
        </dependency>
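    Note that the logback.xml further down also uses net.logstash.logback.appender.LogstashTcpSocketAppender and LogstashEncoder, which are not covered by the two dependencies above. If you keep that TCP appender, you also need the logstash-logback-encoder dependency; a sketch (the version is only an example):

        <!-- only needed for the LogstashTcpSocketAppender / LogstashEncoder used below -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <!-- example version; pick one compatible with your logback version -->
            <version>6.6</version>
        </dependency>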

  • Update the Logstash configuration so that, alongside the original tcp input, it also consumes the project's logs from Kafka (the kafka input settings below are inferred to match the logback appender configuration):

        input {
          tcp {
            mode => "server"
            host => "0.0.0.0"
            port => 4560
          }
          # kafka input: broker and topic inferred from the logback appender configuration below
          kafka {
            bootstrap_servers => "192.168.217.130:9092"
            topics => ["applog"]
          }
        }

        output {
          elasticsearch {
            hosts => "127.0.0.1:9200"
            index => "springboot-logstash-%{+YYYY.MM.dd}"
          }
        }
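    With the file in place, Logstash can be started as usual with bin/logstash -f <your-config>.conf. To confirm that log events actually reach Kafka, the console consumer that ships with Kafka can tail the topic: kafka-console-consumer.sh --bootstrap-server 192.168.217.130:9092 --topic applog --from-beginning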

  • Add the logback.xml configuration

        <!-- This is the kafkaAppender -->
        <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
            <topic>applog</topic>
            <!-- we don't care how the log messages will be partitioned -->
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
            <!-- use async delivery. the application threads are not blocked by logging -->
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=192.168.217.130:9092</producerConfig>
            <!-- don't wait for a broker to ack the reception of a batch -->
            <producerConfig>acks=0</producerConfig>
            <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
            <producerConfig>linger.ms=1000</producerConfig>
            <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
            <producerConfig>max.block.ms=0</producerConfig>
            <!-- define a client-id that you use to identify yourself against the kafka broker -->
            <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
        </appender>

        <root level="info">
            <appender-ref ref="kafkaAppender" />
        </root>

        <!-- output to logstash appender -->
        <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>localhost:9500</destination>
            <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
        </appender>

        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE"/>
            <appender-ref ref="logstash"/>
        </root>
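    Two caveats about the file as shown. First, it declares <root> twice; logback applies both sections to the same root logger, which is confusing at best, so a cleaner equivalent is a single root listing every appender that should stay active (assuming CONSOLE and FILE are defined elsewhere in the file):

        <root level="INFO">
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE"/>
            <appender-ref ref="kafkaAppender"/>
            <!-- keep this only if logs should also go to Logstash directly over TCP -->
            <appender-ref ref="logstash"/>
        </root>

    Second, the TCP appender points at localhost:9500 while the Logstash tcp input above listens on port 4560; in a real deployment these must match.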
  • Log4j2 configuration: if the project uses Log4j2 instead of Logback, the equivalent fragments are below (a sketch of their placement in the full file follows this list)

    • Socket appender:

          Socket:
            name: logstash-tcp
            host: 127.0.0.1
            port: 4560
            protocol: TCP
            PatternLayout:
              Pattern: ${log.pattern}

    • Root logger:

          Root:
            level: debug
            AppenderRef:
              - ref: CONSOLE
              - ref: ROLLING_FILE
              - ref: EXCEPTION_ROLLING_FILE
              - ref: logstash-tcp
              - ref: TASK_ROLLING_FILE
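
    For orientation, both fragments sit under the standard log4j2 YAML skeleton; a sketch of the placement (the CONSOLE and rolling-file appenders referenced by Root must also be defined under Appenders):

        Configuration:
          Appenders:
            # CONSOLE, ROLLING_FILE, EXCEPTION_ROLLING_FILE, TASK_ROLLING_FILE go here
            Socket:
              name: logstash-tcp
              host: 127.0.0.1
              port: 4560
              protocol: TCP
              PatternLayout:
                Pattern: ${log.pattern}
          Loggers:
            Root:
              level: debug
              AppenderRef:
                - ref: CONSOLE
                - ref: ROLLING_FILE
                - ref: EXCEPTION_ROLLING_FILE
                - ref: logstash-tcp
                - ref: TASK_ROLLING_FILE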