Preface
ELK is gradually being replaced by EFK because Logstash has a large memory footprint and relatively poor flexibility. The EFK in this article is Elasticsearch + Fluentd + Kafka. Strictly speaking, the K should stand for Kibana, which handles log display, but Kibana is not demonstrated here; this article only covers the data collection process.
Prerequisites
- docker
- docker-compose
- An Apache Kafka service (a quick check is sketched below)
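Before moving on, it can help to verify the tooling and make sure the topic used in this article exists. The commands below are only a sketch: adjust the Kafka installation path to your environment, 192.168.1.60:9092 is the broker address used throughout this article, and older Kafka versions use --zookeeper instead of --bootstrap-server.

docker --version
docker-compose --version
# Create the kafeidou topic if automatic topic creation is disabled on your broker
bin/kafka-topics.sh --create --bootstrap-server 192.168.1.60:9092 \
  --topic kafeidou --partitions 1 --replication-factor 1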
Architecture
Data collection process
The data is generated by cAdvisor, which collects monitoring data from the containers and transmits it to Kafka.
cAdvisor -> Kafka -> Fluentd -> Elasticsearch
Each service can be scaled horizontally to add capacity to the logging system.
Configuration files
docker-compose.yml
Version: "3.7" Services: ElasticSearch: image: elasticSearch :7.5.1 environment: Cadvisor: image: Google/cAdvisor command: -discovery. type=single-node # -storage_driver=kafka -storage_driver_kafka_broker_list=192.168.1.60:9092(Kafka service IP:PORT) -storage_driver_kafka_topic=kafeidou depends_on: - elasticsearch fluentd: image: Lypgcs/fluentd - es - kafka: v1.3.2 volumes: -) /, / etc/fluent - / var/log/fluentd: / var/log/fluentdCopy the code
In this configuration:
- The data generated by cAdvisor is transmitted to the Kafka service at 192.168.1.60, under the topic kafeidou (a quick way to verify this is sketched after this list).
- Elasticsearch is started in single-node mode (via the discovery.type=single-node environment variable) so that the overall setup can be tested.
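To confirm that cAdvisor is actually publishing monitoring data to the topic before wiring up Fluentd, you can read it with the console consumer that ships with Kafka. This is only a sketch; the script path depends on your Kafka installation.

# Print the messages cAdvisor writes to the kafeidou topic
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.60:9092 \
  --topic kafeidou --from-beginning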
fluent.conf
#<source>
# type http
# port 8888
#</source>
<source>
@type kafka
brokers 192.168.1.60:9092
format json
<topic>
topic kafeidou
</topic>
</source>
<match **>
@type copy
# <store>
# @type stdout
# </store>
<store>
@type elasticsearch
host 192.168.1.60
port 9200
logstash_format true
#target_index_key machine_name
logstash_prefix kafeidou
logstash_dateformat %Y.%m.%d
flush_interval 10s
</store>
</match>
Notes:
- The copy plugin (@type copy) duplicates the data Fluentd receives so the same records can be printed to the console or written to a file for debugging. In this configuration the stdout store is commented out by default and only the required Elasticsearch output is enabled; uncomment the @type stdout block if you need to check whether data is arriving.
- An HTTP input source is also configured, commented out by default, and likewise meant for debugging: it lets you push data into Fluentd directly. On Linux you can test it with the following command:
curl -i -X POST -d 'json={"action":"write","user":"kafeidou"}' http://localhost:8888/mytag
- The target_index_key parameter uses the value of a field in each record as the Elasticsearch index name. For example, this configuration file (where the option is commented out) would use the value of the machine_name field as the index name, as sketched below.
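As a sketch only (assuming the incoming records really contain a machine_name field), the Elasticsearch store with that option enabled would look roughly like this; check the fluent-plugin-elasticsearch documentation for the exact behavior of your plugin version:

<store>
  @type elasticsearch
  host 192.168.1.60
  port 9200
  # use the machine_name field of each record as the index name
  target_index_key machine_name
  flush_interval 10s
</store>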
Start the deployment
Run docker-compose up -d in the directory containing the docker-compose.yml and fluent.conf files.
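To check that the three services started correctly, the standard Docker Compose commands are enough:

# List the containers defined in docker-compose.yml and their state
docker-compose ps
# Follow the fluentd logs to see whether it connects to Kafka and Elasticsearch
docker-compose logs -f fluentd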
Then check whether Elasticsearch has generated the corresponding index and received the data:
[root@master kafka]# curl http://192.168.1.60:9200/_cat/indices?v
health status index        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   55a4a25feff6 Fz_5v3suRSasX_Olsp-4tA   1   1          1            0        4kb            4kb
You can also enter http://192.168.1.60:9200/_cat/indices?v directly in a browser to view the result, which is more convenient.
You can see that the index 55a4a25feff6 has been created and its docs.count is 1, which means data has already been written to Elasticsearch.
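If you want to inspect the stored documents themselves, you can query the index directly (the index name will differ in your environment; replace 55a4a25feff6 accordingly):

curl http://192.168.1.60:9200/55a4a25feff6/_search?pretty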
At this point, the Kafka -> Fluentd -> Elasticsearch part of the log collection pipeline is working.
Of course, the architecture is not fixed. You can also collect data as Fluentd -> Kafka -> Elasticsearch: modify fluent.conf and swap the Elasticsearch and Kafka roles between the input and output configuration.
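As a rough sketch of what the output side might look like in that alternative layout, fluent-plugin-kafka provides a kafka2 output; the broker address and topic are reused from above, and parameter names should be checked against the plugin version you install:

<match **>
  @type kafka2
  brokers 192.168.1.60:9092
  default_topic kafeidou
  <format>
    @type json
  </format>
  <buffer topic>
    flush_interval 10s
  </buffer>
</match>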
The Fluentd Elasticsearch plugin (fluent-plugin-elasticsearch) and the Fluentd Kafka plugin (fluent-plugin-kafka) can both be found on GitHub or on the Fluentd website.
Originally published by Four Coffee Beans. Please credit the source when reproducing.
Follow the public account [Four Coffee Beans] to get the latest content.