In this article, Burrow and Telegraf are used to build a Kafka monitoring system. Other tools, such as Kafka Manager, Kafka Eagle, and Confluent Control Center, are then introduced briefly.
If you're new to Kafka, check out the Kafka Basics Index.
Burrow
The data path
Burrow pulls Kafka's monitoring information, Telegraf collects it and writes it to InfluxDB, and Grafana displays it.
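The Telegraf side of that path can be wired up with its built-in burrow input and influxdb output plugins. A minimal sketch (the URLs and the database name are assumptions for illustration; adjust to your environment):

```toml
# Telegraf: pull consumer-lag data from Burrow's HTTP API
[[inputs.burrow]]
  servers = ["http://localhost:8000"]   # Burrow's httpserver address
  topics_exclude = ["__consumer_offsets"]

# Telegraf: write the collected metrics to InfluxDB
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]      # assumed local InfluxDB
  database = "burrow"                   # assumed database name
```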
Installation
Download the binaries from GitHub and unzip them.
Burrow was written by a Kafka community committer and monitors the consumer side. It has no UI, and it is written in Go. The project is not very active, but it works well enough.
The main configuration file
Burrow can retrieve information from multiple clusters at the same time. For example, with two clusters, dm and databus, the configuration file could look like this:
[general]
pidfile="burrow.pid"
stdout-logfile="burrow.out"
access-control-allow-origin="mysite.example.com"
[logging]
filename="logs/burrow.log"
level="info"
maxsize=100
maxbackups=30
maxage=10
use-localtime=false
use-compression=true
[zookeeper]
servers=[ "192.168.54.159:2181"]
timeout=6
root-path="/burrow"
[client-profile.databus]
client-id="burrow-databus"
kafka-version="0.10.0"
[cluster.databus]
class-name="kafka"
servers=[ "192.168.86.57:9092"."192.168.128.158:9092" ]
client-profile="databus"
topic-refresh=120
offset-refresh=30
[consumer.databus]
class-name="kafka"
cluster="databus"
servers=[ "192.168.86.57:9092"."192.168.128.158:9092" ]
client-profile="databus"
group-blacklist="^(console-consumer-|python-kafka-consumer-|quick-).*$"
group-whitelist=""
[client-profile.dm]
client-id="burrow-dm"
kafka-version="0.10.0"
[cluster.dm]
class-name="kafka"
servers=[ "192.168.204.156:9092"."192.168.87.50:9092" ]
client-profile="dm"
topic-refresh=120
offset-refresh=30
[consumer.dm]
class-name="kafka"
cluster="dm"
servers=[ "192.168.204.156:9092"."192.168.87.50:9092" ]
client-profile="databus"
group-blacklist="^(console-consumer-|python-kafka-consumer-|quick-).*$"
group-whitelist=""
[httpserver.default]
address=": 8000"
[storage.default]
class-name="inmemory"
workers=20
intervals=15
expire-group=604800
min-distance=1
Then start Burrow with nohup:
nohup ./burrow -config-dir=./config &
Validation
List the configured Kafka clusters: http://localhost:8000/v3/kafka
Get consumer information for a cluster: http://localhost:8000/v3/kafka/databus/consumer
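These endpoints can also be consumed programmatically. A small sketch below; the JSON field names (`consumers`, `status`, `totallag`) follow Burrow's v3 HTTP API, but treat the exact response shape as an assumption and check the docs for your Burrow version:

```python
import json
from urllib.request import urlopen

def consumer_groups(base_url, cluster):
    """List the consumer groups Burrow knows about for a cluster."""
    with urlopen(f"{base_url}/v3/kafka/{cluster}/consumer") as resp:
        body = json.load(resp)
    return body.get("consumers", [])

def parse_lag(body):
    """Pull the overall status and total lag out of a
    /v3/kafka/<cluster>/consumer/<group>/lag response body."""
    status = body["status"]
    return status["status"], status["totallag"]

# Example with a hand-written response body (no running Burrow required):
sample = {"error": False,
          "status": {"cluster": "databus", "group": "my-group",
                     "status": "OK", "totallag": 42}}
print(parse_lag(sample))  # ('OK', 42)
```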
Grafana configuration
1. Create variables
Create a variable that lists all clusters, so a drop-down box can be used for cluster selection.
2. Create a chart
Filter by the cluster variable and select the monitoring items; results can be grouped by consumer group.
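For example, a panel query in InfluxQL might look like the following (measurement and tag names come from Telegraf's burrow input plugin; `$cluster` is the dashboard variable created in step 1):

```sql
-- Total lag per consumer group, filtered by the dashboard's cluster variable
SELECT mean("total_lag")
FROM "burrow_group"
WHERE "cluster" =~ /^$cluster$/ AND $timeFilter
GROUP BY time($__interval), "group"
```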
Monitoring items
burrow_group monitors consumer group information, including:
lag, offset, status, total_lag, partition_count
burrow_partition monitors more detailed per-partition information, including:
lag, offset, status
burrow_topic monitors topic information, including:
offset
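In InfluxDB these arrive as line-protocol points, roughly of the following shape. The exact field set can differ by Telegraf version, and the values here are invented for illustration:

```
burrow_group,cluster=databus,group=my-group status="OK",total_lag=42i,partition_count=8i 1700000000000000000
burrow_partition,cluster=databus,group=my-group,topic=events,partition=0 lag=7i,offset=1234i,status="OK" 1700000000000000000
```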
1. Kafka Manager
The most popular one. It is written in Scala, and only the source is available for download, so you have to compile it yourself with SBT.
It can manage multiple clusters, elect preferred replicas, reassign replicas, create topics, and view consumer information.
Apart from being hard to compile, pulling information from a large Kafka cluster can be very resource-intensive.
2. Kafka Eagle
A project from the Kafka QQ group (a Chinese Kafka community). The interface is clean and attractive, with good data visualizations. Permissions and alerting are well developed, supporting DingTalk, WeChat, email, and other alert channels. It also supports querying data with KSQL.
3. Confluent Control Center
Control Center is the most fully featured Kafka monitoring framework available, but it only ships with the paid Confluent Enterprise edition.
The official documentation: docs.confluent.io/current/qui…
Note: the installation is extremely cumbersome (docs.confluent.io/current/ins…).
You either need to use the Kafka bundled with the Enterprise edition, or import four JARs into your own Kafka and modify its configuration files.
In addition, this service depends on schema-registry, connect-distributed, kafka-rest, and similar services, and occupies five ports.
4. Kafka Monitor
Largely obsolete; not recommended.
5. Kafka Offset Monitor
Likewise obsolete; not recommended.
End
Those are the commonly used Kafka monitoring components.
More excellent articles:
"Microservices Are Not Everything, Just a Subset of a Specific Domain" — selection and process should be careful, or things will get out of control.
"Out of All the Monitoring Components, There's Always One for You"
"The Most Common Set of Vim Techniques for Linux Production"
"What Are We Developing with Netty?"
The Linux five-part series:
"Linux: Cast Away (1) Preparation"
"Linux: Cast Away (2) CPU"
"Linux: Cast Away (3) Memory"
"Linux: Cast Away (4) I/O"
"Linux: Cast Away (5) Network"