When operating and maintaining Kafka clusters, we usually rely on open source tools to handle day-to-day operational tasks and troubleshoot related problems. This article introduces several commonly used Kafka operations tools.
kafka-manager
kafka-manager (now renamed CMAK, Cluster Manager for Apache Kafka) is a Kafka cluster management tool. Its main features include:
- Multi-cluster Management
- Real-time cluster state monitoring (topics, brokers, replica distribution, partition distribution, etc.)
- Preferred replica election
- Topic, partition, and replica management (topic creation, parameter tuning, replica assignment, etc.)
- Metrics monitoring at the Broker and Topic level
- Basic consumer information management
# compile source code
$ ./sbt clean dist
# docker installation
$ docker run -itd --name kafka-manager -e KAFKA_ZK_CLUSTER="zk-1:2181,zk-2:2181" -P bgbiao/kafka-manager:2.0.0.2
# Custom port
$ docker run -itd --name kafka-manager -e KAFKA_ZK_CLUSTER="zk-1:2181,zk-2:2181" -P bgbiao/kafka-manager:2.0.0.2 -Dhttp.port=8080
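If you prefer a fixed port mapping over -P, here is a minimal sketch (reusing the image and env var above, and assuming CMAK's default listen port of 9000):

# Hedged sketch: map CMAK's default port 9000 to the host explicitly (port assumed)
$ docker run -itd --name kafka-manager -e KAFKA_ZK_CLUSTER="zk-1:2181,zk-2:2181" -p 9000:9000 bgbiao/kafka-manager:2.0.0.2

With the fixed mapping, the UI is reachable at http://localhost:9000 without having to look up the randomly assigned port.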
kafkacat
kafkacat is a non-JVM command-line producer and consumer for quickly producing and consuming Kafka messages.
# macOS installation
$ brew install kafkacat
# docker installation
# Only Debian and openSUSE are officially supported
# The following image can be used in CentOS environments
$ docker pull bgbiao/kafkacat
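After installing, a quick sanity check is to print the version and query broker metadata; a minimal sketch (the broker address is an assumption):

# Print kafkacat/librdkafka version information
$ kafkacat -V
# Query cluster metadata to verify connectivity (broker address assumed)
$ kafkacat -b localhost:9092 -L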
# Consumer mode
$ kafkacat -b localhost:9092 -t mysql_users
% Auto-selecting Consumer mode (use -P or -C to override)
{"uid":1."name":"Cliff"."locale":"en_US"."address_city":"St Louis"."elite":"P"}
{"uid":2."name":"Nick"."locale":"en_US"."address_city":"Palo Alto"."elite":"G"}
[...].
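Because the messages above are JSON, consumer output pipes cleanly into jq; a minimal sketch (assumes jq is installed and the broker/topic from above):

# Consume to the end of the topic (-e) and extract one field per message
$ kafkacat -b localhost:9092 -t mysql_users -C -e | jq -r '.name'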
# Producer mode
# By default each newline delimits a message (-D can specify a different delimiter)
# Messages are read from stdin by default
$ kafkacat -b localhost:9092 -t new_topic -P
test
# Read messages from a file
# The -l flag sends each line of the file as a separate message; without it the entire
# file is sent as a single message (useful for binary data)
# The -T flag echoes produced messages to stdout
$ kafkacat -b localhost:9092 -t <my_topic> -T -P -l /tmp/msgs
# The message key can be specified with the -K flag (key delimiter)
$ kafkacat -b localhost:9092 -t keyed_topic -P -K:
1:foo
2:bar
$ kafkacat -b localhost:9092 -t keyed_topic -C -f 'Key: %k\nValue: %s\n'
Key: 1
Value: foo
Key: 2
Value: bar
# Specify the target partition with -p
$ kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 1
1:foo
$ kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 2
2:bar
$ kafkacat -b localhost:9092 -t partitioned_topic -P -K: -p 3
3:wibble
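To verify where keyed messages landed, you can consume a single partition directly; a minimal sketch using kafkacat's -p and -o flags (topic and broker from above):

# Read partition 1 from the beginning and exit once the end is reached (-e)
$ kafkacat -b localhost:9092 -t partitioned_topic -C -p 1 -o beginning -e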
# Metadata listing mode
# The -L flag displays the state of the Kafka cluster along with topic, partition, replica, and ISR information
# The -J flag switches the output to JSON format
$ kafkacat -b localhost:9092 -L
$ kafkacat -b mybroker -L -J
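The JSON form is handy for scripting; a minimal sketch (assumes jq is installed and that the metadata JSON exposes a top-level topics array, which may differ between kafkacat versions):

# List topic names from the cluster metadata
$ kafkacat -b localhost:9092 -L -J | jq -r '.topics[].topic'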
# Using the Docker image
# Produce data from stdin
$ docker run --rm bgbiao/kafkacat kafkacat -b kafka-broker:9092 -t test-biao -P <<EOF
> ssabdkgjf
> asfgnh
> wertnh
> waer
> awegrtn
> 2020-04-26 endtime
> EOF
# Consume data from the specified topic
$ docker run -it --rm bgbiao/kafkacat kafkacat -b kafka-broker:9092 -t test-biao -C -f '\nKey (%K bytes): %k\t\nValue (%S bytes): %s\nPartition: %p\tOffset: %o\n--\n'
Key (-1 bytes):
Value (13 bytes): test kafkacat
Partition: 0 Offset: 11
--
% Reached end of topic test-biao [0] at offset 12
Key (-1 bytes):
Value (18 bytes): 2020-04-26 endtime
Partition: 1 Offset: 8
--
% Reached end of topic test-biao [1] at offset 9
Key (-1 bytes):
Value (7 bytes): overlay
Partition: 2 Offset: 6
--
% Reached end of topic test-biao [2] at offset 7
Note: If you want to copy the kafkacat binary out of the Docker image and run it directly, you only need to install librdkafka-devel in the target environment.
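Extracting the binary is straightforward with docker create/cp; a minimal sketch (the path of the binary inside the image is an assumption):

# Create a stopped container from the image and copy the binary out
$ docker create --name kafkacat-tmp bgbiao/kafkacat
$ docker cp kafkacat-tmp:/usr/bin/kafkacat /usr/local/bin/kafkacat  # in-image path assumed
$ docker rm kafkacat-tmp
# kafkacat links against librdkafka, hence the librdkafka-devel requirement
$ yum install -y librdkafka-devel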
gokafka
gokafka (https://github.com/goops-top/gokafka) is another non-JVM Kafka operations tool. It can not only produce test messages and preview the messages on a given topic, but also supports common operational tasks on Kafka clusters.
Quick start
$ git clone https://github.com/goops-top/gokafka.git
$ cd gokafka
$ go build -o build/goops-kafka
$ ./build/goops-kafka
goops-kafka: A kafka tools with golang that can operate the kafka for describe.create.update and so on.
Note: Without the jvm,so you must be specify the [--broker or --cluster and the Value must be in --config entry.]
Usage:
gokafka [command]
Available Commands:
consumer consumer a topic message data with specified kafka-cluster.
create create the kafka topic with some base params in specify kafka-cluster.
describe describe the kafka some info (cluster,topic,broker,loginfo)
help Help about any command
init init the gokafka some default config.
list list the kafka some info (cluster,topic,broker)
producer producer a topic message data with specified kafka-cluster.
version print version info of goops-kafka tool
Flags:
--broker string Specifies the broker address
--cluster string Specifies a cluster
--config string Specifies the configuration file (default is $HOME/.goops-kafka)
-h, --help help for gokafka
Use "gokafka [command] --help" for more information about a command.
# Compile with make
# Binaries for macOS and Linux are generated automatically
$ make default
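If the Makefile does not cover your target platform, cross-compiling by hand is a one-liner with the Go toolchain; a minimal sketch (output file names follow the build/gokafka.mac convention used below):

# Build Linux and macOS binaries explicitly via GOOS/GOARCH
$ GOOS=linux GOARCH=amd64 go build -o build/gokafka.linux
$ GOOS=darwin GOARCH=amd64 go build -o build/gokafka.mac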
Basic usage
# Check the tool version
$ ./build/gokafka.mac version
Version: 0.0.1
GitBranch: master
CommitId: 445f935
Build Date: 2020-06-26T18:49:48+0800
Go Version: go1.14
OS/Arch: darwin/amd64
# Initialize the configuration file
# gokafka uses configuration files to quickly manage multiple Kafka clusters
$ ./build/gokafka.mac init
gokafka config init ok.
# A cluster configuration file is generated in the user's home directory
$ cat ~/.goops-kafka
app: gokafka
spec:
  clusters:
  - name: test-kafka
    version: V2_5_0_0
    brokers:
    - 10.0.0.1:9092
    - 10.0.0.2:9092
    - 10.0.0.3:9092
  - name: dev-kafka
    version: V1_0_0_0
    brokers:
    - 192.168.0.22:9092
    - 192.168.0.23:9092
    - 192.168.0.24:9092
# You can also use --config to specify the cluster configuration file
$ ./build/gokafka.mac --config ./kafka-cluster.yaml version
Version: 0.0.1
GitBranch: master
CommitId: 445f935
Build Date: 2020-06-26T18:53:25+0800
Go Version: go1.14
OS/Arch: darwin/amd64
# View the clusters defined in the configuration file
$ ./build/gokafka.mac list cluster
cluster:test-kafka version:V2_5_0_0 connector_brokers:[10.0.0.1:9092]
cluster:dev-kafka version:V1_0_0_0 connector_brokers:[192.168.0.22:9092]
$ ./build/gokafka.mac list cluster --config ./kafka-cluster.yaml
cluster:log-kafka version:V2_5_0_0 connector_brokers:[log-kafka-1.bgbiao.cn:9092]
# Note: when managing clusters via the configuration file, use the --cluster global flag to specify the target cluster
# You can also specify a broker directly with --broker
$ ./build/gokafka.mac --cluster dev-kafka describe broker
controller: 3
brokers num: 3
broker list: [2 1 3]
Id: 2 broker: 192.168.0.22:9092
Id: 1 broker: 192.168.0.23:9092
Id: 3 broker: 192.168.0.24:9092
$ ./build/gokafka.mac --broker 10.0.0.1:9092 describe broker
controller: 3
brokers num: 3
broker list: [2 1 3]
Id: 2 broker: 192.168.0.22:9092
Id: 1 broker: 192.168.0.23:9092
Id: 3 broker: 192.168.0.24:9092
Commonly used functions
# 1. Create a topic
# You can specify the number of partitions and replicas
$ ./build/gokafka.mac --cluster dev-kafka create --topic test-bgbiao-1
true
# 2. Check topic
$ ./build/gokafka.mac --cluster dev-kafka list topic --topic-list test-bgbiao-1
Topic:test-bgbiao-1 PartNum:3 Replicas:3 Config:
Topic-Part:test-bgbiao-1-2 ReplicaAssign:[1 3 2]
Topic-Part:test-bgbiao-1-1 ReplicaAssign:[2 1 3]
Topic-Part:test-bgbiao-1-0 ReplicaAssign:[3 2 1]
# 3. Get the partition and replica status of the topic
$ ./build/gokafka.mac --cluster dev-kafka describe topic --topic-list test-bgbiao-1
Topic-Part:test-bgbiao-1-2 Leader:1 Replicas:[1 3 2] ISR:[1 3 2] OfflineRep:[]
Topic-Part:test-bgbiao-1-1 Leader:2 Replicas:[2 1 3] ISR:[2 1 3] OfflineRep:[]
Topic-Part:test-bgbiao-1-0 Leader:3 Replicas:[3 2 1] ISR:[3 2 1] OfflineRep:[]
# 4. Produce messages to topic
$ ./build/gokafka.mac --cluster dev-kafka producer --topic test-bgbiao-1 --msg "Hello, BGBiao."
INFO[0000] Produce msg:Hello, BGBiao. to topic:test-bgbiao-1
INFO[0000] topic:test-bgbiao-1 send ok with offset:0
$ ./build/gokafka.mac --cluster dev-kafka producer --topic test-bgbiao-1 --msg "Nice to meet you."
INFO[0000] Produce msg:Nice to meet you. to topic:test-bgbiao-1
INFO[0000] topic:test-bgbiao-1 send ok with offset:0
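To generate a batch of test messages, the producer subcommand loops cleanly in the shell; a minimal sketch using only the flags shown above:

# Produce ten numbered test messages to the topic
$ for i in $(seq 1 10); do ./build/gokafka.mac --cluster dev-kafka producer --topic test-bgbiao-1 --msg "msg-$i"; done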
# 5. Consume messages (gokafka creates a default consumer group to preview messages)
# Consumes in real time in the terminal; press Ctrl+C to cancel
$ ./build/gokafka.mac --cluster dev-kafka consumer --topic test-bgbiao-1
INFO[0000] Sarama consumer up and running!...
INFO[0004] part:2 offset:0
msg: Nice to meet you.
INFO[0004] part:1 offset:0
msg: Hello, BGBiao.
# 6. View the cluster's broker list and controller
$ ./build/gokafka.mac --cluster dev-kafka describe broker
controller: 3
brokers num: 3
broker list: [2 1 3]
Id: 2 broker: 192.168.0.23:9092
Id: 1 broker: 192.168.0.22:9092
Id: 3 broker: 192.168.0.24:9092
# 7. Check the log size for a topic
$ ./build/gokafka.mac --cluster dev-kafka describe loginfo --topic-list test-bgbiao-1
topic:test-bgbiao-1
172.16.32.23:9092
logdir:/soul/data/kafka/kafka-logs
topic-part log-size(M) offset-lag
---------- ----------- ----------
test-bgbiao-1-0 0 0
test-bgbiao-1-1 0 0
test-bgbiao-1-2 0 0
172.16.32.22:9092
logdir:/soul/data/kafka/kafka-logs
topic-part log-size(M) offset-lag
---------- ----------- ----------
test-bgbiao-1-0 0 0
test-bgbiao-1-1 0 0
test-bgbiao-1-2 0 0
172.16.32.24:9092
logdir:/soul/data/kafka/kafka-logs
topic-part log-size(M) offset-lag
---------- ----------- ----------
test-bgbiao-1-0 0 0
test-bgbiao-1-1 0 0
test-bgbiao-1-2 0 0
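Since loginfo reports per-broker log size and offset lag, it pairs well with watch for a crude growth monitor; a minimal sketch (the interval is arbitrary):

# Refresh the topic's log size report every 10 seconds
$ watch -n 10 './build/gokafka.mac --cluster dev-kafka describe loginfo --topic-list test-bgbiao-1'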
# 8. List the consumer groups for the Kafka cluster
$ ./build/gokafka.mac --cluster dev-kafka list consumer-g | grep -i gokafka
GoKafka
# 9. View consumer group details
# The consumer group is currently Empty because we stopped the consumer above
$ ./build/gokafka.mac --cluster dev-kafka describe consumer-g --consumer-group-list GoKafka
--------------------------------------------------------------------------------------------
consumer-group:GoKafka consumer-state:Empty
# Let's look at a consumer group that is actively consuming
# You can see all consumer instances in the group, their IPs, and the topics they consume
$ ./build/gokafka.mac --cluster dev-kafka describe consumer-g --consumer-group-list group-sync
--------------------------------------------------------------------------------------------
consumer-group:group-sync consumer-state:Stable
consumer-id consumer-ip topic-list
consumer-1-a9437739-b41e-5cb4-a9d3-2640b9878965 /172.16.64.207 [sync-dev]
consumer-1-eb690bd3-2746-8b92-8990-9c21e1ee405d /172.16.32.235 [sync-dev]
# You can also view log offset details for a topic consumed by a consumer group
$ ./build/gokafka.mac --cluster dev-kafka describe consumer-group-offset --group group-sync --topic sync-dev
sync-dev
topic-part:sync-dev-0 log-offsize:98
topic-part:sync-dev-1 log-offsize:0
topic-part:sync-dev-2 log-offsize:340
topic-part:sync-dev-3 log-offsize:261
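To audit every group on the cluster instead of one at a time, the list and describe subcommands compose in a loop; a minimal sketch (assumes list consumer-g prints one group name per line, as the grep usage above suggests):

# Describe every consumer group reported by the cluster
$ for g in $(./build/gokafka.mac --cluster dev-kafka list consumer-g); do ./build/gokafka.mac --cluster dev-kafka describe consumer-g --consumer-group-list "$g"; done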