This is the 26th day of my participation in the November Gwen Challenge. Check out the event details: The last Gwen Challenge 2021

Related: Local deployment of Apache Kafka

The following is a list of the script files in the bin/ directory of the Apache Kafka (Scala 2.13 build) installation:

```
➜ tree -L 1 bin
bin
├── connect-distributed.sh
├── connect-mirror-maker.sh
├── connect-standalone.sh
├── kafka-acls.sh
├── kafka-broker-api-versions.sh
├── kafka-cluster.sh
├── kafka-configs.sh
├── kafka-console-consumer.sh
├── kafka-console-producer.sh
├── kafka-consumer-groups.sh
├── kafka-consumer-perf-test.sh
├── kafka-delegation-tokens.sh
├── kafka-delete-records.sh
├── kafka-dump-log.sh
├── kafka-features.sh
├── kafka-get-offsets.sh
├── kafka-leader-election.sh
├── kafka-log-dirs.sh
├── kafka-metadata-shell.sh
├── kafka-mirror-maker.sh
├── kafka-producer-perf-test.sh
├── kafka-reassign-partitions.sh
├── kafka-replica-verification.sh
├── kafka-run-class.sh
├── kafka-server-start.sh
├── kafka-server-stop.sh
├── kafka-storage.sh
├── kafka-streams-application-reset.sh
├── kafka-topics.sh
├── kafka-transactions.sh
├── kafka-verifiable-consumer.sh
├── kafka-verifiable-producer.sh
├── trogdor.sh
├── windows
├── zookeeper-security-migration.sh
├── zookeeper-server-start.sh
├── zookeeper-server-stop.sh
└── zookeeper-shell.sh

1 directory, 37 files
```

There is also a directory called windows, which contains the .bat batch files for Windows; everything else is a .sh shell script. Several of these scripts have appeared in previous articles:

  • zookeeper-server-start.sh: used to start the ZooKeeper service
  • kafka-server-start.sh: used to start the Kafka Broker
  • kafka-topics.sh: used to manage Topics
  • kafka-configs.sh: used to modify dynamic Broker-side configuration
  • kafka-consumer-groups.sh: used in previous articles to reset consumer group offsets, among other things

Among these scripts, the ones prefixed with zookeeper- operate ZooKeeper, the ones prefixed with connect- relate to Kafka Connect, and the rest are various Kafka scripts.

The documentation for each script can be viewed by appending --help to the command, for example:

```
➜ bin/kafka-log-dirs.sh --help
This tool helps to query log directory usage on the specified brokers.
Option                                 Description
------                                 -----------
--bootstrap-server <String: The server REQUIRED: the server(s) to use for
  (s) to use for bootstrapping>          bootstrapping
--broker-list <String: Broker list>    The list of brokers to be queried in
                                         the form "0,1,2". All brokers in the
                                         cluster will be queried if no broker
                                         list is specified
--command-config <String: Admin client Property file containing configs to be
  property file>                         passed to Admin Client.
--describe                             Describe the specified log directories
                                         on the specified brokers.
--help                                 Print usage information.
--topic-list <String: Topic list>      The list of topics to be queried in the
                                         form "topic1,topic2,topic3". All
                                         topics will be queried if no topic
                                         list is specified (default: )
--version                              Display Kafka version.
```

This article focuses on a few of the most commonly used of these scripts.

kafka-broker-api-versions

This script is used to check which versions of each API a Broker supports, and therefore whether Brokers and clients of different versions are compatible with each other. Running it produces output like the following (most lines have been replaced with an ellipsis):

```
➜ bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092

192.168.1.21:9092 (id: 0 rack: null) -> (
	Produce(0): 0 to 9 [usable: 9],
	Fetch(1): 0 to 12 [usable: 12],
	ListOffsets(2): 0 to 7 [usable: 7],
	......
)
```

Take the first line, Produce(0): 0 to 9 [usable: 9], as an example:

  • Produce: the request type; producers send Produce requests to the Broker when delivering messages.
  • (0): the sequence number (API key) of the request type.
  • 0 to 9: the current Broker supports versions 0 through 9 of the Produce request, 10 versions in total.
  • [usable: 9]: the current client is using version 9 of the request. "Client" here means the kafka-broker-api-versions script we are running; running the script from a different Kafka release against the same Broker instance may give a different result.
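As a quick illustration of the output format (our own sketch, not part of the tool), the usable version can be pulled out of such a line with standard shell utilities:

```shell
# Extract the "usable" version from a kafka-broker-api-versions output line.
# The sample line is copied from the output above; the sed pattern is ours.
line='Produce(0): 0 to 9 [usable: 9],'
usable=$(printf '%s\n' "$line" | sed -n 's/.*usable: \([0-9]*\)\].*/\1/p')
echo "$usable"
```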

kafka-console-consumer and kafka-console-producer

In the previous article (Local deployment of Apache Kafka), you used these two commands to verify that Kafka was deployed and running successfully. They can be used to produce and consume messages, respectively.

Producing a message to Kafka takes a simple command (remember to create the Topic in advance):

```
➜ bin/kafka-console-producer.sh --topic hello-events --bootstrap-server localhost:9092
```

If you want to provide multiple Broker nodes, --bootstrap-server accepts a comma-separated list of addresses (older releases exposed this through a --broker-list option).

If you want to consume messages for a topic, you can do this:

```
➜ bin/kafka-console-consumer.sh --topic hello-events --from-beginning --bootstrap-server localhost:9092
```

After execution, the message is printed to the console.

The --from-beginning flag makes the consumer start from the earliest available offset, i.e. it resets the offset using the Earliest policy.

These two commands for producing and consuming messages are rarely used in real scenarios; they are mostly useful for testing.

kafka-producer-perf-test and kafka-consumer-perf-test

These two commands also come as a producer/consumer pair, used to test the performance of producing and consuming messages.

For example, the following script:

```
bin/kafka-producer-perf-test.sh --topic hello-events --num-records 100000 --throughput -1 --record-size 1024 --producer-props bootstrap.servers=localhost:9092
```

It sends one hundred thousand messages of 1,024 bytes each to the specified Topic (a small volume, for demonstration only), producing the following results:

```
100000 records sent, 65919.578115 records/sec (64.37 MB/sec), 2.89 ms avg latency, 253.00 ms max latency, 0 ms 50th, 20 ms 95th, 22 ms 99th, 23 ms 99.9th.
```

This shows the number of messages sent per second, throughput, average latency, and several quantiles. We can focus on the last quantile. 23ms 99.9th means that 99.9% of messages are delayed within 23ms, which is an important indicator of performance.
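The throughput figure is consistent with the record rate: records/sec times the 1,024-byte record size, converted to MiB. A quick sanity check on the numbers above (our own arithmetic, not part of the tool):

```shell
# 65919.578115 records/sec * 1024 bytes per record, expressed in MiB/sec,
# should reproduce the reported 64.37 MB/sec.
awk 'BEGIN { printf "%.2f\n", 65919.578115 * 1024 / (1024 * 1024) }'
```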

The performance test command on the consumer side is slightly simpler:

```
bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --messages 100000 --topic hello-events
```

The results are as follows:

```
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2021-11-26 16:31:12:691, 2021-11-26 16:31:17:028, 98.1396, 22.6285, 100495, 23171.5472, 3594, 743, 132.0857, 135255.7201
```

It reports only timing and throughput figures, with no latency quantiles.
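One detail worth noting in those columns: nMsg.sec divides the message count by the total elapsed time (end.time minus start.time, about 4.337 s, which includes the 3.594 s rebalance), while fetch.nMsg.sec divides by fetch.time.ms alone. A sketch of the arithmetic, using the figures from the row above:

```shell
# Reproduce nMsg.sec and fetch.nMsg.sec from the sample output.
awk 'BEGIN {
  msgs    = 100495
  total_s = 4.337   # 16:31:17:028 - 16:31:12:691, in seconds
  fetch_s = 0.743   # fetch.time.ms / 1000
  printf "%.1f\n", msgs / total_s   # overall throughput (nMsg.sec)
  printf "%.1f\n", msgs / fetch_s   # fetch-only throughput (fetch.nMsg.sec)
}'
```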

kafka-dump-log

This script reads a log segment file on disk and dumps the messages it contains.

```
bin/kafka-dump-log.sh --files /tmp/kafka-logs/hello-events-0/00000000000000000000.log
```
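The file name in that path is itself meaningful: Kafka names each log segment after the offset of the first message it contains, zero-padded to 20 digits. A minimal sketch of reconstructing a segment name (our own illustration, not a Kafka tool):

```shell
# The first segment of a partition starts at offset 0, hence the
# 20-digit all-zero name above; a later segment starting at a higher
# base offset is named the same way.
printf '%020d.log\n' 0
printf '%020d.log\n' 182055
```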

This script requires the path to a .log file, and it prints content similar to the following:

```
baseOffset: 1035315 lastOffset: 1035329 count: 15 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 1073690676 CreateTime: 1637914980747 size: 15556 magic: 2 compresscodec: none crc: 4007605223 isvalid: true
```

Each line describes a message batch: its offset range, message count, creation time, compression codec, and so on. To also list the individual messages within each batch, append --deep-iteration to the command. The result looks like this:

```
baseOffset: 182055 lastOffset: 182069 count: 15 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 188803172 CreateTime: 1637914971926 size: 15556 magic: 2 compresscodec: none crc: 2670957992 isvalid: true
| offset: 182055 CreateTime: 1637914971926 keySize: -1 valueSize: 1024 sequence: -1 headerKeys: []
| offset: 182056 CreateTime: 1637914971926 keySize: -1 valueSize: 1024 sequence: -1 headerKeys: []
| offset: 182057 CreateTime: 1637914971926 keySize: -1 valueSize: 1024 sequence: -1 headerKeys: []
```
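As a consistency check on the batch header, count is just the size of the offset range (a one-liner with the figures from the header above):

```shell
# lastOffset - baseOffset + 1 should match the reported count.
# prints 15
awk 'BEGIN { print 182069 - 182055 + 1 }'
```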

Each message in the batch now gets its own line; to also print the message payload itself, add --print-data-log.

kafka-consumer-groups

Besides resetting offsets, as shown in a previous article, the kafka-consumer-groups script can also inspect the offsets of consumer groups:

```
bin/kafka-consumer-groups.sh --describe --all-groups --bootstrap-server localhost:9092
```

The following results are displayed:

```
GROUP                TOPIC         PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG      CONSUMER-ID  HOST  CLIENT-ID
perf-consumer-27937  hello-events  0          100495          3513680         3413185  -            -     -
```
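LAG is simply LOG-END-OFFSET minus CURRENT-OFFSET, i.e. how many messages the group still has to catch up on. Checking against the row above:

```shell
# 3513680 (log end offset) - 100495 (current offset) should equal
# the reported LAG.
# prints 3413185
awk 'BEGIN { print 3513680 - 100495 }'
```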

This command lists all consumer groups. If you want to see information about a single consumer group, replace --all-groups with --group followed by the group name.