ConfigCommand
Configuration-related operations; dynamic configuration can override the default static configuration.
1 Querying configuration information
Topic Configuration Query
Shows dynamic and static configurations for Topic
1.1. Querying A Single Topic Configuration (Dynamic configuration only)
sh bin/kafka-configs.sh --describe --bootstrap-server xxxxx:9092 --topic test_create_topic
or
sh bin/kafka-configs.sh --describe --bootstrap-server 172.23.248.85:9092 --entity-type topics --entity-name test_create_topic
1.2. Query all Topic configurations (including internal topics)(only dynamic configurations are listed)
sh bin/kafka-configs.sh --describe --bootstrap-server 172.23.248.85:9092 --entity-type topics
1.3. Querying detailed Topic configuration (dynamic + static)
You just need to add the --all argument.
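For instance, the detailed query might look like this (the broker address and topic name are placeholders carried over from the earlier examples):

```shell
sh bin/kafka-configs.sh --describe --bootstrap-server xxxx:9092 --entity-type topics --entity-name test_create_topic --all
```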
Other queries: clients/users/brokers/broker-loggers
The approach is the same; just change --entity-type to the corresponding type (topics/clients/users/brokers/broker-loggers).
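For example, querying the dynamic configuration of a single broker could look like this (the broker address and broker id 0 are assumed placeholders):

```shell
sh bin/kafka-configs.sh --describe --bootstrap-server xxxx:9092 --entity-type brokers --entity-name 0
```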
broker-loggers: query the logger configurations of a specified broker
sh bin/kafka-configs.sh --describe --bootstrap-server xxxx:9090 --entity-type broker-loggers --entity-name <brokerId>
Example: query the Kafka version
sh bin/kafka-configs.sh --describe --bootstrap-server xxxx:9092 --version
See the attachment at the end for the full list of configurable dynamic settings.
2. Add, delete, and modify configurations: --alter
--alter
Add/modify configuration: --add-config k1=v1,k2=v2
Delete configuration: --delete-config k1,k2
Entity type: --entity-type (topics/clients/users/brokers/broker-loggers)
Entity name: --entity-name
Topic Adds/modifies dynamic configuration
--add-config
sh bin/kafka-configs.sh --bootstrap-server xxxxx:9092 --alter --entity-type topics --entity-name test_create_topic1 --add-config file.delete.delay.ms=222222,retention.ms=999999
Deleting a Topic dynamic configuration
--delete-config
sh bin/kafka-configs.sh --bootstrap-server xxxxx:9092 --alter --entity-type topics --entity-name test_create_topic1 --delete-config file.delete.delay.ms,retention.ms
Adding and deleting configurations in the same command
sh bin/kafka-configs.sh --bootstrap-server xxxxx:9092 --alter --entity-type brokers --entity-default --add-config log.segment.bytes=788888888 --delete-config log.retention.ms
The same applies to the other entity types; just change --entity-type (topics/clients/users/brokers/broker-loggers).
See the attachment at the end for the configurations that can be modified: optional configurations for ConfigCommand.
3. Default Settings
Set a default with --entity-default
sh bin/kafka-configs.sh --bootstrap-server xxxxx:9090 --alter --entity-type brokers --entity-default --add-config log.segment.bytes=88888888
The dynamic default configuration is stored under the <default> node.
(Image source: www.cnblogs.com/lizherui/p/…)
Priority: per-entity dynamic configuration > default dynamic configuration > static configuration
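A sketch of how that precedence can arise in practice (the broker address, broker id, and values here are assumptions):

```shell
# Cluster-wide dynamic default for all brokers
sh bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --entity-type brokers --entity-default --add-config log.segment.bytes=88888888
# Per-broker dynamic value; overrides the default on broker 0 only
sh bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --entity-type brokers --entity-name 0 --add-config log.segment.bytes=788888888
```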
Attachment
Optional configurations for ConfigCommand
Topic optional configurations

key | description | example |
---|---|---|
cleanup.policy | Cleanup policy | |
compression.type | Compression type (usually recommended to set on the producer side) | |
delete.retention.ms | How long delete tombstones are retained in compacted logs | |
file.delete.delay.ms | ||
flush.messages | Number of messages between forced flushes to disk | |
flush.ms | Interval between forced flushes to disk | |
follower.replication.throttled.replicas | Follower replica throttling; format: partitionNumber:followerBrokerId,partitionNumber:followerBrokerId | 0:0,1:1 |
index.interval.bytes | ||
leader.replication.throttled.replicas | Leader replica throttling; format: partitionNumber:leaderBrokerId | 0:0 |
max.compaction.lag.ms | ||
max.message.bytes | Maximum message (batch) size | |
message.downconversion.enable | Whether message down-conversion for older consumers is enabled | |
message.format.version | Message format version | |
message.timestamp.difference.max.ms | ||
message.timestamp.type | ||
min.cleanable.dirty.ratio | ||
min.compaction.lag.ms | ||
min.insync.replicas | Minimum number of in-sync replicas | |
preallocate | ||
retention.bytes | Log retention size (retention is usually limited by time instead) | |
retention.ms | Log retention time | |
segment.bytes | Segment size limit | |
segment.index.bytes | ||
segment.jitter.ms | ||
segment.ms | Segment roll time | |
unclean.leader.election.enable | Whether an out-of-sync replica may be elected leader |
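As an illustration, the two throttled-replicas settings from the table above could be applied together like this (the topic name and partition:brokerId pairs are assumptions):

```shell
sh bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --entity-type topics --entity-name test_create_topic --add-config 'leader.replication.throttled.replicas=0:0,follower.replication.throttled.replicas=0:1'
```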
Broker optional configurations

key | description | example |
---|---|---|
advertised.listeners | ||
background.threads | ||
compression.type | ||
follower.replication.throttled.rate | ||
leader.replication.throttled.rate | ||
listener.security.protocol.map | ||
listeners | ||
log.cleaner.backoff.ms | ||
log.cleaner.dedupe.buffer.size | ||
log.cleaner.delete.retention.ms | ||
log.cleaner.io.buffer.load.factor | ||
log.cleaner.io.buffer.size | ||
log.cleaner.io.max.bytes.per.second | ||
log.cleaner.max.compaction.lag.ms | ||
log.cleaner.min.cleanable.ratio | ||
log.cleaner.min.compaction.lag.ms | ||
log.cleaner.threads | ||
log.cleanup.policy | ||
log.flush.interval.messages | ||
log.flush.interval.ms | ||
log.index.interval.bytes | ||
log.index.size.max.bytes | ||
log.message.downconversion.enable | ||
log.message.timestamp.difference.max.ms | ||
log.message.timestamp.type | ||
log.preallocate | ||
log.retention.bytes | ||
log.retention.ms | ||
log.roll.jitter.ms | ||
log.roll.ms | ||
log.segment.bytes | ||
log.segment.delete.delay.ms | ||
max.connections | ||
max.connections.per.ip | ||
max.connections.per.ip.overrides | ||
message.max.bytes | ||
metric.reporters | ||
min.insync.replicas | ||
num.io.threads | ||
num.network.threads | ||
num.recovery.threads.per.data.dir | ||
num.replica.fetchers | ||
principal.builder.class | ||
replica.alter.log.dirs.io.max.bytes.per.second | ||
sasl.enabled.mechanisms | ||
sasl.jaas.config | ||
sasl.kerberos.kinit.cmd | ||
sasl.kerberos.min.time.before.relogin | ||
sasl.kerberos.principal.to.local.rules | ||
sasl.kerberos.service.name | ||
sasl.kerberos.ticket.renew.jitter | ||
sasl.kerberos.ticket.renew.window.factor | ||
sasl.login.refresh.buffer.seconds | ||
sasl.login.refresh.min.period.seconds | ||
sasl.login.refresh.window.factor | ||
sasl.login.refresh.window.jitter | ||
sasl.mechanism.inter.broker.protocol | ||
ssl.cipher.suites | ||
ssl.client.auth | ||
ssl.enabled.protocols | ||
ssl.endpoint.identification.algorithm | ||
ssl.key.password | ||
ssl.keymanager.algorithm | ||
ssl.keystore.location | ||
ssl.keystore.password | ||
ssl.keystore.type | ||
ssl.protocol | ||
ssl.provider | ||
ssl.secure.random.implementation | ||
ssl.trustmanager.algorithm | ||
ssl.truststore.location | ||
ssl.truststore.password | ||
ssl.truststore.type | ||
unclean.leader.election.enable |
User optional configurations

key | description | example |
---|---|---|
SCRAM-SHA-256 | ||
SCRAM-SHA-512 | ||
consumer_byte_rate | Byte-rate quota for consumers of this user | |
producer_byte_rate | Byte-rate quota for producers of this user | |
request_percentage | Request-time quota as a percentage |
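A sketch of applying these user quotas in one command (the user name and quota values are assumptions):

```shell
sh bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --entity-type users --entity-name user1 --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152,request_percentage=200'
```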
Client optional configurations

key | description | example |
---|---|---|
consumer_byte_rate | ||
producer_byte_rate | ||
request_percentage |
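Client quotas follow the same pattern as user quotas; for example (the client id and rate values are assumptions):

```shell
sh bin/kafka-configs.sh --bootstrap-server xxxx:9092 --alter --entity-type clients --entity-name clientA --add-config 'producer_byte_rate=1048576,consumer_byte_rate=1048576'
```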
More
The Kafka column is continuously updated… (source code, principles, practice, operations, videos, interview videos)
[Kafka operations] The most complete and detailed collection of Kafka operations commands (highly recommended) - Shi Zhenzhen's grocery store, CSDN blog
[Kafka in practice] Problems that can appear during partition reassignment and how to troubleshoot them (production-environment experience)
[Kafka exceptions] Solutions for common Kafka exceptions (continuously updated)
[Kafka operations] Partition reassignment, data migration, and replica expansion (with video)
[Kafka source code] ReassignPartitionsCommand source-code analysis (replica expansion, data migration, reassignment, cross-path replica migration)
[Kafka] Go to … for more