Basic introduction to Kafka
Originally developed by LinkedIn, Kafka is a distributed, partitioned, multi-replica, multi-subscriber logging system (also known as an MQ) coordinated by ZooKeeper. It is commonly used for web/Nginx logs, access logs, messaging services, and so on. LinkedIn contributed it to the Apache Foundation in 2010, where it became a top-level open source project.
Its main application scenarios are log collection systems and messaging systems.
Kafka’s main design goals are as follows:
- Message persistence with O(1) time complexity, guaranteeing constant-time access performance even for more than TB of data.
- High throughput: even on very cheap commodity machines, a single node can support 100K messages per second.
- Partitioning of messages across Kafka servers and distributed consumption, while guaranteeing the ordering of messages within each partition.
- Support for both offline data processing and real-time data processing.
Before looking at the diagrams, let's learn some Kafka terminology:
- Broker: a message middleware processing node. One Kafka node is one broker; multiple brokers form a Kafka cluster.
- Topic: a category of messages. A Kafka cluster can serve multiple topics at the same time.
- Partition: a physical grouping within a topic. A topic can be divided into multiple partitions, each of which is an ordered queue.
- Segment: each partition physically consists of multiple segments.
- Offset: each partition consists of a series of ordered, immutable messages that are continually appended to it. Each message in a partition has a sequential number called its offset, which uniquely identifies the message within that partition.
- Producer: the client that publishes messages to a Kafka broker.
- Consumer: the client that reads messages from a Kafka broker.
- Consumer Group: each consumer belongs to a specific consumer group.
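The terms above fit together like this: a producer picks a partition of a topic for each message, and the partition assigns the message a sequential offset. A minimal in-memory sketch (hypothetical code, not the Kafka client API; Kafka's real default partitioner hashes keys with murmur2, here we just use `hashlib` for illustration):

```python
# Toy model of topic -> partitions -> offsets. Not real Kafka code.
import hashlib

class Topic:
    def __init__(self, name, num_partitions):
        self.name = name
        # Each partition is an ordered queue of messages.
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, value):
        # Pick a partition from the key (Kafka's client uses murmur2;
        # md5 here is just a stand-in for a stable hash).
        p = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[p].append(value)
        offset = len(self.partitions[p]) - 1  # position within the partition
        return p, offset

topic = Topic("access-log", num_partitions=3)
p1, o1 = topic.append("user-42", "GET /index")
p2, o2 = topic.append("user-42", "GET /about")
assert p1 == p2      # same key -> same partition, so per-key order is kept
assert o2 == o1 + 1  # offsets grow sequentially within a partition
```

This is why Kafka can guarantee ordering per partition but not across a whole topic: each partition numbers its own messages independently.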
With the terminology in place, let's move on to:
Analysis of Kafka’s design principles
A typical Kafka cluster contains several producers, brokers, consumers, and a ZooKeeper cluster. Kafka uses ZooKeeper to manage the cluster configuration, elect leaders, and rebalance consumer groups when membership changes. Producers publish messages to brokers in push mode, and consumers subscribe to and consume messages from brokers in pull mode.
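The pull model means the broker is passive storage: the consumer tracks its own position and asks for messages after a given offset. A hypothetical in-memory stand-in (not the Kafka API) to make the flow concrete:

```python
# Push/pull sketch: producers push into the broker's log; consumers pull
# from an offset that the consumer itself tracks. Toy code, not Kafka.
class Broker:
    def __init__(self):
        self.log = []

    def push(self, msg):                  # producer side: push mode
        self.log.append(msg)

    def fetch(self, offset, max_n=10):    # consumer side: pull mode
        return self.log[offset:offset + max_n]

broker = Broker()
for m in ["a", "b", "c"]:
    broker.push(m)

offset = 0                     # the consumer owns its read position
batch = broker.fetch(offset)
offset += len(batch)
assert batch == ["a", "b", "c"]
assert broker.fetch(offset) == []   # nothing new yet; the consumer polls again
```

Because the consumer owns the offset, it can consume at its own pace and even rewind, which is also what makes the delivery semantics discussed next a matter of *when* the offset is saved.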
Transactional characteristics of Kafka data transmission
At most once: messages may be lost, but are never retransmitted.
This is similar to a "non-persistent" message in JMS: once sent, it is never resent, regardless of success or failure. The consumer fetches the message, saves the offset, and then processes the message. If an exception occurs during processing after the offset has already been saved, those messages can never be processed: the "unprocessed" messages will not be fetched again. This is "at most once".
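The failure mode above can be shown in a few lines. This is a toy simulation of the save-offset-then-process ordering, not Kafka client code:

```python
# "At most once" sketch: commit the offset BEFORE processing. If
# processing then crashes, the message is skipped forever (lost).
log = ["m0", "m1", "m2"]
committed = 0          # offset durably saved (e.g. in ZooKeeper)
processed = []

def process(msg):
    if msg == "m1":
        raise RuntimeError("crash while processing")
    processed.append(msg)

while committed < len(log):
    msg = log[committed]
    committed += 1     # 1) save the offset first
    try:
        process(msg)   # 2) then process
    except RuntimeError:
        pass           # consumer restarts from the committed offset

assert processed == ["m0", "m2"]   # m1 was lost: at most once
```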
At least once: messages are never lost, but may be retransmitted.
If the message is not successfully received, it may be resent until it is. The consumer fetches the message, processes it, and then saves the offset. If the message is processed successfully but saving the offset fails (for example, due to a ZooKeeper exception), the next fetch may return messages that were already processed: because the offset was not committed to ZooKeeper in time, ZooKeeper recovers to (or remains in) the old offset state. This is "at least once".
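The duplicate-delivery case mirrors the previous sketch with the two steps swapped. Again a toy simulation, not Kafka client code:

```python
# "At least once" sketch: process FIRST, commit the offset after. If the
# commit fails (e.g. a ZooKeeper hiccup), the same message is re-fetched
# and processed a second time.
log = ["m0", "m1"]
committed = 0
processed = []
fail_commit_once = True

while committed < len(log):
    msg = log[committed]
    processed.append(msg)          # 1) process first
    if fail_commit_once and msg == "m1":
        fail_commit_once = False   # 2) commit fails -> offset stays put
        continue
    committed += 1                 # 2) commit succeeds

assert processed == ["m0", "m1", "m1"]   # m1 handled twice: at least once
```

This is why "at least once" consumers should make processing idempotent where possible, so the duplicate is harmless.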
Exactly once: each message is transmitted once and only once.
Kafka has no strict implementation of this (it would require something like a two-phase commit), and this strategy is generally not considered necessary in Kafka.
Usually, “at-least-once” is our first choice.
Kafka message storage format
Topic & Partition
A topic can be thought of as a category of messages, and each topic is divided into multiple partitions. Each partition is an append-only log file at the storage layer.
In Kafka's file storage, a topic has multiple partitions, and each partition is a directory. The partition naming rule is topic name + sequence number: the first partition is numbered 0, and the largest number is the number of partitions minus 1.
Each partition is equivalent to one huge file that is evenly divided into multiple equally sized segment data files, although the number of messages in each segment file is not necessarily equal. Each partition only needs to support sequential reads and writes; the segment file lifecycle is determined by server configuration parameters. The advantage of this layout is that old, useless segment files can be deleted quickly, effectively improving disk utilization.
Segment file: consists of two parts, an index file and a data file, which correspond to each other and always appear in pairs. The suffixes ".index" and ".log" denote the segment index file and data file respectively.
The naming rule for segment files is as follows: the first segment of the partition starts from 0, and each subsequent segment file is named after the offset of the last message in the previous segment file. The value is a 64-bit long rendered as 19 decimal digits, left-padded with zeros.
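The naming rule above is easy to sketch (the helper name is ours, not a Kafka API):

```python
# Segment naming sketch: the base name is a 19-digit, zero-padded decimal
# offset (a 64-bit long), shared by the paired .index and .log files.
def segment_name(base_offset):
    stem = f"{base_offset:019d}"   # left-pad to 19 digits with zeros
    return stem + ".index", stem + ".log"

assert segment_name(0) == ("0000000000000000000.index",
                           "0000000000000000000.log")
assert segment_name(368769) == ("0000000000000368769.index",
                                "0000000000000368769.log")
```

Because the names are zero-padded fixed-width numbers, a plain lexicographic sort of the directory listing also sorts the segments by offset.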
The physical structure of the mapping between index and data file in the segment is as follows:
In the figure above, the index file stores metadata entries and the data file stores the messages; each metadata entry in the index file points to the physical offset of the corresponding message in the data file.
For example, the metadata entry 3,497 in the index file represents the 3rd message in the data file (message 368772 counted across the whole partition), and that message's physical offset within the data file is 497.
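Because the index is sparse (not every message has an entry), a lookup binary-searches for the closest entry at or before the target, seeks there in the .log file, and scans forward. A small sketch with hypothetical entries, using the example entry (3, 497) from above:

```python
# Sparse-index lookup sketch. Each entry is (relative message number
# within the segment, byte position in the .log file). Toy data.
import bisect

index = [(1, 0), (3, 497), (6, 1407)]

def locate(rel_msg_no):
    keys = [k for k, _ in index]
    i = bisect.bisect_right(keys, rel_msg_no) - 1
    return index[i]                 # nearest indexed entry <= target

assert locate(3) == (3, 497)  # exact hit: message 3 starts at byte 497
assert locate(5) == (3, 497)  # seek to byte 497, scan forward to message 5
```

Keeping the index sparse keeps it small enough to memory-map, at the cost of a short sequential scan after each seek.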
A segment data file consists of a number of messages. The physical structure of each message is as follows:
Parameter description:
Each message within a partition has an ordered ID number called its offset, which uniquely determines the position of that message within the partition. The fields are:

- 8 byte offset: the message's offset within the partition
- 4 byte message size: the size of the message
- 4 byte CRC32: CRC32 checksum used to verify the message
- 1 byte "magic": the Kafka service protocol version
- 1 byte "attributes": a standalone version field, or identifies the compression type or encoding type
- 4 byte key length: the length of the key; when it is -1, the K-byte key field is not filled in
- K byte key: optional
- value bytes payload: the actual message data
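The field list above can be packed with `struct` to see how the bytes line up. This is an illustrative sketch of the layout, not a wire-exact reimplementation of Kafka's message format:

```python
# Pack one message following the listed fields: 8-byte offset, 4-byte
# size, 4-byte CRC32, 1-byte magic, 1-byte attributes, 4-byte key
# length (-1 = no key), optional key, then the payload. Big-endian.
import struct
import zlib

def pack_message(offset, payload, magic=0, attributes=0, key=None):
    key_len = -1 if key is None else len(key)
    body = struct.pack(">bbi", magic, attributes, key_len)
    if key is not None:
        body += key
    body += payload
    crc = zlib.crc32(body)                       # checksum over the body
    msg = struct.pack(">I", crc) + body
    return struct.pack(">qi", offset, len(msg)) + msg

rec = pack_message(368772, b"hello")
offset, size = struct.unpack_from(">qi", rec)
assert offset == 368772
assert size == len(rec) - 12   # size counts everything after offset+size
```

Reading the log is the mirror image: read 12 bytes to get offset and size, then read `size` more bytes and verify the CRC before trusting the payload.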