This is the 10th day of my participation in the November Gwen Challenge, the last Gwen Challenge of 2021.
Apache Kafka
Kafka is a distributed message system that is robust enough to be used in production environments.
Apache Kafka is developed in Java and Scala, so you need a Java runtime environment. Before installing Apache Kafka, you need to ensure that JDK 1.8 or later is installed locally.
The JDK can be downloaded from the official download page; the specific installation steps are beyond the scope of this article.
In addition, Kafka uses Zookeeper to store metadata, so Zookeeper must be deployed to run Kafka. However, Zookeeper is shipped with Kafka’s official distribution and can be used directly.
Zookeeper is a distributed application coordination service. It was originally a sub-project of Hadoop before becoming an independent project. Zookeeper provides configuration maintenance, naming, and distributed synchronization services.
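The bundled Zookeeper is configured through config/zookeeper.properties. The stock file is tiny; its defaults look roughly like this (taken from a fresh kafka_2.13-3.0.0 download, so treat it as an illustration rather than a reference):

```properties
# the directory where the snapshot is stored
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections
maxClientCnxns=0
```

Note the dataDir value: it is the /tmp/zookeeper directory that gets deleted during cleanup at the end of this article.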
The following describes the installation steps.
Step one: Download the Kafka distribution.
You can find the download page on the Apache Kafka website and download kafka_2.13-3.0.0.tgz from there.
After downloading, unzip and go to the directory.
➜ tar -xzf kafka_2.13-3.0.0.tgz
➜ cd kafka_2.13-3.0.0
After decompression, the directory structure looks something like this:
.
├── LICENSE
├── NOTICE
├── bin
├── config
├── libs
├── licenses
└── site-docs
The bin directory contains many .sh scripts, including the scripts for running Kafka and Zookeeper, and the config directory contains the configuration files.
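For instance, the scripts used in the rest of this article can all be found there (a quick check run from the Kafka root directory, assuming the 3.0.0 layout):

```shell
# List the scripts this article relies on; ls fails loudly if any is missing
ls bin/zookeeper-server-start.sh bin/kafka-server-start.sh bin/kafka-topics.sh \
   bin/kafka-console-producer.sh bin/kafka-console-consumer.sh
```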
Step two: run Kafka. The following commands are all executed from Kafka’s root directory.
Start the Zookeeper service.
➜ bin/zookeeper-server-start.sh config/zookeeper.properties
Then open another command line to start the Kafka Broker service.
➜ bin/kafka-server-start.sh config/server.properties
Step three: create a topic. A topic can be thought of as a queue of events.
➜ bin/kafka-topics.sh --create --topic hello-events --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
Created topic hello-events.
Since we are running a single broker rather than a cluster, both --partitions and --replication-factor are set to 1. After the command runs, a message tells us that hello-events has been created successfully.
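As an extra sanity check, you can also list all topics on the broker; with everything above in place it should print hello-events (this requires the broker started earlier to still be running):

```shell
# List every topic known to the local broker
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```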
Use the kafka-topics.sh --describe command to view information about topics that have already been created.
➜ bin/kafka-topics.sh --describe --bootstrap-server localhost:9092
Topic: hello-events   TopicId: LeWuXHJwQqi9AUCuyWjOEA   PartitionCount: 1   ReplicationFactor: 1   Configs: segment.bytes=1073741824
    Topic: hello-events   Partition: 0   Leader: 0   Replicas: 0   Isr: 0
Step four: write events to the topic.
Open a new command line and execute the following command:
➜ bin/kafka-console-producer.sh --topic hello-events --bootstrap-server localhost:9092
At this point, you enter an interactive command-line interface where each line you type and submit with Enter becomes an event. For example:
>First Event
>Second Event
Step five: read the events.
Open a fourth command line and type the following command:
➜ bin/kafka-console-consumer.sh --topic hello-events --from-beginning --bootstrap-server localhost:9092
The consumer prints the events you just typed in the producer’s command line. If you continue typing in the producer window, the consumer window will read the new events as well.
At this point, we are done using Kafka as a messaging system from the local command line, creating a topic, sending messages to it, and reading messages from it. We have four command lines open. To stop these programs, simply press Ctrl+C in each window.
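Alternatively, the distribution ships dedicated stop scripts that shut the services down cleanly; stop the broker first, then Zookeeper (run from the Kafka root directory):

```shell
# Stop the Kafka broker, then the Zookeeper service
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh
```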
If you also want to clean up your local environment, delete the data directories /tmp/kafka-logs and /tmp/zookeeper.
➜ rm -rf /tmp/kafka-logs /tmp/zookeeper