Kafka’s background has been covered quite a bit, so let’s get started now, assuming you have the JDK and ZooKeeper environment ready.

1. Download the code

Download the kafka_2.12-1.1.0 release and unpack it:

tar -xzf kafka_2.12-1.1.0.tgz -C /home/${user}/software/kafka/
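For reference, fetching the release can be scripted as well; a sketch, where the mirror URL and install path are assumptions and the actual download is left commented out:

```shell
#!/bin/sh
# Sketch: build the release tarball name and (optionally) fetch and unpack it.
# Mirror URL and target directory are assumptions, not part of the original.
KAFKA_VERSION=1.1.0
SCALA_VERSION=2.12
TARBALL="kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"

# Uncomment to actually download and unpack:
# wget "https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/${TARBALL}"
# mkdir -p "/home/${USER}/software/kafka" && tar -xzf "${TARBALL}" -C "/home/${USER}/software/kafka/"

echo "${TARBALL}"
```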

2. Modify the configuration

Edit the $KAFKA_HOME/config/server.properties file:

# broker.id must be unique for each broker in the cluster: set the first
# machine to 0, the second to 1, and so on
broker.id=0
# The port on which the broker answers client requests
port=9092
# Kafka log (data) directory
log.dirs=/tmp/kafka-logs-0
# ZooKeeper cluster address
zookeeper.connect=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181
# Set host.name to this server's IP address. If it is not set, creating a topic
# and sending messages may fail with a NOT LEADER FOR PARTITION exception.
host.name=192.168.0.1

3. Start the service

Kafka depends on ZooKeeper, so you need to start ZooKeeper first. If you do not have a separate ZooKeeper installation, you can use the single-node instance that comes packaged and configured with Kafka.

./bin/zookeeper-server-start.sh config/zookeeper.properties

Output:

[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...

ZooKeeper has started successfully. Now start the Kafka service:

./bin/kafka-server-start.sh  -daemon config/server.properties

Output:

[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...

This indicates that Kafka started successfully.

4. Create a Topic

Create a Topic named “test” with a single partition and a single replica:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Once created, you can list the existing topics with the following command:

./bin/kafka-topics.sh --list --zookeeper localhost:2181

Alternatively, instead of creating topics manually, you can configure the broker to create topics automatically when a message is published to a topic that does not exist. The configuration item is:

auto.create.topics.enable=true 
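Two related broker settings (a sketch; the values shown are Kafka's defaults) control the shape of auto-created topics:

```properties
# server.properties — applies to topics created automatically
auto.create.topics.enable=true
num.partitions=1                # partition count for auto-created topics
default.replication.factor=1    # replication factor for auto-created topics
```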

5. Send messages

Kafka provides a command-line producer that reads messages from a file or from standard input and sends them to the Kafka cluster, one message per line. Run the producer and type a few messages into the console:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message

6. Consume messages

Kafka also provides a command-line consumer that reads the stored messages and prints them to the console.

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message

If you run the producer and the consumer in two different terminals, messages typed into the producer appear in the consumer.
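Since the console producer reads standard input one message per line, the input can also be piped in non-interactively; a sketch (the actual send needs the broker from step 3 running, so it is left commented out):

```shell
#!/bin/sh
# Sketch: batch-send messages by piping a file into the console producer,
# one message per line.
cd "$(mktemp -d)"
printf 'This is a message\nThis is another message\n' > messages.txt

# With a running broker (path to the script is illustrative):
# ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt

wc -l < messages.txt
```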

7. Set up a multi-broker cluster

So far we have only been running a single broker, which is not very interesting. To Kafka, a single broker is just a cluster of size one, so let’s start a few more brokers.

Start by creating a configuration file for each broker:

cp config/server.properties config/server-1.properties 
cp config/server.properties config/server-2.properties

Now edit these new files and set the following properties:

config/server-1.properties: 
    broker.id=1 
    listeners=PLAINTEXT://:9093 
    log.dir=/tmp/kafka-logs-1

config/server-2.properties: 
    broker.id=2 
    listeners=PLAINTEXT://:9094 
    log.dir=/tmp/kafka-logs-2

broker.id is the unique and permanent name of each node in the cluster. We change the port and log directory because we are running all the brokers on the same machine, and we want to keep them from registering on the same port or overwriting each other’s data.
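With more brokers, editing each copy by hand gets tedious; the three properties can also be rewritten with `sed`. A sketch (the base file content here is a stand-in, and file names are illustrative):

```shell
#!/bin/sh
# Sketch: generate per-broker config files from a base server.properties
# instead of editing each copy by hand.
set -e
cd "$(mktemp -d)"

# Minimal stand-in for config/server.properties (only the fields we rewrite):
cat > server.properties <<'EOF'
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-0
EOF

for i in 1 2; do
  sed -e "s/^broker.id=.*/broker.id=$i/" \
      -e "s|^listeners=.*|listeners=PLAINTEXT://:$((9092 + i))|" \
      -e "s|^log.dirs=.*|log.dirs=/tmp/kafka-logs-$i|" \
      server.properties > "server-$i.properties"
done

grep '^broker.id=' server-1.properties server-2.properties
```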

We already have ZooKeeper and our first Kafka node running, so we just need to start the two new Kafka nodes:

./bin/kafka-server-start.sh  -daemon config/server-1.properties
./bin/kafka-server-start.sh  -daemon config/server-2.properties

Now let’s create a new topic with a replication factor of 3:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

OK, now that we have a cluster, how do we know which broker is doing what? Run the “describe topics” command:

./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

Output:

Topic:my-replicated-topic    PartitionCount:1    ReplicationFactor:3    Configs:
Topic: my-replicated-topic    Partition: 0    Leader: 1    Replicas: 1,2,0    Isr: 1,2,0

The first line is a summary of all partitions; each following line gives information on one partition. Since this topic has only one partition, there is only one such line.

"leader": the node responsible for all reads and writes of the given partition. Each node is the leader for a randomly selected share of the partitions.
"replicas": the list of nodes that replicate the log for this partition, regardless of whether they are the leader or even currently alive.
"isr": the set of "in-sync" replicas, i.e. the replicas that are alive and caught up with the leader.
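When scripting health checks, the Isr field can be pulled out of the describe output with `awk`; a sketch, where `DESCRIBE` holds a sample line mirroring the output above (in practice you would pipe in the real `kafka-topics.sh` output):

```shell
#!/bin/sh
# Sketch: extract the in-sync replica (Isr) list for a partition from
# `kafka-topics.sh --describe` output.
DESCRIBE='Topic: my-replicated-topic  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0'

# Split on the literal "Isr: " label; the second field is the replica list.
ISR=$(printf '%s\n' "$DESCRIBE" | awk -F'Isr: ' '{print $2}')
echo "in-sync replicas: $ISR"
```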

Let’s run the same command on the topic we created at the beginning:

./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Output:

Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0

Not surprisingly, the topic we created earlier has no extra replicas: it lives on server 0, the only server in the cluster when the topic was created.

Let’s publish a few messages to our new topic:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic

Output:

my test message 1
my test message 2

Now let’s consume these messages:

./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic

Output:

my test message 1
my test message 2

To test the cluster’s fault tolerance, let’s kill the leader. Broker 1 is the current leader, so we kill broker 1.

ps aux | grep server-1.properties

Output:

7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java ...

Then run:

kill -9 7564
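On Linux and macOS, the lookup and kill can be combined with `pgrep`; a sketch in which a throwaway `sleep` process stands in for the broker so it can run anywhere (with a real broker you would match on `server-1.properties` instead):

```shell
#!/bin/sh
# Sketch: locate a process by its command line, then terminate it.
# A throwaway `sleep` stands in for the broker so the demo has no side effects.
sleep 297 &
BROKER_PID=$!

# List matching PIDs by full command line, like `ps aux | grep ...` above
# (with a real broker: pgrep -f 'server-1.properties'):
MATCHES=$(pgrep -f 'sleep 297')

kill "$BROKER_PID"        # SIGTERM first; use kill -9 only if it hangs
wait "$BROKER_PID" 2>/dev/null
echo "$MATCHES"
```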

On Windows, use:

wmic process where "caption = 'java.exe' and commandline like '%server-1.properties%'" get processid

Output:

ProcessId
6016

Then run:

taskkill /pid 6016 /f
Copy the code

One of the replicas becomes the new leader, and broker 1 is no longer in the in-sync replica set:

./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic    PartitionCount:1    ReplicationFactor:3    Configs:
Topic: my-replicated-topic    Partition: 0    Leader: 2    Replicas: 1,2,0    Isr: 2,0

But the messages are still there:

./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
my test message 1
my test message 2


This article is reprinted from: Kafka installation and startup.