One, background

Goal: set up a Kafka cluster on a Mac.

Two, software installation

Kafka depends on ZooKeeper, so ZooKeeper must be installed. Kafka is written in Scala, which runs on the JVM, so the JDK is also required.

1. JDK: JDK 8 or later is recommended.

2. ZooKeeper: set up a 3-node ZooKeeper pseudo-cluster on the local machine.

3. Kafka: set up a 3-node Kafka cluster on the local machine.

Three, installation steps

1. Set up a ZooKeeper pseudo-cluster with three nodes

IP | Client port | Peer communication port | Leader election port | server.id (myid) | Node name (not used by ZooKeeper itself; can be mapped in the hosts file)
127.0.0.1 | 2181 | 12888 | 13888 | 1 | zk01
127.0.0.1 | 3181 | 22888 | 23888 | 2 | zk02
127.0.0.1 | 4181 | 32888 | 33888 | 3 | zk03

The id in server.&lt;id&gt; is specified by a myid file created in the directory pointed to by the dataDir entry in the zoo.cfg configuration file.
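For example, if zk01's zoo.cfg sets dataDir to /path/to/zookeeper/zk01/data (a placeholder path used throughout this walkthrough), the matching myid file would simply contain 1:

# placeholder paths; use whatever dataDir you configure for each node
mkdir -p /path/to/zookeeper/zk01/data
echo 1 > /path/to/zookeeper/zk01/data/myid
# zk02 and zk03 get 2 and 3 in their own dataDir directories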

1. Download the ZooKeeper installation package

wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz

2. Decompress three copies and store them in the ZooKeeper directory
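One possible way to lay this out (the zookeeper/zk01, zk02, and zk03 directory names are assumptions, not requirements):

# extract the archive once, then keep three independent copies
tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
mkdir -p zookeeper
cp -r apache-zookeeper-3.6.2-bin zookeeper/zk01
cp -r apache-zookeeper-3.6.2-bin zookeeper/zk02
cp -r apache-zookeeper-3.6.2-bin zookeeper/zk03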

3. Modify the ZooKeeper configuration file. The following uses zk01 as an example.

1. Rename the zoo_sample.cfg file in the conf directory to zoo.cfg.

2. Edit the zoo.cfg configuration file. Note:

1. zk01 is used as an example here; because multiple nodes run on the same machine, each node must use different ports.

2. The value in each node's myid file must be different and unique.

3. The dataDir path must be changed for each node.

4. The rest of the configuration can be adjusted as needed; a sample zoo.cfg is shown below.
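Under these notes, a minimal zoo.cfg for zk01 might look like the following sketch. The dataDir value is a placeholder path and the timing settings are the zoo_sample.cfg defaults; the ports and server.X lines come from the table at the top of this section.

# zoo.cfg for zk01 (sketch; point dataDir at a real directory)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/path/to/zookeeper/zk01/data
clientPort=2181
# server.<id>=<ip>:<peer communication port>:<leader election port>
server.1=127.0.0.1:12888:13888
server.2=127.0.0.1:22888:23888
server.3=127.0.0.1:32888:33888

For zk02 and zk03 only clientPort (3181, 4181) and dataDir change; the three server.X lines are identical on every node.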

4. Create the myid files

In the dataDir directory configured for each node, create a myid file containing that node's ID (1 for zk01, 2 for zk02, 3 for zk03), as illustrated earlier.

5. Start the three ZK nodes

Go into the zk01/bin, zk02/bin, and zk03/bin directories in turn and run the following command:

./zkServer.sh --config ../conf start
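To check whether the ensemble has formed, you can also query each node's status; one node should report Mode: leader and the other two Mode: follower:

./zkServer.sh --config ../conf status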

6. Connect to the zk cluster

./zkCli.sh -server 127.0.0.1:2181,127.0.0.1:3181,127.0.0.1:4181
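Once the shell connects, a quick sanity check is to list the root znodes; a fresh ensemble shows the built-in /zookeeper node:

ls /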

2. Create a Kafka cluster with 3 nodes

IP | Client port | broker.id
127.0.0.1 | 9092 | 0
127.0.0.1 | 9093 | 1
127.0.0.1 | 9094 | 2
Note:
1. broker.id must be unique and numeric.

1. Download Kafka

https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.0/kafka_2.13-2.6.0.tgz

2. Decompress three copies and place them in the kafka directory
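As with ZooKeeper, one possible layout (the kafka/kafka01, kafka02, and kafka03 directory names are assumptions):

tar -zxvf kafka_2.13-2.6.0.tgz
mkdir -p kafka
cp -r kafka_2.13-2.6.0 kafka/kafka01
cp -r kafka_2.13-2.6.0 kafka/kafka02
cp -r kafka_2.13-2.6.0 kafka/kafka03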

3. Modify the server.properties file. The following uses kafka01 as an example.

The properties to be modified are listed below; a sample server.properties follows the table.

Property | Value | Description
broker.id | 0 | Must be different on each Kafka node
listeners | PLAINTEXT://127.0.0.1:9092 | Must be different on each Kafka node; PLAINTEXT means plaintext transmission
log.dirs | ../logs | Log file path
zookeeper.connect | 127.0.0.1:2181,127.0.0.1:3181,127.0.0.1:4181 | ZooKeeper server addresses
num.partitions | 1 | Default number of partitions per topic
log.retention.hours | 168 | Log file retention time, in hours
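Putting the table together, kafka01's server.properties would contain roughly the following; this is a sketch that only shows the properties listed above, and everything else keeps its default value.

# server.properties for kafka01 (kafka02 and kafka03 change broker.id and the listener port)
broker.id=0
listeners=PLAINTEXT://127.0.0.1:9092
log.dirs=../logs
zookeeper.connect=127.0.0.1:2181,127.0.0.1:3181,127.0.0.1:4181
num.partitions=1
log.retention.hours=168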

4. Start three Kafka nodes

Go into the kafka01/bin, kafka02/bin, and kafka03/bin directories in turn and run the following command:

./kafka-server-start.sh ../config/server.properties &
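If you would rather not keep one foreground terminal per broker, kafka-server-start.sh also accepts a -daemon option that starts the broker in the background:

./kafka-server-start.sh -daemon ../config/server.properties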

5. Kafka test

Here kafka01 is used as an example

1. Create a topic

bin/kafka-topics.sh --create --topic test-001 --replication-factor 1 --partitions 1 --bootstrap-server 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094

2. View the topic information

bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094 --topic test-001

3. Post a message to the topic you created

bin/kafka-console-producer.sh --topic test-001 --bootstrap-server 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094

4. Read the messages just published to the topic

bin/kafka-console-consumer.sh --topic test-001 --from-beginning --bootstrap-server 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094

At this point, a simple working Kafka cluster is set up.

Four, reference documents

1. kafka.apache.org/documentati

2. zookeeper.apache.org/…