SpringBoot e-commerce project mall (40K+ stars), address: github.com/macrozheng/…

Abstract

Kafka is a very popular messaging middleware; according to its website, thousands of companies use it. I recently gave Kafka a try, and it really is simple and powerful. Today we will learn Kafka from three angles: installing Kafka on Linux, Kafka visualization tools, and using Kafka with SpringBoot. I hope this gives you a quick start with this popular messaging middleware!

Kafka overview

Kafka is an open-source distributed messaging platform developed by LinkedIn and written in Scala and Java. Its main purpose is to provide a unified, high-throughput, low-latency platform for processing real-time data; at its core it is a message engine system based on the publish-subscribe model.
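To make the publish-subscribe idea concrete, here is a minimal in-memory sketch in plain Java (no Kafka involved; the class and topic names are invented for illustration): every subscriber of a topic receives every message published to it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A toy in-memory broker: every subscriber of a topic receives every
// message published to that topic (publish-subscribe, not a work queue).
public class MiniPubSub {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> listener) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    public void publish(String topic, String message) {
        for (Consumer<String> listener : subscribers.getOrDefault(topic, List.of())) {
            listener.accept(message);
        }
    }

    public static void main(String[] args) {
        MiniPubSub broker = new MiniPubSub();
        List<String> a = new ArrayList<>();
        List<String> b = new ArrayList<>();
        broker.subscribe("news", a::add);
        broker.subscribe("news", b::add);
        broker.publish("news", "hello");
        System.out.println(a + " " + b); // prints "[hello] [hello]"
    }
}
```

Unlike this toy, Kafka also persists every message to an on-disk log, so consumers can replay history instead of only receiving live messages.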

Kafka has the following features:

  • High throughput and low latency: Kafka sends and receives messages extremely fast, with latency as low as 2 ms in a cluster.
  • High scalability: Kafka clusters can scale flexibly, up to thousands of brokers, hundreds of thousands of partitions, and trillions of messages per day.
  • Persistent storage: Kafka stores data securely in a distributed, durable, fault-tolerant cluster.
  • High availability: Kafka can stretch a cluster efficiently across availability zones, and the cluster keeps working when a node goes down.

Kafka installation

We will install Kafka directly on Linux; the environment is CentOS 7.6. Docker is not used for installation and deployment here; personally I feel a direct installation is simpler (mainly because no official Docker image is provided)!

  • First we need to download the installation package of Kafka, download address: mirrors.bfsu.edu.cn/apache/kafk…

  • Unzip Kafka to the specified directory:
cd /mydata/kafka/
tar -xzf kafka_2.13-2.8.0.tgz
  • After decompression, go to the decompression directory:
cd kafka_2.13-2.8.0
  • Kafka currently still depends on Zookeeper (the latest versions of Kafka plan to remove the Zookeeper dependency entirely), so we need to start Zookeeper first;

  • Start the Zookeeper service; it will run on port 2181;
# Run the service in the background and output the log to zookeeper-out.file in the current folder
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper-out.file 2>&1 &
  • Kafka is deployed on a Linux server here. If clients outside the server need to access Kafka, you must modify the Kafka configuration file config/server.properties and change Kafka's listener address, otherwise those clients will fail to connect;
############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.5.78:9092
  • Finally, start the Kafka service; it will run on port 9092.
# Run the service in the background and output the log to kafka-out.file in the current folder
nohup bin/kafka-server-start.sh config/server.properties > kafka-out.file 2>&1 &

Kafka command line operation

Next we use the command line to operate Kafka and become familiar with its usage.

  • First, create a Topic named consoleTopic;
bin/kafka-topics.sh --create --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • Next, view the details of the Topic;
bin/kafka-topics.sh --describe --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • The following Topic information is displayed;
Topic: consoleTopic	TopicId: tJmxUQ8QRJGlhCSf2ojuGw	PartitionCount: 1	ReplicationFactor: 1	Configs: segment.bytes=1073741824
	Topic: consoleTopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
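The describe output above shows a single partition (Partition: 0). When a topic has multiple partitions, each keyed record is assigned to a partition by hashing the key, so records with the same key always land in the same partition. A simplified sketch of that rule (an illustration only: Kafka's real default partitioner hashes the key bytes with murmur2, not Java's hashCode):

```java
// Simplified partition selection: the same key always maps to the same partition.
// NOTE: illustration only; Kafka's default partitioner uses murmur2 on the key bytes.
public class PartitionSketch {
    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-42", 3);
        int p2 = partitionFor("order-42", 3);
        System.out.println(p1 == p2); // prints "true": identical keys map to the same partition
    }
}
```

This is why choosing a good message key matters: all messages sharing a key stay ordered within one partition.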
  • Sending a message to a Topic:
bin/kafka-console-producer.sh --topic consoleTopic --bootstrap-server 192.168.5.78:9092
  • You can then type messages directly at the command line and they will be sent;

  • Open another window and fetch messages from the Topic with the following command:
bin/kafka-console-consumer.sh --topic consoleTopic --from-beginning --bootstrap-server 192.168.5.78:9092
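For reference, an illustrative session (the messages shown are examples, not captured output): the console producer displays a `>` prompt, each line you type becomes one message, and the consumer window prints the messages as they arrive.

```
# producer window
> first message
> second message

# consumer window
first message
second message
```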

Kafka visualization

Working with Kafka from the command line is a bit cumbersome, so let's try the visualization tool kafka-eagle.

Install the JDK

If you use CentOS, only an incomplete JDK is installed by default, so we need to install a full JDK first.

  • Download JDK 8, download address: mirrors.tuna.tsinghua.edu.cn/AdoptOpenJD…

  • After downloading, decompress the JDK to the specified directory.
cd /mydata/java
tar -zxvf openjdk8u-jdk_x64_linux_XXX.tar.gz
# Rename the extracted JDK directory (its name depends on the build) to jdk1.8
mv openjdk8u-jdk_x64_linux_XXX jdk1.8
  • Add the JAVA_HOME environment variable in the /etc/profile file;
vi /etc/profile
# Add to profile
export JAVA_HOME=/mydata/java/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin
# Make the modified profile take effect
. /etc/profile

Install kafka-eagle

  • Download the kafka-eagle installation package, download address: github.com/smartloli/k…

  • Unzip the downloaded kafka-eagle package to the specified directory;
cd /mydata/kafka/
tar -zxvf kafka-eagle-web-2.0.5-bin.tar.gz
  • Add the KE_HOME environment variable in the /etc/profile file;
vi /etc/profile
# Add to profile
export KE_HOME=/mydata/kafka/kafka-eagle-web-2.0.5
export PATH=$PATH:$KE_HOME/bin
# Make the modified profile take effect
. /etc/profile
  • Install MySQL and create a database named ke for kafka-eagle to use;
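Assuming a local MySQL instance is already running, the ke database from the step above can be created like this (the character set is my choice, not mandated by the kafka-eagle docs):

```sql
CREATE DATABASE IF NOT EXISTS ke DEFAULT CHARACTER SET utf8mb4;
```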

  • Modify the configuration file $KE_HOME/conf/system-config.properties: update the Zookeeper and database settings, commenting out the SQLite configuration and using MySQL instead.

######################################
# multi zookeeper & kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=localhost:2181
######################################
# kafka eagle webui port
######################################
kafka.eagle.webui.port=8048
######################################
# kafka sqlite jdbc driver address
######################################
# kafka.eagle.driver=org.sqlite.JDBC
# kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
# kafka.eagle.username=root
# kafka.eagle.password=www.kafka-eagle.org
######################################
# kafka mysql jdbc driver address
######################################
kafka.eagle.driver=com.mysql.cj.jdbc.Driver
kafka.eagle.url=jdbc:mysql://localhost:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=root
  • Run the following command to start kafka-eagle;
$KE_HOME/bin/ke.sh start
  • After the command executes, output like the following is displayed, but that alone does not mean the service started successfully;

  • A few more useful kafka-eagle commands:
# Stop the service
$KE_HOME/bin/ke.sh stop
# Restart the service
$KE_HOME/bin/ke.sh restart
# Check the running status of the service
$KE_HOME/bin/ke.sh status
# View service statistics
$KE_HOME/bin/ke.sh stats
# Follow the service's output log
tail -f $KE_HOME/logs/ke_console.out
  • If the startup succeeds, you can access the web UI directly at http://192.168.5.78:8048/ with the account and password admin:123456;

  • Once you've logged in, you can access the Dashboard; the interface is quite nice!

Visual Tool use

  • Previously we created a Topic from the command line; Topics can also be created directly from the interface;

  • We can also send messages directly through kafka-eagle;

  • We can consume messages in a Topic from the command line;
bin/kafka-console-consumer.sh --topic testTopic --from-beginning --bootstrap-server 192.168.5.78:9092
  • The information obtained by the console is displayed as follows.

  • There is another interesting feature called KSQL, which lets you query the messages in a Topic with SQL statements;

  • A visualization tool naturally includes monitoring. If you want to enable kafka-eagle's monitoring of Kafka, you need to modify Kafka's startup script to expose the JMX port;
vi kafka-server-start.sh
# Expose the JMX port
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    export JMX_PORT="9999"
fi
  • Take a look at the monitoring chart interface;

  • There is also a very cool full-screen monitoring dashboard feature;

  • There is also a Zookeeper command-line feature. In short, kafka-eagle is very complete and powerful!

Kafka SpringBoot integration

Operating Kafka from SpringBoot is also very simple. Kafka's messaging model itself is very simple: there are no queues, only Topics.
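Although there are no queues, Kafka still provides queue-like delivery through consumer groups: a message is broadcast to every group subscribed to the Topic, but inside a group only one member receives it. A toy sketch of those semantics in plain Java (the round-robin choice below stands in for Kafka's partition assignment; all names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of Kafka delivery semantics: a message is broadcast to every
// consumer GROUP, but within a group only one member receives it (queue-like).
public class GroupDelivery {
    // Each inner list is one consumer group; each element is one member's inbox.
    private final List<List<List<String>>> groups = new ArrayList<>();
    private final List<Integer> nextMember = new ArrayList<>();

    // Register a new member in the given group and return its inbox.
    public List<String> join(int groupId) {
        while (groups.size() <= groupId) {
            groups.add(new ArrayList<>());
            nextMember.add(0);
        }
        List<String> inbox = new ArrayList<>();
        groups.get(groupId).add(inbox);
        return inbox;
    }

    public void publish(String message) {
        for (int g = 0; g < groups.size(); g++) {
            List<List<String>> members = groups.get(g);
            if (members.isEmpty()) continue;
            int i = nextMember.get(g) % members.size(); // round-robin inside the group
            members.get(i).add(message);
            nextMember.set(g, i + 1);
        }
    }

    public static void main(String[] args) {
        GroupDelivery topic = new GroupDelivery();
        List<String> g0m0 = topic.join(0); // group 0 has two members
        List<String> g0m1 = topic.join(0);
        List<String> g1m0 = topic.join(1); // group 1 has one member
        topic.publish("m1");
        topic.publish("m2");
        System.out.println(g0m0 + " " + g0m1 + " " + g1m0); // prints "[m1] [m2] [m1, m2]"
    }
}
```

Group 1's single member sees every message (pub/sub behavior), while group 0's two members split the messages between them (queue behavior). In Spring Kafka, the group is the group-id we configure in application.yml below.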

  • First, add the Spring Kafka dependency to the application's pom.xml:
<!-- Spring integration with Kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.7.1</version>
</dependency>
  • Modify the application configuration file application.yml, configuring the Kafka server address and the consumer group-id;
server:
  port: 8088
spring:
  kafka:
    bootstrap-servers: '192.168.5.78:9092'
    consumer:
      group-id: "bootGroup"
Copy the code
  • Create a producer that sends messages to Kafka’s Topic.
/**
 * Created by macro on 2021/5/19.
 */
@Component
public class KafkaProducer {
    @Autowired
    private KafkaTemplate kafkaTemplate;

    public void send(String message) {
        kafkaTemplate.send("bootTopic", message);
    }
}
  • Create a consumer that gets messages from Kafka and consumes them.
/**
 * Kafka message consumer
 * Created by macro on 2021/5/19.
 */
@Slf4j
@Component
public class KafkaConsumer {

    @KafkaListener(topics = "bootTopic")
    public void processMessage(String content) {
        log.info("consumer processMessage : {}", content);
    }
}
  • Create an interface for sending messages that calls the producer;
/**
 * Created by macro on 2021/5/19.
 */
@Api(tags = "KafkaController", description = "Kafka functionality test")
@Controller
@RequestMapping("/kafka")
public class KafkaController {

    @Autowired
    private KafkaProducer kafkaProducer;

    @ApiOperation("Send message")
    @RequestMapping(value = "/sendMessage", method = RequestMethod.GET)
    @ResponseBody
    public CommonResult sendMessage(@RequestParam String message) {
        kafkaProducer.send(message);
        return CommonResult.success(null);
    }
}
  • Call the interface directly in Swagger to test it;

  • The project console outputs the following message indicating that the message has been received and consumed.
2021-05-19 16:59:21.016 INFO 2344 -- [ntainer#0-0-C-1] c.m.mall.tiny.component.KafkaConsumer : consumer processMessage : Spring Boot message!

Conclusion

With the hands-on practice in this article, you can basically get started with Kafka. Installation, visualization tools, and SpringBoot integration are all practical operations that developers really use, and they are essential when learning Kafka.

References

  • Kafka’s official documents: kafka.apache.org/quickstart

  • Kafka-eagle official documentation: www.kafka-eagle.org/articles/do…

  • Kafka: juejin.cn/post/684490…

Project source code address

github.com/macrozheng/…

The source code for this article has already been included in github.com/macrozheng/…; everyone is welcome to give it a Star!