Preface

I recently built a seckill (flash sale) demo that involves synchronized locks, database locks, distributed locks, in-process queues, and distributed message queues. Here is a brief record of integrating the Kafka message queue with Spring Boot.

Kafka overview

Kafka is an open-source stream-processing platform developed by the Apache Software Foundation and written in Scala and Java. It is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website. Activity data (page views, searches, and other user actions) is a key ingredient of many social features on the modern web. Because of the throughput involved, this data has traditionally been handled by log processing and log aggregation, which works well for offline analytics systems such as Hadoop but falls short when real-time processing is required. Kafka aims to unify online and offline message processing: it can feed data into Hadoop's parallel loading mechanism and also deliver real-time messages across a cluster.

Kafka is a high-throughput distributed publish-subscribe messaging system with the following features:

  • Message persistence through an O(1) disk data structure that maintains stable performance over long periods, even with terabytes of stored messages.
  • High throughput: even on very modest hardware, Kafka can support millions of messages per second.
  • Support for partitioning messages across Kafka servers and for distributing consumption over a cluster of consumer machines.
  • Support for parallel data loading into Hadoop.

Terminology

  • Broker: a Kafka cluster consists of one or more servers, each of which is called a broker.
  • Topic: every message published to a Kafka cluster belongs to a category called a Topic. (Physically, messages of different Topics are stored separately; logically, the messages of one Topic may live on one or more brokers, but users only need to specify a Topic to produce or consume data, without caring where it is stored.)
  • Partition: a physical concept; each Topic contains one or more Partitions.
  • Producer: the client that publishes messages to a Kafka broker.
  • Consumer: the message consumer, i.e. the client that reads messages from a Kafka broker.
  • Consumer Group: each Consumer belongs to a specific Consumer Group (you can specify a group name for each Consumer; Consumers without an explicit group name fall into the default group).
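
To make these terms concrete, here is a minimal sketch using the plain kafka-clients API (0.11+) rather than the Spring wrapper introduced later. The broker address, topic name, group id, and class name are illustrative placeholders, not part of the original project.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;

public class TerminologyDemo {
    public static void main(String[] args) {
        // Producer: publish one message to the "seckill" Topic on a Broker (address is a placeholder)
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(producerProps);
        producer.send(new ProducerRecord<>("seckill", "order-1", "user 1001 places an order"));
        producer.close();

        // Consumer: join the Consumer Group "seckill-group" and read from the same Topic
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "seckill-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        Consumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
        consumer.subscribe(Collections.singletonList("seckill"));
        // poll once; each record carries its Topic, Partition, offset and payload
        for (ConsumerRecord<String, String> record : consumer.poll(1000)) {
            System.out.println(record.topic() + "-" + record.partition() + ": " + record.value());
        }
        consumer.close();
    }
}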

Kafka installation

Kafka requires a Java runtime; installing the JDK is not covered here.

Download Kafka:

wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz

Move the downloaded package to the target directory and extract it:

cd /usr/local/
tar -zxvf kafka_2.11-1.1.0.tgz

Edit the Kafka configuration file:

cd kafka_2.11-1.1.0/config/
# edit the configuration file
vi server.properties

broker.id=0
# ZooKeeper connection address (Kafka can use its bundled ZooKeeper or an external ZooKeeper cluster)
zookeeper.connect=localhost:2181

Start ZooKeeper and then Kafka (both scripts are in the Kafka bin directory):

./zookeeper-server-start.sh /usr/local/kafka_2.11-1.1.0/config/zookeeper.properties &
./kafka-server-start.sh /usr/local/kafka_2.11-1.1.0/config/server.properties &
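
Before wiring up Spring Boot, it can help to confirm that the broker is reachable and to create the topic used later. Below is a hedged sketch using the kafka-clients AdminClient (available since 0.11); the broker address, partition count, and class name are assumptions, and with default broker settings (auto.create.topics.enable=true) the topic would also be created automatically on first use.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // adjust to your broker address
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // list existing topics to verify connectivity
            System.out.println(admin.listTopics().names().get());
            // create the "seckill" topic with 1 partition and replication factor 1
            admin.createTopics(Collections.singletonList(new NewTopic("seckill", 1, (short) 1))).all().get();
        }
    }
}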

Spring Boot integration

Add the spring-kafka dependency to pom.xml:

<!-- kafka support -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.3.5.RELEASE</version><!--$NO-MVN-MAN-VER$-->
</dependency>

application.properties configuration:

# Kafka configuration
spring.kafka.bootstrap-servers=192.168.1.180:9092
# default consumer group id
spring.kafka.consumer.group-id=0
# key/value serializers and deserializers
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# producer batch size (in bytes) and total buffer memory (in bytes)
spring.kafka.producer.batch-size=65536
spring.kafka.producer.buffer-memory=524288
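
Spring Boot's auto-configuration turns these spring.kafka.* properties into a ready-to-inject KafkaTemplate and listener container factory, so no further code is required. For reference only, here is a hedged sketch of roughly what the producer side would look like if configured by hand with spring-kafka 1.3.x (the class and bean names are illustrative):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.180:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);      // bytes per batch
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 524288);  // total buffer memory in bytes
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}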

Producer KafkaSender:

/**
 * Producer
 * @author 52itstyle (https://blog.52itstyle.com)
 */
@Component
public class KafkaSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message to the given channel (topic)
     */
    public void sendChannelMess(String channel, String message) {
        kafkaTemplate.send(channel, message);
    }
}
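
The producer can then be injected wherever an order request needs to be queued. Below is a hypothetical controller sketch; the URL, class name, and "seckillId;userId" payload format are illustrative assumptions, not part of the original project.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SeckillController {

    @Autowired
    private KafkaSender kafkaSender;

    /**
     * Instead of hitting the database directly, the request is queued on the
     * "seckill" topic and processed asynchronously by the consumer below.
     */
    @PostMapping("/seckill")
    public String seckill(@RequestParam long seckillId, @RequestParam long userId) {
        kafkaSender.sendChannelMess("seckill", seckillId + ";" + userId);
        return "request queued";
    }
}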

Consumer:

/**
 * Consumer (spring-kafka 2.0+ requires JDK 8)
 * @author 52itstyle (https://blog.52itstyle.com)
 */
@Component
public class KafkaConsumer {

    /**
     * Listen on the seckill topic
     * @param message
     */
    @KafkaListener(topics = {"seckill"})
    public void receiveMessage(String message) {
        // execute the seckill operation after receiving a message on the channel
    }
}
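
What "execute the seckill operation" looks like depends on the rest of the project. Purely as an illustration, the method body inside the KafkaConsumer class above might parse the payload produced by the controller sketch and delegate to a business service; seckillService and its startSeckill method are hypothetical names.

@KafkaListener(topics = {"seckill"})
public void receiveMessage(String message) {
    // payload format "seckillId;userId" matches the controller sketch above (an assumption)
    String[] parts = message.split(";");
    long seckillId = Long.parseLong(parts[0]);
    long userId = Long.parseLong(parts[1]);
    // seckillService is a hypothetical business service that checks stock and records the order
    seckillService.startSeckill(seckillId, userId);
}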

Source code download on Gitee: Building a distributed seckill system from 0 to 1

Reference

kafka.apache.org/

Author: Xiao Qi 2012

Welcome to: blog.52itstyle.com