1. Didi Logi — KafkaManager

Didi began planning to open source its self-built Kafka cloud management and control platform in April 2019 and completed the open-source release on January 14, 2021, after 22 months and three major internal versions. Since its release it has been widely recognized by community users, and the project has reached 2K Stars to date.


If you are interested, you can give it a Star✨ and bookmark it

Installation manual

1. Environment dependencies

If you install from the Release package, only Java and MySQL are needed. If you want to build the source code into a package yourself, you also need the Maven and Node environments.

  • Java 8+ (runtime requirement)
  • MySQL 5.7 (data storage)
  • Maven 3.5+ (backend build dependency)
  • Node 10+ (frontend build dependency)
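
A quick sanity check that the toolchain is in place (standard version flags; Maven and Node are only needed for source builds):

  java -version     # expect 1.8 or later
  mysql --version   # expect 5.7
  mvn -v            # source builds only, expect 3.5 or later
  node -v           # source builds only, expect v10 or later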

2. Obtain the installation package

1. Download the Release directly

If building is a hassle and you don’t plan to do secondary development, you can download the Release package directly from GitHub.

If the GitHub download address is too slow, you can also get the package from the logi-kafkamanager user group; the group entry is in the README.

2. Package the source code

After downloading the code, go to the logi-kafkamanager home directory and run sh build.sh. After the command finishes, a JAR package is generated under the output/kafka-manager-xxx directory.

For Windows users, the sh build.sh command cannot be executed; instead, run mvn install, which generates a kafka-manager-web-xxx.jar package in the kafka-manager-web/target directory.

After obtaining the JAR package, we continue with the following steps.
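
As a sketch, the two build paths side by side (the xxx version suffix depends on the release you checked out):

  # Linux / macOS
  cd logi-kafkamanager
  sh build.sh
  ls output/kafka-manager-*/          # the runnable JAR lands here

  # Windows, or any platform with Maven installed
  mvn install
  ls kafka-manager-web/target/kafka-manager-web-*.jar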

3. Initialize the MySQL database

Execute the SQL in create_mysql_table.sql to create the required MySQL database and tables. The default database name is logi_kafka_manager.
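
A minimal import, assuming a local MySQL and the root account (adjust host and credentials to your environment):

  mysql -uroot -p < create_mysql_table.sql
  # verify that the tables were created
  mysql -uroot -p -e "SHOW TABLES IN logi_kafka_manager;"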

4. Start
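
As a sketch, assuming the JAR produced above is a standard Spring Boot fat JAR (the exact file name and any configuration arguments depend on your build; see the project README for the authoritative command):

  cd output/kafka-manager-xxx
  nohup java -jar kafka-manager-web-xxx.jar > kafka-manager.log 2>&1 &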

5. Use

After starting locally, visit http://localhost:8080 and enter the account and password (admin/admin by default) to log in. For more information, see the Kafka-Manager user manual.

6. Product Introduction

6.1 Quick Experience address

  • Demo address: http://117.51.146.109:8080 (account/password: admin/admin)

6.2 Experience Maps

Compared with similar products, which offer a single user perspective (mostly an administrator’s), Didi Logi-KafkaManager builds experience maps around roles and multiple scenarios: a user experience map, an O&M experience map, and an operation experience map.

6.2.1 User Experience Map

  • Platform tenant application: apply for an App, which serves as the user identity in Kafka; AppID+password is used for authentication
  • Cluster resource application: apply for and use cluster resources on demand, either the shared cluster provided by the platform or a separate cluster for your application
  • Topic application: create a Topic under an application (App), or apply for read/write permission on other Topics
  • Topic operation and maintenance: Topic data sampling, quota adjustment, partition application, and other operations
  • Metric monitoring: per-Topic latency statistics for each stage of production and consumption, with performance metrics monitored at different quantiles
  • Consumer group operation and maintenance: supports resetting the consumption offset to a specified time or position, as in the CLI sketch below
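
For reference, the offset reset described in the last item maps onto the stock Kafka CLI (the bootstrap server, group, and topic names below are placeholders):

  # reset a group's offsets on one topic to a point in time
  bin/kafka-consumer-groups.sh --bootstrap-server node01:9092 \
    --group my-group --topic my-topic \
    --reset-offsets --to-datetime 2021-01-01T00:00:00.000 --execute

  # or to an absolute offset
  bin/kafka-consumer-groups.sh --bootstrap-server node01:9092 \
    --group my-group --topic my-topic \
    --reset-offsets --to-offset 1000 --execute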

6.2.2 O&M Experience Map

  • Multi-version cluster management: supports cluster versions from 0.10.2 to 2.x
  • Cluster monitoring: view multi-dimensional historical and real-time key metrics for clusters, Topics, and Brokers, and build a health-score system on top of them
  • Cluster O&M: divide Brokers into Regions; a Region defines the resource-division unit, and logical clusters are divided on top of Regions by business and security level
  • Broker operations: including operations such as preferred replica election, as in the CLI sketch below
  • Topic operation and maintenance: includes creation, query, capacity expansion, property modification, migration, and offline
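
For reference, the native Kafka counterpart of the preferred replica election mentioned above; kafka-preferred-replica-election.sh ships with Kafka up to 2.3, and Kafka 2.4+ replaced it with kafka-leader-election.sh (addresses below are placeholders):

  # trigger a preferred replica election for all partitions (Kafka <= 2.3)
  bin/kafka-preferred-replica-election.sh --zookeeper node01:2181

  # Kafka 2.4+ equivalent
  bin/kafka-leader-election.sh --bootstrap-server node01:9092 \
    --election-type PREFERRED --all-topic-partitions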

6.2.3 Operation Experience Map

  • Resource governance: distilled resource-governance methods. For frequent, common problems such as hot Topic partitions and insufficient partitions, the accumulated methods enable expert-level resource governance
  • Resource review and approval: a work-order system. Topic creation, quota adjustment, partition application, and other operations are approved by professional O&M personnel, standardizing resource usage and ensuring the platform runs smoothly
  • Accounting system: cost control. Topic and cluster resources are applied for and used on demand; cost accounting based on traffic helps enterprises build a big-data cost accounting system

6.3 Core Advantages

  • Efficient problem location: monitors multiple core metrics and collects data for different stages, providing a variety of metric reports that help users and O&M personnel locate problems quickly and efficiently
  • Convenient cluster O&M: defines cluster resource-division units based on Regions and divides logical clusters by security level, which facilitates resource isolation, improves scalability, and enables strong control on the server side
  • Professional resource governance: based on Didi’s years of internal operation practice, distilled governance methods and a health-score system address frequent problems such as Topic partition hot spots and insufficient partitions
  • Friendly O&M ecosystem: integrates with the Didi Nightingale monitoring and alarm system, combining alarm monitoring, cluster deployment, cluster upgrade, and other capabilities into an O&M ecosystem that condenses expert experience and makes O&M more efficient

6.4 Didi Logi-KafkaManager Architecture Diagram

Second, Kafka Manager

It is a tool developed by Yahoo to monitor information related to the entire Kafka cluster.

(1) Can manage several different clusters

(2) Monitor cluster status (topics, brokers, replica distribution, partition distribution)

(3) Create topic and modify the configuration related to topic

1. Upload the installation package

kafka-manager-1.3.0.4.zip

2. Decompress the installation package

unzip kafka-manager-1.3.0.4.zip -d /kafka/install

3. Modify the configuration file

Go to the conf directory and edit the configuration file: vim application.conf

# Modify the value of kafka-manager.zkhosts to point at your ZooKeeper cluster
kafka-manager.zkhosts="node01:2181,node02:2181,node03:2181"

4. Start kafka-manager

Start the ZooKeeper cluster and the Kafka cluster first, then start the kafka-manager service as the root user.

bin/kafka-manager listens on port 9000 by default. You can specify the configuration file with -Dconfig.file=conf/application.conf and override the port with -Dhttp.port.

nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=8080 &

5. Access address

Visit http://<kafka-manager host>:8080 in your browser.

Third, KafkaOffsetMonitor

KafkaOffsetMonitor runs as a single JAR package, which makes it easy to deploy. It provides monitoring functions only, so it is relatively safe to use.

(1) List of consumer groups

(2) Check the historical consumption information of the topic.

(3) List all partitions of each topic (topic, pid, offset, logSize, lag, owner)

(4) Monitor consumer consumption and list each consumer’s offset and lag.

1. Download the installation package

KafkaOffsetMonitor-assembly-0.2.0.jar

2. Create a directory kafka_monitor on the server and upload the JAR package to it

3. Create a script in the kafka_monitor directory

 vim start_kafka_web.sh

#!/bin/sh
java -cp KafkaOffsetMonitor-assembly-0.2.0.jar \
  com.quantifind.kafka.offsetapp.OffsetGetterWeb \
  --zk node01:2181,node02:2181,node03:2181 \
  --port 8089 \
  --refresh 10.seconds \
  --retain 1.days
# --refresh: how often the data is refreshed; --retain: how long the data is retained

4. Start the script

nohup sh start_kafka_web.sh &
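
nohup sends the script’s output to nohup.out in the working directory by default; tail it to confirm the web server started:

  tail -f nohup.out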

5. Access address

You can then visit http://<server IP>:8089 in a browser to access Kafka's monitoring page.

Fourth, Kafka Eagle

Kafka Eagle depends on a database (MySQL) underneath for storage.

1. Download the Kafka Eagle installation package

 download.smartloli.org/ 

kafka-eagle-bin-1.2.3.tar.gz

2. Decompress

tar -zxvf kafka-eagle-bin-1.2.3.tar.gz -C /kafka/install

After decompressing, go to the kafka-eagle-bin-1.2.3 directory, where you will find kafka-eagle-web-1.2.3-bin.tar.gz.

Decompress it: tar -zxvf kafka-eagle-web-1.2.3-bin.tar.gz

Rename the result: mv kafka-eagle-web-1.2.3 kafka-eagle-web

3. Modify the configuration file

Go to the conf directory and modify system-config.properties

# Fill in your Kafka cluster information
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=node01:2181,node02:2181,node03:2181

# Kafka Eagle web port
kafka.eagle.webui.port=8048

# Kafka SASL authentication
kafka.eagle.sasl.enable=false
kafka.eagle.sasl.protocol=SASL_PLAINTEXT
kafka.eagle.sasl.mechanism=PLAIN
kafka.eagle.sasl.client=/kafka/install/kafka-eagle-bin-1.2.3/kafka-eagle-web/conf/kafka_client_jaas.conf

# MySQL configuration; ke is the database name.
# Kafka Eagle creates it automatically, so there is no need to create it in advance.
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://node03:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=123456
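
With the JDBC settings above, a quick connectivity check from the Kafka Eagle host before starting (credentials as configured above):

  mysql -h node03 -P 3306 -uroot -p123456 -e "SELECT 1;"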

4. Configure environment variables

 vi /etc/profile 

export KE_HOME=/kafka/install/kafka-eagle-bin-1.2.3/kafka-eagle-web
export PATH=$PATH:$KE_HOME/bin
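
Standard profile behavior: reload it so the variables take effect in the current shell, then verify:

  source /etc/profile
  echo $KE_HOME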

5. Start Kafka-Eagle

Go to the $KE_HOME/bin directory

Run the sh ke.sh start script
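
Besides start, the ke.sh script also provides subcommands such as stop, restart, and status (run sh ke.sh without arguments to see the exact list supported by your version):

  sh ke.sh status    # check whether Kafka Eagle is running
  sh ke.sh restart   # restart, e.g. after configuration changes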

6. Access address

To access Kafka Eagle, type http://node01:8048/ke in your browser.

User name: admin

Password: 123456

(Screenshots: login page, dashboard, Kafka cluster information, ZooKeeper cluster, Topic list, Topic details, consumer information, and the ZK client command view.)