Welcome to my GitHub

Github.com/zq2599/blog…

Contents: a classified summary of all my original articles plus supporting source code, covering Java, Docker, Kubernetes, DevOps, etc.

How to deploy quickly

  1. With Helm, Kafka can be deployed in just a few steps;
  2. Both Kafka and ZooKeeper require storage; if you prepare a StorageClass in advance, provisioning storage becomes very easy (a sketch of such a StorageClass follows this list).
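For reference, here is a minimal sketch of what the prepared StorageClass might look like. The class name managed-nfs-storage matches what the cleanup script at the end of this article deletes; the provisioner value fuseim.pri/ifs is the common default from the nfs-client-provisioner example and is an assumption here, so it must match whatever your provisioner deployment actually registers:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # the class name referenced later in values.yaml
  name: managed-nfs-storage
# must match the provisioner name configured in the nfs-client-provisioner deployment
provisioner: fuseim.pri/ifs
parameters:
  # keep an archived copy of the data on NFS after a PVC is deleted
  archiveOnDelete: "true"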

Reference articles

This walkthrough builds on K8S, Helm, NFS, and StorageClass. For details about installing and using them, see:

  1. Kubespray 2.11 installs Kubernetes 1.15
  2. Deploying and Experiencing Helm (version 2.16.1)
  3. Ubuntu16 installing and using NFS
  4. K8S using Synology DS218+ NFS
  5. K8S StorageClass (NFS)

Environment information

The version information of the operating system and software is as follows:

  1. Kubernetes: 1.15
  2. Operating system: CentOS Linux release 7.7.1908
  3. NFS service: IP address 192.168.50.135, shared folder /volume1/nfs-storageclass-test (a quick reachability check follows this list)
  4. Helm: 2.16.1
  5. Kafka: 2.0.1
  6. ZooKeeper: 2.6.2
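To confirm in advance that the NFS export is reachable from your K8S nodes, a quick check (assuming the showmount client is installed) is:

showmount -e 192.168.50.135

The output should list /volume1/nfs-storageclass-test among the exports.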

Before starting this walkthrough, please have K8S, Helm, NFS, and a StorageClass ready.

Operation steps

  1. Add the Helm repo that contains Kafka: helm repo add incubator storage.googleapis.com/kubernetes-…
  2. Download the Kafka chart: helm fetch incubator/kafka
  3. The download is the file kafka-0.20.8.tgz; decompress it.
  4. Go to the decompressed kafka directory and edit the values.yaml file.
  5. To make Kafka accessible from outside K8S, set external.enabled to true.

6. Find configurationOverrides and change the externally advertised IP address to the K8S host IP, as sketched below.
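Here is a sketch of the relevant values.yaml fragments. The keys follow the incubator/kafka chart as I recall it (external access via NodePort, with broker N exposed on port 31090+N), so double-check them against the chart version you actually downloaded:

external:
  # expose each broker through a NodePort outside the cluster
  enabled: true
  type: NodePort
  # broker 0 is exposed on 31090, broker 1 on 31091, and so on
  firstListenerPort: 31090

configurationOverrides:
  # advertise the K8S host IP so external clients can reach the brokers
  "advertised.listeners": |-
    EXTERNAL://192.168.50.135:$((31090 + ${KAFKA_BROKER_ID}))
  "listener.security.protocol.map": |-
    PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT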

7. Next, set up the Kafka data volume: find the persistence section, adjust the size as needed, and set storageClass to the name of the prepared StorageClass (see the sketch after step 8).

8. Configure the ZooKeeper data volume in the same way; a combined sketch of both sections follows.
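A combined sketch of both persistence sections, under the assumption that the chart nests the ZooKeeper settings under a zookeeper key; the 1Gi sizes are only examples:

persistence:
  enabled: true
  # adjust the volume size to your needs
  size: 1Gi
  # the StorageClass prepared in advance
  storageClass: managed-nfs-storage

zookeeper:
  persistence:
    enabled: true
    size: 1Gi
    storageClass: managed-nfs-storage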

9. Create the namespace: kubectl create namespace kafka-test

10. Install the chart: helm install --name-template kafka -f values.yaml . --namespace kafka-test

11. If the previous configuration is OK, the console prints output like the following:

12. Kafka depends on ZooKeeper, so the whole startup takes several minutes, during which the ZooKeeper and Kafka pods come up one after another; you can watch the progress as shown below.
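To watch the progress, you can stream pod status changes until all pods reach Running (the -w flag keeps the command watching for updates):

kubectl get pods -n kafka-test -w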

13. Check the services: kubectl get services -n kafka-test

14. Check the Kafka version: kubectl exec kafka-0 -n kafka-test -- sh -c 'ls /usr/share/java/kafka/kafka_*.jar'. As the red box in the figure below shows, the Scala version is 2.11 and the Kafka version is 2.0.1:

15. Kafka has started successfully; next, let's verify that the service works normally.

Expose ZooKeeper

  1. To operate Kafka remotely, you sometimes need to connect to ZooKeeper directly, so ZooKeeper needs to be exposed as well;
  2. Create a file named zookeeper-nodeport-svc.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-nodeport
  namespace: kafka-test
spec:
  type: NodePort
  ports:
    - port: 2181
      nodePort: 32181
  selector:
    app: zookeeper
    release: kafka
  3. Run kubectl apply -f zookeeper-nodeport-svc.yaml
  4. ZooKeeper can now be reached at the host IP on port 32181, as shown in the following figure; you can also test it with the command below.
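One quick way to test the exposed port, assuming nc is available on your machine, is ZooKeeper's four-letter command ruok; a healthy server replies imok (note that on ZooKeeper 3.5 and later the four-letter words must first be whitelisted via 4lw.commands.whitelist):

echo ruok | nc 192.168.50.135 32181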

Verify the Kafka service

Next, use Kafka's own command-line tools to verify the service:

  1. Visit Kafka's download page kafka.apache.org/downloads. The server runs Scala 2.11 and Kafka 2.0.1, so download the matching version (the red box below):

2. Decompress the downloaded file and go to the kafka_2.11-2.0.1/bin directory.

3. View the current topics:

./kafka-topics.sh --list --zookeeper 192.168.50.135:32181

As the figure below shows, the topic list is currently empty:

4. Create a topic

./kafka-topics.sh --create --zookeeper 192.168.50.135:32181 --replication-factor 1 --partitions 1 --topic test001

After creating the topic, check it again.

5. Describe the topic named test001:

./kafka-topics.sh --describe --zookeeper 192.168.50.135:32181 --topic test001

6. Enter the interactive mode for producing messages:

./kafka-console-producer.sh --broker-list 192.168.50.135:31090 --topic test001

After entering interactive mode, type any string and press Enter; the line is sent as a message:

7. Open another window and run the following command to consume messages:

./kafka-console-consumer.sh --bootstrap-server 192.168.50.135:31090 --topic test001 --from-beginning

8. Open another window and run the following command to list the consumer groups:

./kafka-consumer-groups.sh --bootstrap-server 192.168.50.135:31090 --list

As shown, the group ID is console-consumer-21022.

9. Run the following command to check the consumption status of the group console-consumer-21022:

./kafka-consumer-groups.sh --group console-consumer-21022 --describe --bootstrap-server 192.168.50.135:31090

As shown below:

The basic remote-connection checks are now complete: viewing topics and sending and receiving messages all work, which proves the deployment succeeded.

Kafkacat connection

  1. kafkacat is a command-line Kafka client; I installed it on a MacBook Pro with brew.
  2. The K8S server IP is 192.168.50.135, so run this command to view the Kafka cluster information: kafkacat -b 192.168.50.135:31090 -L. Changing the port to 31091 or 31092 connects to the other two brokers and returns the same information:
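Besides listing metadata with -L, kafkacat can also produce and consume messages directly, which makes a quick end-to-end check (a sketch: -P is produce mode, -C is consume mode):

# produce: each line you type is sent as one message to test001
kafkacat -P -b 192.168.50.135:31090 -t test001

# consume test001 from the earliest offset
kafkacat -C -b 192.168.50.135:31090 -t test001 -o beginning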

Clean up resources

Quite a few resources were created in this walkthrough: RBAC objects (ClusterRole, Role, and their bindings), ServiceAccounts, Pods, Deployments, and Services. The following script cleans them all up (only the files left on NFS are not cleaned up):

helm del --purge kafka
kubectl delete service zookeeper-nodeport -n kafka-test
kubectl delete storageclass managed-nfs-storage
kubectl delete deployment nfs-client-provisioner -n kafka-test
kubectl delete clusterrolebinding run-nfs-client-provisioner
kubectl delete serviceaccount nfs-client-provisioner -n kafka-test
kubectl delete role leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete rolebinding leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete clusterrole nfs-client-provisioner-runner
kubectl delete namespace kafka-test

At this point, deploying and verifying Kafka on K8S is complete. I hope this gives you a useful reference.

Welcome to follow my WeChat official account: Programmer Xinchen

Search WeChat for "Programmer Xinchen". I am Xinchen, looking forward to exploring the Java world with you…

Github.com/zq2599/blog…