Please go to GitHub to get the latest docker-compose.yml; the README has the corresponding operation steps. GitHub address: github.com/JacianLiu/d…

I’ve been studying Kafka, and when I was preparing to test the cluster state, it felt like too much trouble to run three virtual machines or to use three different port numbers in one virtual machine (hmm, mainly laziness).

Environment preparation

A computer with Internet access and a CentOS 7 virtual machine

Why a virtual machine? Because on a laptop the IP changes every time you reconnect to the network, so you would have to keep modifying the configuration file, which is too tedious and inconvenient for testing. (This problem can also be avoided with a Docker virtual network, which I did not know about at the time.)

Docker installation

If Docker is already installed, skip this step.

Docker supports the following CentOS versions:

  1. CentOS 7 (64-bit): the kernel version must be 3.10 or later.
  2. CentOS 6.5 (64-bit) or later: the kernel version must be 2.6.32-431 or later.

Note: on CentOS, Docker only supports the kernels shipped with the distribution.

Yum installation

Docker requires a CentOS kernel version of 3.10 or higher; check the prerequisites above to verify that your CentOS version supports Docker.

# Check the kernel version
$ uname -a

# Install Docker
$ yum -y install docker

# Start the Docker service
$ service docker start

# hello-world is not available locally, so the image will be downloaded and run in a container
$ docker run hello-world

Script installation

  1. Log in to CentOS with sudo or root permission.
  2. Make sure your yum packages are up to date:
$ sudo yum update
  3. Get and execute the Docker installation script:
$ curl -fsSL https://get.docker.com -o get-docker.sh
# Executing this script adds the docker.repo source and installs Docker
$ sudo sh get-docker.sh

Start Docker

$ sudo systemctl start docker

Verify that Docker is installed correctly by running a test image in a container:
$ sudo docker run hello-world
$ docker ps

Registry mirror acceleration

At first I didn’t bother to configure a domestic mirror source, but after trying one I found that download speeds improved dramatically, so I strongly advise configuring a domestic registry mirror. Open (or create) /etc/docker/daemon.json and add the following:

{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
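After editing daemon.json, the Docker daemon has to be restarted for the mirror to take effect. A quick sketch for systemd-based CentOS 7 (the exact `docker info` output format varies by Docker version):

```shell
# Reload unit files and restart the Docker daemon so the new
# registry mirror in /etc/docker/daemon.json takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify: the mirror should appear under "Registry Mirrors"
docker info | grep -A 1 "Registry Mirrors"
```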

Creating the Zookeeper cluster

Zookeeper image: zookeeper:3.4

Pull the image

$ docker pull zookeeper:3.4

To find images, go to hub.docker.com. Pull a specific version with docker pull image:TAG.

Create a standalone Zookeeper container

Let’s start by creating a standalone Zookeeper node in the simplest way possible, and then we’ll create other nodes based on this example.

$ docker run --name zookeeper -p 2181:2181 -d zookeeper:3.4
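To check that the standalone node is actually serving, you can ask it for its status. A sketch, assuming `zkServer.sh` is on the PATH inside the official zookeeper image:

```shell
# Ask the Zookeeper process inside the container for its status;
# a healthy standalone node reports "Mode: standalone"
docker exec zookeeper zkServer.sh status
```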

By default, the container’s configuration file is /conf/zoo.cfg, and the data and log directories default to /data and /datalog; they can be mapped to the host if necessary.

Parameter explanation:

  1. --name: specifies the container name
  2. -p: maps a host port to the container’s exposed port
  3. -d: runs the container in the background and prints the container ID

Cluster setup

The Zookeeper containers for the other nodes are created much like the standalone container. Note that each node must be given its own ID, and the full list of servers must appear in every node’s configuration. Run the following commands:

Create a Docker network

$ docker network create zoo_kafka
$ docker network ls

Zookeeper container 1

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo1/data:/data \
     -v /opt/docker/zookeeper/zoo1/datalog:/datalog \
     -e ZOO_MY_ID=1 \
     -p 2181:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo1 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4

Zookeeper container 2

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo2/data:/data \
     -v /opt/docker/zookeeper/zoo2/datalog:/datalog \
     -e ZOO_MY_ID=2 \
     -p 2182:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo2 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4

Zookeeper container 3

$ docker run -d \
     --restart=always \
     -v /opt/docker/zookeeper/zoo3/data:/data \
     -v /opt/docker/zookeeper/zoo3/datalog:/datalog \
     -e ZOO_MY_ID=3 \
     -p 2183:2181 \
     -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
     --name=zoo3 \
     --net=zoo_kafka \
     --privileged \
     zookeeper:3.4

This approach also achieves what we want, but the steps are tedious and the result is troublesome to maintain (terminal-stage laziness), so we use docker-compose instead.

Building the Zookeeper cluster with docker-compose

Create a Docker network

$ docker network create --driver bridge --subnet 172.23.0.0/25 --gateway 172.23.0.1 zoo_kafka
$ docker network ls

Write the docker-compose.yml script

Usage:

  1. Install docker-compose:
# Fetch the installation script
$ curl -L https://github.com/docker/compose/releases/download/1.25.0-rc2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Grant execute permission
$ chmod +x /usr/local/bin/docker-compose
  2. Create a docker-compose.yml file in any directory and copy in the content below.
  3. Execute the command docker-compose up -d

Command reference

Command                               Explanation
docker-compose up                     Start all containers
docker-compose up -d                  Start and run all containers in the background
docker-compose up --no-recreate -d    Do not recreate containers that already exist
docker-compose up -d test2            Start only the container test2
docker-compose stop                   Stop the containers
docker-compose start                  Start the containers
docker-compose down                   Stop and remove the containers

docker-compose.yml address: github.com/JacianLiu/d… (see the repository for the full file)

version: '2'
services:
  zoo1:
    image: zookeeper:3.4 # image name
    restart: always # restart automatically on failure
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports: # port
      - 2181:2181
    volumes: # Mount data volumes
      - ./zoo1/data:/data
      - ./zoo1/datalog:/datalog 
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1 # node ID
      ZOO_PORT: 2181 # Zookeeper port
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888 # list of Zookeeper nodes
    networks:
      default:
        ipv4_address: 172.23.0.11

  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2182:2181
    volumes:
      - ./zoo2/data:/data
      - ./zoo2/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.12

  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2183:2181
    volumes:
      - ./zoo3/data:/data
      - ./zoo3/datalog:/datalog
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.13

networks:
  default:
    external:
      name: zoo_kafka

Validation

As can be seen, there is one leader and two followers, so our Zookeeper cluster has been set up successfully.
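One way to check the roles, assuming the three containers are named zoo1 to zoo3 as above:

```shell
# Print each node's role; expect one "Mode: leader" and two "Mode: follower"
for s in zoo1 zoo2 zoo3; do
  echo "== $s =="
  docker exec "$s" zkServer.sh status
done
```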

Kafka cluster setup

With the basics in place, is building a Kafka cluster still a problem? It is really just a couple of variables with different values.

Here we again use docker-compose for orchestration, so steps such as installing docker-compose are omitted; see the Zookeeper cluster section above. This time we do not need to create a new Docker network; we directly reuse the network created when building the Zookeeper cluster!

Environment preparation

Kafka image: wurstmeister/kafka; kafka-manager image: sheepkiller/kafka-manager

# By default, the latest version of the image is pulled
docker pull wurstmeister/kafka
docker pull sheepkiller/kafka-manager

Write the docker-compose.yml script

The usage and the docker-compose command reference are the same as in the Zookeeper section above: install docker-compose, create a docker-compose.yml file in any directory, copy in the content below, and execute docker-compose up -d.

docker-compose.yml address: github.com/JacianLiu/d… (see the repository for the full file)

version: '2'

services:
  broker1:
    image: wurstmeister/kafka
    restart: always
    hostname: broker1
    container_name: broker1
    privileged: true
    ports:
      - "9091:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1:9092
      KAFKA_ADVERTISED_HOST_NAME: broker1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker1:/kafka/kafka-logs-broker1
    external_links:
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.14

  broker2:
    image: wurstmeister/kafka
    restart: always
    hostname: broker2
    container_name: broker2
    privileged: true
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:9092
      KAFKA_ADVERTISED_HOST_NAME: broker2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker2:/kafka/kafka-logs-broker2
    external_links: # connect to containers outside this compose file
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.15

  broker3:
    image: wurstmeister/kafka
    restart: always
    hostname: broker3
    container_name: broker3
    privileged: true
    ports:
      - "9093:9092"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3:9092
      KAFKA_ADVERTISED_HOST_NAME: broker3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./broker3:/kafka/kafka-logs-broker3
    external_links: # connect to containers outside this compose file
    - zoo1
    - zoo2
    - zoo3
    networks:
      default:
        ipv4_address: 172.23.0.16

  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links: # connect to containers created by this compose file
      - broker1
      - broker2
      - broker3
    external_links: # connect to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
      KAFKA_BROKERS: broker1:9092,broker2:9092,broker3:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      default:
        ipv4_address: 172.23.0.10

networks:
  default:
    external: # use the pre-created network
      name: zoo_kafka

Validation

Open kafka-manager in a browser: http://&lt;host IP&gt;:9000
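A quick smoke test is to create a topic through one broker and list it through another. A sketch, assuming the Kafka scripts are on the PATH in the wurstmeister/kafka image and using a hypothetical topic name `test` (JMX_PORT must be unset first, as described in the problems section below):

```shell
# Create a replicated test topic via broker1 (unset JMX_PORT to avoid
# the "Port already in use" error)
docker exec broker1 bash -c 'unset JMX_PORT; \
  kafka-topics.sh --create --zookeeper zoo1:2181/kafka1 \
  --replication-factor 3 --partitions 3 --topic test'

# List topics via a different broker; "test" should appear
docker exec broker2 bash -c 'unset JMX_PORT; \
  kafka-topics.sh --list --zookeeper zoo1:2181/kafka1'
```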


Problems encountered during setup

  1. Mounting a data volume causes the container to restart endlessly; the log shows: chown: changing ownership of ‘/var/lib/mysql/…. ‘: Permission denied
    • Add --privileged=true to docker run to give the container extended privileges
    • Or temporarily disable SELinux: setenforce 0
    • Or add an SELinux rule to change the security context of the directory being mounted
  2. kafka-manager reports a JMX-related error:
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://9.11.8.48:-1/jmxrmi java.lang.IllegalArgumentException: requirement failed: No jmx port but jmx polling enabled!
    • Add the environment variable JMX_PORT=&lt;port&gt; to each Kafka node
    • The connection still failed afterwards because of a network problem, so each JMX port was exposed and allowed through the firewall, which fixed the problem
    • KAFKA_ADVERTISED_HOST_NAME is best set to the host IP so that code or tools outside the host can connect; the advertised port must likewise be set to the exposed port
  3. Viewing topics inside the container reports the following error (apparently not only topic commands; all of them fail):
$ bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
# This is an error
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7203; nested exception is:
        java.net.BindException: Address already in use

Solution: prefix the command with unset JMX_PORT;. The command above becomes:

$ unset JMX_PORT; bin/kafka-topics.sh --list --zookeeper zoo1:2181/kafka1,zoo2:2181/kafka1,zoo3:2181/kafka1
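As a final end-to-end check, you can push a message through the cluster with the console producer and consumer (a sketch using the hypothetical topic `test`; run the two commands in separate terminals):

```shell
# Terminal 1: consume from the beginning of the topic
docker exec -it broker1 bash -c 'unset JMX_PORT; \
  kafka-console-consumer.sh --bootstrap-server broker1:9092 \
  --topic test --from-beginning'

# Terminal 2: type messages, one per line; they should appear in terminal 1
docker exec -it broker2 bash -c 'unset JMX_PORT; \
  kafka-console-producer.sh --broker-list broker2:9092 --topic test'
```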

Attached: common Docker commands

# View all images
docker images
# View all running containers
docker ps
# View all containers
docker ps -a
# Get all container IP addresses
$ docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
# View a container's logs
$ docker logs -f <container ID>
# Enter a container
$ docker exec -it <container ID> /bin/bash
# Run a container (-d starts it in the background)
docker run -d --name <container name> -e <environment variables> -v <data volume mount> <image name>
# Restart a container
docker restart <container ID>
# Stop a container
docker stop <container ID>
# Start a container
docker start <container ID>