Hi, I’m Jack

Previously, in Docker (5): How do containers communicate with each other?, we looked at network communication between multiple running containers, but those containers were all on the same physical machine.

In real projects we often need to deploy multiple pieces of software: components may need to run as a cluster, or an application may depend on many components that, for storage and operational efficiency, have to be spread across hosts. So how do you get containers on different hosts to communicate over the network?

If you have run into this problem, so has Docker, and there is a general-purpose solution for it.

1. Theory: How does Docker implement cross-host networking?

1. Getting to know Docker Overlay

An Overlay network is a logical network created on top of a physical (underlay) network for a specific purpose. Docker can create Overlay networks between containers, allowing containers on different hosts to communicate.

In other words, as long as a few physical machines can reach each other, an Overlay network can be built across them. Containers that need to communicate are deployed onto that Overlay network, and in the end they behave as if they were all running on the same physical machine.

For example, to build an Elasticsearch cluster, we can deploy each node on a pre-created Overlay network so the nodes can reach each other.

Let's get a little more specific.

You may wonder why an Overlay network can interconnect multiple physical machines. It effectively adds a layer of virtual network between the Docker cluster nodes, with its own independent virtual network segment. A request sent by a Docker container first goes to the virtual subnet, which then wraps it up and sends it out over the host's real network address. (Docker's overlay driver does this encapsulation with VXLAN.)

Docker Swarm is a container cluster management tool developed by Docker.

Docker Swarm is the simplest way to set up a Docker Overlay network, and it is well integrated with the Docker API. Swarm mode ships with Docker itself on Linux, so no extra installation is needed. That is why we use Swarm here to implement network communication across the cluster.

Next, let's see it in practice.

2. Hands-on 1: Cross-host communication

Let's reuse the earlier example. In Docker (5): How do containers communicate with each other?, we ran the Spring Boot back-end program druid_demo and MariaDB in separate containers and successfully got them communicating.

So next, we deploy them on two separate machines.

The machine configuration is as follows:

| No. | Node role | IP address | Container name |
| --- | --------- | ------------ | -------------- |
| 1 | manager | 10.10.10.100 | druid_demo |
| 2 | worker | 10.10.10.88 | mariadb |

Setting up the cross-host network with Swarm takes four steps:

  1. Create a Swarm cluster in manager

  2. Join the other nodes to the cluster

  3. Create an Overlay network in Manager

  4. Specify this Overlay network when starting each container

Let's walk through these steps one by one.

1. Create a Swarm cluster on the manager node

```shell
[root@localhost ~]# docker swarm init --advertise-addr=10.10.10.100
Swarm initialized: current node (maw28ll7mlxuwp47z5c5vo2v1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2bdwwlo8xvdskil05itg66l63dpi931oqnt5gvehlnf1it1auo-2uvypbiu0hpcn1e06hrhctbe8 10.10.10.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```

2. On the worker node, run the join command to add it to the cluster

```shell
docker swarm join --token SWMTKN-1-2bdwwlo8xvdskil05itg66l63dpi931oqnt5gvehlnf1it1auo-2uvypbiu0hpcn1e06hrhctbe8 10.10.10.100:2377
```

3. On the manager node, check the status of the nodes in the cluster

Run the `docker node ls` command.

4. On the manager node, create the Overlay network

```shell
docker network create -d overlay --attachable demo
```

--attachable declares that standalone containers on other nodes are allowed to attach to this network.

5. On the worker node, check whether the new network shows up in the network list

```shell
docker network ls
```

Note: the screenshot is missing here. Normally, after step 4, a demo network appears in the list alongside the original ones.

6. Start the Mariadb container on the worker node and specify the overlay network

```shell
sudo docker run -itd -v /usr/local/mysqldata:/var/lib/mysql -h mariadb --name=mariadb --net=demo --privileged=true mariadb:latest /sbin/init
```

--net=demo: attaches the container to the cluster Overlay network demo

7. On the manager node, start druid_demo

Next, on the Manager node, start the container for the Spring Boot back-end program druid_demo and specify the Demo network

```shell
docker run -itd -p 8888:8080 -h druid_demo --name druid_demo --net=demo --privileged=true druid_demo:0.0.1-SNAPSHOT /sbin/init
```

At this point, request the application's interface to verify that the network works.

The interface returns normally, which means the druid_demo application container and the MariaDB container are communicating across hosts.

8. Exit the cluster

Run `docker swarm leave` on the node (add `--force` on a manager node).

So now you know how Docker containers communicate across hosts.

Next, let's put this into practice and build an Elasticsearch cluster.

3. Hands-on 2: Build an Elasticsearch cluster

Before building the cluster, let's first look at how to start a single ES node.

1. Single-machine mode

```shell
docker run -itd --name elastic -v /usr/local/es/data:/var/lib/elasticsearch -v /usr/local/es/logs:/var/log/elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --restart=always elasticsearch:v7.9.1
```

Description:

1) -v: specify mount directories (the ES data and log directories are mapped to the host, so even if the container dies and restarts, the data and logs are not lost)

Inside the container, /etc/elasticsearch/elasticsearch.yml configures these locations as path.data and path.logs, which point to:

/var/lib/elasticsearch (data, mapped to /usr/local/es/data on the host)

/var/log/elasticsearch (logs, mapped to /usr/local/es/logs on the host)

2) -p: specify port mappings (map ports in the container to the host)

3) -e: specify configuration (here, that ES starts as a single node)

4) --restart=always: always restart the container automatically

After startup, open elasticsearch-head; if it can connect, the node started successfully.

Single-machine mode is that simple: configure the mount directories for path.data and path.logs, and start the node in single-node mode.
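As a sketch (illustrative, not the exact file from this setup), and assuming the container paths used by the run command above, a minimal single-node elasticsearch.yml needs little more than:

```yaml
# Minimal single-node config (illustrative; paths match the -v mounts above)
path.data: /var/lib/elasticsearch    # mapped to /usr/local/es/data on the host
path.logs: /var/log/elasticsearch    # mapped to /usr/local/es/logs on the host
network.host: 0.0.0.0                # listen on all interfaces so -p mapping works
# discovery.type: single-node        # alternative to passing it via -e
```

Single-node mode itself is switched on by the `-e "discovery.type=single-node"` flag in the run command, though as shown it could equally live in the file.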

Cluster mode is not much more complex; the configuration is basically the same as single-machine mode. Besides deploying on multiple physical machines, you need to take care of how the nodes relate to each other, which comes down to two things: network interconnection and identifying the cluster's member nodes.

2. Cluster mode

Let's set up a cluster with 1 master + 3 data nodes. The machines are allocated as follows:

| No. | Node role | IP address |
| --- | -------------- | ----------- |
| 1 | elastic-master | 10.10.10.88 |
| 2 | elastic-data01 | 10.10.10.76 |
| 3 | elastic-data02 | 10.10.10.77 |
| 4 | elastic-data03 | 10.10.10.78 |

1) Configure cluster network Demo

Yes, just refer to Hands-on 1 to build the Swarm cluster network across these machines.

The roles of network nodes are as follows:

| No. | Node role | IP address |
| --- | --------- | ----------- |
| 1 | manager | 10.10.10.88 |
| 2 | worker | 10.10.10.76 |
| 3 | worker | 10.10.10.77 |
| 4 | worker | 10.10.10.78 |

2) Modify each node's configuration file elasticsearch.yml

a. elastic-master

vi /usr/local/es/config/elastic-master/elasticsearch.yml

```yaml
# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: my-application
# ------------------------------------ Node ------------------------------------
node.name: elastic-master
node.master: true
node.data: false
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
network.host: 0.0.0.0
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["elastic-master", "elastic-data01", "elastic-data02", "elastic-data03"]
cluster.initial_master_nodes: ["elastic-master"]
# ---------------------------------- Various -----------------------------------
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Description:

  • cluster.name: my-application # the cluster name

  • node.name: elastic-master # the node name; must be unique per node

  • node.master: true # this node is master-eligible

  • node.data: false # this node is not a data node

  • path.data: /var/lib/elasticsearch # data directory

  • path.logs: /var/log/elasticsearch # log directory

  • discovery.seed_hosts: ["elastic-master", "elastic-data01","elastic-data02","elastic-data03"] # the hosts to contact for discovery

Since the four nodes are deployed with Docker on the same network, it is recommended to configure the host names directly.

  • cluster.initial_master_nodes: ["elastic-master"] # the initial set of master-eligible nodes used to bootstrap the cluster

b. elastic-data01

vi /usr/local/es/config/elastic-data01/elasticsearch.yml

```yaml
# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: my-application
# ------------------------------------ Node ------------------------------------
node.name: elastic-data01
node.master: false
node.data: true
# ----------------------------------- Paths ------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# ---------------------------------- Network -----------------------------------
network.host: 0.0.0.0
# --------------------------------- Discovery ----------------------------------
discovery.seed_hosts: ["elastic-master", "elastic-data01", "elastic-data02", "elastic-data03"]
cluster.initial_master_nodes: ["elastic-master"]
# ---------------------------------- Various -----------------------------------
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Description:

A. The elastic-data01 configuration is the same as the elastic-master configuration except for the following three items:

node.name: elastic-data01 # the node name; must be unique per node

node.master: false # not a master-eligible node

node.data: true # is a data node

B. The configurations of elastic-data02 and elastic-data03 are the same as elastic-data01's, except for the value of node.name.

3) Start the nodes

Execute the docker startup command on each host to start its node:

```shell
docker run -itd --name elastic-master -h elastic-master --net=demo -v /usr/local/es/data:/var/lib/elasticsearch -v /usr/local/es/logs:/var/log/elasticsearch -v /usr/local/es/plugins:/usr/share/elasticsearch/plugins -v /usr/local/es/config/elastic-master/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:ro -p 9200:9200 -p 9300:9300 -e cluster.initial_master_nodes=elastic-master --restart=always elasticsearch:v7.9.1 /sbin/init

docker run -itd --name elastic-data01 -h elastic-data01 --net=demo -v /usr/local/es/data:/var/lib/elasticsearch -v /usr/local/es/logs:/var/log/elasticsearch -v /usr/local/es/plugins:/usr/share/elasticsearch/plugins -v /usr/local/es/config/elastic-data01/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:ro -p 9200:9200 -p 9300:9300 -e cluster.initial_master_nodes=elastic-master --restart=always elasticsearch:v7.9.1 /sbin/init

docker run -itd --name elastic-data02 -h elastic-data02 --net=demo -v /usr/local/es/data:/var/lib/elasticsearch -v /usr/local/es/logs:/var/log/elasticsearch -v /usr/local/es/plugins:/usr/share/elasticsearch/plugins -v /usr/local/es/config/elastic-data02/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:ro -p 9200:9200 -p 9300:9300 -e cluster.initial_master_nodes=elastic-master --restart=always elasticsearch:v7.9.1 /sbin/init

docker run -itd --name elastic-data03 -h elastic-data03 --net=demo -v /usr/local/es/data:/var/lib/elasticsearch -v /usr/local/es/logs:/var/log/elasticsearch -v /usr/local/es/plugins:/usr/share/elasticsearch/plugins -v /usr/local/es/config/elastic-data03/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml:ro -p 9200:9200 -p 9300:9300 -e cluster.initial_master_nodes=elastic-master --restart=always elasticsearch:v7.9.1 /sbin/init
```

4) Verify

Request http://10.10.10.88:9200/_cat/nodes to view the node list.

As you can see, there are four nodes in the cluster: one master node and three data nodes (the node marked with * is the elected master).

Cluster set up successfully! Congratulations!

5) Configure ES in your project

After testing, you only need to configure the address of one master node:

```properties
spring.elasticsearch.rest.uris=http://10.10.10.88:9200
```
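As a side note, this Spring Boot property accepts a comma-separated list of URIs, so if you later want client-side fallback you can list additional nodes as well (the extra address below is illustrative for this setup, not tested in the article):

```properties
# Single entry point, as used above
spring.elasticsearch.rest.uris=http://10.10.10.88:9200

# Hypothetical multi-node form (comma-separated), e.g. adding a data node:
# spring.elasticsearch.rest.uris=http://10.10.10.88:9200,http://10.10.10.76:9200
```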

Note: this article only explains how to build an ES cluster and the cross-host networking behind it. Dedicated articles on ES theory and practice will follow, so stay tuned.

4. Summary

OK. In general, Docker builds an Overlay network bridge and attaches each host's containers to that bridge, which is how Docker containers communicate across hosts.

Congratulations on picking up a new skill. Of course, if you want to truly master it, try it out yourself at least once; hands-on practice and understanding are different stages of learning, and they reflect different levels of mastery.

Well, that’s it. Learn a little every day, and time will show you how strong you are

Next period:

Stay tuned ~