Docker is now a popular tool, widely used in many large companies.

Docker is different from a virtual machine. A virtual machine virtualizes hardware, and its resources are completely isolated through the hypervisor.

Docker, by contrast, provides operating-system-level isolation: software isolation achieved with the kernel's cgroup and namespace features.

Since Docker also isolates things, do you know how Docker's network isolation works?

What are the network modes?

If you’ve ever used a virtual machine, you probably know the NAT model and the bridged model. Similarly, Docker provides four network modes: bridge mode, host mode, container mode, and none mode.

Bridge mode

Bridge mode is Docker's default network mode, and it works much like the bridged model we know. When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and every Docker container started on that host is connected to this virtual bridge. The docker0 bridge is like the Wi-Fi router we see in most homes (in fancier setups we'd call it a switch): it lets all containers on the host join one layer-2 network through docker0.

When a container starts, Docker assigns it an IP address from the docker0 subnet and sets docker0's IP address as the container's default gateway. It also creates a pair of virtual network interfaces on the host, a veth pair device: Docker puts one end of the veth pair inside the newly created container and names it eth0 (the container's network card), while the other end stays on the host with a name like vethxxx and is attached to the docker0 bridge. To view this, run the brctl show command.
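To see it for yourself, a minimal sketch (assuming the bridge-utils and iproute2 packages are installed; interface names will differ on your machine):

$ brctl show              # docker0 should list one vethxxx interface per running container
$ ip addr show docker0    # docker0 usually holds 172.17.0.1/16, the containers' default gateway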

If you don't pass the --net parameter, bridge mode is what you get. When docker run -p is used, Docker actually creates DNAT rules in iptables to implement port forwarding. You can check them with iptables -t nat -vnL.

Usage:

$ docker run --name [ContainerName] -d --net=bridge [ImageName]
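A quick sketch of the DNAT rules mentioned above (nginx is just an example image; the exact rule output varies by Docker version):

$ docker run --name web -d -p 8080:80 nginx
$ iptables -t nat -vnL DOCKER    # should show a DNAT rule forwarding host port 8080 to the container's port 80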

Host mode

If host mode is used when a container is started, the container does not get its own network namespace; it shares one with the host. The container does not virtualize its own network card or configure its own IP address, but uses the host's IP address and ports directly. Other aspects of the container, however, such as the file system and process list, are still isolated from the host.

To be clear, the container's IP address is then the same as the host's IP address. That is fine as long as you don't need to run multiple services on one server, say a highly available Redis cluster on a single host, where the ports would clash. Host mode is a bit of a lazy shortcut, because any IP configured inside your container can simply be written as the host's IP.

A quick look at how this mode is used:

$ docker run --name [ContainerName] -d --net=host [ImageName]
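A quick way to see the effect (nginx is just an example image, and curl is assumed on the host; note there is no -p here):

$ docker run --name web -d --net=host nginx
$ curl http://127.0.0.1/    # answered by the container: it binds the host's port 80 directly, no port mapping needed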

Container mode

This mode makes a newly created container share a network namespace with an existing container, rather than with the host. The new container does not create its own network adapter or configure its own IP address; instead, it shares the IP address and port range of the specified container. In every other aspect, such as the file system and process list, the two containers are still isolated from each other. The processes of the two containers can communicate through the lo loopback device.

That means you can call the other container's services via 127.0.0.1/localhost. This is handy if you don't want to change the ops configuration when delivering software on-premises; after all, clients here tend to care only about the result, not the process. For small and medium-sized enterprises dealing with Docker and K8s operations, this trick can do the job.

A quick look at the usage:

$ docker run --name [ContainerName] -d --net=container:[ContainerId/ContainerName] [ImageName]
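A minimal sketch of the shared namespace (redis and busybox are just example images):

$ docker run --name redis -d redis
$ docker run -it --net=container:redis busybox
/ # netstat -tln    # redis's port 6379 shows up, because both containers share one network namespace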

None mode

None mode means no network at all: you have to configure the network yourself. It still shares one thing with the other modes: the container gets its own network namespace, but Docker does not configure a network card, IP address, routes, and so on for it. As for what scenarios this mode is meant to solve, you can tell me in the comments.

And finally, the usage:

$ docker run --name [ContainerName] -d --net=none [ImageName]
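You can verify how empty it is (busybox is just an example image):

$ docker run -it --net=none busybox ip addr    # only the lo loopback device appears; no eth0, no IP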

Being able to explain these modes clearly can be a plus in an interview.

Then we have to talk about how Docker communicates across hosts. In the default network mode, Docker containers on a single host can talk to each other directly through the docker0 bridge, but containers on different hosts can only communicate through port mappings on their hosts. This port-mapping approach is extremely inconvenient for many cluster applications; many problems would go away if containers could talk to each other directly using their own IP addresses. By implementation principle, the solutions fall into direct routing, bridging (such as Pipework), and overlay tunneling (such as Flannel, OVS + GRE).

The inconvenience above comes down to managing many ports. To address it, the Pipework tool was proposed: a simple Docker network configuration tool that uses commands such as ip, brctl, and ovs-vsctl to configure custom bridges, network cards, routes, and so on for Docker containers. A typical setup (see the sketch after this list):

  • Replace the default docker0 bridge with a custom bri0 bridge
  • bri0 differs from the default docker0 bridge in how it connects to the host: bri0 and eth0 form a veth pair
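A minimal sketch of the pipework approach; the bridge name, container name, and addresses are placeholders, and pipework must be installed on the host:

$ docker run --name web -d --net=none nginx            # start the container without a network
$ pipework bri0 web 192.168.1.100/24@192.168.1.1       # attach it to the bri0 bridge with a LAN IP and gateway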

In effect this adds a LAN layer in front of all the hosts, which you can think of as a form of service discovery and governance.

When it comes to service discovery and governance, the Go ecosystem has a great answer: etcd, and built on etcd there is the Flannel solution. The general process is as follows (a configuration sketch follows the list):

  • etcd and flanneld are installed and run on each host;
  • the subnet range for the docker0 bridges of all hosts is configured in etcd;
  • flanneld on each host allocates a subnet for that host's docker0 according to the configuration in etcd, ensuring that the docker0 segments of different hosts do not overlap, and stores the result (the mapping between the host's docker0 subnet and the host's IP address) back into etcd. This way etcd holds the docker0-subnet-to-host-IP mapping for every host;
  • when a container needs to talk to a container on another host, it looks up etcd to find the outer IP, OUTIP (the destination host's IP address), corresponding to the destination container's subnet;
  • the original packet is encapsulated in a VXLAN or UDP packet, with OUTIP as the destination address of the outer IP header;
  • because the destination IP is a host IP address, the packet is routable;
  • the VXLAN or UDP packet arrives at the destination host and is decapsulated, and the original packet finally reaches the destination container.
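A minimal sketch of the first steps, assuming the default etcd key that flannel reads (/coreos.com/network/config) and the etcd v2 API; the addresses are placeholders:

$ etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}'
$ flanneld --etcd-endpoints=http://127.0.0.1:2379 &
$ cat /run/flannel/subnet.env    # flanneld writes the allocated docker0 subnet here, to be fed to dockerd via --bip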

Anything else worth saying? Leave it in the comments section.