Docker network details

Basic theory of Docker networks

Docker uses the Linux bridge feature to create a virtual bridge (docker0) on the host. When Docker starts a container, it assigns the container an IP address, called the container IP, from the bridge's network segment; the docker0 bridge acts as each container's default gateway. Because containers on the same host attach to the same bridge, they can communicate with each other directly via their container IPs.

The Docker bridge is a virtual network device created on the host, not a real network device. External networks cannot route to it, which means external hosts cannot reach containers directly through their container IPs.

If a container needs to be reachable from outside, its port can be mapped to the host (port mapping): pass the -p or -P parameter to docker run when creating the container. The service is then accessed as host IP : host port.
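As a sketch of the -p format (the container name and port values below are examples): -p publishes one port as HOST_PORT:CONTAINER_PORT, while -P publishes every exposed port on random high host ports.

```shell
# Hypothetical example: publish container port 80 on host port 8080 with
#   docker run -d --name web1 -p 8080:80 nginx:1.19.3-alpine
# The -p value is HOST_PORT:CONTAINER_PORT; splitting it in plain shell:
spec="8080:80"
host_port=${spec%%:*}        # 8080 - where clients connect on the host
container_port=${spec##*:}   # 80   - where nginx listens in the container
echo "host ${host_port} -> container ${container_port}"
```

After the container starts, the service is reached at http://host-IP:8080, never at the container IP from outside the host.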

Docker network mode

The available network modes and their configuration:

  • Host mode (--net=host): the container shares the host's Network namespace. It does not virtualize its own network card or configure its own IP address; it uses the host's IP address and ports directly.

  • Container mode (--net=container:NAME_or_ID): the container shares another container's Network namespace. A Pod in Kubernetes is exactly this: several containers sharing one Network namespace. The new container does not create its own network card or configure its own IP; it shares the specified container's IP address and port range.

  • None mode (--net=none): the container has its own Network namespace but no network configuration at all: no veth pair, no bridge attachment, no IP address. This mode turns off the container's networking.

  • Bridge mode (--net=bridge, the default): each container gets its own Network namespace and IP address, is attached to the docker0 virtual bridge, and communicates with the host through docker0 and iptables NAT rules.

  • Macvlan: the container gets its own MAC address and appears as a physical device on the network.

  • Overlay: a bridge-style network implemented with VXLAN, spanning multiple hosts.

Bridge mode

The default network mode. In bridge mode a container has no public IP address: only the host can reach the container directly, and the container is invisible to external hosts, but it can still reach the Internet through the host's NAT rules.

Bridge mode implementation steps

  • The Docker daemon uses veth pair technology to create two virtual network interfaces on the host, say veth0 and veth1. A veth pair guarantees that whichever end receives a network packet, the packet is passed through to the other end.

  • The Docker daemon attaches veth0 to the docker0 bridge it created, ensuring that network traffic from the host can reach veth0;

  • The Docker daemon moves veth1 into the Docker container's Network namespace and renames it eth0. Traffic the host sends to veth0 is then immediately received by eth0, connecting the host and container networks, while eth0 remains exclusive to the container, keeping its network environment isolated.
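These steps can be approximated by hand with the iproute2 tools. This is a sketch, not the daemon's actual code: the names veth0/veth1 and the namespace "demo" are made up, and the commands need root plus an existing docker0 bridge, so they are wrapped in a function that is only attempted when those conditions hold.

```shell
# Rough manual equivalent of the daemon's veth wiring (illustrative only).
setup_veth() {
  ip link add veth0 type veth peer name veth1     # step 1: create the veth pair
  ip link set veth0 master docker0                # step 2: attach host end to docker0
  ip link set veth0 up
  ip netns add demo                               # "demo" stands in for the container's namespace
  ip link set veth1 netns demo                    # step 3: move the container end inside...
  ip netns exec demo ip link set veth1 name eth0  # ...and rename it eth0
  ip netns exec demo ip link set eth0 up
}
if [ "$(id -u)" -eq 0 ] && ip link show docker0 >/dev/null 2>&1; then
  setup_veth
else
  echo "skipped: needs root and a docker0 bridge"
fi
```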

Defects of Bridge mode

A Docker container has no public IP address; it sits on a different network segment from the host's eth0. As a result, the world outside the host cannot communicate directly with the container.

Note

Veth devices come in pairs. The end inside the container is named eth0; the other end is attached to the bridge and given a veth* name (for example vetha596da4). Together they form a transmission channel: packets that enter one end come out the other.

Host network mode

Host mode is similar to NAT mode in VMware: the container is on the same network as the host but has no independent IP address.

When the host mode is used to start the container, the container does not get a separate Network Namespace, but shares a Network Namespace with the host.

The container does not virtualize its own network card or configure its own IP address; it uses the IP address and ports of the host. Other aspects of the container, such as the file system and process list, remain isolated from the host.

A container in host mode can use the host's IP address directly to communicate with the outside world, and services inside the container can use the host's ports without NAT. The biggest advantage of host mode is good network performance; the drawbacks are that ports already in use on the Docker host cannot be used again, and that network isolation is poor.

Host network mode must be specified when the container is created: --network=host

Host mode is a good complement to bridge mode. Docker Container in host mode can directly use the IP address of the host to communicate with the outside world. If eth0 of the host is a public IP address, the Container also has this public IP address. In addition, the ports of services in the container can also use the ports of the host without additional NAT.
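A minimal sketch of host mode (the container name web-host is an example, and the commands assume Docker and the nginx image used later in this article, so they are wrapped in a function that only runs where Docker exists):

```shell
# Hypothetical demo: in host mode, nginx binds directly to the host's port 80.
host_mode_demo() {
  docker run -d --name web-host --network host nginx:1.19.3-alpine
  # No -p mapping is needed; the service is on the host's own port 80:
  wget -qO- http://127.0.0.1:80 >/dev/null && echo "reachable on host port 80"
  # A second host-mode nginx would fail to start: port 80 is already bound.
}
command -v docker >/dev/null 2>&1 && host_mode_demo || echo "skipped: docker not available"
```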

The host mode allows containers to share the host network stack. This has the advantage that external hosts communicate directly with the container, but the container network lacks isolation.

Defects of the host network mode

Containers using host mode no longer have an isolated, independent network environment. Services inside the container can still be used in the traditional way, but because network isolation is weakened, the container shares, and competes for, the host's network stack. Moreover, the container no longer owns the full range of ports: some are already occupied by the host's own services, and others are used for port mappings of bridge-mode containers.

Container Network mode

Container network mode is a relatively special network mode in Docker, a host-style sharing between containers. Specify --network=container:vm1 when creating a container (vm1 is the name of an already running container). Docker containers in this mode share one network environment, so the two containers can communicate efficiently and quickly via localhost.

Defects of the Container network mode

The Container network mode does not improve the Container’s ability to communicate with the world outside the host (like the bridge mode, it cannot connect to devices outside the host).

This mode makes a newly created container share a Network namespace with an existing container, rather than with the host. The new container does not create its own network adapter or configure its own IP address; instead it shares the IP address and port range of the specified container. Apart from networking, the two containers remain isolated from each other in areas such as file systems and process lists. Their processes can communicate through the lo loopback device.
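The sharing can be sketched like this (assumes Docker; the busybox client image and container names are illustrative, and the commands are wrapped so they only run where Docker exists):

```shell
# Hypothetical demo: a second container joins nginx1's network namespace and
# reaches the nginx service via localhost, because they share one network stack.
container_mode_demo() {
  docker run -d --name nginx1 nginx:1.19.3-alpine
  docker run --rm --network container:nginx1 busybox wget -qO- http://localhost
}
command -v docker >/dev/null 2>&1 && container_mode_demo || echo "skipped: docker not available"
```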

None mode

In none mode a Docker container has its own Network namespace, but Docker performs no network configuration for it: the container has no network adapter, IP address, or routes. We have to add network interfaces and configure IPs ourselves.

In this network mode the container has only the lo loopback interface and no other network adapter. None mode can be specified at container creation time with --network=none. Such a container cannot reach the Internet; a closed network helps ensure container security.

Basic Network Usage

As a demonstration, we will use nginx.

Image operations

  • Pull the image
docker pull nginx:1.19.3-alpine
  • Run the image
docker run -itd --name nginx1 nginx:1.19.3-alpine
  • IP address allocation during container creation

The output of docker network inspect bridge is shown below

Docker container creation process

  • Create a pair of virtual interfaces (a veth pair), with one end on the local host and the other in the new container.

  • The host-side end is attached to the default docker0 bridge (or a specified bridge) and given a unique name, such as vetha596da4;

  • The container-side end is placed inside the new container and renamed eth0. This interface is visible only within the container's namespace.

  • A free address is taken from the bridge's available address range (the network corresponding to the bridge) and assigned to the container's eth0, and the container's default route is configured to point at the bridge.

  • Containers can then use the eth0 virtual network card to connect to other containers and other networks. If --network is not specified, newly created containers attach to docker0 by default and use the IP address of the host's docker0 interface as their default gateway.

  • Enter the container to view the network address

docker exec -it nginx1 sh
ip a
docker exec -it nginx1 ip a
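The addresses assigned by the steps above can also be read back with inspect Go templates (a sketch: it assumes Docker is installed and that a container named nginx1 exists, so the commands are wrapped in a guarded function):

```shell
# Read the bridge's subnet/gateway and a container's IP via inspect templates.
bridge_info() {
  docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' bridge
  docker inspect -f '{{.NetworkSettings.IPAddress}}' nginx1   # nginx1: example name
}
command -v docker >/dev/null 2>&1 && bridge_info || echo "skipped: docker not available"
```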

Install brctl

yum install -y bridge-utils

Run brctl

brctl show

Container connection diagram

Communication between multiple containers

docker run -itd --name nginx1 nginx:1.19.3-alpine
docker run -itd --name nginx2 nginx:1.19.3-alpine
docker network inspect bridge
docker exec -it nginx1 sh
ping 172.17.0.2
docker exec -it nginx2 sh
ping 172.17.0.2
ping www.baidu.com
ping nginx1

The IP address of the container changes after the restart

docker stop nginx1
docker start nginx2
docker start nginx1
docker network inspect bridge

Resolve container IP address changes (New Bridge Network)

docker network create -d bridge test-bridge

The -d flag specifies the driver type, and test-bridge is the name of the new network. Let's start by showing how to connect containers to the test-bridge network.

Start an nginx container, nginx3, connected to the test-bridge network via the --network parameter. Before starting nginx3, check that no containers are currently connected to the test-bridge network.

brctl show
docker network ls
docker network inspect test-bridge
docker run -itd --name nginx3 --network test-bridge nginx:1.19.3-alpine
brctl show
docker network inspect test-bridge
  • Connect a running container to the test-bridge network
docker network connect test-bridge nginx2

docker network inspect test-bridge

docker exec -it nginx2 sh

ping nginx3
docker exec -it nginx3 sh
ping nginx2

None network

Start an nginx container, nginx1, connected to the none network, then execute docker network inspect none to view the container information.

docker run -itd --name nginx1 --network none nginx:1.19.3-alpine
docker network inspect none

Note

In none mode the container has no MAC address and no IP address. Entering the nginx1 container and running ip a shows only the lo interface: no other network interface and no IP. In other words, a none-mode container is unreachable by other containers. This usage scenario is rare; it mainly suits workloads with high security requirements.

docker exec -it nginx1 sh

ip a

Docker Network commands summary

Viewing networks (docker network ls)

View network objects that have been created

  • Syntax
docker network ls [OPTIONS]
  • Common parameters

    • -f, --filter filter   Filter output (e.g. 'driver=bridge')
    • --format string       Format output using a Go template
    • --no-trunc            Do not truncate output
    • -q, --quiet           Only display network IDs

  • Basic usage

docker network ls

docker network ls --no-trunc

docker network ls -f 'driver=host'

Docker network create

Create a new network object

  • Syntax
docker network create [OPTIONS] NETWORK
  • Common parameters

    • -d, --driver string   Network driver (default "bridge")

    • --subnet strings      Subnet in CIDR format (e.g. 192.168.0.0/16, 172.88.0.0/24)

    • --ip-range strings    Allocate container IPs from a sub-range, in the same format as --subnet

    • --gateway strings     IPv4 or IPv6 gateway for the subnet (e.g. 192.168.0.1)

  • Basic usage

docker network ls

docker network create -d bridge my-bridge

docker network ls

Docker network rm

Delete one or more networks

  • Syntax
docker network rm NETWORK [NETWORK...]

View network details (docker network inspect)

View the details of one or more networks

  • Syntax
docker network inspect [OPTIONS] NETWORK [NETWORK...]
docker inspect [OPTIONS] NETWORK [NETWORK...]
  • Common parameters

    • -f, --format string   Format output using a Go template

Using a network (docker run --network)

Specifies the network mode for the started container

  • Syntax
docker run/create --network NETWORK

docker network connect/disconnect

Connect a container to, or disconnect it from, a network

  • Syntax
docker network connect [OPTIONS] NETWORK CONTAINER 

docker network disconnect [OPTIONS] NETWORK CONTAINER
  • Common parameters

    • -f, --force   Force disconnection (disconnect only)

Assigning a fixed IP address to a container

docker network create -d bridge --subnet=172.172.0.0/24 --gateway 172.172.0.1 mynetwork
# 172.172.0.0/24: the /24 means the subnet mask is 255.255.255.0
# 172.172.0.0/16: a /16 would mean 255.255.0.0
docker network ls
docker run -itd --name nginx3 -p 80:80 --net mynetwork --ip 172.172.0.10 nginx:1.19.3-alpine
# --net mynetwork: use the existing network
# --ip 172.172.0.10: assign nginx a fixed IP address
docker network inspect mynetwork
docker stop nginx3
docker start nginx3
docker network inspect mynetwork
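The subnet-mask comment above can be checked with plain shell arithmetic: a /24 prefix expands to 255.255.255.0 (and /16 would expand to 255.255.0.0):

```shell
# Convert a CIDR prefix length to a dotted netmask (pure shell arithmetic).
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
dotted=$(printf '%d.%d.%d.%d' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8)  & 255 )) $((  mask        & 255 )))
echo "/${prefix} -> ${dotted}"   # /24 -> 255.255.255.0
```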

That covers Docker network concepts and commands. I hope it helps you use Docker networks skillfully.