Here is a list of command options related to Docker networking. Some of these options can only be configured when the Docker service starts and do not take effect immediately.

-b BRIDGE or --bridge=BRIDGE: specify the bridge that containers attach to

--bip=CIDR: customize the IP address and mask of docker0

-H SOCKET... or --host=SOCKET...: the channel through which the Docker server receives commands

--icc=true|false: whether to allow communication between containers

--ip-forward=true|false: whether to enable IP forwarding for container traffic (see below)

--iptables=true|false: whether to allow Docker to add iptables rules

--mtu=BYTES: the MTU of the container network

The following two options can be specified either when the service starts or when a container starts with docker run. A value given when the Docker service starts becomes the default, and that default can be overridden by a later docker run.

--dns=IP_ADDRESS...: use the specified DNS servers

--dns-search=DOMAIN...: specify DNS search domains

The remaining options can only be used with docker run, because they are specific to an individual container.

-h HOSTNAME or --hostname=HOSTNAME: configure the container hostname

--link=CONTAINER_NAME:ALIAS: add a link to another container

--net=bridge|none|container:NAME_or_ID|host: configure the container's network mode

-p SPEC or --publish=SPEC: map container ports to the host

-P or --publish-all=true|false: map all container ports to the host
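As a hedged sketch of how daemon-level and run-level flags combine, the following prints one possible daemon invocation and one container start (every value, the bridge address, and the image name are placeholders, not recommendations):

```shell
# Daemon-level defaults, set once when the Docker service starts.
DAEMON_FLAGS="--bip=172.18.0.1/16 --icc=false --iptables=true --dns=8.8.8.8"
echo "docker -d $DAEMON_FLAGS"

# Per-container flags given to docker run; --dns here overrides the
# daemon-level default above for this one container.
RUN_FLAGS="--dns=8.8.4.4 --hostname=web1 -p 8080:80"
echo "docker run $RUN_FLAGS ubuntu /bin/bash"
```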

1. Configure DNS

Docker does not customize a separate image for each container, so how does it customize each container's hostname and DNS configuration? The trick is that it mounts virtual files over three related configuration files inside the container. Running the mount command in a container shows the mount information:

$ mount
/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
tmpfs on /etc/resolv.conf type tmpfs ...

This mechanism allows the DNS configuration of all Docker containers to be updated immediately, through these files, after the host's DNS information changes.

If the user wants to manually specify the configuration of the container, the following options are available.

-h HOSTNAME or --hostname=HOSTNAME sets the container's hostname, which is written to /etc/hostname and /etc/hosts inside the container. It is not visible outside the container, neither in docker ps output nor in the /etc/hosts of other containers.

--link=CONTAINER_NAME:ALIAS adds the hostname of another container to the /etc/hosts file when the container is created, so that processes in the new container can connect to it by the ALIAS name.

--dns=IP_ADDRESS adds a DNS server to /etc/resolv.conf and makes the container use this server to resolve all hostnames not listed in /etc/hosts.

--dns-search=DOMAIN specifies the container's search domain. When the search domain is set to .example.com, a lookup for a host named host searches not only host but also host.example.com. Note: without the last two options, Docker defaults to using the host's /etc/resolv.conf to configure the container.
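As a sketch of what the mounted /etc/resolv.conf would contain after a run with --dns and --dns-search, the snippet below builds the equivalent file contents (the server address and domain are placeholders, and a temp file stands in for the container's /etc/resolv.conf):

```shell
DNS_IP="8.8.8.8"             # placeholder for the --dns value
SEARCH_DOMAIN="example.com"  # placeholder for the --dns-search value

# Write the lines these two flags would translate into.
RESOLV=$(mktemp)
{
  echo "nameserver $DNS_IP"
  echo "search $SEARCH_DOMAIN"
} > "$RESOLV"

cat "$RESOLV"
```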

2. Container access control

Access control for containers is managed and enforced mainly by the iptables firewall on Linux. iptables is the default firewall software on Linux and comes with most distributions.

2.1. Access the external network

For a container to access an external network, it needs forwarding support from the local system. On Linux, check whether forwarding is enabled:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

If the value is 0, forwarding is not enabled and needs to be turned on manually:

$ sysctl -w net.ipv4.ip_forward=1

If --ip-forward=true is set when the Docker service starts, Docker automatically sets the ip_forward parameter to 1.
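The check-then-enable logic above can be sketched as follows; `forwarding_disabled` is a helper name made up here, and the `sysctl -w` call (which needs root) is only echoed, not executed:

```shell
# Return success when the sysctl value means forwarding is off.
forwarding_disabled() {
  [ "$1" = "0" ]
}

# Fall back to 0 if sysctl is unavailable in this environment.
current=$(sysctl -n net.ipv4.ip_forward 2>/dev/null || echo 0)
if forwarding_disabled "$current"; then
  echo "would run: sysctl -w net.ipv4.ip_forward=1"
else
  echo "forwarding already enabled"
fi
```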

2.2. Access between containers

Mutual access between containers requires support on two fronts.

  • The network topology must connect the containers. By default, all containers are attached to the docker0 bridge.
  • iptables, the local system's firewall, must allow the traffic through.

Access all ports

When the Docker service starts, a forwarding policy is added to the iptables FORWARD chain by default. Whether the policy is ACCEPT or DROP depends on whether --icc=true (the default) or --icc=false is configured. Of course, if --iptables=false is specified, Docker does not add iptables rules at all.

By default, different containers are allowed to communicate with each other. For security, this can be disabled by setting DOCKER_OPTS="--icc=false" in the /etc/default/docker file.
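A sketch of making that edit; a temp file stands in for /etc/default/docker so the snippet is safe to run, and the service restart that would apply the change is only echoed:

```shell
# In practice the target file is /etc/default/docker; a temp file is used here.
DEFAULTS=$(mktemp)
echo 'DOCKER_OPTS="--icc=false"' >> "$DEFAULTS"

# The Docker service must be restarted for the setting to take effect.
echo "would run: sudo service docker restart"
grep DOCKER_OPTS "$DEFAULTS"
```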

Accessing a specified port

After inter-container networking is turned off with --icc=false, a container's exposed ports can still be reached through the --link=CONTAINER_NAME:ALIAS option.

For example, when starting the Docker service, the --icc=false --iptables=true arguments turn off unrestricted mutual access while still allowing Docker to modify the system's iptables rules.

At this point, the iptables rules on the system look similar to the following:

$ sudo iptables -nL
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0

After that, start containers (docker run) with the --link=CONTAINER_NAME:ALIAS option, and Docker adds ACCEPT rules to iptables for each pair of containers, allowing mutual access to the exposed ports (as determined by the EXPOSE lines in the Dockerfile).

The iptables rules added by the --link=CONTAINER_NAME:ALIAS option:

$ sudo iptables -nL
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
DROP       all  --  0.0.0.0/0            0.0.0.0/0

Note: in --link=CONTAINER_NAME:ALIAS, CONTAINER_NAME must currently be the name assigned by Docker or the name specified with the --name parameter. Hostnames are not recognized.
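To make the effect of such rules concrete, here is a sketch that evaluates a FORWARD chain like the one shown earlier (embedded as a heredoc so no live iptables is needed) and reports the verdict for a given source/destination pair, mirroring iptables' top-down, first-match evaluation:

```shell
# Sample FORWARD chain rows, in `iptables -nL` column order:
# target, proto, opt, source, destination, extra.
RULES=$(cat <<'EOF'
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80
DROP       all  --  0.0.0.0/0            0.0.0.0/0
EOF
)

# Print the target of the first rule matching source $1 and destination $2;
# 0.0.0.0/0 acts as a wildcard, as in iptables.
verdict() {
  echo "$RULES" | awk -v s="$1" -v d="$2" '
    ($4 == s || $4 == "0.0.0.0/0") && ($5 == d || $5 == "0.0.0.0/0") { print $1; exit }'
}

verdict 172.17.0.2 172.17.0.3   # linked pair: ACCEPT
verdict 172.17.0.9 172.17.0.3   # unlinked container: falls through to DROP
```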

3. Map container ports to the host

By default, containers can initiate connections to the external network, but the external network cannot reach the containers.

3.1. Container access to the outside

For all container connections to the external network, the source address is rewritten (NAT) to the local system's IP address, using the iptables source-address masquerading operation.

View NAT rules for the host.

$ sudo iptables -t nat -nL
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16       !172.17.0.0/16

In the preceding rule, traffic whose source address is in 172.17.0.0/16 and whose destination is in another network segment (an external network) is dynamically masqueraded as coming from the system's network interface. MASQUERADE's advantage over plain SNAT is that it picks up the address from the network interface dynamically.

3.2. External access containers

To allow external access to a container, pass -P or -p to docker run. Either way, rules are added to the nat table of the local iptables. When using -P:

$ iptables -t nat -nL
Chain DOCKER (2 references)
target  prot  opt  source     destination
DNAT    tcp   --   0.0.0.0/0  0.0.0.0/0  tcp dpt:49153 to:172.17.0.2:80

With -p 80:80:

$ iptables -t nat -nL
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80

The rules here bind 0.0.0.0, which means the host accepts traffic on all interfaces. You can use -p IP:host_port:container_port or -p IP::port to specify the host address and interface allowed to reach the container, setting a stricter rule.
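As a sketch, the mapping encoded by such a DNAT rule can be pulled apart with standard text tools (the rule line below is copied from the example output above):

```shell
RULE="DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80"

# Host-side port: the number after "dpt:".
HOST_PORT=$(echo "$RULE" | sed -n 's/.*dpt:\([0-9]*\).*/\1/p')
# Container address and port: the value after "to:".
TARGET=$(echo "$RULE" | sed -n 's/.*to:\([0-9.:]*\).*/\1/p')

echo "host port $HOST_PORT forwards to container $TARGET"
```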

To permanently bind to a fixed IP address, specify DOCKER_OPTS="--ip=IP_ADDRESS" in the Docker configuration file /etc/default/docker, then restart the Docker service for it to take effect.

4. Configure the network bridge

By default, the Docker service creates a docker0 bridge (with an internal docker0 interface on it), which connects to other physical or virtual network interfaces at the kernel layer, putting all containers and the local host on the same flat network.

Docker assigns the IP address and subnet mask of the docker0 interface by default, allowing the host and containers to communicate with each other over the bridge. It also sets the MTU (the maximum transmission unit the interface may receive), usually 1500 bytes, or the default supported by the host's network route. Both values can be configured when the service starts.

--bip=CIDR: IP address and mask in CIDR format, for example 192.168.1.5/24

--mtu=BYTES: override the default Docker MTU configuration
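Since --bip expects CIDR notation, a small sketch of sanity-checking the value before putting it into DOCKER_OPTS (the `valid_cidr` helper name is made up here, and this is a shape check only, not a full per-octet range check):

```shell
# Accept strings shaped like a.b.c.d/nn.
valid_cidr() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
}

valid_cidr "192.168.1.5/24" && echo "ok: 192.168.1.5/24"
valid_cidr "192.168.1.5" || echo "rejected: missing /mask"
```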

These can also be set via DOCKER_OPTS in the configuration file, followed by a service restart. Since the Docker bridge is currently a Linux bridge, users can view bridge and port connection information with brctl show.

$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.3a1d7362b4ee       no              veth65f9
                                                        vethdda6

Note: on Debian and Ubuntu, the brctl command can be installed with sudo apt-get install bridge-utils.

Each time a new container is created, Docker picks a free IP address from the available range and assigns it to the container's eth0 interface, using the IP of the docker0 interface on the local host as the default gateway for all containers.

$ sudo docker run -i -t --rm base /bin/bash
$ ip addr show eth0
24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::306f:e0ff:fe35:5791/64 scope link
       valid_lft forever preferred_lft forever
$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
$ exit

5. Custom Bridges

In addition to the default docker0 bridge, users can specify other bridges to connect containers to.

When starting the Docker service, use -b BRIDGE or --bridge=BRIDGE to specify the bridge to use.

If the service is already running, stop the service and delete the old bridge.

$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0

Then create a bridge bridge0.

$ sudo brctl addbr bridge0
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up

Check to make sure the bridge is created and started.

$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 scope global bridge0        
valid_lft forever preferred_lft forever

Configure the Docker service to use the newly created bridge by default.

$ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
$ sudo service docker start

After the Docker service starts, create a new container and you will see that it is bridged to bridge0. You can continue using brctl show to view the bridge information, and run ip addr and ip route inside the container to view its IP address configuration and routes.

6. Tools and examples

Before introducing custom network topologies, you may be interested in some external tools and examples:

6.1. pipework

Jerome Petazzoni wrote a shell script called Pipework that helps users connect containers in complex scenarios.

6.2. playground

Brandon Rhodes created a Python library that provides complete Docker container network topology management, including routing and NAT firewalls, along with servers that speak HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP.

7. Edit the network configuration file

Since version 1.2.0, Docker supports editing the /etc/hosts, /etc/hostname, and /etc/resolv.conf files in a running container. However, these changes are temporary: they only persist in the running container, are lost when the container terminates or restarts, and are not saved by docker commit.

8. Example

Create a point-to-point connection

By default, Docker connects all containers to the virtual subnet provided by docker0. Users sometimes need two containers to communicate directly with each other rather than through the host bridge.

The solution is simple: Create a pair of peer interfaces, place them in two containers, and configure them as point-to-point links.

Start two containers first:

$ sudo docker run -i -t --rm --net=none base /bin/bash
root@1f1f4c1f931a:/#
$ sudo docker run -i -t --rm --net=none base /bin/bash
root@12e343489d2f:/#

Find the process IDs of both containers and create tracking files for their network namespaces.

$ sudo docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
2989
$ sudo docker inspect -f '{{.State.Pid}}' 12e343489d2f
3004
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004

Create a pair of peer interfaces and configure routes

$ sudo ip link add A type veth peer name B
$ sudo ip link set A netns 2989
$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
$ sudo ip netns exec 2989 ip link set A up
$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
$ sudo ip link set B netns 3004
$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
$ sudo ip netns exec 3004 ip link set B up
$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B

Now the two containers can ping each other and successfully establish a connection. Point-to-point links do not require a subnet or subnet mask.
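The sequence above can be parameterized; this sketch only prints the commands it would run (they need root and live namespaces to execute), with the interface names A/B kept from the example and the PIDs passed in as arguments:

```shell
# Emit the veth point-to-point setup commands for two namespaces;
# nothing is executed, the command list is only printed.
p2p_commands() {
  ns1=$1; ip1=$2; ns2=$3; ip2=$4
  echo "ip link add A type veth peer name B"
  echo "ip link set A netns $ns1"
  echo "ip netns exec $ns1 ip addr add $ip1/32 dev A"
  echo "ip netns exec $ns1 ip link set A up"
  echo "ip netns exec $ns1 ip route add $ip2/32 dev A"
  echo "ip link set B netns $ns2"
  echo "ip netns exec $ns2 ip addr add $ip2/32 dev B"
  echo "ip netns exec $ns2 ip link set B up"
  echo "ip netns exec $ns2 ip route add $ip1/32 dev B"
}

p2p_commands 2989 10.1.1.1 3004 10.1.1.2
```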

Alternatively, you can create a point-to-point link without specifying --net=none; the containers can then also communicate over their original network.

In a similar way, you can create a container that communicates only with the host. In general, though, it is recommended to use --icc=false to turn off inter-container communication.