Kubernetes + Flannel













































  1. Network communication within the same Pod. Containers in the same Pod share the same network namespace and the same Linux network stack, so they can reach each other's ports directly on the localhost address, just as if they were processes on the same machine. This is effectively the same environment as a traditional set of ordinary programs running on one host, so applications need no special network modifications. The result is simplicity, security, and efficiency, and it also makes it easier to port existing programs from physical or virtual machines into containers.
  2. Pod-to-Pod networking, which has two cases: Pod1 and Pod2 on different hosts, and Pod1 and Pod2 on the same host.

    • First case: Pod1 and Pod2 are not on the same host. A Pod's address is in the same network segment as docker0, but the docker0 segment and the host's network card belong to two completely different IP networks, so traffic between Nodes can only travel through the hosts' physical network cards. The solution is to associate each Pod's IP address with the IP address of the Node it runs on; this association lets Pods on different Nodes reach each other.
    • Second case: Pod1 and Pod2 are on the same host. Here the docker0 bridge forwards requests directly to Pod2 without passing through Flannel.
  3. Pod-to-Service networking. When a Service is created, a DNS name pointing to the Service is created as well, following the rule {service name}.{namespace}.svc. Service IPs used to be forwarded by kube-proxy in userspace; for performance reasons they are now maintained and forwarded by iptables rules, which kube-proxy keeps up to date. A Service supports only TCP and UDP, so ICMP does not work and the Service IP cannot be pinged. (A small name-resolution sketch follows this list.)
  4. Pod to external network. When a Pod sends a request to an external address, the routing table is consulted and the packet is forwarded to the host's network adapter. After the host finishes route selection, iptables performs MASQUERADE, rewriting the source IP to the host adapter's IP, and the request is then sent on to the external server.
  5. External access to a Pod or Service. Because Pods and Services are virtual concepts inside the Kubernetes cluster, external client systems cannot reach them through the Pod IP or through the Service's virtual IP and virtual port. To make these services reachable from outside, the Pod or Service port can be mapped onto the host, so that client applications access the containerized application through the physical machine.
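The naming rule in item 3 can be checked from inside any Pod. Below is a minimal sketch, assuming a hypothetical Service named my-service in the default namespace; inside the cluster, the cluster DNS answers for the {service name}.{namespace}.svc name with the Service's ClusterIP.

```python
import socket

# Hypothetical Service name and namespace, used only to illustrate the
# {service name}.{namespace}.svc rule described above. This lookup only
# succeeds inside the cluster, where the cluster DNS serves the name.
service_name = "my-service"
namespace = "default"
fqdn = f"{service_name}.{namespace}.svc"

cluster_ip = socket.gethostbyname(fqdn)
print(f"{fqdn} resolves to ClusterIP {cluster_ip}")

# Note: the ClusterIP only answers TCP/UDP traffic forwarded by the
# iptables rules, so pinging it (ICMP) will not work.
```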





Network customization based on Docker Libnetwork


  • Layer 2 VLAN network: the approach to inter-host communication is to reshape the original network architecture into one large, mutually reachable Layer 2 network, and to achieve point-to-point communication between containers through direct routing on specific network devices.
  • Overlay network: without changing the existing network infrastructure, Layer 2 frames are encapsulated on top of IP packets in a new data format using an agreed communication protocol (see the encapsulation sketch below).
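To make "Layer 2 frames on top of IP packets" concrete, here is a minimal sketch of a VXLAN-style header, one common overlay format. The 8-byte header layout follows the VXLAN specification, while the frame bytes, VNI, and peer address are placeholders.

```python
import socket
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prefix a raw Layer-2 Ethernet frame with an 8-byte VXLAN header."""
    flags = 0x08 << 24                          # "I" bit: the VNI field is valid
    header = struct.pack("!II", flags, vni << 8)
    return header + inner_frame

# Placeholder values: a captured Ethernet frame and a tenant network ID.
frame = b"\x00" * 64
payload = vxlan_encapsulate(frame, vni=100)

# The encapsulated frame travels as ordinary UDP; the outer IP/UDP headers
# are supplied by the host's normal network stack (VXLAN uses port 4789).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.0.2.10", 4789))      # remote VTEP address is a placeholder
```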

















  1. Bridge: Docker's default container network driver. The container is connected to the docker0 bridge through a pair of veth interfaces, and Docker dynamically allocates an IP address and configures routes and firewall rules for the container (a small sketch of creating networks with these drivers follows this list).
  2. Host: the container shares the same network namespace as the host.
  3. Null: the container's network is empty; network interfaces and routes must be configured for the container manually.
  4. Remote: the Remote driver lets Libnetwork connect to third-party network solutions through an HTTP RESTful API. An SDN solution such as SocketPlane can replace Docker's native network implementation as long as it implements the agreed HTTP URL handlers and the underlying network interface configuration.
  5. Overlay: Docker's native cross-host, multi-subnet network solution.
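As a rough illustration, the sketch below creates networks with two of these drivers through the Docker SDK for Python; the network and image names are arbitrary, and the overlay driver only works when the daemon is part of a Swarm (or, on older versions, is backed by an external key-value store).

```python
import docker

client = docker.from_env()

# Bridge driver: containers attached to this network get a veth pair plugged
# into a Linux bridge, with IP, routes and firewall rules managed by Docker.
bridge_net = client.networks.create("demo-bridge", driver="bridge")

# Overlay driver: Docker's native cross-host, multi-subnet solution.
overlay_net = client.networks.create("demo-overlay", driver="overlay")

# Attach a container to the bridge network at start time.
container = client.containers.run("nginx", detach=True, network="demo-bridge")
```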









Example: Network configuration tool Pipework


  • Supports custom Linux bridges and veth pairs for container communication (see the sketch after this list).
  • Supports connecting containers to a local network using MacVLAN devices.
  • Supports using DHCP to obtain an IP address for the container.
  • Supports Open vSwitch.
  • Supports VLAN division.
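Pipework itself is a shell script built on the ip and brctl tools. The sketch below does a small part of the same job (create a bridge and a veth pair, then join one end to the bridge) using the pyroute2 library, purely to illustrate the mechanism; it must run as root, and the interface names are placeholders.

```python
from pyroute2 import IPRoute

ipr = IPRoute()

# Create a Linux bridge to stand in for the container-facing bridge.
ipr.link("add", ifname="demo-br0", kind="bridge")
br_index = ipr.link_lookup(ifname="demo-br0")[0]

# Create a veth pair; one end stays on the host, the other would normally
# be moved into the container's network namespace.
ipr.link("add", ifname="veth-host", kind="veth", peer="veth-cont")
host_index = ipr.link_lookup(ifname="veth-host")[0]

# Plug the host end into the bridge and bring everything up.
ipr.link("set", index=host_index, master=br_index)
ipr.link("set", index=host_index, state="up")
ipr.link("set", index=br_index, state="up")
```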





OVS cross-host multi-subnet network scheme


Kubernetes integrates Calico



























  • Option 1: etcd without HTTPS. The etcd cluster is deployed in HTTP mode, and Calico connects to it directly without certificates.
  • Option 2: etcd cluster over HTTPS. Calico loads the etcd HTTPS certificates, which is a bit more troublesome (see the sketch after this list).
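Purely to illustrate the difference between the two options, the sketch below opens both kinds of connection with the python-etcd3 client; the endpoint and certificate paths are placeholders, and Calico itself normally takes these settings from its own configuration rather than from this client.

```python
import etcd3

# Option 1: plain connection, no certificates (etcd deployed without TLS).
plain_client = etcd3.client(host="10.0.0.10", port=2379)

# Option 2: TLS connection, loading the etcd certificates (paths are placeholders).
tls_client = etcd3.client(
    host="10.0.0.10",
    port=2379,
    ca_cert="/etc/etcd/ssl/ca.pem",
    cert_cert="/etc/etcd/ssl/etcd.pem",
    cert_key="/etc/etcd/ssl/etcd-key.pem",
)

# A trivial read just to confirm the connection works.
value, _meta = plain_client.get("/calico/some-key")
print(value)
```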





IV. Application container IP fixing (based on online references)


























  1. Pod IP Allocator: an etcd-based IP address allocator that allocates and reclaims Pod IP addresses. It records the allocation state in a bitmap and persists the bitmap to etcd (a toy sketch follows this list).
  2. Pod IP Recycler: an etcd-based IP address recycler that is the heart of keeping Pod IPs fixed. It records the IP addresses used by each application, keyed by namespace and RC name, so that recycled addresses can be reused preferentially in the next deployment. It can only recycle the IPs of Pods created through an RC; IPs of Pods created by other controllers or created directly are not recorded, so those IPs do not stay constant. In addition, each recycled IP object is checked against a TTL, with the retention time currently set to one day.
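A minimal in-memory sketch of the two components above, assuming a single /24 Pod subnet; the real implementation persists the bitmap and the recycle lists to etcd and handles concurrency, which this toy version leaves out.

```python
import ipaddress

class PodIPAllocator:
    """Toy bitmap allocator with an RC-keyed recycle list (stand-in for etcd)."""

    def __init__(self, cidr: str):
        self.hosts = list(ipaddress.ip_network(cidr).hosts())
        self.bitmap = [False] * len(self.hosts)     # True = address allocated
        self.recycle = {}                           # RC full name -> released IPs

    def allocate(self, rc_name: str) -> str:
        # Redeployment: prefer an IP previously released by the same RC.
        released = self.recycle.get(rc_name)
        if released:
            ip = released.pop(0)
        else:
            # First deployment / scale-out: take any free slot from the bitmap.
            free = [i for i, used in enumerate(self.bitmap) if not used]
            if not free:
                raise RuntimeError("Pod IP pool exhausted")
            ip = str(self.hosts[free[0]])
        self.bitmap[self.hosts.index(ipaddress.ip_address(ip))] = True
        return ip

    def release(self, rc_name: str, ip: str) -> None:
        # Free the bit and remember the IP under the RC's full name so the
        # next deployment of the same RC gets it back.
        self.bitmap[self.hosts.index(ipaddress.ip_address(ip))] = False
        self.recycle.setdefault(rc_name, []).append(ip)
```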








  • First deployment and scale-out: the application's IP addresses are allocated randomly from the IP pool.
  • Redeployment: the released IP addresses are stored in the recycle list, keyed by the RC's full name, and new Pods obtain their addresses preferentially from that list, which keeps the application's IPs fixed (see the usage sketch below).
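Continuing the toy allocator above, this hypothetical walk-through shows the two cases: the first deployment draws from the pool, and the redeployment picks the recycled address back up, so the application keeps the same IP.

```python
alloc = PodIPAllocator("172.16.1.0/24")       # subnet is a placeholder

ip1 = alloc.allocate("default/web-rc")        # first deployment: taken from the pool
alloc.release("default/web-rc", ip1)          # redeploy: the old Pod's IP is released

ip2 = alloc.allocate("default/web-rc")        # the new Pod prefers the recycled IP
assert ip1 == ip2                             # so the application's IP stays fixed
```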

















Q&A

Q: Today we talked about many ways to solve the container networking problem, and it seems the container IP can be reached from both inside and outside the cluster. But is there a real scenario where you access the container IP from outside the cluster? I don't think the container IP should be exposed; it cannot be accessed directly from outside because it may change, for example when an RC recreates the Pod. My question is: can container-based services be accessed from outside the cluster via ClusterIP or NodePort? If so, what is the value of the Flannel or Calico solutions introduced today? Both seem to access the container IP directly, yet if a container fails and is restarted its IP changes. What do you do in that case? Also, for container-to-container access within the cluster, do you access the peer container's IP directly, or the ClusterIP? I am confused, thank you.

A: For container-to-container access within the cluster: containers in the same Pod can reach each other on localhost plus a port. If the Pods are on the same host, they sit on the same docker0 bridge and the same address segment, so they can communicate directly. If the Pods are on different hosts, traffic must cross the host's physical network card, reach the other host by its IP, and then go through docker0 to the container in the target Pod. Flannel and Calico each have their own application scenarios. Flannel does not access the container IP directly; it has its own flannel0 interface. Calico's node network can use the data center's existing network structure directly (L2 or L3 is supported), so containers can communicate without extra encapsulation, and the container IP can be reached directly from outside. Fixed container IPs are required for certain applications, such as databases.




A: Not yet. If the Pod is abnormal after mass creation, it can be manually restored to the Running state.

Q: Do you use the Calico network in your actual projects, and have you run into any typical problems? Are there any issues with Calico IPIP mode in production environments?


A: We use Calico in data center environments. The problem is that the stable version does not yet support private networks very well.

Q: Do you have any suggestions for Kubernetes version selection?


A: It is recommended to use version 1.8 or later, which is better in terms of high availability, compatibility, and stability.

Sun Jie, architect at Beijing CNPC Ruifei Information Technology Co., Ltd., is an open source technology enthusiast. He focuses on systems, operations, cloud computing, and data center management, and has worked successively at foreign companies, Internet companies, the power industry, and large enterprises on data center construction, private cloud architecture planning and operations management, data mining, and related work. Through the construction, deployment, and operation of a number of large and medium-sized projects he has accumulated rich experience in architecture design and project implementation. He is not only an advocate of technology sharing, but also a practitioner and evangelist in the IT industry.