Network characteristics of K8S:

  1. One IP per Pod (the "IP-per-Pod" model)

  2. All Pods can reach each other directly by IP, whether or not they run on the same physical machine

  3. All containers in a Pod share one Linux network namespace (network stack), so containers in the same Pod can use localhost to reach each other

K8S requirements for cluster network:

  1. All containers can access other containers without NAT
  2. All nodes can communicate with all containers without NAT, and vice versa
  3. The IP a container sees for itself is the same IP that others see for it

Kubernetes network implementation

POD communication network model

Communication network model between nodes

IP1 and IP2 (the Pod IPs) are stored in etcd

K8S Pod communication across nodes must meet the following conditions:

  1. Pod IP allocation must not conflict anywhere in the K8S cluster
  2. There must be a mechanism that associates each Pod's IP with the IP of the Node it runs on, so that Pods can reach one another through this association

Condition 1 requires that the docker0 bridge subnets on the different Nodes do not overlap
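As a hedged sketch, one way to give each Node a non-conflicting docker0 subnet is the Docker daemon's --bip flag (the subnets below are illustrative assumptions; production clusters usually delegate this to a network plugin):

```shell
# On node 192.168.1.10 (illustrative subnet):
dockerd --bip=10.1.10.1/24
# On node 192.168.1.20:
dockerd --bip=10.1.20.1/24
# On node 192.168.1.30:
dockerd --bip=10.1.30.1/24
```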

Condition 2 requires a mechanism for determining, when Pod traffic is sent, which Node the destination Pod's IP lives on

The flat network topology that meets the conditions is as follows

The default docker0 network is 172.17.0.0/16. Each container gets an IP within this subnet and uses docker0 as its gateway

The world outside the Docker host does not need to know anything about docker0, because the host performs IP masquerading (MASQUERADE, an implicit SNAT) on its physical NIC for traffic sent by any container. In other words, any other node sees packets whose source address is the host's physical NIC IP.
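Concretely, this masquerading is the default SNAT rule that Docker installs in the nat table; a sketch of what it looks like (the subnet matches the default docker0 network above):

```shell
# As shown by `iptables -t nat -S POSTROUTING` on a stock Docker host:
# any packet from a local container that is NOT leaving via the docker0
# bridge gets its source rewritten to the host's outgoing interface IP
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```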

The downside of this model is that it requires NAT

In the K8S model, docker0 on each Node is routable: once a Pod is deployed, every host in the cluster can reach the Pod IPs on the other hosts without any port mapping on the host.

We can think of each Node as a switch; the network model then looks something like this

Between Nodes we currently use direct routing: configure static routes on each node.

For example, on 192.168.1.10

```
route add -net 10.1.20.0 netmask 255.255.255.0 gw 192.168.1.20
route add -net 10.1.30.0 netmask 255.255.255.0 gw 192.168.1.30
```
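On current distributions the same static routes are more commonly written with iproute2; an equivalent sketch (same illustrative subnets and gateways as above):

```shell
ip route add 10.1.20.0/24 via 192.168.1.20
ip route add 10.1.30.0/24 via 192.168.1.30
```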

When we start a Pod, all containers in the Pod must share the same network namespace and the same IP, so the Container mode of the container network must be used. If the containers in a Pod were simply chained together (each joining the namespace of the previous one), a failure of any container in the middle would trigger a chain reaction. A google_containers/pause container is therefore introduced into each Pod, and all other containers are linked to it; google_containers/pause is responsible for port planning and mapping.

The network model inside a Pod looks like this

The pause container takes over the network endpoint of the Pod.

Running docker inspect <pause container id> | grep NetworkMode on the pause container shows that it uses bridge mode, while docker inspect <business container id> | grep NetworkMode on a business container shows container:<long id>, i.e. the business container joins the pause container's network namespace.
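The same wiring can be reproduced with plain Docker as a hedged sketch (the container names are made up, and the pause image tag varies between Kubernetes versions):

```shell
# Start a pause-style sandbox container that owns the network namespace
docker run -d --name my-pause k8s.gcr.io/pause:3.1
# Join a business container to that namespace; the two now share one IP
# and can reach each other over localhost
docker run -d --name my-app --net=container:my-pause nginx
# Confirm the mode: prints container:<long id of my-pause>
docker inspect -f '{{.HostConfig.NetworkMode}}' my-app
```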

Service network information

When a Service (non-NodePort) is created in K8S, K8S assigns a cluster IP to it

```
roger@microk8s:~$ kubectl get service
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
default-http-backend   ClusterIP   10.152.183.69    <none>        80/TCP    11d
http-svc               ClusterIP   10.152.183.164   <none>        80/TCP    11d
```

This IP range is the one specified by --service-cluster-ip-range when the apiserver is started, and it must not overlap with the docker0 subnet. The segment is not routable on either the physical network or docker0; the purpose of the portal network is to direct container traffic to its default gateway, which is docker0
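A quick way to sanity-check that the service range and the docker0 subnet do not overlap is to compare their network prefixes numerically; a minimal pure-shell sketch (the CIDRs are taken from the examples in this article):

```shell
#!/bin/sh
# Convert a dotted quad to a 32-bit integer
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print "overlap" or "disjoint" for two CIDR blocks
cidr_overlap() {
  net1=${1%/*}; len1=${1#*/}
  net2=${2%/*}; len2=${2#*/}
  # Compare under the shorter prefix: if both base addresses fall in the
  # same network under that mask, the ranges overlap
  len=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  if [ $(( $(ip2int "$net1") & mask )) -eq $(( $(ip2int "$net2") & mask )) ]; then
    echo overlap
  else
    echo disjoint
  fi
}

cidr_overlap 10.152.183.0/24 172.17.0.0/16   # prints "disjoint"
```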

Looking at the output of iptables-save:

```
:KUBE-POSTROUTING - [0:0]
...
-A KUBE-PORTALS-CONTAINER -d 10.152.183.69/32 -p tcp -m comment --comment "default/default-http-backend:" -m tcp --dport 80 -j REDIRECT --to-ports 37853
-A KUBE-PORTALS-CONTAINER -d 10.152.183.164/32 -p tcp -m comment --comment "default/http-svc:http" -m tcp --dport 80 -j REDIRECT --to-ports 35667
-A KUBE-PORTALS-CONTAINER -d 10.152.183.1/32 -p tcp -m comment --comment "default/kubernetes:https" -m tcp --dport 443 -j REDIRECT --to-ports 40441
...
-A KUBE-PORTALS-HOST -d 10.152.183.69/32 -p tcp -m comment --comment "default/default-http-backend:" -m tcp --dport 80 -j DNAT --to-destination 192.168.10.5:37853
-A KUBE-PORTALS-HOST -d 10.152.183.164/32 -p tcp -m comment --comment "default/http-svc:http" -m tcp --dport 80 -j DNAT --to-destination 192.168.10.5:35667
-A KUBE-PORTALS-HOST -d 10.152.183.1/32 -p tcp -m comment --comment "default/kubernetes:https" -m tcp --dport 443 -j DNAT --to-destination 192.168.10.5:40441
```

You can see that traffic to the three Services is redirected to random local ports: 37853, 35667, and 40441. These ports are opened by kube-proxy: it associates each Service with a random port, listens on that port, and load-balances the incoming traffic across the Service's backends.
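Functionally, the userspace kube-proxy on each of those random ports behaves like a small TCP relay plus load balancer. A minimal single-backend sketch using socat (the port is taken from the example above; the backend Pod IP is an assumption):

```shell
# Accept connections on the Service's random local port and forward each
# one to a backend Pod (kube-proxy additionally rotates across backends)
socat TCP-LISTEN:37853,fork,reuseaddr TCP:10.1.20.5:80
```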

Note that node3's kube-proxy does not participate in this interaction; node1's kube-proxy acts as the load balancer
