Like everything else in Kubernetes, a service is just a resource, a record in a central database, that describes how to configure some piece of software to do something. In fact, services affect the configuration and behavior of several components in the cluster, but the most important one here is kube-proxy. Based on the name, many people probably have a general idea of what this component does, but kube-proxy has some features that make it very different from a typical reverse proxy such as HAProxy or Linkerd.

The general behavior of a proxy is to pass traffic between a client and a server over two open connections.

The client connects inbound to the service port, and the proxy connects outbound to a server. Since such a proxy runs in user space, every packet that passes through it is marshalled into user space and back into kernel space.
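To make that concrete, here is a minimal sketch of such a user-space proxy in Go. The listen and backend addresses are placeholders borrowed from the example later in this article, not anything kube-proxy actually hard-codes:

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Placeholder addresses for illustration only.
	listenAddr := "127.0.0.1:10400"
	backendAddr := "10.0.2.2:8080"

	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept() // connection #1: client -> proxy
		if err != nil {
			log.Print(err)
			continue
		}
		go func(client net.Conn) {
			defer client.Close()
			backend, err := net.Dial("tcp", backendAddr) // connection #2: proxy -> server
			if err != nil {
				log.Print(err)
				return
			}
			defer backend.Close()
			// Copy bytes in both directions; every byte crosses the
			// kernel/user-space boundary on its way through the proxy.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(client)
	}
}
```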

Kube-proxy was originally implemented as such a user-space proxy, but with a twist. A proxy needs an interface, both to listen on for client connections and to use for connecting out to a server. The only interfaces available on a node are:

  • The node's host (ethernet) interface
  • The virtual interfaces on the pod network

Why not use an address in one of these networks?

I'm not aware of all the internal reasoning, but I think doing so early in the project would have complicated the routing rules for those networks, which are designed to serve pods and nodes, both of which are ephemeral entities in the cluster.

Services clearly need their own stable, non-conflicting address space, and a system of virtual IPs is the least disruptive way to provide one. However, as we noted, there are no actual devices on this network. You can use a virtual network in routing rules, firewall filters, and so on, but you can't listen on a port or open a connection through an interface that doesn't exist.
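You can see the constraint directly with a short Go sketch: trying to bind to an address that no interface on the host owns (using the example service IP from later in this article) is rejected by the kernel:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// 10.3.241.152 stands in for a service (virtual) IP. No interface on
	// this host owns that address, so the bind fails, typically with
	// "bind: cannot assign requested address".
	ln, err := net.Listen("tcp", "10.3.241.152:80")
	if err != nil {
		log.Fatal(err)
	}
	ln.Close()
}
```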

Kubernetes solves this problem with a feature of the Linux kernel called netfilter and a user-space interface to it called iptables.

There isn't enough space in this already lengthy article (and the previous section) to detail exactly how this works. If you want to learn more, the netfilter documentation is a good place to start.

In short, netfilter is a rules-based packet processing engine. It runs in kernel space and examines each packet at various points in its life cycle. It matches packets against rules, and when it finds a rule that matches, it takes the specified action. Among the many actions it can take is redirecting the packet to another destination.

That's right: netfilter is a kernel-space proxy. The following illustrates the role kube-proxy plays when it runs as a user-space proxy on top of netfilter.

In this mode, kube-proxy opens a port on the localhost interface (10400 in the example above) to listen for requests to test-service, inserts netfilter rules that reroute packets destined for the service IP to its own port, and forwards those requests to a pod on port 8080. This is how a request to 10.3.241.152:80 magically becomes a request to 10.0.2.2:8080.
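As a rough illustration of the kind of rule involved, here is a Go sketch that shells out to iptables to redirect traffic for that service IP to kube-proxy's local port. The single flat rule in the OUTPUT chain is a simplification; the real kube-proxy manages its own dedicated chains:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative only: redirect traffic for the service VIP 10.3.241.152:80
	// to local port 10400, where the user-space proxy is listening.
	cmd := exec.Command("iptables",
		"-t", "nat", "-A", "OUTPUT",
		"-p", "tcp", "-d", "10.3.241.152", "--dport", "80",
		"-j", "REDIRECT", "--to-ports", "10400")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("iptables failed: %v: %s", err, out)
	}
}
```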

Given netfilter's capabilities, all kube-proxy has to do is open a port for the service and insert the correct netfilter rules for it, which it does in response to notifications of cluster changes from the API server.

There is one more twist to the story. As mentioned above, proxying in user space is expensive because of packet marshalling. In Kubernetes 1.2, kube-proxy gained the ability to run in iptables mode. In this mode, kube-proxy mostly stops being a proxy for connections within the cluster and instead delegates to netfilter the work of detecting packets bound for a service IP and redirecting them to pods, all of which happens in kernel space. Kube-proxy's job is then more or less limited to keeping the netfilter rules in sync.
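Conceptually, iptables mode replaces the redirect-to-local-port rule with a DNAT straight to a pod, so the packet never leaves kernel space. Again, this is a simplified sketch; the real kube-proxy generates per-service and per-endpoint chains with probabilistic load balancing rather than one flat rule:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative only: rewrite the destination of packets addressed to the
	// service VIP 10.3.241.152:80 so they go straight to pod 10.0.2.2:8080.
	// The kernel forwards them without any user-space proxy in the path.
	cmd := exec.Command("iptables",
		"-t", "nat", "-A", "PREROUTING",
		"-p", "tcp", "-d", "10.3.241.152", "--dport", "80",
		"-j", "DNAT", "--to-destination", "10.0.2.2:8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("iptables failed: %v: %s", err, out)
	}
}
```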

Conclusion

Finally, let's compare what is described above against the requirements for a reliable proxy laid out at the beginning of this article. First question: is the service proxy system durable?

  1. By default, kube-proxy runs as a systemd unit, so it is restarted if it fails.
  2. On Google Container Engine, it runs as a pod controlled by a DaemonSet.
  3. As a user-space proxy, however, kube-proxy still represents a single point of failure for connections.
  4. When running in iptables mode, the system is very durable from the perspective of local pods trying to connect, because if the node is up, netfilter is up too.

Second question: does the service proxy know about healthy server pods that can handle requests?

As mentioned above, kube-proxy watches the master API server for changes in the cluster, including changes to services and endpoints; a rough sketch of this watch-and-sync loop follows the list below.

  1. When it receives updates, it uses iptables to keep the netfilter rules in sync.
  2. When a new service is created and its endpoints are populated, kube-proxy is notified and creates the necessary rules.
  3. Likewise, it removes the rules when a service is deleted.
  4. Health checking of endpoints is performed by the kubelet, another component that runs on every node. When an unhealthy endpoint is found, the kubelet notifies kube-proxy via the API server, and kube-proxy edits the netfilter rules to remove that endpoint until it becomes healthy again.
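As a rough sketch of that watch-and-sync loop, here is how a similar watcher could be written with the client-go informer machinery. The syncRules function and the kubeconfig path are hypothetical placeholders, and this is not how kube-proxy is structured internally:

```go
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// syncRules is a hypothetical stand-in for the code that would rewrite
// the netfilter rules to match the current set of endpoints.
func syncRules(ep *corev1.Endpoints) {
	log.Printf("would resync rules for service %s/%s", ep.Namespace, ep.Name)
}

func main() {
	// Placeholder kubeconfig path; inside a cluster you would use in-cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Watch Endpoints objects and resync rules whenever they change.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Core().V1().Endpoints().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { syncRules(obj.(*corev1.Endpoints)) },
		UpdateFunc: func(_, obj interface{}) { syncRules(obj.(*corev1.Endpoints)) },
		DeleteFunc: func(obj interface{}) { /* remove rules for deleted endpoints */ },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching
}
```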
