Overview

Calico creates a route and a virtual interface for each pod on the worker node, assigns the pod its IP address, and allocates a pod CIDR network segment to each worker node.

Our production Kubernetes clusters use Calico as the CNI network plugin. Two plugin binaries are installed: calico and calico-ipam. The calico plugin is mainly responsible for creating the route and virtual interface, while the calico-ipam plugin is mainly responsible for allocating pod IPs and assigning pod CIDRs to worker nodes.

The big question is: how does Calico do this?

Sandbox container

When kubelet starts a pod, it calls SyncPod in the container runtime manager to create the pod's containers. SyncPod does several things:

  • Create the sandbox container, which includes calling the CNI plugin to set up the pod network. Boundary conditions are also handled: if creation fails, the sandbox container is killed, and so on.
  • Create ephemeral containers, init containers, and the regular containers.

The first step is where the pod network is created. After the sandbox container is created, all other containers in the pod share its network namespace, so every container in a pod sees the same network stack and the same IP address; individual containers are distinguished by port. Concretely, createPodSandbox first prepares the pod sandbox configuration data, which is also needed when creating the network namespace, and then calls the container runtime service to create the container:


func (m *kubeGenericRuntimeManager) createPodSandbox(pod *v1.Pod, attempt uint32) (string, string, error) {
	// Generate the pod sandbox configuration data
	podSandboxConfig, err := m.generatePodSandboxConfig(pod, attempt)
	// ...

	// Create the pod logs directory on the host, under /var/log/pods/{namespace}_{pod_name}_{uid}
	err = m.osInterface.MkdirAll(podSandboxConfig.LogDirectory, 0755)
	// ...

	// Call the container runtime to create the sandbox container; in our clusters this is Docker
	podSandBoxID, err := m.runtimeService.RunPodSandbox(podSandboxConfig, runtimeHandler)
	// ...

	return podSandBoxID, "", nil
}
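Once the sandbox container is running, every other container in the pod joins its network namespace. As a small illustration (not taken from the kubelet source verbatim), dockershim resolves the sandbox's network namespace path from the infra container's PID using the /proc convention; the PID below is a made-up placeholder:

```go
package main

import "fmt"

// netNSPath mirrors the /proc/<pid>/ns/net convention dockershim uses
// to locate a sandbox's network namespace. All containers that join
// the sandbox's network namespace share this same netns file, which is
// why they see the same IP address and network stack.
func netNSPath(pid int) string {
	return fmt.Sprintf("/proc/%v/ns/net", pid)
}

func main() {
	// 12345 is a hypothetical PID of a running sandbox (pause) container.
	fmt.Println(netNSPath(12345)) // /proc/12345/ns/net
}
```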


K8s uses CRI (Container Runtime Interface) to abstract a standard runtime interface. Docker does not implement CRI itself, so kubelet ships an adapter module, dockershim, under pkg/kubelet/dockershim. The runtimeService object in the code above is therefore a dockerService object, and we can follow the implementation in dockerService.RunPodSandbox() (L76-L197):


// Create a Sandbox container and a network for that container
func (ds *dockerService) RunPodSandbox(ctx context.Context, r *runtimeapi.RunPodSandboxRequest) (*runtimeapi.RunPodSandboxResponse, error) {
	config := r.GetConfig()

	// Step 1: Pull the image for the sandbox.
	// 1. Pull the mirror
	image := defaultSandboxImage
	podSandboxImage := ds.podSandboxImage
	if len(podSandboxImage) != 0 {
		image = podSandboxImage
	}

	if err := ensureSandboxImageExists(ds.client, image); err != nil {
		return nil, err
	}

	// Step 2: Create the sandbox container.
	// 2. Create sandbox Container
	createResp, err := ds.client.CreateContainer(*createConfig)
	// ...
	resp := &runtimeapi.RunPodSandboxResponse{PodSandboxId: createResp.ID}
	ds.setNetworkReady(createResp.ID, false)

	// Step 3: Create Sandbox Checkpoint.
	// 3. Create the checkpoint
	if err = ds.checkpointManager.CreateCheckpoint(createResp.ID, constructPodSandboxCheckpoint(config)); err != nil {
		return nil, err
	}

	// Step 4: Start the sandbox container.
	// Assume kubelet's garbage collector would remove the sandbox later, if
	// startContainer failed.
	// start the container
	err = ds.client.StartContainer(createResp.ID)
	// ...

	// Step 5: Setup networking for the sandbox.
	// All pod networking is setup by a CNI plugin discovered at startup time.
	// This plugin assigns the pod ip, sets up routes inside the sandbox,
	// creates interfaces etc. In theory, its jurisdiction ends with pod
	// sandbox networking, but it might insert iptables rules or open ports
	// on the host as well, to satisfy parts of the pod spec that aren't
	// recognized by the CNI standard yet.
	
	// 5. Create the network for the sandbox container: the Calico CNI plugin creates the
	// route and virtual interface, assigns the pod IP, and carves the pod network segment
	// out for the host
	cID := kubecontainer.BuildContainerID(runtimeName, createResp.ID)
	networkOptions := make(map[string]string)
	if dnsConfig := config.GetDnsConfig(); dnsConfig != nil {
		// Build DNS options.
		dnsOption, err := json.Marshal(dnsConfig)
		if err != nil {
			return nil, fmt.Errorf("failed to marshal dns config for pod %q: %v", config.Metadata.Name, err)
		}
		networkOptions["dns"] = string(dnsOption)
	}
	// This step calls the network plugin to set up the sandbox pod.
	// Since our network plugin is CNI, the code is in pkg/kubelet/dockershim/network/cni/cni.go
	err = ds.network.SetUpPod(config.GetMetadata().Namespace, config.GetMetadata().Name, cID, config.Annotations, networkOptions)
	// ...

	return resp, nil
}

Since our network plugin is CNI, chasing ds.network.SetUpPod we find that the actual call is cniNetworkPlugin.SetUpPod(), in pkg/kubelet/dockershim/network/cni/cni.go:

func (plugin *cniNetworkPlugin) SetUpPod(namespace string, name string, id kubecontainer.ContainerID, annotations, options map[string]string) error {
	// ...
	netnsPath, err := plugin.host.GetNetNS(id.ID)
	// ...
	// Windows doesn't have loNetwork. It comes only with Linux
	if plugin.loNetwork != nil {
		// Add the loopback interface
		if _, err = plugin.addToNetwork(cniTimeoutCtx, plugin.loNetwork, name, namespace, id, netnsPath, annotations, options); err != nil {
			return err
		}
	}
    // Call the network plug-in to create network related resources
	_, err = plugin.addToNetwork(cniTimeoutCtx, plugin.getDefaultNetwork(), name, namespace, id, netnsPath, annotations, options)
	return err
}

func (plugin *cniNetworkPlugin) addToNetwork(ctx context.Context, network *cniNetwork, podName string, podNamespace string, podSandboxID kubecontainer.ContainerID, podNetnsPath string, annotations, options map[string]string) (cnitypes.Result, error) {
	// This step prepares the parameters required by the network plug-in, which will eventually be used by the Calico plug-in
    rt, err := plugin.buildCNIRuntimeConf(podName, podNamespace, podSandboxID, podNetnsPath, annotations, options)
    // ...
    // The AddNetworkList function in the CNI library will be called, and the calico binary command will be called
    res, err := cniNet.AddNetworkList(ctx, netConf, rt)
    // ...
    return res, nil
}
// These parameters include the Container ID and POD
func (plugin *cniNetworkPlugin) buildCNIRuntimeConf(podName string, podNs string, podSandboxID kubecontainer.ContainerID, podNetnsPath string, annotations, options map[string]string) (*libcni.RuntimeConf, error) {
    rt := &libcni.RuntimeConf{
        ContainerID: podSandboxID.ID,
        NetNS:       podNetnsPath,
        IfName:      network.DefaultInterfaceName,
        CacheDir:    plugin.cacheDir,
        Args: [][2]string{{"IgnoreUnknown", "1"},
            {"K8S_POD_NAMESPACE", podNs},
            {"K8S_POD_NAME", podName},
            {"K8S_POD_INFRA_CONTAINER_ID", podSandboxID.ID},
        },
    }
    
    // port mappings Related parameters
    // ...
    
    // DNS parameters
    // ...
    
    return rt, nil
}

The addToNetwork() function calls the AddNetworkList function in the CNI library. CNI (Container Network Interface) is the standard interface for container networking; the CNI code repository provides the implementation of that standard interface, and every K8s network plugin must implement it. We could even implement a simple network plugin of our own that follows the specification. So the relationship between kubelet, CNI, and Calico is: kubelet calls the CNI standard library package, and the CNI library executes the Calico plugin binary. The relevant code is AddNetworkList:


func (c *CNIConfig) addNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) (types.Result, error) {
	c.ensureExec()
	pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
	// ...

	// pluginPath is the calico binary path, which is to call the calico ADD command and pass the parameters that are already in place as described above
	// Parameters are also written to environment variables, from which the calico binary can take values
	return invoke.ExecPluginWithResult(ctx, pluginPath, newConf.Bytes, c.args("ADD", rt), c.exec)
}

// AddNetworkList executes a sequence of plugins with the ADD command
func (c *CNIConfig) AddNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) (types.Result, error) {
	// ...
	for _, net := range list.Plugins {
		result, err = c.addNetwork(ctx, list.Name, list.CNIVersion, net, result, rt)
		// ...
	}
    // ...

	return result, nil
}

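The comments above note that the parameters are passed through environment variables. Per the CNI specification, invoke.ExecPluginWithResult hands the RuntimeConf to the plugin process as CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, CNI_PATH, and the Args pairs joined into CNI_ARGS as semicolon-separated key=value entries. A sketch of that Args encoding (the pod namespace and name values are placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// encodeCNIArgs joins the [][2]string Args from libcni.RuntimeConf into
// the semicolon-separated CNI_ARGS value defined by the CNI spec, which
// is how the calico binary receives K8S_POD_NAMESPACE, K8S_POD_NAME, etc.
func encodeCNIArgs(args [][2]string) string {
	pairs := make([]string, 0, len(args))
	for _, kv := range args {
		pairs = append(pairs, kv[0]+"="+kv[1])
	}
	return strings.Join(pairs, ";")
}

func main() {
	args := [][2]string{
		{"IgnoreUnknown", "1"},
		{"K8S_POD_NAMESPACE", "default"},  // placeholder values
		{"K8S_POD_NAME", "nginx-abc123"},
	}
	fmt.Println("CNI_ARGS=" + encodeCNIArgs(args))
}
```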

At this point kubelet has located the calico binary in the pluginPath directory (by default the CNI binaries live in /opt/cni/bin and the network configuration is read from /etc/cni/net.d; both directories are configurable via kubelet flags, see the kubelet command-line reference). The network configuration passed to the plugin looks like this:


{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "debug",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "minikube",
      "mtu": 1440,
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}

The CNI code provides a standard skeleton, but the core work of creating network resources for the sandbox is still delegated to a third-party network plugin. The CNI project also provides some example plugins; see the containernetworking/plugins code repository and its plugin docs, for example the static IP address management plugin on the official website.

Conclusion

When kubelet creates a sandbox container, it invokes the CNI plugin command (here calico ADD), passing its parameters through environment variables, to create the network resources for the sandbox container: Calico creates routes and virtual interfaces, assigns IP addresses to pods, and allocates a pod CIDR segment to the current worker node from the cluster CIDR. This data is written to the Calico datastore.

So the key question, again, is how the Calico plugin code itself accomplishes this.