The underlying Kubernetes network needs to solve two problems:

  1. Help K8S by assigning non-conflicting IP addresses to the Docker containers on each node.
  2. Create an overlay network between these IP addresses, through which packets are delivered intact to the target container.

Open vSwitch

Open vSwitch can set up communication tunnels between multiple hosts, for example GRE or VXLAN tunnels. In the K8S scenario, we mainly build L3-to-L3 tunnels. The network architecture is as follows.

The steps to complete are as follows:

  1. Delete the docker0 bridge created by the Docker daemon to avoid docker0 address conflicts.

  2. Manually create a Linux bridge and configure its IP address.

  3. Create an Open vSwitch bridge (ovs-bridge) and use ovs-vsctl to add a GRE port to it. When adding the GRE port, set the remote IP to the address of the destination node; this must be done once for each peer node.

  4. Add the OVS bridge as a network interface to the Docker bridge (docker0 or the manually created bridge).

  5. Bring up the OVS bridge and the Docker bridge, and add a route for the Docker network segment via the Docker bridge.

Network communication process

When an application in a container accesses the address of a container on another node, the packet is sent to the docker0 bridge through the container's default route. Since the OVS bridge is attached as a port of docker0, docker0 hands the data to the OVS bridge, and OVS sends it to the peer node through the GRE tunnel.

The configuration steps

Install OVS on both nodes

Install OVS:

yum install openvswitch

Disable Selinux and restart

# vi /etc/selinux/config
SELINUX=disabled

Check the OVS status

systemctl status openvswitch

Create a bridge and GRE tunnel

Create an OVS bridge br0 on each node, then create a GRE tunnel on the bridge:

# Create an OVS bridge
ovs-vsctl add-br br0
# Create a GRE tunnel to the peer; remote_ip is the IP address of eth0 on the peer node
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.18.128
# Attach br0 to the local docker0 bridge so that container traffic enters the tunnel through OVS
brctl addif docker0 br0
# Bring up the br0 and docker0 bridges
ip link set dev br0 up
ip link set dev docker0 up

The docker0 subnets on nodes 192.168.18.128 and 192.168.18.131 are 172.17.43.0/24 and 172.17.42.0/24 respectively. Traffic to either /24 must go through the local docker0 bridge, with the peer's /24 reached through the OVS GRE tunnel, so a route covering both segments must be configured via docker0 on each node:

ip route add 172.17.0.0/16 dev docker0

Clear Docker's iptables rules and the firewall rules that block ICMP ping:

iptables -t nat -F; iptables -F

Direct routing

Network model

By default, the docker0 addresses are not known to the node network. Manually adding routes allows pods on different nodes to communicate.

Implementation

This can be achieved by deploying multilayer switches (MLS) or, more simply, by adding static routes as shown below.

Assume the docker0 bridge where POD1 resides uses the 10.1.10.0/24 segment and NODE1's address is 192.168.1.128, while the docker0 bridge where POD2 resides uses the 10.1.20.0/24 segment and NODE2's address is 192.168.1.129.

1 On NODE1, add a static route to docker0 on NODE2:

route add -net 10.1.20.0 netmask 255.255.255.0 gw 192.168.1.129

2 On NODE2, add a static route to docker0 on NODE1:

route add -net 10.1.10.0 netmask 255.255.255.0 gw 192.168.1.128

3 Verify connectivity by pinging NODE2's docker0 network from NODE1:

ping 10.1.20.1

In a large cluster, manually create a Linux bridge on each node to avoid the IP segment conflicts that can result from the Docker daemon creating docker0, then use Docker's --bridge option to point the daemon at it.
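As a sketch, the manual bridge on NODE1 from the example above might be set up as follows (the bridge name kbr0 and the use of daemon.json are assumptions, not from the original text):

# Create a dedicated bridge carrying this node's container subnet
brctl addbr kbr0
ip addr add 10.1.10.1/24 dev kbr0
ip link set dev kbr0 up
# Point the Docker daemon at the new bridge, e.g. in /etc/docker/daemon.json:
#   { "bridge": "kbr0" }
# then restart the Docker daemon.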

Then run the Quagga routing software on each node, so that routes are learned dynamically instead of being maintained by hand.
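A minimal sketch of a Quagga OSPF configuration for NODE1, using the subnets from the example above (the file path and OSPF area 0 are assumptions):

# /etc/quagga/ospfd.conf on NODE1
router ospf
 # node network, used to form OSPF adjacencies
 network 192.168.1.0/24 area 0
 # advertise the local docker0 subnet to the other nodes
 network 10.1.10.0/24 area 0

Each node advertises its own docker0 subnet, so the static routes above no longer need to be maintained by hand.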

Calico

How Calico works

Calico can create and manage a layer 3 network with a fully routable IP address assigned to each workload. Workloads can communicate without IP encapsulation or network address translation to achieve bare-metal performance, simplify troubleshooting and provide better interoperability. In environments where overlay networks are required, Calico provides IP-in-IP tunneling, or can be used in conjunction with other overlay networks such as Flannel.

Calico also provides dynamic configuration of network security rules. Using Calico's simple policy language, you can achieve fine-grained control over communication between containers, virtual machine workloads, and bare-metal host endpoints.
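As an illustration of the policy language, the following sketch allows only frontend workloads to reach a database on TCP port 6379 (the names, labels, namespace, and port are all illustrative):

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: production
spec:
  selector: role == 'database'   # applies to workloads labeled role=database
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'frontend'
      destination:
        ports:
          - 6379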

Calico V3.4 was released on December 10, 2018, and integrates well with Kubernetes, OpenShift, and OpenStack.

Note: when using Calico with Mesos, DC/OS, or Docker orchestrators, only Calico v2.6 is currently supported.

Calico IPIP and BGP mode

  • IPIP mode creates an IP-in-IP tunnel between nodes, connecting the two networks. When IPIP mode is enabled, Calico creates a virtual network interface named tunl0 on each node, as shown in the figure below. (A sample IPPool configuration follows this list.)
  • In BGP mode, the physical machine itself acts as a vRouter and no additional tunnel is created.
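A minimal sketch of an IPPool resource with IPIP mode enabled (the CIDR matches the default pool in the manifest later in this article; apply it with calicoctl apply -f):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: Always      # Never = pure BGP mode; CrossSubnet = IPIP only between subnets
  natOutgoing: true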

Calico implements a vRouter in the Linux kernel to handle data forwarding. Each node's routing information is broadcast across the entire Calico network via the BGP protocol, and forwarding rules toward the other nodes are set up automatically.

In small clusters, Calico's BGP mode can use direct full-mesh peering. In large clusters, additional BGP Route Reflectors should be deployed.
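As a sketch, a global peering from every node to a Route Reflector can be declared with calicoctl and a BGPPeer resource (the reflector address and AS number here are assumptions):

apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rr-peer-1
spec:
  peerIP: 192.168.18.10   # address of the route reflector (example)
  asNumber: 64512         # AS number used by the Calico network (example)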

Calico main components

Calico leverages the routing and iptables firewall features native to the Linux kernel. All traffic to and from containers, virtual machines, and physical hosts traverses these kernel rules before being routed to its target.

  • Felix: the main Calico agent; runs on each machine that hosts endpoint resources.
  • calicoctl: allows advanced policies and networks to be configured from a command-line interface.
  • Orchestrator plugins: provide tight integration and synchronization with a variety of popular cloud orchestration tools.
  • Key/value store: holds Calico's policy configuration and network state; either etcdv3 or the K8S API can be used.
  • calico/node: runs on each host, reads the relevant policy and network configuration from the key/value store, and implements it in the Linux kernel.
  • Dikastes/Envoy: optional Kubernetes sidecars that secure workload-to-workload communication with mutual TLS authentication and enforce application-layer policies.

Felix

Felix is a daemon that runs on every machine that provides endpoint resources; in most cases that means the nodes that host containers or VMs. Felix is responsible for compiling routing and ACL rules, and for whatever else the host needs, to provide the network connectivity required by the endpoints on that host.

Depending on the specific orchestration environment, Felix is responsible for the following tasks:

  • Managing network interfaces. Felix programs information about interfaces into the kernel so the kernel can correctly handle the traffic emitted by each endpoint. In particular, it ensures that the host responds correctly to ARP requests from each workload and enables IP forwarding on the interfaces it manages. It also watches interfaces appear and disappear to make sure this programming is applied at the right times.
  • Writing routes. Felix writes routes to the endpoints on its host into the Linux kernel FIB (forwarding information base). This ensures that packets destined for endpoints on the host are forwarded correctly.
  • Writing ACLs. Felix also programs ACLs into the Linux kernel. These ACLs ensure that only valid traffic can be sent between endpoints and that endpoints cannot bypass Calico's security measures.
  • Reporting state. Felix provides data about the health of the network; in particular, it reports errors and problems encountered while configuring its host. This data is written to etcd to make it visible to the other components and operators of the network.

Orchestrator Plugin

Each major cloud orchestration platform has its own Calico network plugin (e.g. for OpenStack or Kubernetes). The purpose of these plugins is to bind Calico more tightly to the orchestrator, allowing users to manage the Calico network just as they manage the orchestrator's built-in networking.

A good example of an orchestrator plugin is the Calico Neutron ML2 driver. This driver integrates with Neutron's ML2 plugin, allowing users to configure the Calico network through Neutron API calls and achieving seamless integration with Neutron.

The Orchestrator plug-in is responsible for the following tasks:

  • API translation. Each cloud orchestrator inevitably has its own API specification for managing networks. The orchestrator plugin's main job is to translate those APIs into Calico's data model and store the result in Calico's datastore. Some of this translation is very simple; other parts must convert a single complex operation (for example, live migration) into the series of simpler operations the Calico network expects.
  • Feedback. Where needed, the plugin provides feedback from the Calico network to the orchestrator, such as information about Felix liveness, and marks certain endpoints as failed if their network configuration fails.

etcd

etcd is a distributed key-value store focused on consistency. Calico uses etcd both for data communication between components and as a consistent datastore, ensuring Calico can always build an accurate view of the network.

Depending on the orchestrator plugin, etcd can act either as the master datastore or as a lightweight mirror of a separate datastore. For example, in an OpenStack deployment the OpenStack database is considered the source of truth for configuration, and etcd mirrors its network configuration for the other Calico components.

etcd components are spread throughout the deployment and can be divided into two groups of host nodes: the core cluster and the proxies.

For small deployments, the core cluster can be a single-node etcd cluster (typically co-located with the orchestrator plugin component). This deployment model is simple but provides no redundancy for etcd. If etcd fails, the orchestrator plugin must rebuild the database; with OpenStack, for example, the plugin must resynchronize state from the OpenStack database into etcd.

In larger deployments, the core cluster can be scaled out according to the etcd administration guide.

In addition, an etcd proxy service runs on every machine that runs Felix or an orchestrator plugin. This reduces load on the etcd core cluster and hides the details of the etcd cluster from the host nodes. If an etcd cluster member and the orchestrator plugin run on the same machine, the etcd proxy on that machine can be dropped.

Etcd is responsible for performing the following tasks:

  • Data storage. etcd stores Calico's network data in a distributed, consistent, fault-tolerant manner (for clusters of at least three etcd nodes). This ensures the Calico network is always in a known-good state while tolerating the failure or unreachability of individual etcd nodes, and the distributed storage improves read performance for Calico components.
  • Communication. etcd is also used as a communication service between components. Non-etcd components watch particular prefixes in the key-value space so they see any changes and can respond to them in a timely manner. This allows status information to be committed to the database, which in turn triggers further network configuration based on that state (see the watch example below).
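As a sketch, these changes can be observed by watching Calico's key prefix with etcdctl (the endpoint address is an example):

# Watch Calico's key space in etcd v3 and print every change
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 watch --prefix /calico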

BGP Client (BIRD)

Calico deploys a BGP client on every node that runs Felix. The BGP client reads the routing state that Felix has written into the kernel and distributes it around the data center.

The BGP client performs the following tasks:

  • Route distribution. When Felix inserts routes into the Linux kernel FIB, the BGP client picks them up and distributes them to the other nodes in the cluster (see the status command below).
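The BGP sessions a node has established can be inspected on the node itself with calicoctl:

# Show the local node's BGP peers and session state
calicoctl node status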

BGP Route Reflector (BIRD)

For larger deployments, plain BGP can become a limiting factor because it requires every BGP client to connect to every other BGP client in a full-mesh topology. The number of connections grows quickly, becomes hard to maintain, and can even overwhelm the routing tables of some devices.

Calico therefore recommends deploying BGP Route Reflectors in large deployments. Such components are commonly used on the Internet as a central point for BGP client connections, so that each client no longer needs to talk to every other client in the cluster. Multiple Route Reflectors can be deployed at the same time for redundancy. Route Reflectors only help manage the BGP network; no endpoint data passes through them.

In Calico this component is also most commonly BIRD, configured to run as a Route Reflector rather than as a standard BGP client.

BGP Route Reflector is responsible for the following tasks:

  • Centralized route distribution. When a Calico BGP client advertises routes from its FIB to the Route Reflector, the Route Reflector advertises them to the other nodes in the deployment.

What is BIRD

BIRD began as a school project at the Faculty of Mathematics and Physics of Charles University in Prague; the name is short for BIRD Internet Routing Daemon. It is currently developed and supported by CZ.NIC Labs.

The BIRD project aims to develop a fully functional dynamic IP routing daemon, primarily for (but not limited to) Linux, FreeBSD and other Unix-like systems, and distributed under the GNU General Public License. For more information, please visit https://bird.network.cz/.

As an open source network routing daemon project, BIRD supports the following features:

  • both IPv4 and IPv6 protocols
  • multiple routing tables
  • the Border Gateway Protocol (BGPv4)
  • the Routing Information Protocol (RIPv2, RIPng)
  • the Open Shortest Path First protocol (OSPFv2, OSPFv3)
  • the Babel Routing Protocol
  • the Router Advertisements for IPv6 hosts
  • a virtual protocol for exchange of routes between different routing tables on a single host
  • a command-line interface allowing on-line control and inspection of status of the daemon
  • soft reconfiguration (no need for complex online commands to change the configuration: just edit the configuration file and notify BIRD to re-read it, and it will smoothly switch to the new configuration without disturbing routing protocols unless they are affected by the changes; see the command after this list)
  • a powerful language for route filtering
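For example, soft reconfiguration is triggered through BIRD's command-line client after editing the configuration file:

# Ask the running BIRD daemon to re-read bird.conf (soft reconfiguration)
birdc configure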

Deploying Calico in K8S

  1. Modify the kube-apiserver startup parameter:

    --allow-privileged=true   # Calico requires privileged mode
  2. Modify the kubelet startup parameter: --network-plugin=cni

Assume that the K8S environment contains two nodes: node1 (192.168.18.3) and node2 (192.168.18.4).

Creating the Calico services, including calico-node and the Calico policy controller, requires the following K8S resource objects:

  • ConfigMap: calico-config, containing Calico's configuration parameters
  • Secret: calico-etcd-secrets, used for TLS connections to etcd
  • A DaemonSet that runs the calico/node container on each node
  • The Calico CNI binaries and network configuration installed on each node (done by the install-cni init container)
  • A Deployment named calico-kube-controllers, which applies network policy to the pods in the K8S cluster

The official Calico K8S installation YAML file is as follows:

calico-etcd.yaml

---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  #ETCD service address
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network".
      "cniVersion": "0.3.1".
      "plugins": [
        {
          "type": "calico".
          "log_level": "info".
          "etcd_endpoints": "__ETCD_ENDPOINTS__".
          "etcd_key_file": "__ETCD_KEY_FILE__".
          "etcd_cert_file": "__ETCD_CERT_FILE__".
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__".
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap".
          "snat": true.
          "capabilities": {"portMappings": true}
        }
      ]
    }

---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups: ["] ""
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: ["] ""
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: ["] ""
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  - apiGroups: ["] ""
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.8.0
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.8.0
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.8.0
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.8.0
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml

---
# Source: calico/templates/kdd-crds.yaml


kubectl apply -f calico-etcd.yaml

Note: before applying, adjust the parameters in the file to match your environment, in particular etcd_endpoints and CALICO_IPV4POOL_CIDR.

More Calico Settings

CoreDNS

Enabling CoreDNS requires adding two parameters to kubelet:

--cluster-dns=169.169.0.100 specifies the cluster IP address of the DNS service

--cluster-domain=cluster.local specifies the domain name for the DNS service

The CoreDNS YAML file to deploy

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }FEDERATIONS
        prometheus :9153
        forward . UPSTREAMNAMESERVER
        cache 30
        loop
        reload
        loadbalance
    }STUBDOMAINS
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.5.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf". "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

To view the configuration items used in the existing cluster, run the following command

kubectl -n kube-system get configmap coredns -o yaml
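To verify that CoreDNS resolves cluster names, a throwaway pod can be used (the busybox image tag is an assumption):

# Resolve the kubernetes service from inside a temporary pod, then clean it up
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default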
