1. The CSI Storage Mechanism

1.1 Introduction to CSI

The Container Storage Interface (CSI) mechanism establishes a standard storage management interface between Kubernetes and external storage systems; through this interface, storage services are provided to containers.

1.2 CSI Design Background

Kubernetes already provides a powerful plug-in based storage management mechanism through PV, PVC, and StorageClass, but the storage services offered by the various storage plug-ins follow an approach called "in-tree": the storage plug-in code must be placed in the main Kubernetes code repository before Kubernetes can call it, which is a tightly coupled development model. This in-tree approach causes several problems:

  • Storage plug-in code must live in the same code repository as Kubernetes itself and be distributed with the Kubernetes binaries;
  • Storage plug-in developers must follow the Kubernetes code development guidelines;
  • Storage plug-in developers must follow the Kubernetes release process, both when adding support for a new storage system and when fixing bugs;
  • The Kubernetes community has to maintain the storage plug-in code, including reviewing and testing it;
  • Bugs in storage plug-in code can affect the performance of Kubernetes components and can be difficult to troubleshoot;
  • Storage plug-in code shares the same system privileges as the Kubernetes core components (kubelet and kube-controller-manager), which raises reliability and security concerns.


Kubernetes’ existing FlexVolume plug-in mechanism attempts to solve these problems by exposing an exec-based API to external storage. Although it allows third-party storage providers to develop storage drivers outside the Kubernetes core code base, two issues remain poorly addressed:

  • The executable file of a third-party driver still requires root permission on the host, which poses a security risk;
  • Attaching or mounting a volume requires third-party tools and dependency libraries to be installed on the host, which complicates deployment; for example, Ceph requires the RBD library and GlusterFS requires its client packages on the host.


Based on the problems and considerations above, Kubernetes gradually introduced a standard interface for connecting containers to storage: a storage provider only needs to implement a storage plug-in against this standard interface, and Kubernetes' native storage mechanisms can then use it to provide storage services to containers. This standard is called the Container Storage Interface (CSI).

With CSI as the Kubernetes storage provisioning standard, the storage provider's code can be completely decoupled from the Kubernetes code base, and its deployment is separated from the Kubernetes core components. Storage plug-ins developed and maintained by the providers themselves can offer Kubernetes users more storage functionality, and do so more safely and reliably.

This CSI-based storage plug-in mechanism, known as "out-of-tree" service delivery, is the standard solution for third-party Kubernetes storage plug-ins going forward.

2. The CSI Architecture

2.1 CSI Storage Components and Deployment Architecture

The key components of a Kubernetes CSI storage plug-in, and the recommended containerized deployment architecture, are described below.

There are two main components: CSI Controller and CSI Node.

2.2 CSI Controller

The CSI Controller provides the controller side of the storage service: it manages and operates storage resources and volumes. In Kubernetes it is recommended to deploy it as a single-instance Pod, using a StatefulSet or Deployment controller with the replica count set to 1, which ensures that only one controller instance runs for a given storage plug-in.

Two kinds of containers are deployed within this Pod:

  • Auxiliary sidecar containers that communicate with the Kubernetes master (kube-controller-manager), including external-attacher and external-provisioner, which provide the following functions:
    • external-attacher: watches VolumeAttachment resource objects for changes and triggers ControllerPublishVolume and ControllerUnpublishVolume operations against the CSI endpoint;
    • external-provisioner: watches PersistentVolumeClaim resource objects for changes and triggers CreateVolume and DeleteVolume operations against the CSI endpoint.
  • The CSI Driver storage driver container, provided by the third-party storage provider, which must implement the interfaces listed above.

The containers communicate over a local socket (Unix Domain Socket, UDS) using the gRPC protocol.

The sidecar containers invoke the CSI interfaces of the CSI Driver container through this socket, and the CSI Driver container carries out the actual volume storage operations. A minimal deployment sketch is shown below.
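To make the pattern concrete, here is a minimal, illustrative StatefulSet sketch. The driver name (csi-example), image names, and ServiceAccount name are assumptions for this example, not values from any particular plug-in release:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-example-controller
spec:
  serviceName: csi-example-controller
  replicas: 1                                # one controller instance per storage plug-in
  selector:
    matchLabels:
      app: csi-example-controller
  template:
    metadata:
      labels:
        app: csi-example-controller
    spec:
      # hypothetical ServiceAccount combining the attacher and provisioner RBAC from section 3.4
      serviceAccountName: csi-controller-sa
      containers:
        - name: external-provisioner         # sidecar: watches PVCs, calls CreateVolume/DeleteVolume
          image: example.registry/csi-provisioner:v1.0.0      # hypothetical image
          args:
            - "--csi-address=/csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: external-attacher            # sidecar: watches VolumeAttachments,
                                             # calls ControllerPublishVolume/ControllerUnpublishVolume
          image: example.registry/csi-attacher:v1.0.0         # hypothetical image
          args:
            - "--csi-address=/csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-driver                   # third-party driver implementing the controller-side CSI RPCs
          image: example.registry/csi-example-driver:v1.0.0   # hypothetical image
          args:
            - "--endpoint=unix:///csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir                   # emptyDir shared by all three containers; holds the UDS
          emptyDir: {}

All containers mount the same emptyDir volume, so the gRPC traffic between the sidecars and the driver never leaves the Pod.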

2.3 CSI Node

The CSI Node component manages and operates volumes on each host. In Kubernetes it is recommended to deploy it as a DaemonSet, with one Pod running on each Node.

The following two containers are deployed within this Pod:

  • node-driver-registrar, the auxiliary sidecar container that communicates with kubelet; its main job is to register the storage driver with kubelet;
  • CSI Driver, the storage driver container provided by the third-party storage provider. Its main function is to receive calls from kubelet, and it must implement the node-related CSI interfaces, such as NodePublishVolume (mounts the volume to the target path inside the container) and NodeUnpublishVolume (unmounts the volume from the container).

The node-driver-registrar container communicates with kubelet through a Unix Domain Socket in one hostPath directory on the Node host, and the CSI Driver container communicates with kubelet through a Unix Domain Socket in another hostPath directory. In addition, kubelet's working directory (by default /var/lib/kubelet) must be mounted into the CSI Driver container so that it can manage volumes (mounting, unmounting, and so on) for Pods. A minimal DaemonSet sketch follows.
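Again an illustrative sketch only, assuming a hypothetical driver name (csi-example) and image names; the exact socket directory layout varies with the Kubernetes version:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-example-node
spec:
  selector:
    matchLabels:
      app: csi-example-node
  template:
    metadata:
      labels:
        app: csi-example-node
    spec:
      serviceAccountName: csi-driver-registrar   # the registrar ServiceAccount from section 3.4
      containers:
        - name: node-driver-registrar            # sidecar: registers the driver with kubelet
          image: example.registry/driver-registrar:v1.0.0     # hypothetical image
          args:
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/csi-example/csi.sock"
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: csi-driver                       # third-party driver implementing the node-side CSI RPCs
          image: example.registry/csi-example-driver:v1.0.0   # hypothetical image
          args:
            - "--endpoint=unix:///csi/csi.sock"
          securityContext:
            privileged: true                     # needed to perform mounts on the host
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir               # kubelet working directory, so the driver can manage Pod volumes
              mountPath: /var/lib/kubelet
              mountPropagation: Bidirectional
      volumes:
        - name: plugin-dir                       # hostPath directory holding the driver's UDS
          hostPath:
            path: /var/lib/kubelet/plugins/csi-example
            type: DirectoryOrCreate
        - name: registration-dir                 # hostPath directory kubelet watches for driver registrations
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory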

3. CSI Plug-in Usage in Practice

3.1 Experimental Description

This section uses the csi-hostpath plug-in as an example to demonstrate how to deploy a CSI plug-in and use the storage resources it provides.

3.2 Enabling Feature Gates

Add the CSIPersistentVolume feature gate to the startup parameters of kube-apiserver, kube-controller-manager, and kubelet:

[root@k8smaster01 ~]# vi /etc/kubernetes/manifests/kube-apiserver.yaml

...
  - --allow-privileged=true
  - --feature-gates=CSIPersistentVolume=true
  - --runtime-config=storage.k8s.io/v1alpha1=true
...

[root@k8smaster01 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml

...
  - --feature-gates=CSIPersistentVolume=true
...

[root@k8smaster01 ~]# vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --feature-gates=CSIPersistentVolume=true"
...

[root@k8smaster01 ~]# systemctl daemon-reload

[root@k8smaster01 ~]# systemctl restart kubelet.service
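kube-apiserver and kube-controller-manager run as static Pods, so kubelet recreates them automatically after their manifests are edited. To confirm the control plane came back up (an illustrative check):

[root@k8smaster01 ~]# kubectl get pods -n kube-system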

3.3 Creating CRD Resource Objects

Create the CSIDriver and CSINodeInfo CRD resource objects:

[root@k8smaster01 ~]# vi csidriver.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csidrivers.csi.storage.k8s.io
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  group: csi.storage.k8s.io
  names:
    kind: CSIDriver
    plural: csidrivers
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        spec:
          description: Specification of the CSI Driver.
          properties:
            attachRequired:
              description: Indicates this CSI volume driver requires an attach operation, and that Kubernetes should call attach and wait for any attach operation to complete before proceeding to mount.
              type: boolean
            podInfoOnMountVersion:
              description: Indicates this CSI volume driver requires additional pod
                information (like podName, podUID, etc.) during mount operations.
              type: string
  version: v1alpha1

[root@k8smaster01 ~]# vi csinodeinfo.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csinodeinfos.csi.storage.k8s.io
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  group: csi.storage.k8s.io
  names:
    kind: CSINodeInfo
    plural: csinodeinfos
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        spec:
          description: Specification of CSINodeInfo
          properties:
            drivers:
              description: List of CSI drivers running on the node and their specs.
              type: array
              items:
                properties:
                  name:
                    description: The CSI driver that this object refers to.
                    type: string
                  nodeID:
                    description: The node from the driver point of view.
                    type: string
                  topologyKeys:
                    description: List of keys supported by the driver.
                    items:
                      type: string
                    type: array
        status:
          description: Status of CSINodeInfo
          properties:
            drivers:
              description: List of CSI drivers running on the node and their statuses.
              type: array
              items:
                properties:
                  name:
                    description: The CSI driver that this object refers to.
                    type: string
                  available:
                    description: Whether the CSI driver is installed.
                    type: boolean
                  volumePluginMechanism:
                    description: Indicates to external components the required mechanism
                      to use for any in-tree plugins replaced by this driver.
                    pattern: in-tree|csi
                    type: string
  version: v1alpha1

[root@k8smaster01 ~]# kubectl apply -f csidriver.yaml

[root@k8smaster01 ~]# kubectl apply -f csinodeinfo.yaml
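To confirm both CRDs were registered (an illustrative check):

[root@k8smaster01 ~]# kubectl get crd csidrivers.csi.storage.k8s.io csinodeinfos.csi.storage.k8s.io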

3.4 Creating RBAC Resources

[root@k8smaster01 ~]# git clone github.com/kubernetes-…

[root@k8smaster01 ~]# cd drivers/deploy/hostpath/

[root@k8smaster01 hostpath]# vi csi-hostpath-attacher-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: external-attacher-cfg
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: csi-attacher
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: external-attacher-cfg
  apiGroup: rbac.authorization.k8s.io

[root@k8smaster01 hostpath]# vi csi-hostpath-provisioner-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: external-provisioner-cfg
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io

[root@k8smaster01 hostpath]# vi csi-hostpathplugin-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-driver-registrar
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: driver-registrar-runner
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  # The following permissions are only needed when running
  # driver-registrar without the --kubelet-registration-path
  # parameter, i.e. when using driver-registrar instead of
  # kubelet to update the csi.volume.kubernetes.io/nodeid
  # annotation. That mode of operation is going to be deprecated
  # and should not be used anymore, but is needed on older
  # Kubernetes versions.
  # - apiGroups: [""]
  #   resources: ["nodes"]
  #   verbs: ["get", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-driver-registrar-role
subjects:
  - kind: ServiceAccount
    name: csi-driver-registrar
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: driver-registrar-runner
  apiGroup: rbac.authorization.k8s.io

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpath-attacher-rbac.yaml

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpath-provisioner-rbac.yaml

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpathplugin-rbac.yaml
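To confirm the ServiceAccounts and bindings were created (an illustrative check; the names come from the manifests above):

[root@k8smaster01 hostpath]# kubectl get serviceaccount csi-attacher csi-provisioner csi-driver-registrar

[root@k8smaster01 hostpath]# kubectl get clusterrolebinding csi-attacher-role csi-provisioner-role csi-driver-registrar-role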

3.5 Deploying the Plug-in

[root@k8smaster01 ~]# cd drivers/deploy/hostpath/

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpath-attacher.yaml

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpath-provisioner.yaml

[root@k8smaster01 hostpath]# kubectl create -f csi-hostpathplugin.yaml

Note: in the YAML files above, it is recommended to switch the image registries to domestic (China-local) mirrors:

  • gcr.io -> gcr.azk8s.cn
  • quay.io -> quay.azk8s.cn
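For example, the substitution can be applied to all manifests in the directory with one sed command (a sketch; adjust the mirror hosts to your environment):

[root@k8smaster01 hostpath]# sed -i -e 's#gcr.io#gcr.azk8s.cn#g' -e 's#quay.io#quay.azk8s.cn#g' *.yaml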

4. Testing and Usage

4.1 Verification

Verify that the attacher, provisioner, and hostpath plug-in Pods are all in the Running state:

[root@k8smaster01 ~]# kubectl get pods

4.2 Creating a StorageClass

[root@k8smaster01 ~]# vi drivers/examples/hostpath/csi-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

[root@k8smaster01 ~]# kubectl create -f drivers/examples/hostpath/csi-storageclass.yaml
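Confirm the StorageClass exists (an illustrative check):

[root@k8smaster01 ~]# kubectl get storageclass csi-hostpath-sc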

4.3 Creating a PVC

[root@k8smaster01 ~]# vi drivers/examples/hostpath/csi-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc

[root@k8smaster01 ~]# kubectl create -f drivers/examples/hostpath/csi-pvc.yaml

[root@k8smaster01 ~]# kubectl get pvc

[root@k8smaster01 ~]# kubectl get pv
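If the PVC remains Pending instead of Bound, the provisioning events can be inspected (an illustrative check):

[root@k8smaster01 ~]# kubectl describe pvc csi-pvc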

4.4 Creating an Application

[root@k8smaster01 ~]# vi drivers/examples/hostpath/csi-app.yaml

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc

[root@k8smaster01 ~]# kubectl create -f drivers/examples/hostpath/csi-app.yaml

[root@k8smaster01 ~]# kubectl get pods
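Once the Pod is Running, the CSI volume can be exercised by writing and reading a file through the /data mount (an illustrative check; where the data lands on the host depends on the csi-hostpath driver version):

[root@k8smaster01 ~]# kubectl exec -it my-csi-app -- sh -c 'echo "hello csi" > /data/test.txt'

[root@k8smaster01 ~]# kubectl exec -it my-csi-app -- cat /data/test.txt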


Tip: for more CSI plug-in examples, see feisky.gitbooks.io/kubernetes/…


CSI official documentation: kubernetes-csi.github.io/docs/