This article was contributed by Gao Qing, a senior operations and maintenance engineer at Baiwu Technology.
Kubernetes supports dozens of back-end storage volume types, and some of them are easy to confuse, especially local and hostPath, which both look like node-local solutions. There is a third, similar volume type as well: emptyDir.
In the single-host Docker era, we became familiar with volumes. Generally speaking, we create a data volume and mount it at a specified path in a specified container, either to persist the container's data or to share data among multiple containers. The container solutions discussed there are, of course, standalone setups.
Moving into the container cluster era, Kubernetes has provided local-disk storage volume solutions, in chronological order: emptyDir, hostPath, and local.
emptyDir and hostPath have both been implemented and supported by Kubernetes for a long time. Local volumes were introduced as an alpha feature in K8S v1.7 and promoted to beta in K8S v1.10; some of their features are not available in earlier versions.
Before we start, let's discuss a question: since we have already built a container cloud platform, why do we still care about these local storage volumes?
A rough summary gives several reasons:
- Special scenarios need them: temporary scratch space, access to a node's /sys/fs/cgroup data to run cAdvisor, or functional tests in a local single-node K8S environment.
- The container cluster is only a small-scale deployment meant for development, testing, and integration testing.
- As a supplement to distributed storage services, for example plugging an SSD into a Node host and dedicating it to a particular container.
- Ceph and GlusterFS, the two mainstream container cluster storage solutions, are both typical distributed network storage systems. Every data read and write stresses both disk I/O and network I/O, so a storage cluster should be deployed with at least 10-Gigabit fiber network adapters and fiber switches. Without that kind of hardware, you end up with a slow-motion container cluster.
- Planning, deploying, monitoring, expanding, and operating a distributed storage cluster are professional tasks that require dedicated technical staff and long-term investment.
This is not to say that distributed storage services are bad. In practice, when building a cloud platform, companies often need to combine several general-purpose and dedicated storage solutions to cover most usage requirements.
So if one of these scenarios applies to you, read on for the features, usage, and similarities and differences of these local storage volumes.
1. emptyDir
A volume of type emptyDir is created when a Pod is assigned to a Node. Kubernetes automatically allocates a directory on that Node, so there is no need to specify a directory on the host. The directory is initially empty, and the data in the emptyDir is deleted permanently when the Pod is removed from the Node.
Note: a container crash does not cause the data in an emptyDir to be deleted.
Best practices
According to official best practice recommendations, emptyDir can be used in the following scenarios:
- Temporary space, such as a disk-based merge sort
- Checkpointing a long computation so it can be recovered after a crash
- Holding files that a content-manager container fetches while a web server container serves the data
By default, an emptyDir is backed by whatever storage medium the Node provides. If your scenario requires tmpfs as the storage backing the emptyDir, simply set the emptyDir.medium field to "Memory" when creating the volume, as sketched below.
Note: when tmpfs backs the emptyDir, the data is also lost if the Node is restarted, and any files you write count toward the Container's memory limit.
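As a minimal illustrative sketch (the pod and volume names here are made up for the example), the only change from an ordinary emptyDir is the medium field:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-tmpfs          # illustrative name
spec:
  containers:
  - image: busybox
    name: test-emptydir-tmpfs
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory            # back the emptyDir with tmpfs instead of node disk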
EmptyDir volume experiment
Let’s create an example of the use of emptyDir Volume in a test K8S environment.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: busybox
    name: test-emptydir
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    emptyDir: {}
Looking at the created pod, only the volume-related parts are shown below; irrelevant content is omitted:
# kubectl describe pod test-pod
Name:         test-pod
Namespace:    default
Node:         kube-node2/172.16.10.102
......
    Environment:  <none>
    Mounts:
      /data from data-volume (rw)
......
Volumes:
  data-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
......
You can enter the container to view the actual volume mounting result:
# kubectl exec -it test-pod -c test-emptydir /bin/sh
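Inside the container, a few quick checks can confirm the mount; these commands are only illustrative, and what you see will depend on your node:
/ # ls /data
/ # echo 'hello emptyDir' > /data/test.txt
/ # cat /data/test.txt
hello emptyDir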
2. hostPath
hostPath maps a file or directory from the Node's file system into the pod. When using a hostPath volume you can also set the type field; the supported types include DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice.
The following describes the usage scenarios and precautions of hostPath from the official website.
Usage Scenarios:
- When the running container needs to access Docker internal structure, such as using hostPath mapping /var/lib/docker to the container;
- When running cAdvisor in a container, you can use hostPath to map /dev/cgroups to the container.
Notes:
- A pod with the same configuration (for example, created from the same podTemplate) may behave differently on different nodes because the mapped file contents differ from node to node;
- Resources consumed through a hostPath are not taken into account when Kubernetes adds resource-aware scheduling;
- Directories created on the host are writable only by root, so you either need to run your process as root in a privileged Container or change the file permissions on the host (a sketch of the privileged approach follows this list).
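A minimal sketch of the privileged-container approach, assuming the cluster allows privileged pods (the pod name and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: privileged-hostpath-pod   # illustrative name
spec:
  containers:
  - image: busybox
    name: app
    command: [ "sleep", "3600" ]
    securityContext:
      privileged: true            # run privileged so the process can write to the hostPath
    volumeMounts:
    - mountPath: /host-data
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data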
HostPath volume experiment
Let’s create an example using hostPath Volume in a test K8S environment.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod2
spec:
  containers:
  - image: busybox
    name: test-hostpath
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: /test-data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
Take a look at the pod creation results and the volumes section:
# kubectl describe pod test-pod2
Name:         test-pod2
Namespace:    default
Node:         kube-node2/172.16.10.102
......
    Mounts:
      /test-data from test-volume (rw)
......
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /data
    HostPathType:  Directory
......
We log in to the container, go to the mounted /test-data directory, and create a test file.
# kubectl exec -it test-pod2 -c test-hostpath /bin/sh
/ # echo 'testtesttest' > /test-data/test.log
/ # exit
We can see the following files and contents on the Node where the pod is running.
[root@kube-node2 test-data]# cat /test-data/test.log
testtesttest
Now let’s delete the pod and see what happens to the directory and data used by hostPath on node.
[root@kube-node1 ~]# kubectl delete pod test-pod2
pod "test-pod2" deletedCopy the code
Go to the node running the original POD and view the following.
[root@kube-node2 test-data]# ls -l
total 4
-rw-r--r-- 1 root root 13 Nov 14 00:25 test.log
[root@kube-node2 test-data]# cat /test-data/test.log
testtesttest
- When using the hostPath volume, data in the volume will still exist even after the POD has been deleted.
Single-node K8S local test environment and hostPath Volume
Sometimes we need to build a single-node K8S test environment and use hostPath as the back-end storage volume to simulate a real environment, providing PV, StorageClass, and PVC management (a sample PVC is sketched after the example below).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: kube-system
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: kubernetes.io/host-path
- This scenario can only be used in a single-node K8S test environment
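Because the class above is annotated as the default, a PVC that omits storageClassName should be served dynamically by the host-path provisioner in such a single-node test environment. A sketch with an illustrative name and size:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-hostpath-claim       # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # illustrative size; a hostPath-backed PV is created to satisfy it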
3. Analysis of the similarities and differences between emptyDir and hostPath in functions
- Both are node-local storage volumes;
- emptyDir can optionally store its data on a tmpfs file system, which hostPath does not support;
- In addition to directories, hostPath supports the File, Socket, CharDevice, and BlockDevice types; it can mount existing files and directories into containers, and its ...OrCreate types can create a file or directory if it does not yet exist;
- emptyDir is temporary storage space with no persistence support at all;
- hostPath volume data is persisted on the node's file system; even if the pod is deleted, the data remains on the Node.
4. Concept of Local Volume
This is a fairly new storage type and is recommended only for K8S v1.10 and above. The local volume type is currently in beta.
A local volume gives users access to node-local storage in a simple, portable way through the standard PVC interface. The PV must contain node-affinity information, which K8S uses to schedule containers onto the correct node.
Configuration requirements
- When the local-volume plugin is used, storage device names and paths must be stable; they must not change across system restarts or when disks are added or removed.
- The static provisioner only discovers and manages mount points (for Filesystem-mode volumes) or symbolic links (for Block-mode volumes); Filesystem-mode volumes must be bind-mounted into the discovery directory.
Delayed binding with a StorageClass
When using local volumes, create a StorageClass and set the volumeBindingMode field to “WaitForFirstConsumer”.
Although local volumes do not yet support dynamic provisioning, a StorageClass should still be created so that volume binding can be delayed until the pod scheduling phase.
Delayed binding ensures that the PersistentVolumeClaim binding decision is also evaluated against any other node constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
External static provisioner
After configuring local volumes, you can use an external static provisioner to simplify local storage management. The provisioner manages the volumes under the discovery directory by creating and cleaning up a PersistentVolume for each of them.
The local storage provisioner requires the administrator to pre-configure the local volumes on each node and to indicate which of the following types each volume belongs to:
- Filesystem volumeMode (default) PVs – Mount the Filesystem to the discovery directory.
- Block volumeMode PVs – A symbolic link to the Block device on the node needs to be created in the discovery directory.
A local volume can be a disk, disk partition, or directory mounted to a node.
Local volumes support statically created PersistentVolumes, but so far they do not support dynamic PV provisioning.
This means you will still have to handle some PV management work manually, but it is worth it: at least you avoid hand-defining and wiring up PVs every time you create a pod.
Create a PV based on Local volumes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
- The nodeAffinity field must be configured; K8S relies on this information to schedule pods that use the local volume onto the correct Nodes.
- Using the volumeMode field requires the BlockVolume alpha feature to be enabled.
- The volumeMode field defaults to Filesystem, but it can also be set to Block, in which case the Node's local volume is presented to the container as a raw block device (see the sketch below).
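For reference, a sketch of a Block-mode local PV, assuming the BlockVolume feature gate is enabled; the device path and node name are illustrative:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-block-pv           # illustrative name
spec:
  capacity:
    storage: 100Gi
  volumeMode: Block                # expose the device to the container as a raw block device
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /dev/sdb                 # illustrative raw device on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node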
Data security Risks
Local volumes are still tied to the availability of a single node, so they are not suitable for all applications. If the Node becomes unhealthy, the local volume also becomes inaccessible, and pods using it cannot run. Applications that use local volumes must be able to tolerate this reduced availability, as well as potential data loss; whether loss actually occurs depends on the node's underlying disk storage and data-protection implementation.
5. Analysis of the similarities and differences between hostPath and Local Volume in functions
- Both persist container data using the node's local storage resources, and both offer a more suitable storage solution for certain special scenarios;
- The former has been around for a long time and is functionally stable, while the latter is young, so its reliability and stability still need to be proven by time and real-world cases; in particular, its support for block devices is only alpha;
- Both can be managed through the K8S PV, PVC, and StorageClass mechanisms;
- The StorageClass support implemented for local volumes is not yet complete; currently it only supports delayed volume binding;
- The hostPath StorageClass approach is a single-node local storage solution and provides no node-affinity-based pod scheduling;
- Local volumes suit small-scale, multi-node K8S development or test environments, especially when a secure, reliable, high-performance storage cluster service is not available.
6. Install and configure local Volume
The local-volume project address:
https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume
Step 1: Configure the K8S cluster to use local disks
If you want to use block devices, the alpha feature gate must be enabled (K8S v1.10+):
$ export KUBE_FEATURE_GATES="BlockVolume=true"
Note: if a K8S v1.10+ cluster has already been deployed, the block-device function can only be used after this feature gate is enabled on the major components. On K8S versions earlier than 1.10, several other required features are still in alpha and need to be enabled as well.
Depending on how you set up your K8S cluster, configuration instructions are provided for the following four cases.
Option 1: Google Compute Engine (GCE) cluster
A GCE cluster started with kube-up.sh will automatically format and mount the requested Local SSDs, so you can deploy the configurator using the pre-generated deployment specification and skip to Step 4, unless you want to customize the configurator specification or storage class.
$ NODE_LOCAL_SSDS_EXT=<n>,<scsi|nvme>,fs cluster/kube-up.sh
$ kubectl create -f provisioner/deployment/kubernetes/gce/class-local-ssds.yaml
$ kubectl create -f provisioner/deployment/kubernetes/gce/provisioner_generated_gce_ssd_volumes.yaml
Option 2: GKE (Google Kubernetes Engine) cluster
The GKE cluster will automatically format and mount the requested Local SSDs. This is explained in more detail in GKE Documentation.
Then, skip to Step 4.
Option 3: cluster in a bare-metal environment
- Local data disks on each node are partitioned and formatted according to application usage requirements.
- Define a StorageClass and mount all the storage file systems you want to use in a discovery directory. The discovery directory is specified in ConfigMap, as shown below.
- As mentioned above, use KUBE_FEATURE_GATES to configure Kubernetes API Server, Controller-Manager, Scheduler, and Kubelets on all nodes.
- If you are not using the default Kubernetes scheduler policy, the following features must be enabled:
- Pre-1.9: NoVolumeBindConflict
- 1.9+: VolumeBindingChecker
Note: the test environment we used was a 3-node K8S cluster. To simulate the local volume function, we created three tmpfs file-system mounts on each node, following the ram-disk test method given in Option 4 below.
Option 4: Use a native single-node test cluster
(1) Create the /mnt/fast-disks directory and mount several subdirectories under it. Here three RAM disks are used to simulate real storage volumes.
$ mkdir /mnt/fast-disks
$ for vol in vol1 vol2 vol3;
  do
      mkdir -p /mnt/fast-disks/$vol
      mount -t tmpfs $vol /mnt/fast-disks/$vol
  done
(2) Create a stand-alone K8S local test cluster
$ ALLOW_PRIVILEGED=true LOG_LEVEL=5 FEATURE_GATES=$KUBE_FEATURE_GATES hack/local-up-cluster.sh
Step 2: Create a StorageClass (1.9+)
To delay volume binding until pod scheduling, and to handle multiple local PVs in a single pod, a StorageClass must be created with volumeBindingMode set to WaitForFirstConsumer.
# Only create this for K8s 1.9+
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# Supported policies: Delete, Retain
reclaimPolicy: Delete

$ kubectl create -f provisioner/deployment/kubernetes/example/default_example_storageclass.yaml
- The YAML files used here can be found in the local-volume project repository.
Step 3: Create local persistent volumes
Option 1: Local volume static provisioner
Configure an external static provisioner.
(1) Generate the provisioner's ServiceAccount, Roles, DaemonSet, and ConfigMap specifications, and customize them.
This step uses helm templates to generate the required specifications. For setup instructions, see the Helm README.
To generate the Provisioner configuration specification using the default values, run:
helm template ./helm/provisioner > ./provisioner/deployment/kubernetes/provisioner_generated.yaml
- Rendering the template produces the final resource-definition file that will actually be used.
If using a custom configuration file:
helm template ./helm/provisioner --values custom-values.yaml > ./provisioner/deployment/kubernetes/provisioner_generated.yaml
(2) Deploy Provisioner
Once you are satisfied with the contents of the provisioner's YAML file, use kubectl to create the provisioner's DaemonSet and ConfigMap.
# kubectl create -f ./provisioner/deployment/kubernetes/provisioner_generated.yaml
configmap "local-provisioner-config" created
daemonset.extensions "local-volume-provisioner" created
serviceaccount "local-storage-admin" created
clusterrolebinding.rbac.authorization.k8s.io "local-storage-provisioner-pv-binding" created
clusterrole.rbac.authorization.k8s.io "local-storage-provisioner-node-clusterrole" created
clusterrolebinding.rbac.authorization.k8s.io "local-storage-provisioner-node-binding" createdCopy the code
(3) Check the local volumes that have been automatically discovered
Once started, the external static provisioner discovers the local volumes and automatically creates PVs for them.
Let’s take a look at the PVs created in the above test:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-436f0527 495Mi RWO Delete Available fast-disks 2m
local-pv-77a4ffb0 495Mi RWO Delete Available fast-disks 2m
local-pv-97f7ec5c 495Mi RWO Delete Available fast-disks 2m
local-pv-9f0ddba3 495Mi RWO Delete Available fast-disks 2m
local-pv-a0dfdc91 495Mi RWO Delete Available fast-disks 2m
local-pv-a52333e3 495Mi RWO Delete Available fast-disks 2m
local-pv-bed86926 495Mi RWO Delete Available fast-disks 2m
local-pv-d037a0d1 495Mi RWO Delete Available fast-disks 2m
local-pv-d26c3252 495Mi RWO Delete Available fast-disks 2m
- Because there are three nodes, each with three file systems mounted under the /mnt/fast-disks discovery directory, nine PVs are generated.
View details about a PV:
# kubectl describe pv local-pv-436f0527
Name: local-pv-436f0527
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=local-volume-provisioner-kube-node2-c3733876-b56f-11e8-990b-080027395360
Finalizers: [kubernetes.io/pv-protection]
StorageClass: fast-disks
Status: Available
Claim:
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 495Mi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [kube-node2]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /mnt/fast-disks/vol2
Events: <none>
- At this point, a PVC that references the storageClassName fast-disks can claim and bind one of these PVs directly.
Option 2: Manually create a Local Persistent Volume
See the PersistentVolume usage example described in the previous section that introduces Local Volume concepts.
Step 4: Create local Persistent Volume claim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: fast-disks
Replace the storage request and the storageClassName value with your actual requirements.
# kubectl create -f local-pvc.yaml
persistentvolumeclaim "example-local-claim" created
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
example-local-claim Pending
# kubectl describe pvc example-local-claim
Name: example-local-claim
Namespace: default
StorageClass: fast-disks
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 6s (x6 over 59s) persistentvolume-controller waiting for first consumer to be created before binding
- Here we can see delayed volume binding in action: the PVC stays in the Pending state until a pod that consumes it is scheduled.
Step 5: Create a test Pod and reference the PVC created above
apiVersion: v1
kind: Pod
metadata:
  name: local-pvc-pod
spec:
  containers:
  - image: busybox
    name: test-local-pvc
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: example-local-claim
Create and view:
# kubectl create -f example-local-pvc-pod.yaml
pod "local-pvc-pod" created
# kubectl get pods -o wide
NAME            READY     STATUS    RESTARTS   AGE       IP            NODE
client1         1/1       Running   67         64d       172.30.80.2   kube-node3
local-pvc-pod   1/1       Running   0          2m        172.30.48.6   kube-node1
View the pod's configuration details for the mounted PVC; only part of the output is shown here:
# kubectl describe pod local-pvc-pod
Name:         local-pvc-pod
Namespace:    default
Node:         kube-node1/172.16.10.101
Start Time:   Thu, 15 Nov 2018 16:39:30 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           172.30.48.6
Containers:
  test-local-pvc:
......
    Mounts:
      /data from data-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qkhcf (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  data-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  example-local-claim
    ReadOnly:   false
......
[root@kube-node1 ~]# kubectl exec -it local-pvc-pod -c test-local-pvc /bin/sh
/ # ls
bin    data   dev    etc    home   proc   root   sys    tmp    usr    var
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  41.0G      8.1G     32.8G  20% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                   495.8M         0    495.8M   0% /sys/fs/cgroup
vol3                    495.8M         0    495.8M   0% /data
Checking the PVC status again, it has changed to Bound:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
example-local-claim Bound local-pv-a0dfdc91 495Mi RWO fast-disks 1h
7. A discussion on the functional limitations of Local Volume
In the experiment above, you may have noticed a problem: when we defined the PVC we requested 50Mi of space, but the storage actually mounted into the test container was 495.8M, exactly the full size of one file system mounted on the node.
Why is that? It is a limitation of the external static provisioner we used for local persistent volumes: it does not support dynamic management of PV capacity.
In other words, while the static provisioner saves us the pain of writing PV YAML files by hand, we still have to do the following manually:
- Manually maintain file system resources mounted to the auto-discovery directory specified in ConfigMap, or symbolic links to block devices.
- Plan the available local storage resources globally in advance, divide them into volumes of various sizes, and mount them under the auto-discovery directory; of course, as long as free storage resources remain, more volumes can be mounted at any time.
What if the storage space allocated to a container runs out?
One tip is to use Linux LVM (Logical Volume Manager) to manage the local disk space on each node, as sketched after this list:
- Create one large VG (volume group) holding all the storage space available on the node;
- Based on the expected future storage usage of containers, pre-create a batch of logical volumes (LVs) and mount them under the auto-discovery directory;
- Do not use up all the space in the VG; reserve a small amount for expanding the storage of individual containers later;
- Use lvextend to expand the storage volume used by a specific container when needed.
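A rough shell sketch of this idea, assuming /dev/sdb is the spare local disk, vg-localpv is the volume group name, and /mnt/fast-disks is the discovery directory (all names are illustrative):
# create one VG holding all local storage on this node
pvcreate /dev/sdb
vgcreate vg-localpv /dev/sdb

# pre-create a batch of logical volumes and mount them under the discovery directory
for i in 1 2 3; do
  lvcreate -L 10G -n lv-vol$i vg-localpv
  mkfs.xfs /dev/vg-localpv/lv-vol$i
  mkdir -p /mnt/fast-disks/lv-vol$i
  mount /dev/vg-localpv/lv-vol$i /mnt/fast-disks/lv-vol$i
done

# later, grow a volume that is running short of space
# (this is why some free space should be left in the VG)
lvextend -L +5G /dev/vg-localpv/lv-vol1
xfs_growfs /mnt/fast-disks/lv-vol1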
8. How to configure things when a container needs a raw block device
There are a few things that differ from the above configuration approach.
The first step is to enable the block-device feature gate on all major K8S components:
KUBE_FEATURE_GATES="BlockVolume=true"
Next, define a PVC whose volumeMode is "Block" so that a Block-mode PV is claimed for the container; a pod consuming such a claim is sketched after the PVC example below.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-block-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  volumeMode: Block
  storageClassName: fast-disks
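A pod that consumes such a claim references it under volumeDevices (with a devicePath) rather than volumeMounts; a minimal sketch with illustrative names:
apiVersion: v1
kind: Pod
metadata:
  name: block-pvc-pod              # illustrative name
spec:
  containers:
  - image: busybox
    name: test-block-pvc
    command: [ "sleep", "3600" ]
    volumeDevices:                 # raw block devices are attached via volumeDevices, not volumeMounts
    - devicePath: /dev/xvda        # illustrative device path seen inside the container
      name: block-volume
  volumes:
  - name: block-volume
    persistentVolumeClaim:
      claimName: example-block-local-claim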
Local volumes best practices
- For better I/O isolation, it is recommended to dedicate a whole disk to each storage volume.
- To achieve storage space isolation, you are advised to use an independent disk partition for each storage volume.
- Avoid recreating a node with the same node name when an old PV with an affinity relationship for a node still exists. Otherwise, the system might think that the new node contains the old PV.
- For storage volumes with a file system, it is recommended to use the UUID both in the fstab entry and in the directory name of the volume's mount point (for example, as shown by ls -l /dev/disk/by-uuid). This ensures the wrong local volume is never mounted, even if its device path changes (for example, if /dev/sda1 becomes /dev/sdb1 when a new disk is added). It also ensures that if another node with the same name is created, the volumes on that node remain unique and will not be mistaken for volumes on the other node of the same name.
- For raw block volumes without a file system, use the device's unique ID as the name of the symbolic link. Depending on your environment, the volume ID in /dev/disk/by-id/ may contain a unique hardware serial number; otherwise, generate your own unique ID. Again, the uniqueness of the symbolic link name ensures that volumes on a newly created node of the same name will not be mistaken for volumes on the other node (see the sketch below).
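A small shell sketch of these two recommendations, assuming /dev/sdb1 holds a file system and /mnt/fast-disks is the discovery directory (paths and IDs are illustrative):
# file-system volume: mount by UUID, and name the mount point after the UUID
UUID=$(blkid -s UUID -o value /dev/sdb1)
mkdir -p /mnt/fast-disks/$UUID
echo "UUID=$UUID /mnt/fast-disks/$UUID xfs defaults 0 2" >> /etc/fstab
mount /mnt/fast-disks/$UUID

# raw block volume: create a symbolic link named after the device's unique ID
DISK_ID=/dev/disk/by-id/wwn-0x5000c500a0b1c2d3   # illustrative ID; use your own disk's ID
ln -s $DISK_ID /mnt/fast-disks/$(basename $DISK_ID)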
Decommissioning a local volume
A possible workflow for taking a local volume out of service (sketched below):
- Stop the pods that use the volume;
- Remove the local volume from the node (for example, unmount it or physically remove the disk);
- Manually delete the corresponding PVC object;
- The provisioner will try to clean up the volume, but will fail because the volume no longer exists;
- So manually delete the corresponding PV object.
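In kubectl terms, the manual steps might look roughly like this, reusing the names from the earlier experiment (your pod, PVC, PV, and mount-point names will differ):
# stop the pod that uses the volume
kubectl delete pod local-pvc-pod
# on the node: unmount the volume from the discovery directory
umount /mnt/fast-disks/vol3
# delete the claim, then the released PV that the provisioner cannot clean up
kubectl delete pvc example-local-claim
kubectl delete pv local-pv-a0dfdc91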
Note: this manual workflow is, again, a consequence of the external static provisioner we used.
References:
https://blog.csdn.net/watermelonbig/article/details/84108424
https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume
https://kubernetes.io/docs/concepts/storage/volumes/