K8s volume experiments. Note: this article is the author's experimental record.

The environment

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
edge-node    Ready    <none>   15m   v1.17.0
edge-node2   Ready    <none>   16m   v1.17.0
ubuntu       Ready    master   67d   v1.17.0

volume

Technical summary

Multiple containers in a pod can share data in real time via emptyDir, for example for transient data exchange between containers. For sharing across nodes, NFS can be used. The busybox image uses the UTC time zone; mount the host's /etc/localtime to get the correct local time. Some programs depend on many libraries; the host's lib directory can be mounted directly.

Create a PV and a PVC with independent metadata names. The pod references the PVC (note: the pod uses the PVC, not the PV directly), and the PVC is automatically matched to a PV.

A single PVC is not well suited to multiple replicas, because all replicas read and write the same files. How this should be used in practice remains to be explored.

The directory specified by a PV should be created in advance, with correct permissions. With NFS, if the directory has not been created, pod creation fails.
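For example, on the NFS server (a hedged sketch; the paths match the PVs defined later, and chmod 777 is just one way to make them writable):

sudo mkdir -p /nfs1 /nfs2
sudo chmod 777 /nfs1 /nfs2    # make sure pods can write to them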

Common types

emptyDir

The YAML file below defines a pod with two containers and a temporary mount directory (no host directory needs to be specified). The two containers can see each other's writes; once the pod is gone, the mount directory no longer exists.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: latelee/lidch
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /test111
      name: empty-volume
    ports:
    - name: http
      containerPort: 80
      hostIP: 0.0.0.0
      hostPort: 80
      protocol: TCP
    - name: https
      containerPort: 443
      hostIP: 0.0.0.0
      hostPort: 443
      protocol: TCP
  - name: busybox
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: empty-volume
  volumes:
  - name: empty-volume
    emptyDir: {}

Create:

kubectl apply -f nginx-pod.yaml

Validation:

kubectl exec -it nginx-pod -c busybox sh
echo "from busybox" > /test222/foo
exit
kubectl exec -it nginx-pod -c nginx sh
cat /test111/foo    # output: from busybox
exit

That is, the two containers have their own mount directories with different names but shared contents.

hostPath

Similar to the above, but the mounted directory maps to a directory on the host (i.e. the node the pod runs on). It is shared between the containers, and the files are retained after the containers are deleted. However, if the pod is scheduled to another node next time, the data is not there; this method is tied to the node host.

busybox-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
  - name: busybox2
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate

The test procedure is the same as above; each of the two containers writes its own file.

kubectl apply -f busybox-pod.yaml 

kubectl exec -it busybox-pod -c busybox1 sh

kubectl exec -it busybox-pod -c busybox2 sh

kubectl delete -f busybox-pod.yaml 

After the POD is deleted, log in to the node machine and check the directory /data.
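A quick check (a hedged sketch; the file name is illustrative):

kubectl exec busybox-pod -c busybox1 -- sh -c 'echo from-busybox1 > /test111/foo'
kubectl delete -f busybox-pod.yaml
# then, on the node that ran the pod:
cat /data/foo     # from-busybox1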

Example of mounting multiple directories, busybox-pod1.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod1
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
    - mountPath: /etc/localtime
      name: time-zone
  volumes:
  - name: host-volume
    hostPath: 
      path: /data
  - name: time-zone
    hostPath: 
      path: /etc/localtime

Note 1: a program that depends on the host's libraries can mount the host's lib directory directly; a program that operates hardware can mount the /dev directory directly (a sketch follows below). Note 2: this example maps the time file; the result can be compared with the previous pod: kubectl exec -it busybox-pod -c busybox1 date
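What such mounts might look like, as a hedged sketch (not from the original notes; the pod name is hypothetical and the paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-devlib        # hypothetical name, for illustration only
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /lib         # host libraries for dynamically linked programs
      name: host-lib
      readOnly: true
    - mountPath: /dev         # host device nodes for hardware access
      name: host-dev
  volumes:
  - name: host-lib
    hostPath:
      path: /lib
  - name: host-dev
    hostPath:
      path: /dev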

NFS

Important: designate a host (the master in this example) to provide the NFS service and install the NFS server on it; install the NFS client on every node in the cluster. With this method the mounted directory is fixed and does not change with scheduling.

Install and configure NFS.

sudo apt-get install nfs-kernel-server -y

vim /etc/exports
# add the line:
/nfs *(rw,no_root_squash,no_all_squash,sync)

sudo /etc/init.d/nfs-kernel-server restart

# test the mount locally
mount -t nfs -o nolock 192.168.0.102:/nfs /mnt/nfs

Every node must be able to mount NFS, so install the NFS client on each node:

sudo apt-get install nfs-common -y

Otherwise:

wrong fs type, bad option, bad superblock on 192.168.0.102:/nfs,
missing codepage or helper program, or other error

busybox-nfs.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: nfs-volume
  - name: busybox2
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test222
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.0.102
      path: /nfs

Testing:

# kubectl exec -it busybox-pod -c busybox1 sh
/ # echo "bbb" > /test111/bbb
/ # exit
# cat /nfs/bbb 
bbb


persistence

PV: abstracts storage (such as host disks and cloud disks) into K8s storage units for easy consumption. (Question: can a namespace have only one PV? In fact, a PV is a cluster-scoped resource and does not belong to a namespace.) A PV created this way is static and persists until deleted; a PVC is needed to claim and consume it.

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    storage: nfs
spec:
  capacity:
    storage: 200Mi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain   # options: Delete / Recycle / Retain
  nfs:
    server: 192.168.0.102
    path: /nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 100Mi   # e.g. 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  #accessModes:
  #  - ReadWriteMany
  nfs:
    server: 192.168.0.102
    path: /nfs2

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

Note: metadata.name should be chosen carefully, as it is referenced later. (Question: how does a PVC match a PV?)
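Besides capacity and access modes, a PVC can also select a PV explicitly by label. A hedged sketch (the claim name is hypothetical), using the storage: nfs label that nfs-pv1 carries above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-by-label     # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      storage: nfs           # only PVs carrying this label are candidates
  resources:
    requests:
      storage: 20Mi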

Create and delete:

kubectl apply -f pv.yaml
kubectl delete -f pv.yaml

kubectl apply -f pvc.yaml
kubectl delete -f pvc.yaml


View the created PV:

kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Available                                   17s
nfs-pv2   100Mi      RWX            Retain           Available                                   3m

And the created PVCs (kubectl get pvc):

NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc1   Bound    nfs-pv2   100Mi      RWX                           3s
nfs-pvc2   Bound    nfs-pv1   200Mi      RWO,ROX,RWX                   3s

Note: binding appears to be based on what the claim requests; each PVC was bound to a PV whose capacity and access modes satisfy its request.

busybox-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc
  labels:
    app: busybox
spec:
  containers:
  - name: busybox1
    image: latelee/busybox
    imagePullPolicy: IfNotPresent
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /test111
      name: host-volume
  volumes:
  - name: host-volume
    persistentVolumeClaim:
      claimName: nfs-pvc2

kubectl apply -f busybox-pvc.yaml 
kubectl exec -it busybox-pvc -c busybox1 df
kubectl delete -f busybox-pvc.yaml 

Redis instance redis-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: redis-pvc
  labels:
    app: redis-pvc
spec:
  containers:
  - name: redis-pod
    image: redis:alpine
    imagePullPolicy: IfNotPresent
    # command: not needed
    volumeMounts:
    - mountPath: /data
      name: pvc-volume
  volumes:
  - name: pvc-volume
    persistentVolumeClaim:
      claimName: nfs-pvc2   # the claimed PVC must exist
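Create the pod and open redis-cli inside it (a hedged sketch; the exec step is not recorded in the original notes):

kubectl apply -f redis-pvc.yaml
kubectl exec -it redis-pvc -c redis-pod sh
/data # redis-cli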

Write data:

127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set who "Latelee"
OK
127.0.0.1:6379> set email "[email protected]"
OK
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> exit
/data # exit

Nginx service mount:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # tells deployment to run 3 pods matching the template
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: latelee/lidch
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          subPath: html1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: nfs-pvc2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  ports:
  - port: 88        # service port
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Experiment record: create the deployment, get the corresponding service address and access it; the response is 403. This is expected, because nfs-pvc2 points at /nfs1 and /nfs1/html1 has no index.html. After writing a file there, access works normally. (It may take a moment.)
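A hedged way to verify (nfs-pvc2 is bound to nfs-pv1, i.e. /nfs1 on the NFS server, and the deployment mounts it with subPath html1; the service address placeholder must be filled in):

kubectl get svc nginx                             # note the service port (88)
# on the NFS server:
mkdir -p /nfs1/html1
echo "hello from nfs" > /nfs1/html1/index.html
curl http://<service-address>:88/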

Questions and experiments

1. If no PV exists but a PVC is created, or the PVC requests more capacity than any PV provides, the PVC stays Pending with:

no persistent volumes available for this claim and no storage class is set
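For example, a claim like this would stay Pending, since the largest PV defined above is 200Mi (a hedged sketch; the name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-too-big   # hypothetical
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi      # larger than any PV above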

Create two PVs and then two PVCs, and check that the PVs are bound successfully. Then delete one PV: because it is still in use, it hangs in the Terminating state. Once the bound PVC is deleted as well, the PV is removed successfully.
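Roughly the sequence observed (hedged, using the names from pv.yaml/pvc.yaml above):

kubectl delete pv nfs-pv1      # nfs-pv1 is bound to nfs-pvc2 here
kubectl get pv                 # nfs-pv1 shows STATUS Terminating
kubectl delete pvc nfs-pvc2    # after this, nfs-pv1 is actually removed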

Create a pod and write more data than the capacity specified by the PV and PVC. It seems to work (the capacity of an NFS-backed PV is apparently not enforced as a quota).

dd if=/dev/zero of=null.bin count=3000 bs=102400

2. Delete the PVC first, then the PV; otherwise the PV cannot be deleted.
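With the files used above, the working order is:

kubectl delete -f pvc.yaml    # release the claims first
kubectl delete -f pv.yaml     # then the PVs delete cleanly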

3. The NFS mount fails with:

mount.nfs: access denied by server while mounting 192.168.0.102:/nfs3

Possible causes: 1. the directory does not exist; 2. the directory is not exported.
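A hedged fix on the NFS server, assuming the same exports syntax as above:

sudo mkdir -p /nfs3
# add to /etc/exports:
# /nfs3 *(rw,no_root_squash,no_all_squash,sync)
sudo exportfs -ra
showmount -e localhost    # verify that /nfs3 is now exported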