In this section, K8S uses remote NFS storage to provide a dynamic storage service for the Pods it manages. Pod creators do not need to care about how or where data is stored; they only need to declare how much space they need (a minimal example follows the process list below).

The overall process is as follows:

  1. Create an NFS server.
  2. Create a ServiceAccount, which controls the permissions the NFS provisioner needs to run in the K8S cluster.
  3. Create a StorageClass, which responds to PVC requests by calling the NFS provisioner to do the actual work, and associates the resulting PV with the PVC.
  4. Create the NFS provisioner, which creates mount points (volumes) in the NFS shared directory and associates each PV with an NFS mount point.
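
To make the workflow concrete, here is a minimal sketch of the consumer side, assuming the managed-nfs-storage class defined later in this section (the claim name test-claim is hypothetical): the pod creator declares only a size and an access mode, and the provisioner handles everything else.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim   # hypothetical name, for illustration only
spec:
  storageClassName: managed-nfs-storage   # must match the StorageClass created below
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # the only capacity decision the pod creator makes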

Configuring the NFS Server

Server IP: 192.168.10.17

[root@work03 ~]# yum install nfs-utils rpcbind -y
[root@work03 ~]# systemctl start nfs
[root@work03 ~]# systemctl start rpcbind
[root@work03 ~]# systemctl enable nfs
[root@work03 ~]# systemctl enable rpcbind
[root@work03 ~]# mkdir -p /data/nfs/
[root@work03 ~]# chmod 777 /data/nfs/
[root@work03 ~]# cat /etc/exports
/data/nfs/ 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@work03 ~]# exportfs -arv
exporting 192.168.10.0/24:/data/nfs
[root@work03 ~]# showmount -e localhost
Export list for localhost:
/data/nfs 192.168.10.0/24

sync: writes data to the memory buffer and the disk synchronously; less efficient, but guarantees data consistency.
async: keeps data in the memory buffer and writes it to disk only when necessary.

Install nfs-utils and rpcbind on all work nodes

yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind
systemctl enable nfs
systemctl enable rpcbind
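Optionally, before deploying the provisioner, you can verify from any work node that the export is reachable and writable (a quick manual check, not part of the original steps; /mnt is an arbitrary test mount point):

showmount -e 192.168.10.17
mount -t nfs 192.168.10.17:/data/nfs /mnt
touch /mnt/test && rm -f /mnt/test   # confirm write access
umount /mnt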

Create a dynamic volume provisioner

Create RBAC authorization

# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
# kubectl apply -f rbac.yaml
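For reference, an abridged sketch of what the upstream rbac.yaml defines (consult the downloaded file for the authoritative version): the provisioner's ServiceAccount must be able to manage PVs, watch PVCs and StorageClasses, and record events.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]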

Create a StorageClass

# cat class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
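If you want PVCs that omit a storage class to use this one automatically, a standard Kubernetes annotation can mark it as the cluster default (an optional step, not part of the original walkthrough):

# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'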

Create the nfs-client-provisioner auto-configuration program to automatically create persistent volumes (PVs):

  • Automatically created PVs are backed by directories on the NFS share, named in the format ${namespace}-${pvcName}-${pvName}.

  • After a PV is reclaimed, its directory is kept on the NFS server, renamed to archived-${namespace}-${pvcName}-${pvName}. (This archiving takes effect when archiveOnDelete is "true"; with "false", as set in class.yaml above, the backing directory is deleted along with the PV.)

# cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.10.17
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.17
            path: /data/nfs
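The class and deployment are presumably applied the same way as rbac.yaml; once they are, you can confirm the provisioner Pod is running before creating any workload (it also appears in the kubectl get pods output further below):

# kubectl apply -f class.yaml
# kubectl apply -f deployment.yaml
# kubectl get pods -l app=nfs-client-provisioner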

Create a stateful application

# cat statefulset-nfs.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: managed-nfs-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

[root@master01 ~]# kubectl apply -f statefulset-nfs.yaml

Check the Pod/PV/PVC

[root@master01 ~]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5f5fff65ff-2pmxh   1/1     Running   0          26m
nfs-web-0                                 1/1     Running   0          2m33s
nfs-web-1                                 1/1     Running   0          2m27s
nfs-web-2                                 1/1     Running   0          2m21s

[root@master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-nfs-web-0   Bound    pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            managed-nfs-storage   2m35s
www-nfs-web-1   Bound    pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            managed-nfs-storage   2m29s
www-nfs-web-2   Bound    pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            managed-nfs-storage   2m23s

[root@master01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS          REASON   AGE
pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   managed-nfs-storage            2m25s
pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   managed-nfs-storage            2m31s
pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   managed-nfs-storage            2m36s

Query the shared directory on the NFS server; the automatically created subdirectories are still empty:

[root@work03 ~]# ls -l /data/nfs/
total 12
default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849
default-www-nfs-web-1-pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9
default-www-nfs-web-2-pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0

Destructive testing

Write content to each pod

[root@master01 ~]# for i in 0 1 2; do kubectl exec nfs-web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done

The subdirectories on the remote NFS server are no longer empty:

[root@work03 ~]# ls /data/nfs/default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849/
index.html
[root@work03 ~]#

View the file contents in each container; each one holds its own Pod's hostname:

[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done                      
nfs-web-0
nfs-web-1
nfs-web-2

Delete the corresponding Pods

[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          7m7s
nfs-web-1   1/1     Running   0          7m3s
nfs-web-2   1/1     Running   0          7m

[root@master01 ~]# kubectl delete pod -l app=nfs-web   
pod "nfs-web-0" deleted
pod "nfs-web-1" deleted
pod "nfs-web-2" deleted

You can see that the Pods are automatically created again:

[root@master01 ~]# kubectl get pod -l app=nfs-web   
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          15s
nfs-web-1   1/1     Running   0          11s
nfs-web-2   1/1     Running   0          8s

Looking again at the contents of each pod, you can see that the file contents have not changed

[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2

Conclusion

As you can see, the StatefulSet controller keeps the topology between Pods stable through fixed identities and an ordered creation sequence, while nfs-client-provisioner automatically creates a remote storage volume bound to each Pod by a fixed naming relationship, ensuring that data is not lost after a Pod is rebuilt.