For a K8S cluster running on a cloud provider (such as GKE), storage volume support is relatively complete. For a self-built K8S cluster, this is often more troublesome. After some investigation, using NFS volumes turns out to be a simple and feasible solution for a self-built cluster. The NFS server can live outside the K8S cluster, which makes centralized management of the cluster's volumes and files easier.

This article covers:

  • Install and configure the NFS server
  • Use an NFS client to connect to an NFS shared folder
  • Manually create an NFS volume in a K8S cluster

The experimental environment in this article is Ubuntu/Debian. For CentOS and other systems, only the NFS installation and configuration steps differ slightly.

Install and configure the NFS server

Reference tutorial: vitux.com/install-nfs…

sudo apt-get update
sudo apt install nfs-kernel-server
sudo mkdir -p /mnt/sharedfolder
sudo chown nobody:nogroup /mnt/sharedfolder
sudo chmod 777 /mnt/sharedfolder
sudo nano /etc/exports

Step 1: Install nfs-kernel-server

sudo apt-get update
sudo apt install nfs-kernel-server

Step 2: Create an export directory

An export directory is the directory shared with NFS clients. It can be any directory on Linux; here we use a newly created one.

sudo mkdir -p /mnt/sharedfolder

Step 3: Grant clients access to the server in the NFS exports file

Edit the /etc/exports file:

sudo vi /etc/exports

Add a configuration line to the file for each type of access you want to grant:

  • Format for granting access to a single client:
/mnt/sharedfolder clientIP(rw,sync,no_subtree_check)
  • Format for granting access to multiple clients:
/mnt/sharedfolder client1IP(rw,sync,no_subtree_check)
/mnt/sharedfolder client2IP(rw,sync,no_subtree_check)
  • Format for granting access to an entire subnet of clients:
/mnt/sharedfolder subnetIP/24(rw,sync,no_subtree_check)
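All three formats share one layout: export path, client spec, then options in parentheses, with no space before the opening parenthesis. As a sketch, a small shell helper can generate such lines (make_export_line is a hypothetical illustration, not part of NFS):

```shell
# Hypothetical helper: prints one /etc/exports entry.
# Layout: <export-dir> <client>(<options>) -- note: no space before "(".
make_export_line() {
  local dir="$1" client="$2" opts="${3:-rw,sync,no_subtree_check}"
  printf '%s %s(%s)\n' "$dir" "$client" "$opts"
}

make_export_line /mnt/sharedfolder 192.168.0.101    # single client
make_export_line /mnt/sharedfolder 192.168.0.0/24   # whole subnet
```

A stray space before the parenthesis changes the meaning of the entry (the options then apply to the world), which is a common source of surprises when editing /etc/exports by hand.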

Example:

This sample configuration grants the client 192.168.0.101 read and write access:

/mnt/sharedfolder 192.168.0.101(rw,sync,no_subtree_check)

Step 4: Export the shared directory

Run the following command to export the shared directory:

sudo exportfs -a

Restart the nfs-kernel-server service so that all configuration changes take effect:

sudo systemctl restart nfs-kernel-server

Use an NFS client to connect to an NFS shared folder

You can test the connection to the NFS server from Windows 10 Explorer, or from a Linux client.

Here we use another Ubuntu machine on the LAN to mount the NFS shared directory for testing:

Step 1: Install nfs-common

The nfs-common package contains the software required by NFS clients:

sudo apt-get update
sudo apt-get install nfs-common

Step 2: Create a mount point for the NFS shared directory

sudo mkdir -p /mnt/sharedfolder_client

Step 3: Mount the shared directory to the client

Mount command format:

sudo mount serverIP:/exportFolder_server /mnt/mountfolder_client

Based on the previous configuration, the mount command is as follows:

sudo mount 192.168.100.5:/mnt/sharedfolder /mnt/sharedfolder_client

Set the NFS server IP address according to your actual environment.
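To make the client mount persist across reboots, you can optionally add an entry to /etc/fstab (this is a sketch using the same server IP and paths as above; adjust them for your environment):

```
192.168.100.5:/mnt/sharedfolder  /mnt/sharedfolder_client  nfs  defaults  0  0
```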

Step 4: Test the connection

You can copy files to the shared directory and see them on other machines.

Manually create an NFS volume in a K8S cluster

Reference tutorial: medium.com/myte/kuber…

Create an NFS-based PV

nfs.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.7 # IP address of the NFS server
    path: "/srv/nfs/mydata" # path to the shared directory

Deploy nfs.yaml:

$ kubectl apply -f nfs.yaml
$ kubectl get pv,pvc
persistentvolume/nfs-pv   100Mi      RWX            Retain           Available

Create a PVC

Create the PersistentVolumeClaim file nfs_pvc.yaml and deploy it. Note that the accessModes and storageClassName must match those of the PV created earlier:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany # must be the same as PersistentVolume
  resources:
    requests:
      storage: 50Mi

Deploy it:

$ kubectl apply -f nfs_pvc.yaml
$ kubectl get pvc,pv
persistentvolumeclaim/nfs-pvc   Bound    nfs-pv   100Mi      RWX

Create a pod

Create a simple nginx Deployment that uses this PVC, nfs_pod.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: nfs-test
        persistentVolumeClaim:
          claimName: nfs-pvc # same name of pvc that was created
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: nfs-test # must match the volume name above
          mountPath: /usr/share/nginx/html # mount path inside the container

Deploy nginx:

$ kubectl apply -f nfs_pod.yaml 
$ kubectl get po
nfs-nginx-6cb55d48f7-q2bvd   1/1     Running

Common problem: pod creation fails due to a missing NFS client

An error occurs when creating a pod that uses an NFS volume in K8S.

Cause: the package required to mount NFS shares is not installed on the node.
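To confirm the diagnosis on the affected node, you can check whether the NFS mount helper that the kubelet's mount call relies on is present (a quick sketch; on Debian/Ubuntu the helper is normally installed by the nfs-common package):

```shell
# Check for the mount.nfs helper; "missing" means nfs-common must be installed.
if command -v mount.nfs >/dev/null 2>&1 || [ -x /sbin/mount.nfs ]; then
  echo "mount.nfs present"
else
  echo "mount.nfs missing"
fi
```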

root@k8s0:~# kubectl describe pod/nfs-nginx-766d4bf45f-n7dlt
Name:         nfs-nginx-766d4bf45f-n7dlt
Namespace:    default
Priority:     0
Node:         k8s2/172.16.2.102
Start Time:   Fri, 10 Jul 2020 18:04:58 +0800
Labels:       app=nginx
              pod-template-hash=766d4bf45f
Annotations:  cni.projectcalico.org/podIP: 192.168.109.86/32
Status:       Running
IP:           192.168.109.86
IPs:
  IP:           192.168.109.86
Controlled By:  ReplicaSet/nfs-nginx-766d4bf45f
Containers:
  nginx:
    Container ID:   docker://88299398d40ead29e991e57c6bad5d0e6d0396c21c2e69b0d2afb4ab7cce6044
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 10 Jul 2020 18:17:00 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from nfs-test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mhtqt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-test:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc
    ReadOnly:   false
  default-token-mhtqt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mhtqt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age        From               Message
  ----     ------       ----       ----               -------
  Normal   Scheduled    <unknown>  default-scheduler  Successfully assigned default/nfs-nginx-766d4bf45f-n7dlt to k8s2
  Warning  FailedMount  21m        kubelet, k8s2      MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv --scope -- mount -t nfs 172.16.100.105:/mnt/sharedfolder /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv
Output: Running scope as unit run-r3892d691a70441eb975bc53bb7aeca72.scope.
mount: wrong fs type, bad option, bad superblock on 172.16.100.105:/mnt/sharedfolder, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  21m  kubelet, k8s2  MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv --scope -- mount -t nfs 172.16.100.105:/mnt/sharedfolder /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv
Output: Running scope as unit run-r8774f015f759436d843d408eb6c941ec.scope.

Solutions:

On Ubuntu/Debian, run the following on each K8S node to install NFS client support:

sudo apt-get install nfs-common

After a short while, the pod is running normally.

Test that K8S can use the NFS volume properly

Create a test page named index.html in the nginx pod:

$ kubectl exec -it nfs-nginx-6cb55d48f7-q2bvd -- bash
# inside the container, fill index.html with test content
vi /usr/share/nginx/html/index.html
this should hopefully work

Verify that the same file is now on the NFS server and that nginx can read it:

$ ls /srv/nfs/mydata/
$ cat /srv/nfs/mydata/index.html
this should hopefully work
Expose the nginx pod as a service via NodePort to make it accessible from a browser:

$ kubectl expose deploy nfs-nginx --port 80 --type NodePort
$ kubectl get svc
nfs-nginx   NodePort   10.102.226.40   <none>   80:32669/TCP

Open a browser and go to <node IP>:<port>.

In this example, that is 192.168.99.157:32669.

Delete all deployments to verify that the test files still exist in our directory:

$ kubectl delete deploy nfs-nginx
$ kubectl delete pvc nfs-pvc
$ kubectl delete svc nfs-nginx
$ ls /srv/nfs/mydata/
index.html