The content is from the official Longhorn 1.1.2 English technical manual.

This article is part of a series:

  • What is Longhorn?
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – Design Architecture and Concepts
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – Deployment
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – Volume and Node
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – K8s Resource Configuration Example
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – Monitoring (Prometheus + AlertManager + Grafana)
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – Backup and Recovery
  • Longhorn, Enterprise Cloud Native Container Distributed Storage – ReadWriteMany (RWX) Workloads (Experimental Feature)

Longhorn exposes regular Longhorn volumes via an NFSv4 server (share-manager), which natively supports RWX workloads.

For each RWX volume in active use, Longhorn creates a share-manager-<volume-name> Pod in the longhorn-system namespace.

This Pod is responsible for exporting the Longhorn volume through the NFSv4 server running inside the Pod.

A service is also created for each RWX volume, which serves as the endpoint for the actual NFSv4 client connections.

Requirements

In order to use RWX volumes, an NFSv4 client needs to be installed on each client node.

For Ubuntu, you can install the NFSv4 client by:

apt install nfs-common

For RPM-based distributions, you can install the NFSv4 client by:

yum install nfs-utils

If the NFSv4 client is unavailable on the node, the following message will be part of an error when attempting to mount the volume:

for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.\n

RWX volume creation and use

For dynamically provisioned Longhorn volumes, the access mode is based on the PVC access mode.

For manually created Longhorn volumes (e.g. restore or DR volumes), the access mode can be specified during creation in the Longhorn UI.

When creating a PV/PVC for the Longhorn volume through the UI, the PV/PVC access mode will be based on the volume access mode.

As long as the volume is not bound to a PVC, you can change the access mode of the Longhorn volume through the UI.

If a Longhorn volume is used by an RWX PVC, its access mode will be changed to RWX.
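As a minimal sketch, a dynamically provisioned RWX volume can be requested with a PVC like the one below. The PVC name, namespace, and storage size are illustrative, and `longhorn` is assumed to be the name of the installed Longhorn StorageClass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany        # requests an RWX Longhorn volume
  storageClassName: longhorn  # assumes the default Longhorn StorageClass name
  resources:
    requests:
      storage: 1Gi
```

Pods on different nodes can then mount this PVC simultaneously; Longhorn creates the corresponding share-manager Pod when the volume is first attached.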

Fault handling

Any failure of the share-manager Pod (volume failure, node failure, etc.) will cause the Pod to be recreated and the volume’s remountRequestedAt flag to be set. This in turn causes the workload Pods to be deleted so that Kubernetes can recreate them. This behavior depends on the setting for automatically deleting workload Pods when a volume is unexpectedly detached, which is enabled by default. If this setting is disabled, workload Pods may encounter IO errors when an RWX volume fails.

It is recommended to keep this setting enabled to ensure automatic workload failover when a problem occurs with an RWX volume.

Migrating from the previous external provisioner

The manifest below creates a Kubernetes Job that copies data from one volume to another.

  • Replace data-source-pvc with the name of the NFSv4 RWX PVC previously created by Kubernetes.
  • Replace data-target-pvc with the name of the new RWX PVC that you want to use for the new workload.

You can manually create a new RWX Longhorn Volume + PVC/PV, or just create an RWX PVC and have Longhorn dynamically configure a volume for you.

Both PVCs need to exist in the same namespace. If the namespace you are using is different from the default namespace, change the namespace in the Job below.

apiVersion: batch/v1
kind: Job
metadata:
  namespace: default  # namespace where the PVC's exist
  name: volume-migration
spec:
  completions: 1
  parallelism: 1
  backoffLimit: 3
  template:
    metadata:
      name: volume-migration
      labels:
        name: volume-migration
    spec:
      restartPolicy: Never
      containers:
        - name: volume-migration
          image: ubuntu:xenial
          tty: true
          command: [ "/bin/sh" ]
          args: [ "-c", "cp -r -v /mnt/old /mnt/new" ]
          volumeMounts:
            - name: old-vol
              mountPath: /mnt/old
            - name: new-vol
              mountPath: /mnt/new
      volumes:
        - name: old-vol
          persistentVolumeClaim:
            claimName: data-source-pvc # change to data source PVC
        - name: new-vol
          persistentVolumeClaim:
            claimName: data-target-pvc # change to data target PVC

History

  • Available as of v1.0.1: external provisioner
    • Github.com/Longhorn/Lo…
  • Available as of v1.1.0: native RWX support
    • Github.com/Longhorn/Lo…