The previous article linked the PV and PVC pattern by first creating a PV and then defining a PVC that binds to it one-to-one. In a large cluster, do you really want to create them one by one? That quickly becomes expensive to maintain and a lot of work. Kubernetes provides a mechanism for creating PVs automatically, called StorageClass, which acts as a template for PVs.
StorageClass defines two parts:
- PV attributes, such as storage size and type;
- The storage plug-in needed to create the PV, such as Ceph.
With these two pieces of information, Kubernetes can find the corresponding StorageClass based on the PVC submitted by the user, and then call the storage plug-in declared by that StorageClass to create the PV automatically.
To use NFS, however, we need an nfs-client provisioner plug-in. With this plug-in in place, the NFS service creates PVs for us automatically.
On the NFS server, each automatically created PV is backed by a directory named ${namespace}-${pvcName}-${pvName}; when the claim is deleted with archiving enabled, the directory is renamed to archived-${namespace}-${pvcName}-${pvName}. See the project on GitHub for the full details. PV, PVC, and NFS themselves are not described again here; if anything is unclear, see "Kubernetes using PV and PVC to manage data storage".
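For illustration, with the claim created later in this article (a PVC named tomcat in the default namespace, bound to an automatically created PV), the backing directories on the NFS share would look like this; the archived- variant only appears if archiveOnDelete is enabled:

default-tomcat-pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec            # while the PVC exists
archived-default-tomcat-pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   # after the PVC is deleted, if archiveOnDelete is "true"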
Create ServiceAccount
The ServiceAccount is created to grant the nfs-client provisioner the RBAC permissions it needs.
# Download rbac.yaml
wget https://github.com/kubernetes-retired/external-storage/blob/201f40d78a9d3fd57d8a441cfc326988d88f35ec/nfs-client/deploy/rbac.yaml
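For reference, rbac.yaml creates a ServiceAccount plus the cluster and namespace roles that let the provisioner manage volumes. An abridged sketch is shown below; the downloaded file is authoritative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
# ...followed by the ClusterRoleBinding and the leader-locking Role/RoleBinding listed in the apply output below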
Deploy rbac.yaml:
kubectl apply -f rbac.yaml
#The output is as follows
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
Create the NFS client
Use a Deployment to create the nfs-client provisioner.
# Download deployment.yaml
wget https://github.com/kubernetes-retired/external-storage/blob/201f40d78a9d3fd57d8a441cfc326988d88f35ec/nfs-client/deploy/deployment.yaml
Modify the YAML as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs   # must be the same as the provisioner name in class.yaml, otherwise provisioning will not succeed
            - name: NFS_SERVER
              value: 10.0.10.51       # IP address of the NFS server, or a resolvable host name
            - name: NFS_PATH
              value: /home/bsh/nfs    # full path down to the shared folder itself, otherwise the provisioner lacks permission to create directories and PVCs stay Pending
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.10.51        # IP address of the NFS server, or a resolvable host name
            path: /home/bsh/nfs       # shared directory on the NFS server (again, the full path down to the shared folder itself)
⚠️ Note: the value fuseim.pri/ifs (PROVISIONER_NAME) must be the same as the provisioner name in class.yaml, otherwise provisioning will not succeed.
Create and check
#Deploy the NFS client
kubectl apply -f deployment.yaml
#The output is as follows
deployment.apps/nfs-client-provisioner created
Check the pod
kubectl get pod
#The output is as follows
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-fd74f99b4-wr58j   1/1     Running   1          30s
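If the Pod does not reach Running, or keeps restarting, the provisioner's own logs are the first place to look (a quick sanity check; the exact messages vary):

# Should show the provisioner starting up and watching for claims
kubectl logs deployment/nfs-client-provisioner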
Create StorageClass
Create class.yaml as follows:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the Deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"
⚠️ Note that provisioner must be identical to the PROVISIONER_NAME value in the Deployment YAML above.
Create storageclass
#create
kubectl apply -f class.yaml
#The output is as follows
storageclass.storage.k8s.io/managed-nfs-storage created
Check the status
kubectl get storageclass
#The output is as follows
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  53s
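As a quick cross-check of the warning above, both of the following should print the same value, fuseim.pri/ifs (the commands assume the resource names used in this article):

kubectl get storageclass managed-nfs-storage -o jsonpath='{.provisioner}{"\n"}'
kubectl get deployment nfs-client-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}{"\n"}'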
Create a PVC
Create tomcat-storageclass-pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
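The volume.beta.kubernetes.io/storage-class annotation used above is the legacy way to request a class; on current clusters the same claim can be written with spec.storageClassName instead. An equivalent sketch (not required for this walkthrough):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tomcat
spec:
  storageClassName: managed-nfs-storage   # replaces the legacy annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi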
Apply the YAML:
kubectl apply -f tomcat-storageclass-pvc.yaml
#The output is as follows
persistentvolumeclaim/tomcat created
Check the status
kubectl get pvc
#The output is as follows
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
tomcat   Bound    pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            managed-nfs-storage   48s
Use the PVC in a Pod
Continuing with the earlier Tomcat experiment, we mount Tomcat's logs directory onto the NFS share.
⚠️ Note: if a Pod that uses the PVC cannot be created successfully and the events show persistentvolume-controller waiting for a volume to be created, refer to the troubleshooting for that error.
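If that happens, describing the claim is usually the fastest way to see what the provisioner is complaining about (names as used in this article):

# The Events section at the bottom shows why provisioning is stuck
kubectl describe pvc tomcat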
The specific YAML is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  minReadySeconds: 1
  progressDeadlineSeconds: 60
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: wenlongxue/tomcat:tomcat-demo-62-123xw2
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "2Gi"
              cpu: "80m"
            limits:
              memory: "2Gi"
              cpu: "80m"
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 180
            periodSeconds: 5
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 30
          volumeMounts:
            - mountPath: "/usr/local/tomcat/logs"
              name: tomcat
      # PVC part
      volumes:
        - name: tomcat
          persistentVolumeClaim:
            claimName: tomcat
---
# Service section
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat
spec:
  selector:
    app: tomcat
  ports:
    - name: tomcat-port
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
---
# Ingress section
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - tomcat.cnsre.cn
      secretName: tls-secret
  rules:
    - host: tomcat.cnsre.cn
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: tomcat-service
                port:
                  number: 8080
Deploy the Pod and Service
kubectl apply -f tomcatc.yaml
#The output is as follows
deployment.apps/tomcat-deployment created
Check the status
kubectl get pod
#The output is as follows
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-fd74f99b4-wr58j   1/1     Running   0          76m
tomcat-deployment-7588b5c8fd-cnwvt       1/1     Running   0          59m
tomcat-deployment-7588b5c8fd-kl8fj       1/1     Running   0          59m
tomcat-deployment-7588b5c8fd-ksbg9       1/1     Running   0          59m
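The Service and Ingress created from the same file can be checked the same way (names taken from the YAML above):

kubectl get svc tomcat-service
kubectl get ingress tomcat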
Check the PV and PVC
[root@master tomccat]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS          REASON   AGE
persistentvolume/pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            Delete           Bound    default/tomcat   managed-nfs-storage            65m

NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/tomcat   Bound    pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec   500Mi      RWX            managed-nfs-storage   65m
View information about the NFS Server directory
[root@node1 ~]# ll /home/bsh/nfs/default-tomcat-pvc-d35c82e3-29f3-4f6d-b25d-3ccdd365d1ec/
total 220
-rw-r-----. 1 root root  22217 Sep  3 14:49 catalina.2021-09-03.log
-rw-r-----. 1 root root      0 Sep  3 14:41 host-manager.2021-09-03.log
-rw-r-----. 1 root root   2791 Sep  3 14:49 localhost.2021-09-03.log
-rw-r-----. 1 root root 118428 Sep  3 15:31 localhost_access_log.2021-09-03.txt
-rw-r-----. 1 root root      0 Sep  3 14:41 manager.2021-09-03.log
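One caveat when cleaning up: class.yaml above sets archiveOnDelete: "false", so deleting the PVC also removes this directory and the logs in it. To keep an archived-${namespace}-${pvcName}-${pvName} copy instead (the naming described at the beginning), set the parameter to "true" in class.yaml:

parameters:
  archiveOnDelete: "true"   # keep the data, renamed with an archived- prefix, instead of deleting it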