K8s StatefulSet experiment. Note: this article is the author's experimental record.

The environment

# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
edge-node    Ready    <none>   15m   v1.17.0
edge-node2   Ready    <none>   16m   v1.17.0
ubuntu       Ready    master   67d   v1.17.0

StatefulSet

Technical summary

Create a StatefulSet to verify PVC storage. Export three NFS directories on the master node, then verify reads and writes.
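A minimal sketch of that NFS setup, as an assumption for completeness (the exact export options were not recorded). The directories /nfs1 to /nfs3 are served from the master at 192.168.0.102:

# hypothetical setup on the master; nfs-kernel-server assumed installed
mkdir -p /nfs1 /nfs2 /nfs3
cat >> /etc/exports << 'EOF'
/nfs1 *(rw,sync,no_root_squash)
/nfs2 *(rw,sync,no_root_squash)
/nfs3 *(rw,sync,no_root_squash)
EOF
exportfs -ra    # re-read /etc/exports and re-export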

The experiment

A simple example

1. pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    storage: nfs
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  capacity:
    storage: 200Mi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain   # possible values: Delete, Recycle, Retain
  nfs:
    server: 192.168.0.102
    path: /nfs1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi # 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi # 5Gi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs3

2. nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web   # name of the StatefulSet
spec:
  serviceName: "nginx"
  replicas: 3   # defaults to 1
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: latelee/lidch
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      #storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 10Mi

Note: 3 PVs are created because the StatefulSet has 3 replicas; in principle there should be at least as many PVs as replicas.

Create and view

Create:

kubectl apply -f pv.yaml
kubectl apply -f nginx-service.yaml
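A StatefulSet creates its pods strictly in order (web-0, then web-1, then web-2). This can be observed while applying; a standard check, with the transition lines below illustrative rather than captured from this run:

kubectl get pod -w -l app=nginx
# web-0   0/1   ContainerCreating ...
# web-0   1/1   Running           ...
# web-1   0/1   ContainerCreating ...   (web-1 starts only after web-0 is Running)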

To view:

kubectl describe statefulset web
kubectl describe pod web-0

View PV and PVC:

# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-2                           3m47s
nfs-pv2   100Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-0                           3m47s
nfs-pv3   100Mi      RWO,ROX,RWX    Retain           Bound    default/www-web-1                           3m47s

# kubectl get pvc
NAME        STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    nfs-pv2   100Mi      RWO,ROX,RWX                   3m51s
www-web-1   Bound    nfs-pv3   100Mi      RWO,ROX,RWX                   3m44s
www-web-2   Bound    nfs-pv1   200Mi      RWO,ROX,RWX                   3m41s

Testing storage

In one terminal, watch the pods; in another, delete them all. The StatefulSet recreates them with the same names:

kubectl get pod -w -l app=nginx
kubectl delete pod -l app=nginx

To view the host names:

# for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1

Create a pod named dns-test in k8s and enter its container; because of --rm, the pod is deleted when the shell exits.

kubectl run -it --image latelee/busybox dns-test --restart=Never --rm /bin/sh

Run nslookup against the pod DNS names (web-N.nginx). Commands and outputs:

/ # nslookup web-0.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-0.nginx'
/ # nslookup web-1.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-1.nginx'
/ # nslookup web-2.nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'web-2.nginx'

Note: why web-0.nginx (and the others) cannot be resolved is unknown.
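One guess, not verified in this record: some newer busybox builds ship an nslookup that mishandles cluster DNS search domains. Pinning an older image, or using the fully qualified name, may behave differently:

# hypothetical retry with a pinned busybox image
kubectl run -it --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
/ # nslookup web-0.nginx.default.svc.cluster.local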

Write each pod's host name into its index.html (note: the directory is NFS-mounted, so each file ends up in the corresponding NFS directory on the server):

for i in 0 1 2; do kubectl exec web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done

Result: if the pod name is web-0, its host name is also web-0, and so on. Pod distribution:

NAME    READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          6m8s   10.244.4.133   edge-node2   <none>           <none>
web-1   1/1     Running   0          6m7s   10.244.1.125   edge-node    <none>           <none>
web-2   1/1     Running   0          6m5s   10.244.4.134   edge-node2   <none>           <none>

The contents of the NFS directories on the master host:

# cat /nfs1/index.html 
web-2
# cat /nfs2/index.html 
web-0
# cat /nfs3/index.html 
web-1

Delete the pod:

kubectl delete pod -l app=nginx

Wait for the pods to be recreated. Distribution at this time:

NAME    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          31s   10.244.4.142   edge-node2   <none>           <none>
web-1   1/1     Running   0          21s   10.244.1.129   edge-node    <none>           <none>
web-2   1/1     Running   0          19s   10.244.4.143   edge-node2   <none>           <none>

Note: the pods land on the same nodes as before, but their IP addresses change. (The identical scheduling is not guaranteed; it just happened to repeat here.) View the page served by each pod:

for i in 0 1 2; do kubectl exec web-$i -- sh -c 'cat /usr/share/nginx/html/index.html'; done

The output is web-0, web-1, web-2.

Look at the NFS directories again: nothing has changed. Conclusion: the file contents are stored on disk; when a pod restarts, the files are not lost and do not move with rescheduling. Note that pods and NFS directories are not matched one-to-one by index (here web-1 corresponds to /nfs3), but once a binding is made it does not change, so content consistency is guaranteed.
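To double-check the stable binding, compare the PVC-to-PV mapping before and after the delete; the VOLUME column should be unchanged (expected output reconstructed from the earlier listing):

kubectl get pvc
# www-web-0   Bound   nfs-pv2   ...
# www-web-1   Bound   nfs-pv3   ...
# www-web-2   Bound   nfs-pv1   ...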

Scaling

Scale up to 5 replicas:

# kubectl scale sts web --replicas=5

The new pod cannot be created and stays Pending, because there are not enough PVs:

web-3   0/1     Pending   0          42s
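Describing the Pending pod should show the reason; it is the same message analyzed in the Problems section below (output line reconstructed from that section):

kubectl describe pod web-3
# Events: ... pod has unbound immediate PersistentVolumeClaims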

Add two more PVs (and their NFS directories) following the pattern in pv.yaml, restart the NFS service, and apply again:

kubectl apply -f pv.yaml
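A sketch of the two added entries, assuming they mirror the existing ones with new directories /nfs4 and /nfs5 (the exact YAML was not recorded; remember to create and export the new directories on the NFS server as well):

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
  labels:
    storage: nfs
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"]
  nfs:
    server: 192.168.0.102
    path: /nfs4
---
# nfs-pv5: identical except name: nfs-pv5 and path: /nfs5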

# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
nfs-pv1   200Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-2                           19m
nfs-pv2   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-0                           19m
nfs-pv3   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-1                           19m
nfs-pv4   100Mi      RWO,ROX,RWX    Retain           Bound       default/www-web-3                           8s
nfs-pv5   100Mi      RWO,ROX,RWX    Retain           Available                                               8s

Note: because the PVs were created after web-3 and web-4 had already gone Pending, the waiting claims bind as soon as the PVs appear; nfs-pv5 is about to be bound by web-4 in the same way.

Scale down:

kubectl patch sts web -p '{"spec":{"replicas":3}}'

At this point, web-3 and web-4 are deleted.
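Consistent with the deletion note at the end of this article, scaling down should not remove the PVCs that web-3 and web-4 used, so nfs-pv4 and nfs-pv5 stay Bound (not re-verified in this record):

kubectl get pvc
# www-web-3 and www-web-4 are still listed as Bound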

Upgrade

kubectl set image sts/web nginx=latelee/lidch:1.1

Meaning: in sts/web, update the container named nginx to the image latelee/lidch:1.1. To view:

# kubectl get sts -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
web    3/3     69m   nginx        latelee/lidch:1.1
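Two related commands, not run in this record, for watching or rolling back the update:

kubectl rollout status sts/web   # wait for the rolling update to finish
kubectl rollout undo sts/web     # roll back to the previous revision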

Delete:

kubectl delete -f nginx-service.yaml
kubectl delete -f pv.yaml

Deleting the StatefulSet does not delete the PVCs; delete them manually.
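For this example that would be (names taken from the earlier listing):

kubectl delete pvc www-web-0 www-web-1 www-web-2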

Problems

1.

error while running "VolumeBinding" filter plugin for pod "web-0": pod has unbound immediate PersistentVolumeClaims

no persistent volumes available for this claim and no storage class is set

Cause: the PVCs were not created. With 3 replicas there should be 3 PVCs (and likewise 3 PVs). To view:

kubectl get pvc

Continue to check:

# kubectl describe pvc www-web-0
...
storageclass.storage.k8s.io "my-storage-class" not found

At that point, neither of these returned any output:

kubectl get pvc
kubectl get pv

Because of the volumeClaimTemplates template there is no need to create PVCs by hand: create enough PVs first (three in this article's example), then create the StatefulSet, which creates the PVCs automatically. The "my-storage-class" not found message also suggests storageClassName pointed at a nonexistent class; it is commented out in the final nginx-service.yaml.

2. If different PVs use the same path, they affect each other: a file modified through pod1 changes what pod2 sees.

3.

mount.nfs: access denied by server while mounting 192.168.0.102:/nfs3

Possible causes: 1) the directory does not exist; 2) the directory is not exported by NFS.
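Both can be checked on the NFS server with standard NFS commands (a sketch, not from the original record):

ls -ld /nfs3                   # does the directory exist?
showmount -e 192.168.0.102     # is it listed among the exports?
exportfs -ra                   # re-export after fixing /etc/exports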

Reference

kubernetes.io/docs/tut…