kubectl delete removes a resource from Kubernetes. That sounds simple, but fully understanding deletion can be a challenge, and knowing the rationale behind it helps you cope with some bizarre situations. This article explains the story behind the deletion operation from the following aspects:
- Basic delete operations
- How finalizers and owner references affect deletion
- How to use a propagation policy to change the deletion order
- Examples illustrating each of the above
For simplicity, all examples use ConfigMaps and basic shell commands to demonstrate the operations.
The basic delete
Kubernetes offers several commands for creating, reading, updating, and deleting objects. For the purposes of this post, we will focus on four kubectl commands: create, get, patch, and delete.
Here are some of the simplest kubectl delete usage scenarios:
kubectl create configmap mymap
configmap/mymap created
kubectl get configmap/mymap
NAME DATA AGE
mymap 0 12s
kubectl delete configmap/mymap
configmap "mymap" deleted
kubectl get configmap/mymap
Error from server (NotFound): configmaps "mymap" not found
The sequence above shows the ConfigMap being created, queried, deleted, and queried again; after deletion the resource is simply gone. This delete operation is simple and intuitive, but once finalizers and owner references are involved, all sorts of confusing behavior can appear.
Understanding Finalizers
When it comes to Kubernetes resource deletion, understanding how finalizers work can help you figure out what is happening when a resource refuses to go away.
Finalizers are the key to triggering pre-deletion operations and controlling garbage collection of resources. They signal controllers to run cleanup logic before a resource is actually removed. Finalizers themselves contain no code; they are simply keys on a resource, and can be added or removed in a similar way to annotations.
You may have already come across finalizers such as:
kubernetes.io/pv-protection
kubernetes.io/pvc-protection
These two finalizers prevent PersistentVolumes and PersistentVolumeClaims that are still in use from being deleted by mistake; many other finalizers serve similar purposes.
Here is a custom ConfigMap that contains only one Finalizer:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap
  finalizers:
  - kubernetes
EOF
The controller that manages ConfigMaps does not know what to do with a finalizer named kubernetes, so nothing will ever clear it. Now try to delete the ConfigMap:
kubectl delete configmap/mymap &
configmap "mymap" deleted
jobs
[1]+ Running kubectl delete configmap/mymap
Kubernetes reports that the ConfigMap has been deleted, but it is not gone in the traditional sense: it is stuck in a deleting state, and the kubectl delete command blocks in the background. Fetching the ConfigMap again shows that the resource has been updated and now carries a deletionTimestamp:
kubectl get configmap/mymap -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-22T21:30:18Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2020-10-22T21:30:34Z"
  finalizers:
  - kubernetes
  name: mymap
  namespace: default
  resourceVersion: "311456"
  selfLink: /api/v1/namespaces/default/configmaps/mymap
  uid: 93a37fed-23e3-45e8-b6ee-b2521db81638
In other words, the ConfigMap is not deleted but updated: because Kubernetes sees a finalizer on the resource, it puts the object into a terminating, effectively read-only state. The ConfigMap will only be removed from the cluster once the finalizer is removed.
To verify this, use the patch command to remove the finalizer:
kubectl patch configmap/mymap \
--type json \
--patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
configmap/mymap patched
[1]+ Done kubectl delete configmap/mymap
Get the ConfigMap again:
kubectl get configmap/mymap -o yaml
Error from server (NotFound): configmaps "mymap" not found
To summarize the state flow: deleting a resource that carries a finalizer only marks it as terminating; it disappears from the cluster only after every finalizer has been removed.
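Incidentally, a finalizer can be added to an existing object just as easily as it is removed. The following sketch uses a strategic merge patch on the mymap ConfigMap from the examples above (it assumes the ConfigMap still exists in your cluster):

```shell
# Build the merge patch that adds a "kubernetes" finalizer to the object.
patch='{"metadata":{"finalizers":["kubernetes"]}}'

# Apply it to the ConfigMap from the earlier examples.
kubectl patch configmap/mymap --type merge --patch "$patch"
```

After this, any delete of mymap will block in the terminating state until the finalizer is removed again.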
Owner References
Owner references describe relationships between resources. They are attributes on a child resource that point at its owner, and they are what makes cascading deletion possible.
In Kubernetes, the most common owner-reference relationship is a Pod owned by a ReplicaSet. Thus, when a Deployment or StatefulSet is deleted, its child ReplicaSets and Pods are deleted as well.
The following example illustrates how owner reference works. We first create a parent resource and then specify its owner reference when creating the child resource:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap-parent
EOF

CM_UID=$(kubectl get configmap mymap-parent -o jsonpath="{.metadata.uid}")

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap-child
  ownerReferences:
  - apiVersion: v1
    kind: ConfigMap
    name: mymap-parent
    uid: $CM_UID
EOF
When a child resource is deleted, the parent resource is not deleted:
kubectl get configmap
NAME DATA AGE
mymap-child 0 12m4s
mymap-parent 0 12m4s
kubectl delete configmap/mymap-child
configmap "mymap-child" deleted
kubectl get configmap
NAME DATA AGE
mymap-parent 0 12m10s
Then we delete the parent resource and find that all the resources are deleted:
kubectl get configmap
NAME DATA AGE
mymap-child 0 10m2s
mymap-parent 0 10m2s
kubectl delete configmap/mymap-parent
configmap "mymap-parent" deleted
kubectl get configmap
No resources found in default namespace.
In short, when an owner reference links a child to a parent, deleting the parent also deletes the child; this is called cascading deletion. Cascading is on by default; passing --cascade=false (spelled --cascade=orphan in kubectl v1.20 and later) to kubectl delete removes only the parent and keeps the child.
In the following example, there is a parent and a child; deleting the parent with --cascade=false leaves the child in place:
kubectl get configmap
NAME DATA AGE
mymap-child 0 13m8s
mymap-parent 0 13m8s
kubectl delete --cascade=false configmap/mymap-parent
configmap "mymap-parent" deleted
kubectl get configmap
NAME DATA AGE
mymap-child 0 13m21s
The --cascade flag maps to the propagation policy in the API, which controls the order in which objects are deleted. In the following example, we issue a raw API call to perform a DELETE with a Background propagation policy:
kubectl proxy --port=8080 &
Starting to serve on 127.0.0.1:8080

curl -X DELETE \
  localhost:8080/api/v1/namespaces/default/configmaps/mymap-parent \
  -d '{ "kind":"DeleteOptions", "apiVersion":"v1", "propagationPolicy":"Background" }' \
  -H "Content-Type: application/json"
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Success",
  "details": { ... }
}
Note that kubectl cannot specify a propagation policy on the command line; it must be set through an API call. That is why we start kubectl proxy, which lets the curl command reach the API server from the client to perform the deletion.
There are three propagation policies:
Foreground: children are deleted before the parent (post-order)
Background: the parent is deleted first, then the children (pre-order)
Orphan: owner references are ignored, and the children are left behind
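For comparison, here is a sketch of the same proxied DELETE call requesting Foreground deletion; only the propagationPolicy field in the request body changes (this assumes kubectl proxy is still listening on port 8080, as in the earlier example):

```shell
# DeleteOptions body requesting foreground (children-first) deletion.
body='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

# Issue the DELETE through the local proxy started earlier.
curl -X DELETE \
  localhost:8080/api/v1/namespaces/default/configmaps/mymap-parent \
  -d "$body" \
  -H "Content-Type: application/json"
```

With this policy, the parent stays in a terminating state until all of its children are gone.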
Note that when a resource has both an owner reference and a finalizer, the finalizer takes precedence: the object cannot actually be removed until its finalizers are cleared.
Forcing a Deletion of a Namespace
Sometimes a namespace gets stuck in a terminating state after kubectl delete ns is executed. In that case, you can force the deletion by clearing the namespace's finalizers field through the finalize subresource, which tells the namespace controller that cleanup is done (this again assumes kubectl proxy is running on port 8080):
cat <<EOF | curl -X PUT \
  localhost:8080/api/v1/namespaces/test/finalize \
  -H "Content-Type: application/json" \
  --data-binary @-
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "test"
  },
  "spec": {
    "finalizers": null
  }
}
EOF
Be careful with this: it may delete only the namespace object itself, leaving orphaned resources that belong to a namespace which no longer exists and putting the cluster in a confusing state. If that happens, you can recreate the namespace manually; the orphaned objects will reappear under it, and you can then clean them up properly.
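To find such leftover objects, one common sketch is to list every namespaced resource type and query each one in the affected namespace (the namespace name test is carried over from the example above):

```shell
ns=test  # namespace from the example above

# List every namespaced, listable resource type, then query each one
# in the target namespace; leftovers show up prefixed with their kind.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get --show-kind --ignore-not-found -n "$ns"
```

Anything this prints after the namespace was force-deleted is a candidate for manual cleanup.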
Key Takeaways
As the examples show, finalizers can prevent Kubernetes from deleting a resource, but they are usually added by code for a reason, so check before removing one manually. Owner references enable cascading deletion, and finalizers take precedence over them. Finally, propagation policies let you control the deletion order through custom API calls. With these examples in mind, you should have a better understanding of how Kubernetes deletion works; try it out for yourself on a test cluster.
Original article: kubernetes.io/blog/2021/0…