Background:
Earlier, on a Kubernetes cluster whose CRI was still Docker, I ran into the same situation: after the cluster had been running for a long time, the hard disks were almost full. The large files were concentrated in /var/lib/docker/overlay2, which had grown to 70G, and /var/log/journal, which held another 4-5G of logs. I performed the following operations manually on the worker nodes:
- journalctl --vacuum-size=20M # Limit the journal to a maximum size of 20M.
- docker image prune -a --filter "until=24h" # Remove images that were created more than 24 hours ago.
- docker container prune --filter "until=24h" # Delete all stopped containers except those created within the last 24 hours.
- docker volume prune --filter "label!=keep" # Delete all volumes that do not carry the keep label.
- docker system prune can be used to clean up everything at once: images, containers and networks.
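Before pruning anything it is worth confirming where the space has actually gone. A quick check, assuming the default Docker and journald paths mentioned above:

du -sh /var/lib/docker/overlay2   # layer data for images and containers
du -sh /var/log/journal           # systemd journal logs
journalctl --disk-usage           # what journald itself reports
df -h /var/lib/docker             # overall usage of the filesystem Docker writes to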
Note: the above comes from a 2019 post on my blog: duiniwukenaihe.github.io/2019/11/25/…. Similar problems, such as inotify watch exhaustion and cgroup leaks, are covered at: www.bookstack.cn/read/kubern…. Here is my current problem: my cluster is now Kubernetes 1.21 and the CRI is containerd. How do I clean up CRI resources other than the system logs? Does the kubelet provide this out of the box? In any case, the worker nodes of my TKE cluster have recently been triggering disk alarms at over 90% usage, and my obsessive-compulsive side cannot stand it!
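For the containerd case, the closest manual equivalents of the docker prune commands above go through crictl and ctr. A minimal sketch, assuming crictl is installed and pointed at the containerd socket; note that crictl rmi --prune only exists in newer cri-tools releases, and that removing images behind the kubelet's back is exactly what the upstream documentation warns against:

crictl images                        # list images known to the CRI
crictl rmi --prune                   # remove unused images (newer cri-tools only)
crictl ps -a                         # list all containers, including exited ones
ctr -n k8s.io images ls              # the same images, seen from containerd's k8s.io namespace
ctr -n k8s.io images rm <image-ref>  # remove a single image by reference

The cleaner answer, covered in the rest of this post, is to let the kubelet's own garbage collection and eviction settings do this automatically.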
Kubernetes image and container garbage collection
About kubelet:
Node management
[kubelet --help] A node can register itself with the API server by setting the kubelet startup parameter --register-node; the default value is true.
Pod management
Through the API server client, the kubelet uses watch/list to monitor the /registry/nodes/${current node name} and /registry/pods directories in etcd and synchronizes the information it obtains into a local cache. The kubelet watches etcd and performs the corresponding operations on Pods, while operations on the containers themselves go through the Docker client. The common operations are: add, delete, modify and query.
Container health check
Probes are the periodic diagnostics that the kubelet performs on containers, mainly by calling the three types of handlers configured for the container. If a liveness probe fails, the kubelet kills the container, and the container is then handled according to its restart policy.
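As a small illustration of the liveness probe behaviour described above, here is a sketch of a Pod whose probe eventually fails, so the kubelet kills and restarts the container (the name and image are only examples):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo            # example name
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]    # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5                      # the kubelet probes every 5 seconds
EOF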
Resource monitoring
The kubelet obtains node information and container data from cAdvisor. cAdvisor is Google's open-source container resource analysis tool and is integrated into Kubernetes by default.
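The statistics that the kubelet aggregates from cAdvisor and the runtime can be read back through the kubelet summary API, which is handy for checking node and image filesystem usage. A sketch, assuming kubectl access and jq, with <node-name> as a placeholder:

NODE=<node-name>
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/stats/summary" | jq '.node.fs'                 # node filesystem usage
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/stats/summary" | jq '.node.runtime.imageFs'    # image filesystem usage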
kubelet-garbage-collection
Note: the following content comes from kubernetes.io/docs/con…. Well, let's take a look at the documentation……
Garbage collection is a useful feature of the kubelet that cleans up unused images and containers. The kubelet performs garbage collection every minute for containers and every five minutes for images. Using external garbage collection tools is not recommended, because they may break the kubelet's behaviour by removing containers that it expects to exist.
Image garbage collection
Kubernetes manages the lifecycle of all images through the imageManager, with the cooperation of cAdvisor. The image garbage collection policy only considers two factors: HighThresholdPercent and LowThresholdPercent. If disk usage exceeds the high threshold (HighThresholdPercent), garbage collection is triggered; it deletes the least recently used images until disk usage falls below the low threshold (LowThresholdPercent).
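The usage that these thresholds are compared against is that of the filesystem holding the image data, so a quick check on the node tells you how close you are to triggering image GC. The path here assumes containerd's default root; for the Docker setup from the beginning of the post it would be /var/lib/docker:

df -h /var/lib/containerd    # usage of the image/container filesystem
# image GC starts once usage exceeds the high threshold (default 85%)
# and removes least recently used images until usage drops below the low threshold (default 80%)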
Container garbage collection
The container garbage collection policy takes three user-defined variables into account. MinAge is the minimum age at which a container can be garbage collected. MaxPerPodContainer is the maximum number of dead containers allowed per pod. MaxContainers is the maximum total number of dead containers. These variables can each be disabled, by setting MinAge to 0 and by setting MaxPerPodContainer and MaxContainers to values less than 0. The kubelet acts on containers that are unidentified, deleted, or outside the bounds set by these parameters; the oldest containers are usually removed first. MaxPerPodContainer and MaxContainers may conflict in some scenarios, for example when the per-pod maximum of dead containers (MaxPerPodContainer) would exceed the allowed global maximum of dead containers (MaxContainers). In that case MaxPerPodContainer is adjusted: the worst case is that it is lowered to 1 and the oldest containers are evicted. In addition, containers owned by pods that have been deleted are cleaned up once they are older than MinAge. Containers that are not managed by the kubelet are not subject to container garbage collection.
User configuration
Users can adjust the following kubelet parameters to tune the image garbage collection thresholds:
- --image-gc-high-threshold: the percentage of disk usage that triggers image garbage collection. The default value is 85%.
- --image-gc-low-threshold: the percentage of disk usage that image garbage collection attempts to free down to. The default value is 80%.
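To see which values a running kubelet is actually using, its effective configuration can be read from the configz endpoint; a sketch, assuming kubectl access and jq, with <node-name> as a placeholder:

NODE=<node-name>
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" \
  | jq '.kubeletconfig | {imageGCHighThresholdPercent, imageGCLowThresholdPercent, evictionHard}'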
The container garbage collection policy can also be customized with the following kubelet parameters:
- --minimum-container-ttl-duration: the minimum age of a finished container before it becomes eligible for garbage collection. The default is 0 minutes, which means every finished container is garbage collected.
- --maximum-dead-containers-per-container: the maximum number of old instances to retain per container. The default value is 1.
- --maximum-dead-containers: the maximum number of old container instances to retain globally. The default is -1, which means there is no global limit.
Containers can potentially be garbage collected before their usefulness has expired, and these containers may contain logs and other data that is useful for troubleshooting. It is strongly recommended to set --maximum-dead-containers-per-container to a value large enough that at least one dead container is retained for each expected container, and to use a sufficiently large value for --maximum-dead-containers for the same reason.
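Put together, a sketch of what these settings could look like on the kubelet command line (the values are examples only, and several of the container GC flags are marked deprecated in newer kubelet versions in favour of the eviction settings discussed below):

kubelet \
  --image-gc-high-threshold=85 \
  --image-gc-low-threshold=80 \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=240   # plus the node's usual other flags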
Refer to my TKE cluster
cat /etc/kubernetes/kubelet shows none of the garbage collection parameters above (kubelet-garbage-collection); there is only an eviction-hard setting. Taking a closer look at the deprecations section of the documentation: on 1.21 the eviction-hard parameter is the one in effect, and nodefs.inodesFree is one of its eviction signals. See kubernetes.io/docs/tas…
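To double check this on a node, both the config file and the running process can be inspected; a minimal sketch (the /etc/kubernetes/kubelet path is what TKE uses on my nodes, adjust it for your distribution):

grep -iE 'eviction|image-gc|maximum-dead' /etc/kubernetes/kubelet
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -iE 'eviction|image-gc'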
Configuration in a self-built cluster:
Well, there is nothing in the configuration file except an imageMinimumGCAge! For the meaning of the fields, refer to kubernetes.io/docs/refere….
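On a kubeadm-style self-built cluster these settings live in the KubeletConfiguration file rather than on the command line. A sketch of how to check the relevant fields, assuming the default /var/lib/kubelet/config.yaml path; the values in the comments are the upstream defaults that apply when the fields are absent:

grep -A3 -iE 'imageMinimumGCAge|imageGC|evictionHard' /var/lib/kubelet/config.yaml
# fields to look for (upstream defaults):
#   imageMinimumGCAge: 2m0s
#   imageGCHighThresholdPercent: 85
#   imageGCLowThresholdPercent: 80
#   evictionHard:
#     memory.available: "100Mi"
#     nodefs.available: "10%"
#     nodefs.inodesFree: "5%"
#     imagefs.available: "15%"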
To sum up:
The kubelet supports both GC and eviction mechanisms; disk space reclamation can be controlled through parameters such as --image-gc-high-threshold, --image-gc-low-threshold, --eviction-hard, --eviction-soft and --eviction-minimum-reclaim. If these are misconfigured, or if processes not managed by Kubernetes keep writing data to the disk, the disk will still fill up. The TKE cluster is set to --eviction-hard=nodefs.available<10%; I will probably change this value to 20%, because I set the CVM disk alarm threshold to 90%. The full TKE setting is --eviction-hard=nodefs.available<10%,nodefs.inodesFree<5%,memory.available<100Mi (image-gc-high-threshold defaults to 85% and image-gc-low-threshold to 80%; the self-built cluster can leave these at their defaults). The self-built cluster has not triggered any alarms yet; the key point is that the alarm threshold on the TKE cluster has already been adjusted.
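A sketch of what the adjusted eviction setting could look like, either as a kubelet flag or as the equivalent evictionHard field in the KubeletConfiguration; the 20% value is only my intended change, chosen to sit below the 90% CVM disk alarm:

--eviction-hard=nodefs.available<20%,nodefs.inodesFree<5%,memory.available<100Mi
# or, as the equivalent block in the kubelet config file:
#   evictionHard:
#     nodefs.available: "20%"
#     nodefs.inodesFree: "5%"
#     memory.available: "100Mi"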
Documentation on TKE nodes with excessively high disk usage:
cloud.tencent.com/document/pr…
In addition, kubelet configuration should be further studied!