In Linux, to improve file system performance, the kernel sets aside part of physical memory as buffers and caches for file data. When the kernel receives a read or write request, it first checks whether the requested data is already in the cache: if so, it returns it directly; if not, it goes through the disk driver to perform the disk operation.
Viewing memory
Running the `free -h` command displays information like the following:

```bash
$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        1.0G        9.3G        1.9M        5.4G         14G
Swap:            0B          0B         0B
```
- total: total amount of physical memory
- used: amount of memory in use
- free: amount of completely unused memory
- shared: memory shared between processes (mostly tmpfs); usually small
- buff/cache: amount of memory used for buffers and cache
- available: estimate of memory available to new applications without swapping
total ≈ used + free + buff/cache. Swap usage: Swap is the swap partition, commonly called virtual memory, which is a partition on the hard disk. When memory runs low, the kernel moves data belonging to programs that have not been used for a long time out of RAM and parks it temporarily in Swap. In other words, Swap is used only once the memory held in buffers and cache has been exhausted.
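As a sketch, this accounting can be checked directly against `/proc/meminfo`, which is where `free` gets its numbers. (Treating buff/cache as Buffers + Cached + SReclaimable matches modern procps, but consider that an assumption.)

```shell
#!/bin/sh
# Sketch: re-derive free(1)'s total / used / free / buff-cache fields
# from /proc/meminfo (all values are in kB).
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
sreclaim=$(awk '/^SReclaimable:/ {print $2}' /proc/meminfo)

buffcache=$((buffers + cached + sreclaim))   # what free(1) calls buff/cache
used=$((total - free_kb - buffcache))        # how free(1) derives "used"

echo "total=${total}kB used=${used}kB free=${free_kb}kB buff/cache=${buffcache}kB"
```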
Clearing swap: `swapoff -a && swapon -a`
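Before cycling swap like this, it is worth checking that everything currently in swap fits into available RAM; otherwise `swapoff -a` can fail or stall the system. A minimal guard sketch (the actual swapoff/swapon pair needs root, so it is left commented out):

```shell
#!/bin/sh
# Sketch: only cycle swap when the data in swap fits into available RAM.
swap_used=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
mem_avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

if [ "$swap_used" -lt "$mem_avail" ]; then
    echo "ok to clear: ${swap_used} kB in swap, ${mem_avail} kB available"
    # swapoff -a && swapon -a    # uncomment to actually clear (root only)
else
    echo "not enough free RAM to absorb swap; skipping"
fi
```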
cache && buffer
cache
The cache stores data that has been read from the hard drive. If a later read hits the cache, there is no need to go to the disk again. Cached data is organized by read frequency: the most frequently read content is kept where it is fastest to find, while content that is no longer read is pushed back until it is eventually evicted.
The cache does not cache whole files, but blocks (the smallest unit of I/O). The cache is generally used for read requests: if multiple processes want to access the same file, it can be read into the cache once, so each subsequent process that gains CPU control reads it straight from the cache, improving system performance.
In hardware terms, a cache sits between the CPU and main memory.
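The effect of the page cache is easy to observe: read the same file twice and compare the throughput `dd` reports. A small sketch (the file path and size are arbitrary choices for illustration):

```shell
#!/bin/sh
# Sketch: the second read of a file is served from the page cache and is
# typically much faster than the first.
f=/tmp/cache_demo.bin
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # create a 64 MB test file
sync                                                  # make sure it is on disk

# First read: may hit the disk (as root, run `echo 3 > /proc/sys/vm/drop_caches`
# beforehand for a guaranteed cold read).
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -1
# Second read: served from the page cache.
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -1
rm -f "$f"
```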
buffer
Buffers exist to smooth out disk reads and writes; they help reduce disk fragmentation and seeking, and so improve system performance. When a fast device communicates with a slow one, data headed for the slow device is first accumulated in a buffer, and the transfer happens once enough has collected; meanwhile, the CPU on the fast side is free to do other work.
Linux has a daemon that periodically clears the buffer (that is, writes to disk), or you can manually clear the buffer using the sync command.
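You can watch `sync` at work via the `Dirty:` line in `/proc/meminfo`, which counts memory waiting to be written back to disk. A small sketch:

```shell
#!/bin/sh
# Sketch: dirty a page by writing a file, then flush with sync and compare
# the kernel's Dirty counter before and after (values in kB).
before=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
echo "some data" > /tmp/sync_demo.txt   # creates dirty page-cache data
sync                                    # flush dirty pages to disk
after=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
echo "Dirty before sync: ${before} kB, after: ${after} kB"
rm -f /tmp/sync_demo.txt
```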
To adjust the swap policy for the next boot, change the number after `vm.swappiness` in /etc/sysctl.conf. The value ranges from 0 to 100; the larger the number, the more aggressively swap is used. The default is 60, and it is worth experimenting with. Note that buffers and cache are both data held in RAM.
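Since /etc/sysctl.conf only applies at the next boot (or after `sysctl -p`), you can experiment immediately by reading and writing the value at runtime. A sketch (the writes need root, so they are left commented out):

```shell
#!/bin/sh
# Sketch: inspect vm.swappiness, and (as root) change it at runtime.
cat /proc/sys/vm/swappiness        # current value; default is 60

# Apply immediately without rebooting (root only):
# sysctl -w vm.swappiness=10
# Or persist it across reboots:
# echo 'vm.swappiness=10' >> /etc/sysctl.conf && sysctl -p
```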
cache vs buffer
Cache originally referred to the CPU cache: the CPU is fast, memory cannot keep up, and some values are used more than once, so they are kept in the cache. The main purpose is reuse, and the L1/L2 hardware caches are very fast.
Buffers mainly sit between disk and memory, to protect the disk or reduce the number of network transfers. They can also improve speed (data need not be written to disk immediately, and data just read need not be fetched from disk again) and allow some reuse, but the primary purpose is to protect the disk.
A cache can also be used for writing, and a buffer for reading; in practice there is no strict distinction between the two.
swap
In addition to buffers and caches, Linux also provides swap, which is similar to "virtual memory" on Windows: if physical memory is insufficient, part of the hard disk can be used as a swap partition (virtual memory) to relieve the shortage.
When a program requests memory from the OS and the OS finds that memory is insufficient, it moves temporarily unused data out of RAM and places it in the swap partition; this process is called SWAP OUT. When the program needs that data again and the OS finds free physical memory available, it moves the data from the swap partition back into RAM; this process is called SWAP IN.
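SWAP IN and SWAP OUT activity can be observed from the cumulative `pswpin`/`pswpout` counters in `/proc/vmstat` (counted in pages). A one-second sampling sketch:

```shell
#!/bin/sh
# Sketch: sample swap-in / swap-out rates over one second. Non-zero values
# mean the system is actively swapping.
in1=$(awk '/^pswpin/ {print $2}' /proc/vmstat)
out1=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
sleep 1
in2=$(awk '/^pswpin/ {print $2}' /proc/vmstat)
out2=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
echo "swap-in: $((in2 - in1)) pages/s, swap-out: $((out2 - out1)) pages/s"
```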
So why is swap turned off when using Docker and Kubernetes? Mainly for predictable performance. Kubernetes' idea is to pack instances tightly, as close to 100% utilization as possible, and every deployment should have fixed CPU/memory limits. Therefore, once the scheduler sends a Pod to a node, swap should never be used.
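The usual steps for turning swap off on a Kubernetes node are sketched below. The `sed` pattern assumes a standard swap line in /etc/fstab, and both privileged commands are commented out so the sketch runs unprivileged:

```shell
#!/bin/sh
# Sketch: disable swap now, and keep it disabled after reboot.
# swapoff -a                                     # disable swap immediately (root)
# sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab      # comment out the swap line (root)

# Verify: SwapTotal should read 0 kB once swap is fully disabled.
awk '/^SwapTotal:/ {print "swap total:", $2, "kB"}' /proc/meminfo
```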
Manually releasing the cache
1) Clear the pagecache

```bash
echo 1 > /proc/sys/vm/drop_caches
# or
sysctl -w vm.drop_caches=1
```
2) Clear dentries (directory cache) and inodes

```bash
echo 2 > /proc/sys/vm/drop_caches
# or
sysctl -w vm.drop_caches=2
```
3) Clear pagecache, dentries, and inodes

```bash
echo 3 > /proc/sys/vm/drop_caches
# or
sysctl -w vm.drop_caches=3
```

To make the release permanent, add the setting to `/etc/sysctl.conf`:

```bash
vm.drop_caches=1   # or 2 or 3
```

Then run `sysctl -p` for it to take effect. In addition, you can use `sync` to flush the file system cache, along with zombie objects and the memory they occupy.

Worth mentioning

In most cases, the operations above do no harm to the system and simply help free unused memory. But if data is being written while these operations are performed, it may be dropped from the file cache before it reaches disk, which can have nasty effects. So how can that be avoided? The file `/proc/sys/vm/vfs_cache_pressure` tells the kernel what priority to use when reclaiming the dentry/inode cache:

```bash
$ cat /proc/sys/vm/vfs_cache_pressure
100
```
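A safe ordering is to flush dirty data with `sync` before dropping caches, so nothing in flight is lost. A sketch (the write to `/proc` needs root and is commented out so the script runs unprivileged):

```shell
#!/bin/sh
# Sketch: flush first, then drop caches.
sync                                   # write dirty pages to disk first
# echo 3 > /proc/sys/vm/drop_caches    # then drop caches (root only)

# Observe the effect: Cached shrinks sharply after a drop.
awk '/^Cached:/ {print "page cache now:", $2, "kB"}' /proc/meminfo
```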
vfs_cache_pressure=100 is the default: the kernel tries to reclaim dentries and inodes at a "fair" rate relative to pagecache and swapcache reclaim.
Reducing vfs_cache_pressure causes the kernel to prefer keeping dentry and inode caches; increasing it above 100 causes the kernel to prefer reclaiming dentries and inodes.
In summary: vfs_cache_pressure values below 100 avoid shrinking the dentry/inode cache significantly, while values above 100 tell the kernel you want that cache reclaimed with higher priority.
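Tuning vfs_cache_pressure follows the same pattern as the other vm knobs. A sketch (the writes need root and are commented out; the values 50 and 200 are just illustrative choices):

```shell
#!/bin/sh
# Sketch: inspect and (as root) tune vm.vfs_cache_pressure.
cat /proc/sys/vm/vfs_cache_pressure    # default: 100

# Favour keeping dentry/inode caches (root only):
# sysctl -w vm.vfs_cache_pressure=50
# Reclaim them more aggressively (root only):
# sysctl -w vm.vfs_cache_pressure=200
```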
Even with vfs_cache_pressure above 100, the kernel still reclaims this cache fairly slowly; only at an extreme value like 10000 will the system force the cache down to a small level. If you found this useful, please follow my official account or check out my blog at packyzbq.coding.me. I post my own learning notes from time to time, so we can learn from and talk to each other.