1 LVM Logical Volumes
1.1 LVM Overview
LVM is short for Logical Volume Manager, a mechanism for managing hard disk partitions under Linux. LVM is well suited to managing large storage devices and lets users resize file systems dynamically. In addition, LVM's snapshot feature makes quick backups possible. LVM presents storage as logical volumes, so the file system no longer needs to care about the underlying physical disks.
- Physical Volume (PV) : a Physical hard disk or partition.
- Volume Group (VG) : A Volume Group consists of multiple physical volumes. The physical volumes that form a volume group can be different partitions of the same hard disk or on different hard disks. We can think of a volume group as a logical hard disk.
- Logical Volume (LV) : a volume group, like a physical disk, must be divided up before it can be used; each such division is a logical volume. Logical volumes can be formatted and written with data, so we can think of them as partitions.
- Physical Extent (PE) : the smallest unit of storage allocation; data is actually written into PEs. The PE size is configurable and defaults to 4MB.
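As a quick sanity check, the extent count LVM reports for a logical volume is simply its size divided by the PE size. A minimal shell sketch (the 8 GiB figure matches the lvmail volume created later in this article):

```shell
# Extents = LV size / PE size. With the default 4 MiB PE, an 8 GiB LV uses:
lv_size_mib=$(( 8 * 1024 ))   # 8 GiB expressed in MiB
pe_size_mib=4                 # default PE size
echo $(( lv_size_mib / pe_size_mib ))   # 2048 -- matches "Current LE 2048" in lvdisplay
```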
Advantages of LVM:
- LVM is an abstraction layer that allows easy manipulation of volume groups, including resizing file systems.
- Allows file systems to be reorganized across multiple physical devices.
- The LVM capacity can be flexibly changed.
Note:
The /boot partition is used to store boot files and cannot be created based on LVM.
1.2 LVM Management Commands
Main commands:
Function | Physical volume management | Volume group management | Logical volume management |
---|---|---|---|
Scan | pvscan | vgscan | lvscan |
Create | pvcreate | vgcreate | lvcreate |
Display | pvdisplay | vgdisplay | lvdisplay |
Remove | pvremove | vgremove | lvremove |
Extend | – | vgextend | lvextend |
Reduce | – | vgreduce | lvreduce |
Command format:
1) Creation commands:
```
pvcreate device1 device2 [device3] ...    # create physical volumes
vgcreate vg_name pv1 pv2 [pv3] ...        # create a volume group
lvcreate -L size -n lv_name vg_name       # create a logical volume
mkfs.xfs /dev/vg_name/lv_name             # format the logical volume as XFS
mount /dev/vg_name/lv_name mount_point    # mount the logical volume
vim /etc/fstab                            # add an entry to make the mount permanent
```
2) Expansion commands:
```
pvcreate device4 ...                      # create a new physical volume
vgextend vg_name pv4                      # extend the volume group
lvextend -L +size /dev/vg_name/lv_name    # extend the logical volume
xfs_growfs mount_point                    # grow an XFS file system
resize2fs /dev/vg_name/lv_name            # grow an ext4 file system
```
2 LVM Applications
2.1 Logical Volume Configuration Example
2.1.1 Experiment Content:
1) Create physical volumes: create physical volumes for /dev/sdb3 and /dev/sdc1.
2) Create a volume group: create volume group vg01 and add the two physical volumes to it
3) Create a logical volume: create the 8 GB logical volume lvmail from vg01
4) Format the logical volume: format logical volume lvmail as an XFS file system
5) Mount logical volumes: Mount logical volume lvmail to /data/mail
2.1.2 Experimental procedure:
1) Use the fdisk tool to plan the partitions, setting the partition type of /dev/sdb3 and /dev/sdc1 to Linux LVM (system ID 8e).
```
[root@localhost ~]# fdisk /dev/sdb                       # set /dev/sdb3 to the LVM type
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): t
Partition number (1-3,5,6, default 6): 3
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w

[root@localhost ~]# fdisk /dev/sdc                       # set /dev/sdc1 to the LVM type
Welcome to fdisk (util-linux 2.23.2).

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w

[root@localhost ~]# partprobe                            # have the kernel reread the partition tables
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
```
2) run the pvcreate command to create physical volumes for /dev/sdb3 and /dev/sdc1.
```
[root@localhost ~]# pvcreate /dev/sdb3 /dev/sdc1         # create the physical volumes
  Physical volume "/dev/sdb3" successfully created.
  Physical volume "/dev/sdc1" successfully created.
[root@localhost ~]# pvscan                               # scan all physical volumes
  PV /dev/sda2   VG centos   lvm2 [14.00 GiB / 0    free]
  ...
  Total: 3 / in use: 1 [14.00 GiB] / in no VG: 2 [<10.00 GiB]
```
3) Run the vgcreate command to create volume group vg01 and add the two physical volumes to it.
```
[root@localhost ~]# vgcreate vg01 /dev/sdb3 /dev/sdc1    # create the volume group
  Volume group "vg01" successfully created
[root@localhost ~]# vgscan                               # scan volume groups
  Reading volume groups from cache.
  Found volume group "centos" using metadata type lvm2
  Found volume group "vg01" using metadata type lvm2
[root@localhost ~]# vgdisplay vg01                       # show volume group details
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.99 GiB           # total volume group capacity
  PE Size               4.00 MiB
  Total PE              2558
  Alloc PE / Size       0 / 0
  Free  PE / Size       2558 / 9.99 GiB    # free capacity
  VG UUID               FL0023-vhwK-3OIQ-y1wt-dcnS-t7F2-e5ATwn
```
4) Run the lvcreate command to create the 8 GB logical volume lvmail from vg01.
```
[root@localhost ~]# lvcreate -L 8G -n lvmail vg01        # create logical volume lvmail
  Logical volume "lvmail" created.
[root@localhost ~]# lvdisplay                            # show logical volume details
  --- Logical volume ---
  LV Path                /dev/vg01/lvmail
  LV Name                lvmail
  VG Name                vg01
  LV UUID                hdrCEI-TAMj-0XSc-0V0v-vCDM-lhUJ-8JlUo4
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-02-25 19:02:45 +0800
  LV Status              available
  # open                 0
  LV Size                8.00 GiB           # logical volume capacity
  Current LE             2048
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
```
5) Run the mkfs.xfs command to format logical volume lvmail as an XFS file system.
```
[root@localhost ~]# mkfs.xfs /dev/vg01/lvmail            # format lvmail as XFS
meta-data=/dev/vg01/lvmail       isize=512    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```
6) Modify the /etc/fstab file to mount logical volume lvmail to the /data/mail directory
```
[root@localhost ~]# mkdir /data/mail
[root@localhost ~]# blkid | grep lvmail                  # look up the UUID of lvmail
/dev/mapper/vg01-lvmail: UUID="14e500d0-867d-470c-b4ed-b5b758db7996" TYPE="xfs"
[root@localhost ~]# vim /etc/fstab                       # add a permanent mount entry
UUID=14e500d0-867d-470c-b4ed-b5b758db7996 /data/mail xfs defaults 0 0
[root@localhost ~]# mount -a                             # mount everything listed in fstab
[root@localhost ~]# df -Th                               # verify the mount
Filesystem               Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  xfs        10G  4.9G  5.2G  49% /
devtmpfs                 devtmpfs  897M     0  897M   0% /dev
tmpfs                    tmpfs     912M     0  912M   0% /dev/shm
tmpfs                    tmpfs     912M  9.1M  903M   1% /run
tmpfs                    tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sdb1                xfs        10G   33M   10G   1% /data/aa
/dev/sdb5                xfs       2.0G   33M  2.0G   2% /data/bb
/dev/sda1                xfs      1014M  179M  836M  18% /boot
tmpfs                    tmpfs     183M   12K  183M   1% /run/user/42
tmpfs                    tmpfs     183M     0  183M   0% /run/user/0
/dev/mapper/vg01-lvmail  xfs       8.0G   33M  8.0G   1% /data/mail
```
2.2 Logical Volume Expansion Example
2.2.1 Experiment content
The current capacity of lvmail is only 8 GB, which is not enough for practical use; it now needs to be expanded by 5 GB.
2.2.2 Operational thinking
1) The remaining free space in volume group vg01 is less than 2 GB, which cannot satisfy a 5 GB expansion, so a new physical volume must be added to vg01. Its capacity must exceed 3 GB (to be safe, add a physical volume of at least 4 GB).
2) All existing physical volumes are already in use, so a new physical volume must be created.
3) The new disk /dev/sdc has only one partition, which is already in use, so a new partition must be created first.
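The capacity arithmetic behind these steps can be sketched as follows (the free-space figure is an approximation read off the vgdisplay output in this section; exact numbers depend on PE rounding):

```shell
# vg01 has just under 2 GiB free; growing lvmail by 5 GiB leaves a shortfall of roughly:
free_mib=2040                 # approximate free space in vg01, in MiB
need_mib=$(( 5 * 1024 ))      # the 5 GiB we want to add
echo $(( need_mib - free_mib )) MiB   # ~3 GiB -> the new PV must be larger than that
```

The 5 GiB partition created below comfortably covers this shortfall.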
2.2.3 Experimental procedure
1) Use the fdisk tool to create a second partition for the /dev/sdc disk. Set the partition type to LVM and the system ID to 8E.
```
[root@localhost ~]# fdisk /dev/sdc                       # create a second partition
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n                                  # new partition
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2): 2
First sector (16779264-41943039, default 16779264): 16779264
Last sector, +sectors or +size{K,M,G} (16779264-41943039, default 41943039): +5G
Command (m for help): t                                  # change the partition type
Partition number (1,2, default 2): 2
Hex code (type L to list all codes): 8e                  # 8e = Linux LVM
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w

[root@localhost ~]# partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
```
2) run the pvcreate command to create a new physical volume /dev/sdc2.
```
[root@localhost ~]# pvcreate /dev/sdc2                   # create the new physical volume
  Physical volume "/dev/sdc2" successfully created.
[root@localhost ~]# pvdisplay /dev/sdc2                  # verify the new physical volume
  --- NEW Physical volume ---
  PV Name               /dev/sdc2
  VG Name
  PV Size               5.00 GiB
  Allocatable           NO           # not yet allocated to a volume group
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0            # allocated capacity
```
3) Run the vgextend command to add physical volume /dev/sdc2 to volume group vg01.
```
[root@localhost ~]# vgextend vg01 /dev/sdc2              # add /dev/sdc2 to vg01
  Volume group "vg01" successfully extended
[root@localhost ~]# vgdisplay vg01                       # show vg01 details
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               <14.99 GiB          # total volume group capacity
  PE Size               4.00 MiB
  Total PE              3837
  Alloc PE / Size       2048 / 8.00 GiB     # allocated capacity
  Free  PE / Size       1789 / <6.99 GiB    # free capacity
  VG UUID               FL0023-vhwK-3OIQ-y1wt-dcnS-t7F2-e5ATwn
```
4) Run the lvextend command to grow logical volume /dev/vg01/lvmail by 5 GB.
```
[root@localhost ~]# lvextend -L +5G /dev/vg01/lvmail     # grow lvmail by 5 GiB
  Size of logical volume vg01/lvmail changed from 8.00 GiB (2048 extents) to 13.00 GiB (3328 extents).
  Logical volume vg01/lvmail successfully resized.
[root@localhost ~]# lvdisplay /dev/vg01/lvmail           # show logical volume details
  --- Logical volume ---
  LV Path                /dev/vg01/lvmail
  LV Name                lvmail
  VG Name                vg01
  LV UUID                8WqFMf-1DXl-zCjy-qtnt-AQsw-JBVO-iNNBIb
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-02-25 19:06:20 +0800
  LV Status              available
  # open                 1
  LV Size                13.00 GiB          # capacity is now 13 GiB
  Current LE             3328
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
```
5) Run the xfs_growfs command to enable the XFS file system to recognize the newly added space to expand the file system.
```
[root@localhost ~]# xfs_growfs /data/mail                # grow the XFS file system
meta-data=/dev/mapper/vg01-lvmail isize=512    agcount=4, agsize=524288 blks
         =                        sectsz=512   attr=2, projid32bit=1
         =                        crc=1        finobt=0 spinodes=0
data     =                        bsize=4096   blocks=2097152, imaxpct=25
         =                        sunit=0      swidth=0 blks
naming   =version 2               bsize=4096   ascii-ci=0 ftype=1
log      =internal                bsize=4096   blocks=2560, version=2
         =                        sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                    extsz=4096   blocks=0, rtextents=0
data blocks changed from 2097152 to 3407872
[root@localhost ~]# df -Th                               # lvmail is now 13 GiB
Filesystem               Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  xfs        10G  4.9G  5.2G  49% /
devtmpfs                 devtmpfs  897M     0  897M   0% /dev
tmpfs                    tmpfs     912M     0  912M   0% /dev/shm
tmpfs                    tmpfs     912M  9.1M  903M   1% /run
tmpfs                    tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sdb1                xfs        10G   33M   10G   1% /data/aa
/dev/sdb5                xfs       2.0G   33M  2.0G   2% /data/bb
/dev/sda1                xfs      1014M  179M  836M  18% /boot
tmpfs                    tmpfs     183M   12K  183M   1% /run/user/42
tmpfs                    tmpfs     183M     0  183M   0% /run/user/0
/dev/mapper/vg01-lvmail  xfs        13G   33M   13G   1% /data/mail
```
3 Disk Quotas
3.1 Overview of Disk Quotas
3.1.1 What a Disk Quota Is
A disk Quota (Quota) is used in Linux to limit the amount of disk space or files that a specific user or user group can consume on a specified partition.
In this concept, there are several important points to note:
- Disk quotas apply only to ordinary users and user groups; the superuser root is not restricted by them.
- Disk quotas can only be applied to an entire file system (partition), not to a single directory. For example, if /dev/sda5 is mounted at /home, every directory under /home is governed by that file system's quota.
- We can limit both the disk capacity a user consumes (blocks) and the number of files a user can create (inodes).
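Blocks and inodes are accounted separately, which is easy to see with df; a quick sketch (output values vary per machine):

```shell
# Two independent resources on the same file system:
df -h /    # block (capacity) usage
df -i /    # inode (file count) usage -- a disk can run out of inodes
           # long before it runs out of bytes, and vice versa
```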
Disk quotas work like renting out an office building. The building as a whole is huge, but few tenants can afford all of it, so it is rented out piece by piece, and tenants who run out of room can rent more. Renting, however, follows a few rules:
- The building is rented only to outside users (ordinary users), either individuals (users) or companies (user groups); the owner keeps the whole building and does not rent to himself (the root user).
- Tenants rent a fixed amount of space on a floor, never a sub-space inside someone's room (quotas apply only to partitions, never to individual directories).
- The landlord can cap how much floor space a tenant gets (disk capacity limit) or how many people a tenant may bring onto the floor (file number limit).
3.1.2 Conditions for Using Disk Quotas
The following prerequisites must be met for the disk quota to be used normally:
- The kernel must support disk quotas.
- The quota tools must be installed on the system; they are installed by default on most Linux distributions.
- Disk quotas must be enabled on the partition that should support them. This is off by default and has to be turned on manually (via mount options).
3.1.3 Common Concepts in Disk Quotas
The disk capacity and file quantity are limited
In addition to limiting the number of blocks available to a user to limit the disk capacity, we can also limit the number of inodes available to a user to limit the number of files that a user can upload or create.
Soft limits and hard limits
A soft limit is best understood as a warning threshold, while a hard limit is the real ceiling. For example, with a soft limit of 100 MB and a hard limit of 200 MB, a user whose usage is between 100 MB and 200 MB can still upload and create files, but receives a warning about the quota at every login; once the 200 MB hard limit is reached, further writes fail.
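The policy can be sketched as a small decision function; this is an illustration of the behaviour described above (using the same 100/200 MB figures), not the kernel's actual quota code:

```shell
# Illustrative soft/hard quota policy:
quota_status() {            # usage: quota_status <used_mb>
  local used=$1 soft=100 hard=200
  if   [ "$used" -ge "$hard" ]; then echo deny   # writes past the hard limit fail
  elif [ "$used" -gt "$soft" ]; then echo warn   # allowed, but the user is warned at login
  else                               echo ok
  fi
}
quota_status 50; quota_status 150; quota_status 250   # ok / warn / deny
```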
3.2 Enabling Disk Quotas
1) Check whether the xfsprogs and quota software packages are installed on the CentOS 7 system.
```
[root@localhost ~]# rpm -q xfsprogs quota                # check whether the packages are installed
xfsprogs-4.5.0-12.el7.x86_64
quota-4.01-14.el7.x86_64
```
2) Add disk quota attributes and enable quota support for the file system.
Method 1: Run the mount command with the usrquota and grpquota mount options.
```
[root@localhost ~]# umount /dev/sdb1                     # unmount first
[root@localhost ~]# mount -o usrquota,grpquota /dev/sdb1 /data/aa   # remount with quota options
```
Method 2: Options added with a manual mount are lost at the next mount. To make them permanent, write the mount options into the /etc/fstab configuration file; they will then survive remounts and reboots.
```
[root@localhost ~]# vim /etc/fstab                       # enable disk quota support permanently
UUID=e0b714cd-c33e-42b2-a051-1e1f3333b4b7 /data/aa xfs defaults,usrquota,grpquota 0 0
[root@localhost ~]# mount -a                             # remount the file systems
```
3.3 Disk Quota Management
3.3.1 Editing Quota Settings for User and Group Accounts
Edit quota settings with the xfs_quota command:
```
xfs_quota -x -c "limit -u bsoft=N bhard=N isoft=N ihard=N username" mount_point
```
Common options:
- -x: enables expert mode; in this mode all management commands that modify the quota system can be used.
- -c: directly invokes management commands.
- -u: specifies the user account object.
- -g: specifies the group account object.
Limit fields:
- Bsoft: Sets the soft limit of the disk capacity (default unit: KB).
- Bhard: Sets the hard limit of the disk capacity (default unit: KB).
- Isoft: Sets the soft limit for the number of disk files.
- Ihard: sets the hard limit for the number of files on the disk.
Example:
```
[root@localhost ~]# xfs_quota -x -c "limit -u bsoft=80M bhard=100M nancy" /data/aa   # capacity limits for user nancy
[root@localhost ~]# xfs_quota -x -c "limit -u isoft=4 ihard=5 nancy" /data/aa        # file-count limits for user nancy
```
3.3.2 Viewing Disk Quota Usage
The xfs_quota command can also display quota settings and usage:
```
xfs_quota -x -c "report options" mount_point
```
Report common options:
- -u: Views the user
- -g: Views groups
- -a: Reports on all file systems that have quotas enabled
- -b: Check the disk capacity
- -i: Displays the number of files
- -h: User-friendly display
Example:
```
[root@localhost ~]# xfs_quota -x -c "report -ubih" /data/aa    # query quota usage under /data/aa
User quota on /data/aa (/dev/sdb1)
                        Blocks                            Inodes
User ID      Used   Soft   Hard Warn/Grace     Used   Soft   Hard Warn/Grace
---------- --------------------------------- ---------------------------------
root            0      0      0  00 [------]      3      0      0  00 [------]
nancy           0    80M   100M  00 [------]      0      4      5  00 [------]
# User nancy has a 100M hard capacity limit and a 5-file hard limit; no files created yet.
```
3.3.3 Verifying the Disk Quota Function
1) Verify that user nancy has a hard capacity limit of 100 MB
```
[root@localhost ~]# chmod 777 /data/aa                   # allow other users to write
[root@localhost ~]# su nancy                             # switch to user nancy
[nancy@localhost root]$ cd /data/aa/                     # enter /data/aa
[nancy@localhost aa]$ dd if=/dev/zero of=/data/aa/yuji.txt bs=10M count=11
# try to write 110M into the file; only the first 100M succeeds
10+0 records out
104857600 bytes (105 MB) copied, 1.2945 s, 81.0 MB/s
[nancy@localhost aa]$ exit                               # back to root
exit
[root@localhost ~]# xfs_quota -x -c "report -ubih" /data/aa    # 100M used, one file created
User quota on /data/aa (/dev/sdb1)
                        Blocks                            Inodes
User ID      Used   Soft   Hard Warn/Grace     Used   Soft   Hard Warn/Grace
---------- --------------------------------- ---------------------------------
root            0      0      0  00 [------]      3      0      0  00 [------]
nancy         100M    80M   100M  00 [6 days]      1      4      5  00 [------]
```
2) Verify that user nancy has a hard limit of 5 files
```
[root@localhost ~]# su nancy
[nancy@localhost root]$ cd /data/aa
[nancy@localhost aa]$ touch file1.txt
# fails: the 100M capacity is already used up, even though the file limit is not reached
touch: cannot touch 'file1.txt': Disk quota exceeded
[nancy@localhost aa]$ rm -rf *                           # empty the directory
[nancy@localhost aa]$ touch file{1..10}.txt              # create 10 files at once; the last 5 exceed the quota
touch: cannot touch 'file6.txt': Disk quota exceeded
touch: cannot touch 'file7.txt': Disk quota exceeded
touch: cannot touch 'file8.txt': Disk quota exceeded
touch: cannot touch 'file9.txt': Disk quota exceeded
touch: cannot touch 'file10.txt': Disk quota exceeded
[nancy@localhost aa]$ ls                                 # only 5 files were created
file1.txt  file2.txt  file3.txt  file4.txt  file5.txt
[nancy@localhost aa]$ exit                               # back to root
exit
[root@localhost ~]# xfs_quota -x -c "report -ubih" /data/aa    # 5 files created
User quota on /data/aa (/dev/sdb1)
                        Blocks                            Inodes
User ID      Used   Soft   Hard Warn/Grace     Used   Soft   Hard Warn/Grace
---------- --------------------------------- ---------------------------------
root            0      0      0  00 [------]      3      0      0  00 [------]
nancy           0    80M   100M  00 [------]      5      4      5  00 [6 days]
```
3.4 Canceling Disk Quotas
Command format:
```
xfs_quota -x -c "disable -up" mount_point   # temporarily stop enforcing quotas; usage is
                                            # still tracked, just not enforced
xfs_quota -x -c "enable -up" mount_point    # restore normal enforcement (pairs with disable)
xfs_quota -x -c "off -up" mount_point       # switch quota enforcement off completely; it cannot be
                                            # re-enabled until the file system is unmounted and
                                            # remounted (the limits are kept, not removed)
xfs_quota -x -c "remove -p" mount_point     # remove the quota limits themselves; must be run in the
                                            # "off" state (note: "remove -p" removes all project limits)
```
Example:
1) Use "off -up" to turn off quota enforcement, then test whether user nancy can create files
```
[root@localhost ~]# xfs_quota -x -c "off -up" /data/aa   # switch quota enforcement off completely
[root@localhost ~]# su nancy                             # switch to user nancy
[nancy@localhost root]$ cd /data/aa
[nancy@localhost aa]$ mkdir dir1                         # succeeds: the 5-file limit is no longer enforced
[nancy@localhost aa]$ ls
dir1  file1.txt  file2.txt  file3.txt  file4.txt  file5.txt
```
2) Unmount and mount the disk again to check whether data in the disk quota table still exists
```
[root@localhost ~]# umount /data/aa                      # unmount
[root@localhost ~]# mount -o usrquota,grpquota /dev/sdb1 /data/aa   # remount with quota options
[root@localhost ~]# xfs_quota -x -c "report -ubih" /data/aa   # the quota table still exists
User quota on /data/aa (/dev/sdb1)
                        Blocks                            Inodes
User ID      Used   Soft   Hard Warn/Grace     Used   Soft   Hard Warn/Grace
---------- --------------------------------- ---------------------------------
root            0      0      0  00 [------]      3      0      0  00 [------]
nancy           0    80M   100M  00 [------]      6      4      5  00 [-none-]
```
3) Use "off -up" to turn off quota enforcement, then "remove -p" to remove all limits.
[root@localhost ~]# xfs_quota -x -c "off -up" /data/aa
[root@localhost ~]# xfs_quota -x -c "remove -p" /data/aa