This is the storage situation before capacity expansion. The screenshot is an old one from two months ago, when Slave1 to Slave3 were already full; the fourth machine is the DataNode added by horizontal capacity expansion. For the main points of horizontal expansion, see this article: Precautions for adding a data node in HDFS.
Vertical capacity expansion
1. Mount the disks (Huawei Cloud). Carefully distinguish the disks already in use from the disks still to be mounted; do not mix them up.
Run df -Th and fdisk -l and compare the output: df shows the disks already mounted on the system, while fdisk lists all disks attached to the machine. In this expansion plan each machine gets 5 * 1T of new capacity, and /dev/vdg is the disk to be partitioned and mounted this time.
[root@hadoopSlave4 dev_vdf]# df -Th | grep dev
devtmpfs   devtmpfs  8.4G     0  8.4G   0% /dev
tmpfs      tmpfs     8.4G     0  8.4G   0% /dev/shm
/dev/vda1  ext4       43G  2.5G   38G   7% /
/dev/vdb   ext4      2.2T  915G  1.2T  45% /home
/dev/vdc   ext4      1.1T  160M  1.1T   1% /mnt/dev_vdc
/dev/vdd   ext4      1.1T   80M  1.1T   1% /mnt/dev_vdd
/dev/vde   ext4      1.1T   80M  1.1T   1% /mnt/dev_vde
/dev/vdf   ext4      1.1T   80M  1.1T   1% /mnt/dev_vdf
[root@hadoopSlave4 dev_vdf]# fdisk -l | grep dev
Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
/dev/vda1   *     2048   83886079   41942016   83  Linux
Disk /dev/vdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
Disk /dev/vdc: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Disk /dev/vdd: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Disk /dev/vde: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Disk /dev/vdf: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
Disk /dev/vdg: 1073.7 GB, 1073741824000 bytes, 2097152000 sectors
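A compact way to double-check which disk is the new, untouched one is lsblk, which lists every block device with its filesystem type and mount point in one view (not part of the original write-up, just a convenient cross-check):
# The disk whose FSTYPE and MOUNTPOINT columns are empty is the one still to be partitioned and mounted
lsblk -f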
2. Create a mount directory
mkdir /mnt/dev_vdg
cd /mnt/dev_vdg
3. Enter the disk partition tool. For details on how to use it, see the Huawei Cloud documentation: Reference
(1) fdisk /dev/vdg
(2) Type n and press Enter (create a new partition)
(3) Press Enter four times to accept the default partition type, number, and start/end sectors
(4) Type p and press Enter to print the partition table and confirm the result
(5) Type w and press Enter to write the changes and exit (a scripted equivalent is sketched after this list)
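If the same partitioning has to be repeated on several disks, the interactive steps above can be scripted. This is a minimal sketch assuming the util-linux fdisk and an MBR disk; try it on a non-production disk first:
# Feed the same keystrokes as the interactive session:
# n (new partition), four Enters (accept the defaults), p (print), w (write and quit)
printf 'n\n\n\n\n\np\nw\n' | fdisk /dev/vdg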
4. Follow-up operations
Re-read the partition information without restarting the system (the usual command for this is noted after the output below), then format the new disk and mount it to the target directory:
# Format the new disk as ext4
mkfs -t ext4 /dev/vdg
# Mount the partition to the specified directory
mount /dev/vdg /mnt/dev_vdg
[root@hadoopSlave4 dev_vdg]# df -Th
tmpfs      tmpfs     8.4G     0  8.4G   0% /sys/fs/cgroup
/dev/vda1  ext4       43G  2.5G   38G   7% /
tmpfs      tmpfs     1.7G     0  1.7G   0% /run/user/0
/dev/vdb   ext4      2.2T  915G  1.2T  45% /home
/dev/vdc   ext4      1.1T  164M  1.1T   1% /mnt/dev_vdc
/dev/vdd   ext4      1.1T   80M  1.1T   1% /mnt/dev_vdd
/dev/vde   ext4      1.1T   80M  1.1T   1% /mnt/dev_vde
/dev/vdf   ext4      1.1T   80M  1.1T   1% /mnt/dev_vdf
/dev/vdg   ext4      1.1T   80M  1.1T   1% /mnt/dev_vdg
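The "re-read the partition information" step is not shown as a command above; on most Linux distributions it is usually done with partprobe (named here as an assumption, not as the original author's exact command):
# Ask the kernel to re-read the partition table without rebooting
partprobe /dev/vdg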
5. Write the mount information to the /etc/fstab file; otherwise the disk will have to be mounted again manually after the next reboot.
UUID=fc787e42-0701-45d1-9f02-e3689561aa42 /mnt/dev_vdg ext4 defaults 1 3
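The UUID above belongs to this particular machine. To look up the UUID of your own filesystem and to verify the new fstab entry before the next reboot, the standard commands are:
# Print the UUID of the new filesystem
blkid /dev/vdg
# Mount everything listed in /etc/fstab; an error here means the new entry needs fixing
mount -a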
HDFS cluster
1. Edit the hdfs-site.xml file on the corresponding DataNode server; /home/data/hadoop/data is the original data directory:
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/data/hadoop/data,/mnt/dev_vdc,/mnt/dev_vdd,/mnt/dev_vde,/mnt/dev_vdf,/mnt/dev_vdg</value>
</property>
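Before restarting, it is worth confirming that the DataNode's OS user can write to the new mount points, otherwise the DataNode may fail to start. A sketch assuming the service runs as user hadoop (adjust the user and group to your own deployment):
# Give the DataNode user ownership of the new data directories
chown -R hadoop:hadoop /mnt/dev_vdc /mnt/dev_vdd /mnt/dev_vde /mnt/dev_vdf /mnt/dev_vdg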
2. Restart the corresponding DN
/home/soft/hadoop2.10/sbin/hadoop-daemon.sh stop datanode
/home/soft/hadoop2.10/sbin/hadoop-daemon.sh start datanode
3. After the restart, wait for the DataNode to reconnect to the cluster, then check the DataNode information in HDFS.
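One way to confirm that the DataNode has rejoined and that the new capacity is visible is the dfsadmin report (the NameNode web UI shows the same information):
# The DataNode's "Configured Capacity" should grow by roughly the size of the newly added disks
hdfs dfsadmin -report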