1. HDFS core parameters
1.1 NameNode Memory Production Configuration
1. NameNode memory: each file block occupies about 150 bytes of NameNode memory. For example, how many file blocks can a server with 128 GB of memory manage? 128 GB × 1024 × 1024 × 1024 bytes / 150 bytes ≈ 910 million blocks.
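A quick sanity check of this arithmetic can be run in any shell (integer division, so the result is truncated):
[Tom@hadoop102 ~]$ echo $((128 * 1024 * 1024 * 1024 / 150))
916259689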
For the Hadoop 2.x series, configure the NameNode memory in hadoop-env.sh. The default NameNode memory is 2000 MB. If the server has 4 GB of memory, the NameNode can be given 3 GB. Set the following in hadoop-env.sh.
HADOOP_NAMENODE_OPTS=-Xmx3072m
For the Hadoop 3.x series, NameNode memory is dynamically allocated by default, as described in hadoop-env.sh:
# The maximum amount of heap to use (Java -Xmx). If no unit
# is provided, it will be converted to MB. Daemons will
# prefer any Xmx setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MAX=
# The minimum amount of heap to use (Java -Xms). If no unit
# is provided, it will be converted to MB. Daemons will
# prefer any Xms setting in their respective _OPT variable.
# There is no default; the JVM will autoscale based upon machine
# memory size.
# export HADOOP_HEAPSIZE_MIN=
HADOOP_NAMENODE_OPTS=-Xmx102400m
Check the memory usage of NameNode
[Tom@hadoop102 hadoop-3.1.3]$ jps
3136 JobHistoryServer
3200 Jps
2947 NodeManager
2486 NameNode
2622 DataNode
[Tom@hadoop102 hadoop-3.1.3]$ jmap -heap 2486
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 478150656 (456.0MB)
Check the memory usage of DataNode
[Tom@hadoop102 hadoop-3.1.3]$ jmap -heap 2622
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 478150656 (456.0MB)
The memory of NameNode and DataNode on hadoop102 is automatically allocated, and the two values are equal. This is not reasonable; each daemon should be sized explicitly.
Experience reference: docs.cloudera.com/documentati…
Modify the values in hadoop-env.sh:
export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"
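The new heap sizes only take effect after a restart. A minimal way to verify the change (a sketch, assuming JDK 8, whose jmap supports -heap, and the hadoop102 node from this article):
# restart the NameNode, then check its heap by looking up the pid from jps
[Tom@hadoop102 hadoop-3.1.3]$ hdfs --daemon stop namenode
[Tom@hadoop102 hadoop-3.1.3]$ hdfs --daemon start namenode
[Tom@hadoop102 hadoop-3.1.3]$ jmap -heap $(jps | awk '$2=="NameNode" {print $1}')
MaxHeapSize should now correspond to the configured 1024 MB rather than the auto-allocated value.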
1.2 NameNode Concurrent Heartbeat Configuration
NameNode has a worker thread pool that handles concurrent heartbeats from different Datanodes and concurrent metadata operations from clients. For large clusters or clusters with a large number of clients, you usually need to increase this parameter. The default value is 10.
<property>
<name>dfs.namenode.handler.count</name>
<value>21</value>
</property>
Enterprise experience: dfs.namenode.handler.count = 20 × ln(cluster size). For example, if the cluster has 3 DataNodes, set this parameter to 21. The value can be computed with a bit of Python, as follows.
[Tom@hadoop102 hadoop-3.1.3]$ python
Python 2.7.5 (default, Oct 14 2020, 14:45:30)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
>>> print int(20*math.log(3))
21
>>> quit()
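The same formula can also be run as a one-liner, so it is easy to recompute when the cluster grows; the cluster size of 10 here is just an example:
[Tom@hadoop102 hadoop-3.1.3]$ python -c 'import math; print int(20*math.log(10))'
46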
1.3 Enabling the Recycle Bin Configuration
After the recycle bin function is enabled, deleted files can be restored before their retention time expires, which protects against accidental deletion and serves as a backup.
1. Working mechanism of recycle bin
2. Enable the recycle bin function
The default value fs.trash.interval = 0; 0 disables the recycle bin, and any other value sets the retention time (in minutes) of deleted files.
The default value fs.trash.checkpoint.interval = 0 is the interval (in minutes) at which the trash is checked; if it is 0, the value is set equal to fs.trash.interval.
It is required that fs.trash.checkpoint.interval <= fs.trash.interval.
3. Enable the recycle bin: modify core-site.xml and set the trash retention time to 1 minute.
<property>
<name>fs.trash.interval</name>
<value>1</value>
</property>
4. Check the recycle bin: the directory path in the HDFS cluster is /user/Tom/.Trash/….
5. Note: Files deleted directly from the web page will not go to the recycle bin.
6. Files deleted through the API do not go to the recycle bin by default; call moveToTrash() to move them there:
// a minimal use of the org.apache.hadoop.fs.Trash API
Trash trash = new Trash(conf);
trash.moveToTrash(path); // moves the file under the current user's .Trash/Current
7. Only files deleted with the hadoop fs -rm command on the command line go to the recycle bin.
[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /input
2021-06-24 18:20:36.515 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hadoop102:8020/input' to trash at: hdfs://hadoop102:8020/user/Tom/.Trash/Current/input
8. Restore the recycle bin data
[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /user/Tom/.Trash/Current/input /input
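The recycle bin can also be emptied ahead of the fs.trash.interval timeout. hadoop fs -expunge creates a checkpoint and permanently removes checkpoints older than the retention threshold; use it with care, since expunged files cannot be restored:
[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -expunge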
2. HDFS cluster pressure test
In the enterprise, people care a great deal about how long it takes to upload the data pulled from the Java backend every day to the cluster, and consumers care how long it takes to pull the data they need from HDFS.
To check HDFS read and write performance, you need to perform pressure tests on clusters in production environments.
The read and write performance of HDFS is mainly affected by the network and the disks. For easy testing, the networks of the hadoop102, hadoop103, and hadoop104 virtual machines are all set to 100 Mbps.
Note that 100 Mbps is measured in bits, while disk speed is measured in bytes. Since 1 byte = 8 bits, 100 Mbps / 8 = 12.5 MB/s.
To test the network speed, go to the /opt/software directory on hadoop102 and start a simple HTTP server:
[Tom@hadoop102 software]$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
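Then, from another node, pull a large file from that server and watch the transfer rate that wget reports; the tarball name here is only an example of any large file placed in /opt/software:
[Tom@hadoop103 software]$ wget http://hadoop102:8000/jdk-8u212-linux-x64.tar.gz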
2.1 Testing HDFS Write Performance
Underlying principle of the write test
Test contents: Write five 128 MB files to the HDFS cluster
Note: -nrFiles n is the number of MapTasks generated. In a production environment, you can check the number of CPU cores on hadoop103:8088 and set it to (number of CPU cores - 1).
[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 128MB
2021-06-24 21:58:25.548 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Date & time: Thu Jun 24 21:58:25 CST 2021
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Number of files: 5
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Total MBytes processed: 640
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Throughput mb/sec: 0.88
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Average IO rate mb/sec: 0.88
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: IO rate std deviation: 0.04
2021-06-24 21:58:25.568 INFO fs.TestDFSIO: Test exec time sec: 246.54
2021-06-24 21:58:25.568 INFO fs.TestDFSIO:
Number of files: the number of MapTasks generated, usually (number of CPU cores in the cluster - 1); for our virtual-machine test, allocate according to (actual physical memory - 1).
Total MBytes processed: the total size of the files processed.
Throughput mb/sec: the throughput of a single MapTask. Calculation: total file size processed / the sum of every MapTask's write time. Total cluster throughput = number of MapTasks generated × throughput of a single MapTask.
Average IO rate mb/sec: the average MapTask throughput. Calculation: sum (file size / write time) over each MapTask, then divide by the number of tasks.
IO rate std deviation: the standard deviation, reflecting the difference between MapTasks; the smaller it is, the more balanced the load.
Note: If exceptions occur during the test, set virtual memory detection to false in yarn-site.xml, then distribute the configuration and restart the cluster.
<!-- Whether to start a thread to check the amount of virtual memory each task is using; if a task exceeds the allocated value, it will be killed. Default is true. -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Analysis of test results
The first replica is not counted because it is written locally.
Replicas written during the test: 5 files × 2 remote copies = 10
Measured per-task speed: 0.88 MB/s
Aggregate measured speed: 0.88 MB/s × 10 files ≈ 8.8 MB/s
Bandwidth of the three servers: 12.5 + 12.5 + 12.5 = 37.5 MB/s
So the network resources are not saturated.
If the measured speed is far below the network bandwidth and does not meet the job requirements, consider using solid-state drives or adding disks.
If the client is not on a cluster node, all three replicas participate in the calculation.
2.2 Testing HDFS Read Performance
Test contents: Read five 128 MB files in the HDFS cluster
[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -read -nrFiles 5 -fileSize 128MB
2021-06-25 17:34:41.179 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
2021-06-25 17:34:41.181 INFO fs.TestDFSIO: Date & time: Fri Jun 25 17:34:41 CST 2021
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: Number of files: 5
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: Total MBytes processed: 640
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: Throughput mb/sec: 4.6
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: Average IO rate mb/sec: 4.74
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: IO rate std deviation: 0.93
2021-06-25 17:34:41.182 INFO fs.TestDFSIO: Test exec time sec: 82.47
2021-06-25 17:34:41.182 INFO fs.TestDFSIO:
Delete the data generated by the test
[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -clean
3. HDFS multiple directories
3.1 NameNode Multi-Directory Configuration
Multiple local directories of NameNode can be configured and each directory stores the same content, which improves reliability.
The configuration is as follows
(1) Add the following content to the hdfs-site.xml file
<property>
<name>dfs.namenode.name.dir</name>
<value>file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>
</property>
Note: After making this configuration, you can choose not to distribute it, because the disk layout on each server node may differ.
(2) Stop the cluster and delete all data in data and logs of the three nodes.
[Tom@hadoop102 hadoop-3.1.3]$ rm -rf data/ logs/
[Tom@hadoop103 hadoop-3.1.3]$ rm -rf data/ logs/
[Tom@hadoop104 hadoop-3.1.3]$ rm -rf data/ logs/
(3) Format the cluster and start it.
[Tom@hadoop102 hadoop-3.1.3]$ bin/hdfs namenode -format
[Tom@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh
View the results
[Tom@hadoop102 dfs]$ ll
total 12
drwx------. 3 Tom Tom 4096 Dec 11 08:03 data
drwxrwxr-x. 3 Tom Tom 4096 Dec 11 08:03 name1
drwxrwxr-x. 3 Tom Tom 4096 Dec 11 08:03 name2
The contents of name1 and name2 are identical.
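A quick way to confirm this from the shell (a sketch, assuming the default storage location under hadoop.tmp.dir); no output from diff means the two directories match:
[Tom@hadoop102 dfs]$ diff -r name1/current name2/current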
3.2 Configuring Multiple Directories for DataNode
DataNodes can be configured with multiple directories, and each directory stores different data (the directories are not replicas of each other).
The configuration is as follows
(1) Add the following content to the hdfs-site.xml file
<property>
<name>dfs.datanode.data.dir</name>
<value>file://${hadoop.tmp.dir}/dfs/data1,file://${hadoop.tmp.dir}/dfs/data2</value>
</property>
View the results
[Tom@hadoop102 dfs]$ ll
total 12
drwx------. 3 Tom Tom 4096 Apr 4 14:22 data1
drwx------. 3 Tom Tom 4096 Apr 4 14:22 data2
drwxrwxr-x. 3 Tom Tom 4096 Dec 11 08:03 name1
drwxrwxr-x. 3 Tom Tom 4096 Dec 11 08:03 name2
Upload a file to the cluster and check the contents of the two folders again; they differ (one contains the data and the other does not).
[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -put wcinput/word.txt /
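One way to see which directory actually received the blocks (the block-pool subdirectory names are cluster-specific, so find is used instead of a full path):
[Tom@hadoop102 dfs]$ find data1 data2 -name 'blk_*'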
3.3 Cluster Data Balancing: Inter-Disk Data Balancing
In a production environment, when disk space runs short, an extra hard disk is added. If the newly mounted disk has no data, you can run the disk data balancing command (a new feature of Hadoop 3.x).
(1) Generate a balancing plan (with only one disk, no plan can be generated)
[Tom@hadoop102 hadoop-3.1.3]$ hdfs diskbalancer -plan hadoop102
(2) Execute the balancing plan
[Tom@hadoop102 hadoop-3.1.3]$ hdfs diskbalancer -execute hadoop102.plan.json
(3) View the execution status of the current balancing task
[Tom@hadoop102 hadoop-3.1.3]$ hdfs diskbalancer -query hadoop102
(4) Cancel the balancing task
[Tom@hadoop102 hadoop-3.1.3]$ hdfs diskbalancer -cancel hadoop102.plan.json
4. Expand and shrink the HDFS cluster
4.1 Adding a Whitelist
Whitelist: Whitelisted host IP addresses can be used to store data.
In the enterprise, a whitelist is configured to prevent malicious access by attackers.
To configure a whitelist, perform the following steps:
Create whitelist and blacklist files in the /opt/module/hadoop-3.1.3/etc/hadoop directory on the NameNode node
(1) Create a whitelist
[Tom@hadoop102 hadoop]$ vim whitelist
Add the host names to whitelist, assuming the nodes currently working in the cluster are hadoop102 and hadoop103
hadoop102
hadoop103
(2) Create a blacklist and keep it empty
[Tom@hadoop102 hadoop]$ touch blacklist
Add the dfs.hosts and dfs.hosts.exclude configuration parameters to the hdfs-site.xml configuration file
<!-- Whitelist -->
<property>
<name>dfs.hosts</name>
<value>/opt/module/hadoop-3.1.3/etc/hadoop/whitelist</value>
</property>
<!-- Blacklist -->
<property>
<name>dfs.hosts.exclude</name>
<value>/opt/module/hadoop-3.1.3/etc/hadoop/blacklist</value>
</property>
Distribute the configuration files whitelist and hdfs-site.xml
[Tom@hadoop104 hadoop]$ xsync hdfs-site.xml whitelist
The first time you add a whitelist, you must restart the cluster. If it is not the first time, you only need to refresh the NameNode.
[Tom@hadoop102 hadoop-3.1.3]$ myhadoop.sh stop
[Tom@hadoop102 hadoop-3.1.3]$ myhadoop.sh start
View the DataNodes in a web browser: http://hadoop102:9870/dfshealth.html#tab-datanode
Failed to upload data on hadoop104
[Tom@hadoop104 hadoop-3.1.3]$ hadoop fs -put NOTICE.txt /
Modify the whitelist a second time and add hadoop104
[Tom@hadoop102 hadoop]$ vim whitelist
hadoop102
hadoop103
hadoop104
Refresh the NameNode
[Tom@hadoop102 hadoop-3.1.3]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
View the DataNodes in a web browser: http://hadoop102:9870/dfshealth.html#tab-datanode
4.2 Commissioning a new server
Requirement: as the business grows, the amount of data becomes larger and larger, and the capacity of the existing data nodes can no longer meet the storage requirements, so new data nodes need to be dynamically added to the existing cluster.
Environment preparation: (1) Clone another host, hadoop105, from hadoop100. (2) Change the IP address and host name
[root@hadoop105 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@hadoop105 ~]# vim /etc/hostname
(3) Copy hadoop102’s /opt/module directory and /etc/profile.d/my_env.sh to hadoop105
[Tom@hadoop102 opt]$ scp -r module/* Tom@hadoop105:/opt/module/
[Tom@hadoop102 opt]$ sudo scp /etc/profile.d/my_env.sh root@hadoop105:/etc/profile.d/my_env.sh
[Tom@hadoop105 hadoop-3.1.3]$ source /etc/profile
(4) Delete the historical Hadoop data (the data and logs directories) on hadoop105
[Tom@hadoop105 hadoop-3.1.3]$ rm -rf data/ logs/
(5) Configure passwordless SSH login from hadoop102 and hadoop103 to hadoop105
[Tom@hadoop102 .ssh]$ ssh-copy-id hadoop105
[Tom@hadoop103 .ssh]$ ssh-copy-id hadoop105
Procedure for commissioning new nodes: (1) Directly start the DataNode to associate it with the cluster
[Tom@hadoop105 hadoop-3.1.3]$ hdfs --daemon start datanode
[Tom@hadoop105 hadoop-3.1.3]$ yarn --daemon start nodemanager
Add new servers to the whitelist
(1) Add hadoop104 and hadoop105 to the whitelist and restart the cluster
[Tom@hadoop102 hadoop]$ vim whitelist
hadoop102
hadoop103
hadoop104
hadoop105
(2) Distribution
[Tom@hadoop102 hadoop]$ xsync whitelist
Refresh the NameNode
[Tom@hadoop102 hadoop-3.1.3]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
Upload files on Hadoop105
[Tom@hadoop105 hadoop-3.1.3]$ hadoop fs -put /opt/module/hadoop-3.1.3/LICENSE.txt /
4.3 Data Balancing between Servers
Experience in enterprise
In enterprise development, if tasks are frequently submitted on hadoop102 and hadoop104 and the number of replicas is 2, then due to data locality, hadoop102 and hadoop104 will accumulate too much data while hadoop103 stores only a small amount.
In another case, a newly commissioned server has little data, and the cluster balancing command also needs to be executed.
Enable the data balancing command
[Tom@hadoop105 hadoop-3.1.3]$ sbin/start-balancer.sh -threshold 10
The parameter 10 means that the difference in disk space utilization among the nodes in the cluster should not exceed 10%; adjust it according to the actual situation.
Stop the data balancing command
[Tom@hadoop105 hadoop-3.1.3]$ sbin/stop-balancer.sh
Note: Because HDFS needs to start a separate Rebalance Server to perform the rebalance operation, try not to run start-balancer.sh on the NameNode node.
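The balancer's traffic can also be capped so that rebalancing does not starve running jobs; the value is in bytes per second, and 10 MB/s here is only an example:
[Tom@hadoop102 hadoop-3.1.3]$ hdfs dfsadmin -setBalancerBandwidth 10485760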
4.4 Decommissioning Servers via the Blacklist
Blacklist: Blacklisted host IP addresses cannot be used to store data.
Enterprise: Configure a blacklist to decommission servers.
The procedure for configuring the blacklist is as follows:
Edit the blacklist file in the /opt/module/hadoop-3.1.3/etc/hadoop directory
[Tom@hadoop102 hadoop]$ vim blacklist
Add the following host name (the node to be decommissioned): hadoop105
Note: If the whitelist is not configured, add the dfs.hosts.exclude configuration parameter to the hdfs-site.xml configuration file
<!-- Blacklist -->
<property>
<name>dfs.hosts.exclude</name>
<value>/opt/module/hadoop-3.1.3/etc/hadoop/blacklist</value>
</property>
Distribute the configuration files blacklist and hdfs-site.xml
[Tom@hadoop104 hadoop]$ xsync hdfs-site.xml blacklist
The first time you add a blacklist, you must restart the cluster. If it is not the first time, you only need to refresh the NameNode.
[Tom@hadoop102 hadoop-3.1.3]$ hdfs dfsadmin -refreshNodes
Refresh nodes successful
Check the web browser: the status of the decommissioning node is "Decommission In Progress", indicating that the data node is copying its blocks to other nodes.
Wait until the status becomes "Decommissioned" (all blocks have been copied), then stop the node and the node manager. Note: If the number of replicas is 3 and the number of in-service nodes is less than or equal to 3, the node cannot be decommissioned successfully; you need to reduce the number of replicas before decommissioning.
[Tom@hadoop105 hadoop-3.1.3]$ hdfs --daemon stop datanode
[Tom@hadoop105 hadoop-3.1.3]$ yarn --daemon stop nodemanager
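Besides the web page, the decommission state can be checked from the command line; hdfs dfsadmin -report prints each DataNode along with its Decommission Status:
[Tom@hadoop102 hadoop-3.1.3]$ hdfs dfsadmin -report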
If the data is unbalanced, you can use commands to rebalance the cluster
[Tom@hadoop102 hadoop-3.1.3]$ sbin/start-balancer.sh -threshold 10