The Hadoop configuration is in place and the environment variables are set; start Hadoop and run the following steps as the uarhadoop user.
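If Hadoop is not already running, it can be started with the standard 2.x scripts. A minimal sketch, assuming the default sbin layout under the installation directory used below:

su - uarhadoop
cd /var/www/html/hadoop-2.7.3
sbin/start-dfs.sh     # NameNode, DataNode(s), SecondaryNameNode
sbin/start-yarn.sh    # ResourceManager, NodeManager(s)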
1. Create the directories
Create the hdfs/name, hdfs/data, and tmp directories under the Hadoop installation directory:
cd /var/www/html/hadoop-2.7.3
mkdir -p hdfs/data
mkdir -p hdfs/name
mkdir tmp
2. Format namenode
hdfs namenode -format
3. Problems
3.1. Cannot remove current directory: /usr/local/hadoop/hdfs/name/current
This happens because hdfs namenode -format was run more than once.
The fix is to delete the current folder under hdfs/name and hdfs/data, reformat, and then restart Hadoop.
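A minimal sketch of the recovery steps, assuming the directory layout from step 1:

cd /var/www/html/hadoop-2.7.3
rm -rf hdfs/name/current hdfs/data/current   # remove the stale metadata left by the earlier format
hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh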
3.2. jps shows process information unavailable
To fix this, go to the local file system's /tmp directory, delete the hsperfdata_{username} folder there, and restart Hadoop.
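For example, assuming the processes run as the uarhadoop user:

rm -rf /tmp/hsperfdata_uarhadoop   # stale JVM performance data from a process that no longer exists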
View on the Web:
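By default in Hadoop 2.7.x the NameNode UI listens on port 50070 and the YARN ResourceManager UI on port 8088, so on the master node the pages are at http://master:50070 and http://master:8088.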
Installation successful!
4. Start testing
4.1. First, create the input folder on HDFS
hdfs dfs -mkdir -p /user/hadoop/input
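You can verify that the directory was created with a listing:

hdfs dfs -ls /user/hadoop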
4.2. Create the input data, using the /etc/protocols file as test input
Upload the local file to HDFS with put:
hdfs dfs -put /etc/protocols /user/uarhadoop/input
View content:
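The uploaded file can also be read directly from HDFS (assuming the upload path above):

hdfs dfs -cat /user/uarhadoop/input/protocols | head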
4.3. Run the test
Count the words beginning with "a" in the /etc/protocols file:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input out 'a*'
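Note that input and out are relative HDFS paths, so they resolve under the HDFS home directory of the user running the job, i.e. /user/uarhadoop/input and /user/uarhadoop/out here.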
During execution, the following error appears:
17/01/14 23:17:38 INFO ipc.Client: Retrying connect to server: master/192.168.1.141:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
This is caused by the JobHistoryServer not being started. A solution is described at blog.csdn.net/wuxintdrh/a…
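In Hadoop 2.x the history server (which serves completed-job information on port 10020) can be started with the bundled daemon script:

sbin/mr-jobhistory-daemon.sh start historyserver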
Run the command again and it completes successfully.
Here are the results:
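The output can also be read back from HDFS (assuming the out directory from the command above):

hdfs dfs -cat out/*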