
Preface

Recently, while chatting with readers, one of them mentioned that the free server she had claimed was still sitting unused, so here is one way to put it to work.

She also gave me the idea by bringing up pseudo-distributed deployment of Hadoop.

The freelance work I do often needs a Hadoop cluster, and running one locally has several problems: slow startup, fiddly operation, and heavy memory usage.

For those reasons, instead of deploying a full cluster, I chose the pseudo-distributed deployment of Hadoop 3.x.

1. Getting a free server

I previously posted a guide to claiming a free server:

That event has since ended. If you are a student, though, buying a server is still very cheap: only 9.9 yuan/month through the Alibaba Cloud developer growth plan.

2. Select and configure the server

The system image and application image do not need to be changed. Keep the default values (WordPress, CentOS 7.3).

You need to enable root access and set a root password.

Use your local terminal (macOS) or CMD (Windows) to open an SSH connection:

ssh root@****

Then enter the root password you set earlier (note: the password is not echoed as you type).

If SSH complains that the remote host identification has changed (for example, after the server was reinstalled), clean up the old key first:

ssh-keygen -R XX.XX.XX.XX

Then connect via SSH again and answer yes to the host-key prompt.

Ok, now we are logged in to the Alibaba Cloud server.

3. Configure the Java environment

First, download the OpenJDK 8 tarball:

wget https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz

Then unpack

tar -zxvf openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz

Move it into place and configure the Java path:

mv java-se-8u41-ri/ /usr/java8
echo 'export JAVA_HOME=/usr/java8' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile

Check whether the installation succeeded:

java -version

If the installation succeeded, this prints the OpenJDK version information.
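If java -version reports "command not found" instead, check that the environment variables actually took effect. This quick check is my own addition, not part of the original walkthrough:

# Confirm the variables set above are visible in the current shell
echo $JAVA_HOME          # expect: /usr/java8
ls $JAVA_HOME/bin/java   # the java binary should exist at this path
source /etc/profile      # re-run this if the variables come up empty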

4. Install Hadoop

# Download Hadoop from the Tsinghua mirror (faster for users in China)
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.2.2/hadoop-3.2.2.tar.gz

Decompress as usual

tar -zxvf hadoop-3.2.2.tar.gz -C /opt/
mv /opt/hadoop-3.2.2 /opt/hadoop

Hadoop 2.x alternative (update):

wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
tar -zxvf hadoop-2.10.1.tar.gz -C /opt/
mv /opt/hadoop-2.10.1 /opt/hadoop

Configure the environment variables:

echo 'export HADOOP_HOME=/opt/hadoop/' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile

Set JAVA_HOME for YARN and Hadoop:

echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/yarn-env.sh
echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/hadoop-env.sh

Verify the Hadoop installation:

hadoop version

If the version information is displayed, the installation succeeded.

5. Edit core-site.xml and hdfs-site.xml with vim

vim /opt/hadoop/etc/hadoop/core-site.xml

This opens the file in vim.

Press i to enter INSERT mode.

Move the cursor between the <configuration> tags and paste in the following (hadoop.tmp.dir sets where temporary files live; fs.defaultFS sets the HDFS address):

<property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop/tmp</value>
        <description>location to store temporary files</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>

Press Esc to leave INSERT mode, then type ":wq" to save and exit vim.

Do the same for hdfs-site.xml:

vim /opt/hadoop/etc/hadoop/hdfs-site.xml
<property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/tmp/dfs/data</value>
    </property>
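As an optional sanity check that both files are picked up, hdfs getconf can print individual configuration keys. This check is my own addition:

# Read single keys straight from the XML configs (no daemons required)
hdfs getconf -confKey fs.defaultFS      # expect: hdfs://localhost:9000
hdfs getconf -confKey dfs.replication   # expect: 1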

To configure the SSH connection between master and slave (both are localhost in pseudo-distributed mode), run the following command and press Enter at every prompt to accept the defaults:

ssh-keygen -t rsa 

Then append the public key to the authorized keys:

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
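Before starting the daemons, it is worth verifying that passwordless login to localhost works, since the start scripts depend on it. This verification step is my own addition:

# authorized_keys must not be group/world writable, or sshd ignores it
chmod 600 ~/.ssh/authorized_keys
# Should log in and exit without prompting for a password
# (answer yes to the host-key question on the first connect)
ssh localhost exit && echo "passwordless SSH OK"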

Start Hadoop:

hadoop namenode -format
start-dfs.sh
start-yarn.sh

Pitfall:

ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation

I stumbled into a small pit here.

Solution:

Blog.csdn.net/ystyaosheng…
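For reference, the widely used fix for this Hadoop 3.x error is to declare which users run the HDFS and YARN daemons. I am summarizing the common solution here rather than quoting the linked post; since this guide does everything as root, that looks like:

# Append the daemon user definitions to hadoop-env.sh (root matches this guide)
echo 'export HDFS_NAMENODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export HDFS_DATANODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export HDFS_SECONDARYNAMENODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export YARN_RESOURCEMANAGER_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export YARN_NODEMANAGER_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh

Then run start-dfs.sh and start-yarn.sh again.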

Check whether everything is running:

jps

If everything started, jps lists processes such as NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (plus Jps itself).

6. Note: next, open the relevant ports in the Aliyun security group (its firewall); otherwise the web UIs cannot be reached from your browser.

Finally, enter xx.xx.xx.xx:9870 (HDFS NameNode UI) or xx.xx.xx.xx:8088 (YARN ResourceManager UI) in your browser to access your Hadoop. (For the Hadoop 2.x variant, the NameNode UI is on port 50070 instead of 9870.)
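If the pages do not load, it helps to first confirm the UIs answer on the server itself, which narrows the problem down to the security group. This check is my own addition:

# Run on the server; an HTTP response here means only the firewall is left to fix
curl -sI http://localhost:9870 | head -n 1   # HDFS NameNode UI
curl -sI http://localhost:8088 | head -n 1   # YARN ResourceManager UI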


HDFS API (new)

Format the NameNode and start it on its own:

hdfs namenode -format 
hadoop-daemon.sh start namenode 
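With HDFS up, a quick command-line smoke test confirms the filesystem responds; the Java API itself is covered in the post linked below. This snippet is my own addition:

# Create a home directory and list the root (needs only the NameNode;
# uploading files with hdfs dfs -put additionally requires a running DataNode)
hdfs dfs -mkdir -p /user/root
hdfs dfs -ls /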

Manor.blog.csdn.net/article/det…

Afterword

📢 You are welcome to like 👍, bookmark ⭐, and comment 📝; if there is an error, please point it out.

📢 This article was originally written by Manor 🙉

⚠️ If any image or quotation in this article infringes your rights, please contact me and I will remove it.

😄 Feel free to point out mistakes in the article and discuss them with me ~

If you run into any problems while deploying a Hadoop cluster, follow my official account and ask me ~