1. WSL Preparation
1.1 Download WSL from the Microsoft Store
I downloaded Ubuntu 18.04 LTS.
1.2 Preparing for Ubuntu
Once the download is complete, open "Turn Windows features on or off" and make sure that Hyper-V and Windows Subsystem for Linux are both enabled.
Then restart the system and launch WSL. The first launch takes a while to finish installing; once it is ready, you will be prompted to enter a username and then a password (twice).
After setup completes, run:
sudo apt-get update
Ubuntu ships with an SSH client; you also need to install the SSH server:
sudo apt-get install openssh-server
Start the SSH daemon manually:
sudo service ssh restart
Next, open the SSH configuration file and adjust a few settings:
sudo vi /etc/ssh/sshd_config
Change the port number (optional) and allow root login and password authentication, as in the sketch below.
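For reference, the relevant lines in sshd_config might end up looking something like this (the port 2222 is only an example):

# optional: change the listening port from the default 22
Port 2222
# allow root to log in over SSH
PermitRootLogin yes
# allow password-based authentication
PasswordAuthentication yes

After saving the file, run sudo service ssh restart again so the new settings take effect.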
Then test the connection:
ssh localhost
1.3 Configuring File Transfer between WIN10 and WSL
In Windows 10 File Explorer, enter \\wsl$ in the address bar.
Right-click the Ubuntu folder and choose "Map network drive" so that it appears under This PC.
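If you prefer the command line, the same mapping can be done from a Windows command prompt; the drive letter Z: and the distribution name Ubuntu-18.04 below are only examples, and wsl -l lists the distributions actually installed:

REM map the Ubuntu root file system to drive Z:
net use Z: \\wsl$\Ubuntu-18.04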
2. JDK Configuration
2.1 Downloading OpenJDK 8
Start by installing OpenJDK 8 and fixing any broken dependencies:
sudo apt install openjdk-8-jdk
sudo apt -f install
Run the following two commands to check whether the installation succeeded:
java -version
javac
2.2 Configuring Java Environment Variables
sudo vi /etc/profile
Add the following
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Let environment variables take effect
source /etc/profile
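To confirm that the variables were picked up, check them in the same shell:

# should print the OpenJDK install path configured above
echo $JAVA_HOME
# should report an OpenJDK 1.8 runtime
java -version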
3. Hadoop Configuration
3.1 Downloading Hadoop
Download hadoop-3.2.1.tar.gz and copy it into the Ubuntu root filesystem at:
C:\Users\username\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState\rootfs\usr\local
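Alternatively, the archive can be downloaded directly inside WSL instead of being copied through the Windows file system; in that case no restart is needed and the extraction step below can be run straight away. The URL follows the usual Apache archive layout for Hadoop 3.2.1, so verify it against the Apache download page before relying on it:

# download the Hadoop 3.2.1 release archive into /usr/local
cd /usr/local
sudo wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz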
Restart the PC, then open a WSL terminal, change to /usr/local/, and run:
sudo tar -xvf hadoop-3.2.1.tar.gz
To check whether Hadoop installed successfully, run the following from the /usr/local/hadoop-3.2.1 directory:
./bin/hadoop version
You should see the Hadoop version information printed.
Give read and write access
sudo chmod -R 777 /usr/local/hadoop-3.2.1
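If you would rather not open the directory to everyone, a narrower alternative is to hand ownership of the tree to your own user instead (same install path as above):

# make the current user the owner instead of using world-writable permissions
sudo chown -R "$USER" /usr/local/hadoop-3.2.1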
3.2 Configuring a Pseudo-Distributed Environment
To configure a pseudo-distributed Hadoop environment, modify the core-site.xml and hdfs-site.xml files in /usr/local/hadoop-3.2.1/etc/hadoop/.
sudo vi core-site.xml
Change core-site.xml to:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-3.2.1/data/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
sudo vi hdfs-site.xml
Change hdfs-site.xml to:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Next, configure hadoop-env.sh in the same directory:
sudo vi hadoop-env.sh
Find the JAVA_HOME line and change it to:
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
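If you prefer not to edit the file by hand, appending the line achieves the same result (the path below assumes the install directory used earlier):

# append the JAVA_HOME setting to hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' | sudo tee -a /usr/local/hadoop-3.2.1/etc/hadoop/hadoop-env.sh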
3.3 Starting a Cluster
First, format the NameNode
bin/hdfs namenode -format
Then start the NameNode:
sbin/hadoop-daemon.sh start namenode
And start the DataNode:
sbin/hadoop-daemon.sh start datanode
Finally, run the following to see which daemons are running:
jps
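If NameNode and DataNode both show up in the jps output, a quick way to confirm that HDFS is actually working is to create a directory and list the root of the file system (run from /usr/local/hadoop-3.2.1; the /user path is just an example):

# create a home directory for the current user in HDFS
bin/hdfs dfs -mkdir -p /user/"$USER"
# list the root of the HDFS namespace
bin/hdfs dfs -ls /

You can also open http://localhost:9870 in a browser to reach the NameNode web interface (the default port in Hadoop 3.x).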