Environment information

  1. Operating system: macOS Mojave 10.14.6
  2. JDK: 1.8.0_211 (installation location: /Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home)
  3. Hadoop: 3.2.1

Enable SSH

In “System Preferences” -> “Sharing”, turn on the “Remote Login” service so that the Mac accepts SSH connections.
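
The same setting can also be turned on from the terminal with the built-in systemsetup tool (a small sketch; it requires an administrator password):

# Enable the Remote Login (SSH) service from the command line
sudo systemsetup -setremotelogin on
# Verify that the service is now on
sudo systemsetup -getremotelogin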

Password-free login

  1. Run the following command to create a key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

This generates the id_rsa and id_rsa.pub files in the ~/.ssh directory.

  2. Run the following command to append the public key to the SSH authorized_keys file, so that SSH login no longer requires a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  3. Try SSH login; this time, no password is required:
Last login: Sun Oct 13 21:44:17 on ttys000
(base) zhaoqindeMBP:~ zhaoqin$ ssh localhost
Last login: Sun Oct 13 21:48:57 2019
(base) zhaoqindeMBP:~ zhaoqin$
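
If ssh localhost still prompts for a password, the usual cause is over-permissive file modes on the key files, which SSH rejects; tightening them is a safe fix:

# SSH ignores keys whose files are readable by other users
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa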

Download Hadoop

  1. Download Hadoop from: http://hadoop.apache.org/releases.html
  2. Extract the downloaded hadoop-3.2.1.tar.gz package; in this article it is extracted to ~/software/hadoop-3.2.1/ (see the commands sketched below).
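
For reference, the download and extraction can also be done entirely from the terminal (a sketch; the archive URL is one possible mirror, and any mirror from the releases page works):

# Download hadoop-3.2.1 from the Apache archive
curl -O https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
# Extract into ~/software
mkdir -p ~/software
tar -xzf hadoop-3.2.1.tar.gz -C ~/software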

If you only need Hadoop in single-machine (standalone) mode, you are done at this point. However, standalone mode has no HDFS, so the next step is to configure pseudo-distributed mode.
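
To confirm that single-machine mode works before continuing, you can run the grep example that ships with Hadoop (a sketch, run from the hadoop-3.2.1 directory; the input and output directory names are illustrative, and output must not exist beforehand):

# Copy some config files as input, then grep them with the bundled example jar
mkdir input
cp etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar grep input output 'dfs[a-z.]+'
cat output/*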

Pseudo-distributed mode settings

Go to the hadoop-3.2.1/etc/hadoop directory and set the following parameters:

  1. Open the hadoop-env.sh file and add the JAVA_HOME setting:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home
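If you are unsure of the exact JDK path on your machine, macOS ships a helper that prints it, and its output can be pasted into hadoop-env.sh:

# Print the home directory of the installed 1.8 JDK
/usr/libexec/java_home -v 1.8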
  2. Open the core-site.xml file and change the configuration node to the following:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
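
Optionally, a hadoop.tmp.dir property can be added to the same configuration node: by default HDFS data ends up under /tmp, which the OS may clean, forcing a re-format. The path below is an assumption; any writable directory works (if you add this after formatting HDFS, run the format command again):

  <property>
    <name>hadoop.tmp.dir</name>
    <!-- assumed path; replace with any writable directory outside /tmp -->
    <value>/Users/zhaoqin/software/hadoop-data</value>
  </property>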
  3. Open the hdfs-site.xml file and change the configuration node to the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
  4. Open the mapred-site.xml file and change the configuration node to the following:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
  5. Open the yarn-site.xml file and change the configuration node to the following:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
  6. Run the following command in the hadoop-3.2.1/bin directory to initialize HDFS:
./hdfs namenode -format

After the initialization is successful, the following information is displayed:

2019-10-13 22:13:32,468 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-10-13 22:13:32,473 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when shutdown.
2019-10-13 22:13:32,474 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zhaoqindeMBP/192.168.50.12
************************************************************/

Startup

  1. Go to the hadoop-3.2.1/sbin directory and run ./start-dfs.sh to start HDFS:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [zhaoqindeMBP]
zhaoqindeMBP: Warning: Permanently added 'zhaoqindembp,192.168.50.12' (ECDSA) to the list of known hosts.
2019-10-13 22:28:30,597 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The above warning does not affect normal use.

  2. Visit localhost:9870 in a browser to see the Hadoop web page.
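
To confirm that HDFS is usable from the command line as well, a few basic filesystem operations can be run from the hadoop-3.2.1 directory (a sketch; the uploaded file is just an example):

# Create a home directory in HDFS for the current user
bin/hdfs dfs -mkdir -p /user/$(whoami)
# Upload a local file, then list it back
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/$(whoami)/
bin/hdfs dfs -ls /user/$(whoami)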

  3. Go to the hadoop-3.2.1/sbin directory and run ./start-yarn.sh to start YARN:
(base) zhaoqindeMBP:sbin zhaoqin$ ./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
  4. Visit localhost:8088 in a browser to see the YARN web page.
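
To see YARN actually schedule work, you can submit the bundled pi-estimation job from the hadoop-3.2.1 directory (a sketch; 2 maps with 10 samples each are deliberately tiny values):

# Submit a small MapReduce job to YARN; it should appear on the localhost:8088 page
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar pi 2 10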

  5. Run the jps command to view all Java processes; normally, the following processes are present:
(base) zhaoqindeMBP:sbin zhaoqin$ jps
2161 NodeManager
1825 SecondaryNameNode
2065 ResourceManager
1591 NameNode
2234 Jps
1691 DataNode

At this point, the deployment, configuration, and startup of the Hadoop 3 pseudo-distributed environment are complete.

Stopping the Hadoop services

Go to hadoop-3.2.1/sbin and run ./stop-all.sh to stop all Hadoop services.

(base) zhaoqindeMBP:sbin zhaoqin$ ./stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as zhaoqin in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [zhaoqindeMBP]
2019-10-13 22:49:00,941 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping nodemanagers
Stopping resourcemanager
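
As the warning in the output notes, stop-all.sh brings down every daemon at once; the services can also be stopped individually, mirroring how they were started:

# Stop YARN and HDFS separately, from hadoop-3.2.1/sbin
./stop-yarn.sh
./stop-dfs.sh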

This is the complete process of deploying Hadoop 3 on a Mac; I hope it provides a useful reference.

Welcome to follow the WeChat official account: Programmer Xin Chen