
Prepare the environment

```bash
# Stop the firewall and disable it at boot
service iptables stop
chkconfig iptables off

# Disable SELinux
vim /etc/selinux/config
    SELINUX=disabled

# Time synchronization: use an Aliyun NTP server
vim /etc/ntp.conf
    server ntp1.aliyun.com
service ntpd start
chkconfig ntpd on

# Install the JDK (the rpm installs to /usr/java/default)
rpm -i jdk-8u181-linux-x64.rpm

# Put Java on the PATH
vim /etc/profile
    export JAVA_HOME=/usr/java/default
    export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
```
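Before moving on, it is worth confirming the base environment took effect. A minimal sanity-check sketch (the exact output depends on your JDK build and OS version):

```bash
java -version              # should report the 1.8.0 JDK installed above
echo $JAVA_HOME            # /usr/java/default
getenforce                 # Disabled (after a reboot) or Permissive
service iptables status    # iptables: Firewall is not running.
```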

Passwordless SSH configuration

```bash
# Generate a DSA key pair with an empty passphrase
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

ll ~/.ssh
-----------------------------------------------------------
-rw-r--r-- 1 root root 602 Sep 28 15:57 authorized_keys
-rw------- 1 root root 672 Sep 28 15:57 id_dsa
-rw-r--r-- 1 root root 602 Sep 28 15:57 id_dsa.pub
-rw-r--r-- 1 root root 723 Sep 28 17:03 known_hosts
-----------------------------------------------------------

# Append the public key to authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# To log in to another node without a password, add the contents of your *.pub
# to the authorized_keys file of the node you want to log in to
```
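If the key was appended correctly, logging in to the node should no longer prompt for a password. A quick check, assuming the hostname hadoop1 used throughout this article:

```bash
ssh hadoop1 date   # should print the date without asking for a password
```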

Stand-alone mode

```bash
mkdir -p /opt/bigdata/hadoop
cd /opt/bigdata/hadoop
wget https://mirrors.nav.ro/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
tar xf hadoop-2.10.1.tar.gz
cd hadoop-2.10.1
pwd
-----------------------------------------------------------
/opt/bigdata/hadoop/hadoop-2.10.1
-----------------------------------------------------------

# Add the Hadoop environment to the environment variables
vim /etc/profile
-----------------------------------------------------------
export JAVA_HOME=/usr/java/default
export HADOOP_HOME=/opt/bigdata/hadoop/hadoop-2.10.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
-----------------------------------------------------------
source /etc/profile

# Hadoop configuration files:
cd $HADOOP_HOME/etc/hadoop
vim hadoop-env.sh
-----------------------------------------------------------
export JAVA_HOME=/usr/java/default
-----------------------------------------------------------
```
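A quick way to confirm the variables took effect (a minimal sketch; the build details in the output will differ):

```bash
hadoop version   # should print Hadoop 2.10.1
which hdfs       # should resolve to /opt/bigdata/hadoop/hadoop-2.10.1/bin/hdfs
```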
  • core-site.xml
```xml
<!-- Default file system: the NameNode RPC address -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
</property>
```
  • hdfs-site.xml
```xml
<!-- Replication factor: 1 copy on a single node -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- NameNode metadata directory -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/bigdata/hadoop/local/dfs/name</value>
</property>
<!-- NameNode web UI address -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop1:50070</value>
</property>
<!-- DataNode data directory -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/bigdata/hadoop/local/dfs/data</value>
</property>
<!-- Node where the secondary NameNode runs -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:50090</value>
</property>
<!-- Secondary working directory: where the fsimage and edit log pulled from the NameNode are stored -->
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/bigdata/hadoop/local/dfs/secondary</value>
</property>
```
  • Slaves file
Enter hadoop1, the node where the DataNode will run, as shown in the sketch below.
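For this single-node setup, the slaves file simply lists hadoop1 (a minimal sketch; the separator lines just mark the file contents):

```bash
vim $HADOOP_HOME/etc/hadoop/slaves
-----------------------------------------------------------
hadoop1
-----------------------------------------------------------
```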
  • Initialize HDFS
```bash
hdfs namenode -format
# This creates the NameNode directory configured above (dfs.namenode.name.dir),
# initializes an empty fsimage, and generates the VERSION file with a cluster ID (CID)
```
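After formatting, the metadata directory should exist. A sketch of what it typically contains (the numeric suffixes vary by transaction id):

```bash
ll /var/bigdata/hadoop/local/dfs/name/current
# expect fsimage_0000000000000000000 (plus its .md5), seen_txid and VERSION;
# VERSION holds the clusterID generated by the format
```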
  • Start HDFS
```bash
start-dfs.sh
# On the first start, the DataNode and SecondaryNameNode roles initialize their own data directories
```
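Once the daemons are up, a quick check that the single-node HDFS actually works (a sketch; the paths and the uploaded file are just examples):

```bash
jps                                             # expect NameNode, DataNode and SecondaryNameNode
hdfs dfs -mkdir -p /user/root                   # create a home directory in HDFS
hdfs dfs -put hadoop-2.10.1.tar.gz /user/root   # upload a test file
hdfs dfs -ls /user/root                         # it should be listed here
# The NameNode web UI is available at http://hadoop1:50070
```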