This is the 7th day of my participation in the Gwen Challenge.
Preface
Today I will briefly walk you through compiling the Hadoop source code, a step that is often overlooked. But when you are job hunting you may well be asked whether you have ever compiled the source, so it is worth understanding how it works. The reason for compiling is simple: the native libraries bundled with the official release are built for 32-bit systems by default, while most machines today are 64-bit, so you need to recompile the source to get 64-bit native libraries.
1. Overview of Hadoop source code compilation process
The whole compilation process can be summarized in three steps: (1) preliminary preparation; (2) JAR package installation; (3) compiling the source code. Common problems during compilation fall into two categories: (1) the virtual machine is not properly prepared at the start; (2) problems during JAR package installation go unresolved. Below I will point out the issues you should watch for. Here is an overview of the work in each step (based on the Silicon Valley big data course):
1.1 Configure CentOS to connect to the Internet
To verify that the virtual machine can reach the Internet, run the following command in the Linux terminal: ping www.baidu.com
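A quick sanity check might look like this (the hadoop101 hostname simply matches the prompts used later in this article):
[root@hadoop101 ~]# ping -c 4 www.baidu.com
If the ping fails, restart the network service and try again (service network restart on CentOS 6, systemctl restart network on CentOS 7) before moving on.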
Note: compile as the root user to reduce folder-permission problems.
1.2 JAR package preparation (Hadoop source code, JDK8, Maven, Ant, Protobuf)
(1) hadoop-2.7.2-src.tar.gz
(2) jdk-8u144-linux-x64.tar.gz
(3) apache-ant-1.9.9-bin.tar.gz (build tool, used for packaging)
(4) apache-maven-3.0.5-bin.tar.gz
(5) protobuf-2.5.0.tar.gz (serialization framework)
Issues summary post for version 2.7.0: www.tuicool.com/articles/IB…
2. Source code compilation operation
The process overview above follows the Silicon Valley big data tutorial; in this section I will walk through it with you in practice.
Prepare everything as described in the previous section. Note that the virtual machine must be clean, with only the network configured and no JAR packages installed. Since I had prepared this before, I chose to clone an existing virtual machine directly; that process was explained earlier, see "Big Data Hadoop Runtime Environment Setup" for details. The JAR packages I used are as follows:
2.1 Decompress the JDK and configure the JAVA_HOME and PATH environment variables, then run java -version to verify that the configuration succeeded.
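A minimal sketch of this step, assuming the tarball sits in /opt/software and software is installed under /opt/module (the directory layout this article uses elsewhere):
[root@hadoop101 software]# tar -zxvf jdk-8u144-linux-x64.tar.gz -C /opt/module
[root@hadoop101 software]# vi /etc/profile
(append the following two lines, then save)
export JAVA_HOME=/opt/module/jdk1.8.0_144
export PATH=$PATH:$JAVA_HOME/bin
[root@hadoop101 software]# source /etc/profile
[root@hadoop101 software]# java -version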
2.2 Decompress Maven and configure MAVEN_HOME and PATH
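The same pattern applies, assuming the same /opt/software and /opt/module layout:
[root@hadoop101 software]# tar -zxvf apache-maven-3.0.5-bin.tar.gz -C /opt/module
(append to /etc/profile, then run source /etc/profile)
export MAVEN_HOME=/opt/module/apache-maven-3.0.5
export PATH=$PATH:$MAVEN_HOME/bin
[root@hadoop101 software]# mvn -version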
2.3 Decompress Ant and configure ANT_HOME and PATH
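And likewise for Ant:
[root@hadoop101 software]# tar -zxvf apache-ant-1.9.9-bin.tar.gz -C /opt/module
(append to /etc/profile, then run source /etc/profile)
export ANT_HOME=/opt/module/apache-ant-1.9.9
export PATH=$PATH:$ANT_HOME/bin
[root@hadoop101 software]# ant -version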
If yum runs into the problems mentioned above, see the solution here: my.oschina.net/u/4340589/b…
2.4 Install glibc-headers and g++ by running the commands shown below
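The commands in question are presumably the usual yum installs; note that on CentOS the g++ compiler is provided by the gcc-c++ package:
[root@hadoop101 software]# yum install glibc-headers
[root@hadoop101 software]# yum install gcc-c++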
2.6 Decompress Protobuf, change into its home directory /opt/module/protobuf-2.5.0, and then execute the following commands one after another
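The commands referred to here are the standard autotools build-and-install sequence for protobuf 2.5.0 (you may also need /usr/local/lib on LD_LIBRARY_PATH, since that is the default install prefix):
[root@hadoop101 protobuf-2.5.0]# ./configure
[root@hadoop101 protobuf-2.5.0]# make
[root@hadoop101 protobuf-2.5.0]# make check
[root@hadoop101 protobuf-2.5.0]# make install
[root@hadoop101 protobuf-2.5.0]# ldconfig
[root@hadoop101 protobuf-2.5.0]# protoc --version
If everything went well, protoc --version should print libprotoc 2.5.0.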
Finally, install the openssl and ncurses development libraries, which the native build needs:
[root@hadoop101 software]# yum install openssl-devel
[root@hadoop101 software]# yum install ncurses-devel
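With all the dependencies in place, the compilation itself is driven by Maven from the decompressed Hadoop source directory. The standard native-build invocation documented in Hadoop's BUILDING.txt is:
[root@hadoop101 hadoop-2.7.2-src]# mvn package -Pdist,native -DskipTests -Dtar
The build takes a while; when it succeeds, the 64-bit distribution tarball ends up under hadoop-dist/target/ inside the source tree.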
Conclusion
So much for getting started with Hadoop compilation. I learned a lot along the way, and I believe those of you who followed along have gained a lot as well. Next, I will take you through the HDFS, MapReduce, and Yarn components of Hadoop, so please stay tuned! I will keep updating big data and related content. For more exciting content, please follow the public account: Xiao Han senior take you to learn