“This is the fourth day of my participation in the Gwen Challenge in November. Check out the details: The Last Gwen Challenge in 2021.”
Hello everyone, I am Huaijin Shake Yu, a big data newbie with two little gold-swallowing beasts at home, Jia and Jia; an all-around dad who can write code and tutor homework.
If you like my articles, please [follow ⭐]+[like 👍]+[comment 📃]; your support is my motivation, and I look forward to growing together with you~
1. Prepare the IMPALA environment
1. Upload impala.tar.gz to /var/lib/ambari-server/resources/stacks/HDP/3.1/services and decompress it
2. Upload impala_cdh.tar.gz to /var/www/html and decompress it
3. Upload impala.repo to /etc/yum.repos.d/
You should then be able to open the corresponding repository page in a browser:
http://172.29.30.61/IMPALA_CDH/
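For reference, step 1 on the command line might look like the sketch below. The /tmp staging directory and the contents of impala.repo are assumptions; adjust the baseurl to match the URL above.
# tar -zxvf /tmp/impala.tar.gz -C /var/lib/ambari-server/resources/stacks/HDP/3.1/services
# tar -zxvf /tmp/impala_cdh.tar.gz -C /var/www/html
# cp /tmp/impala.repo /etc/yum.repos.d/
A minimal impala.repo pointing at the local web server could look like this (illustrative only):
[IMPALA_CDH]
name=IMPALA_CDH
baseurl=http://172.29.30.61/IMPALA_CDH/
gpgcheck=0
enabled=1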
2. Restart ambari-server
# service ambari-server restart
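After the restart, you can optionally confirm through Ambari's REST API that the new service definition was registered. The admin/admin credentials and port 8080 match the configs.py calls used later in this post; $ambari_server is the Ambari host:
# curl -s -u admin:admin http://$ambari_server:8080/api/v1/stacks/HDP/versions/3.1/services/IMPALA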
3. Install Impala
1. Log in to the Ambari management page
2. Click Add Service in the lower left corner of the home page and select the Impala service that has not yet been installed
3. Select the hosts to install it on
The installation may run into errors; the workarounds found so far are as follows:
1)
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, in <module>
    BeforeAnyHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 50, in setup_users
    groups = params.user_to_groups_dict[user],
KeyError: u'impala'
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-863.json', '/
Run the following on the Ambari master node (it needs a Python runtime; the default system Python works). First set $cluster_name to your cluster name and $ambari_server to the Ambari server host, then:
# cd /var/lib/ambari-server/resources/scripts
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a get -c cluster-env | grep -i ignore_groupsusers_create
"ignore_groupsusers_create": "false",
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
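To confirm the change took effect, re-running the get command should now report the flag as true (a quick sanity check, not part of the original steps):
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a get -c cluster-env | grep -i ignore_groupsusers_create
"ignore_groupsusers_create": "true",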
2) Modify the Python script on every node where Impala is installed
# vim /var/lib/ambari-agent/cache/stacks/HDP/3.1/services/IMPALA/package/scripts/impala-catalog.py
Delete the three lines of code that reference that address.
Then retry the installation.
4. Modify HDFS configuration files
In Ambari, open the HDFS configuration page.
Go to Custom core-site and click Add Property.
Add the following properties:
dfs.client.read.shortcircuit=true
dfs.client.read.shortcircuit.skip.checksum=false
dfs.datanode.hdfs-blocks-metadata.enabled=true
Then go to Custom hdfs-site, click Add Property, and add the following:
dfs.datanode.hdfs-blocks-metadata.enabled=true
dfs.block.local-path-access
dfs.client.file-block-storage-locations.timeout.millis=60000
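If you prefer the command line to the Ambari UI, the same configs.py script used in the workaround above can set these properties. This is a sketch; the dfs.block.local-path-access entry above has no value, and the form commonly shown for Impala short-circuit reads is dfs.block.local-path-access.user=impala, which is assumed here:
# cd /var/lib/ambari-server/resources/scripts
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c core-site -k dfs.client.read.shortcircuit -v true
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c hdfs-site -k dfs.block.local-path-access.user -v impala
# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c hdfs-site -k dfs.client.file-block-storage-locations.timeout.millis -v 60000
Repeat the same pattern for the remaining properties listed above.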
Restart the cluster.
5. Modify the Impala configuration (on all machines)
# vim /etc/default/impala
Add the Kudu master address alongside the MEM_LIMIT=20GB setting (Kudu is already installed here, but this step does not affect the Impala installation):
KUDU_MASTER_HOST=xxx3.hadoop.com:7051
Add the following configuration at the end of the IMPALA_SERVER_ARGS configuration item:
-kudu_master_hosts=${KUDU_MASTER_HOST} \
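Because section 5 has to be applied on every machine, it can save time to edit /etc/default/impala once and push it out; the host names in this loop are placeholders:
# for h in xxx1.hadoop.com xxx2.hadoop.com xxx3.hadoop.com; do scp /etc/default/impala ${h}:/etc/default/impala; done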
6. Preparations before starting Impala (all nodes)
1. Check whether core-site.xml, hdfs-site.xml, and hive-site.xml exist in /etc/impala/conf/
If they do not, synchronize these three XML files from the Hive and HDFS conf directories, as sketched below.
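A sketch of that sync, assuming the standard HDP client config locations /etc/hadoop/conf and /etc/hive/conf:
# cp /etc/hadoop/conf/core-site.xml /etc/impala/conf/
# cp /etc/hadoop/conf/hdfs-site.xml /etc/impala/conf/
# cp /etc/hive/conf/hive-site.xml /etc/impala/conf/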
2. Create the impala directory and add related permissions
On each node, add the impala user to the hive, hdfs, and hadoop groups:
# usermod -G hive,hdfs,hadoop impala
Create the Impala-related directory in HDFS and grant ownership:
# su - hdfs -c "hadoop fs -mkdir /user/impala"
# su - hdfs -c "hadoop fs -chown -R impala /user/impala"
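A quick check that item 2 worked: id should list the extra groups, and the HDFS directory should be owned by impala:
# id impala
# su - hdfs -c "hadoop fs -ls /user | grep impala"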
3. Configure JAVA_HOME for Bigtop-utils
# vim /etc/default/bigtop-utils
export JAVA_HOME=/app/tools/java/jdk1.8.0_201
4. Create soft links under /usr/lib/impala/lib
Upload hdp-2.6.4.0-91_hadoop_lib.zip to /usr/hdp/ and decompress it
Run the following soft link commands:
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-annotations.jar /usr/lib/impala/lib/hadoop-annotations.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-archives-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-archives.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-auth.jar /usr/lib/impala/lib/hadoop-auth.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-aws.jar /usr/lib/impala/lib/hadoop-aws.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-common.jar /usr/lib/impala/lib/hadoop-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-hdfs-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-hdfs.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-mapreduce-client-common-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-mapreduce-client-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-mapreduce-client-core-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-mapreduce-client-core.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-mapreduce-client-jobclient-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-mapreduce-client-jobclient.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-mapreduce-client-shuffle-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-mapreduce-client-shuffle.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-api-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-api.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-client-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-client.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-common-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-server-applicationhistoryservice-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-server-applicationhistoryservice.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-server-common-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-server-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-server-nodemanager-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-server-nodemanager.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-server-resourcemanager-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-server-resourcemanager.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop-yarn-server-web-proxy-2.7.3.2.6.4.0-91.jar /usr/lib/impala/lib/hadoop-yarn-server-web-proxy.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hbase-annotations-1.2.0-cdh5.16.2.jar /usr/lib/impala/lib/hbase-annotations.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hbase-client-1.2.0-cdh5.16.2.jar /usr/lib/impala/lib/hbase-client.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hbase-common-1.2.0-cdh5.16.2.jar /usr/lib/impala/lib/hbase-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hbase-protocol-1.2.0-cdh5.16.2.jar /usr/lib/impala/lib/hbase-protocol.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-ant-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-ant.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-beeline-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-beeline.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-common-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-exec-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-exec.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-hbase-handler-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-hbase-handler.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-metastore-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-metastore.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-serde-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-serde.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-service-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-service.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-shims-common-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-shims-common.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-shims-0.20S-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-shims.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hive-shims-scheduler-1.2.1000.2.6.4.0-91.jar /usr/lib/impala/lib/hive-shims-scheduler.jar
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop/libhadoop.so /usr/lib/impala/lib/libhadoop.so
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop/libhadoop.so.1.0.0 /usr/lib/impala/lib/libhadoop.so.1.0.0
ln -s -f /usr/hdp/hdp-2.6.4.0-91_hadoop_lib/hadoop/libhdfs.so /usr/lib/impala/lib/libhdfs.so
ln -s -f /usr/hdp/2.6.4.0-91/usr/lib/libhdfs.so.0.0.0 /usr/lib/impala/lib/libhdfs.so.0.0.0
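After creating the links, it is worth making sure none of them are dangling, since a broken link only surfaces later as a class-loading or library-loading failure at startup. This lists any broken symlinks:
# find /usr/lib/impala/lib -xtype l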
7. Starting Impala and resolving startup errors
Restart impala through Ambari
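If a daemon fails to come up after the restart, check its log first; with this packaging the logs normally land under /var/log/impala (the file names below are the typical glog outputs, not guaranteed):
# tail -n 50 /var/log/impala/impalad.ERROR
# tail -n 50 /var/log/impala/catalogd.ERROR
# tail -n 50 /var/log/impala/statestored.ERROR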
Conclusion
If you like my articles, please [follow ⭐]+[like 👍]+[comment 📃]; your support is my motivation, and I look forward to growing together with you~
You can also follow my QQ account "Huai Jin Shake Yu Jia and Jia" to get the resources for download.