Author: Zhicheng Lin, technical support engineer on the AliCloud EMR product team, with many years of experience in open source big data.
1. Take the test cluster version EMR-4.4.1 as an example
2. Execute the following command
cp /opt/apps/ecm/service/flink/1.10-vvr-1.0.2-hadoop3.1/package/flink-1.10-vvr-1.0.2-hadoop3.1/conf/sql-client-defaults.yaml /etc/ecm/flink-conf/
and make the following changes to sql-client-defaults.yaml so that the SQL client can see Hive.
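A minimal sketch of the typical edit, assuming a hypothetical catalog name myhive and the standard EMR Hive configuration directory (adjust both to match your cluster):
# Append a Hive catalog to the SQL client defaults (values are assumptions).
cat >> /etc/ecm/flink-conf/sql-client-defaults.yaml <<'EOF'
catalogs:
  - name: myhive                       # hypothetical catalog name
    type: hive
    hive-conf-dir: /etc/ecm/hive-conf  # assumed Hive conf dir on EMR
    hive-version: 3.1.2                # matches the hive-exec JAR copied later
EOF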
3. Distribute the configuration to the nodes
(Perform the following steps if you need to use the client on other nodes, and repeat all of the JAR-copy steps on every machine.)
scp /etc/ecm/flink-conf/sql-client-defaults.yaml root@emr-worker-1:/etc/ecm/flink-conf/
scp /etc/ecm/flink-conf/sql-client-defaults.yaml root@emr-worker-2:/etc/ecm/flink-conf/
... (repeat for each remaining worker; a loop like the sketch below helps, since the several JARs in step 4 also have to be copied to every node).
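A minimal loop sketch for distributing a file to all workers, assuming the two worker hostnames used above (extend the list for larger clusters):
# Copy the config to every worker node.
for host in emr-worker-1 emr-worker-2; do
  scp /etc/ecm/flink-conf/sql-client-defaults.yaml root@${host}:/etc/ecm/flink-conf/
done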
4. Copy the JAR packages
start-cluster.sh
sql-client.sh embedded
Running these now fails with an error: the JAR packages needed for the Hive integration are missing. Perform the following operations in sequence:
cd /usr/lib/flink-current/lib
sudo cp /usr/lib/hive-current/lib/hive-exec-3.1.2.jar .
sudo wget https://repo1.maven.org/maven…
sudo wget https://repo1.maven.org/maven…
sudo wget https://repo1.maven.org/maven…
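The wget URLs are truncated in the original. Once the downloads finish, a quick generic sanity check that the JARs landed in the Flink lib directory:
# The copied hive-exec JAR and the downloaded connector JARs should be listed.
ls -l /usr/lib/flink-current/lib | grep -iE 'hive|connector'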
5. Start
start-cluster.sh
sql-client.sh embedded
The listing is empty because the new cluster has no data yet. Go to Hive and create some test data.
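A minimal sketch for creating test data, with a hypothetical table name (any small table will do):
# Create a small Hive table with a couple of rows (names are hypothetical).
hive -e "
CREATE TABLE IF NOT EXISTS test_tbl (id INT, name STRING);
INSERT INTO test_tbl VALUES (1, 'flink'), (2, 'hive');
"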
Re-execute sql-client.sh embedded, and the table is now visible in the SQL client.
Running a query against it, however, produces an error.
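For reference, the session inside the SQL client looks roughly like this sketch (the catalog and table names follow the assumptions above):
Flink SQL> show catalogs;            -- myhive should be listed
Flink SQL> use catalog myhive;
Flink SQL> show tables;              -- test_tbl should appear
Flink SQL> select * from test_tbl;   -- this is the query that errors out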
6. Troubleshooting
A check shows that port 8081 (the Flink web UI / REST port) is not listening, so the standalone cluster did not actually come up.
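A sketch of a quick check (any port tool works; the log path is assumed from the EMR layout used above):
# Is the Flink REST endpoint up?
netstat -tlnp | grep 8081 || echo "port 8081 is not listening"
# Look for the startup failure in the standalone session log.
tail -n 50 /usr/lib/flink-current/log/flink-*-standalonesession-*.log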
The conflict was caused by the 1.10.2 connector. In theory, VVR-1.10 can use the community hive-connector for 1.10.x; the issue was fixed in 1.11. So swap out the JAR package:
mv flink-connector-hive_2.11-1.10.2.jar /tmp/
sudo wget https://repo1.maven.org/maven…
7. Re-execute
start-cluster.sh; sql-client.sh embedded
Everything works once the query returns results as shown above.
This article is original content from Aliyun and may not be reproduced without permission.