Create the required packages

To deploy a custom Parcel package in CM, you need a Parcel package and a CSD package.

The CSD package is not required if the custom parcel is only distributed to servers as a static resource; the CSD serves as the parcel's control script.

cm_ext

To help ensure that the parcel and CSD are built correctly, run the validation tool cm_ext against both. The tool operates at several granularities — it can validate a single JSON file, a parcel directory, or a packaged parcel file — so it is useful throughout the development process.

Download

$ mkdir -p ~/github/cloudera
# Alternatively, download manually from https://github.com/cloudera/cm_ext
$ git clone https://github.com/cloudera/cm_ext.git

Installation

# Note: if you download the zip manually and extract it into the working directory, the directory is named cm_ext-master; with git clone it is named cm_ext
$ cd cm_ext
$ mvn install
$ cd validator
$ java -jar target/validator.jar <arguments>

Usage

# Validate a parcel.json file
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -p parcel.json
# Validate an alternatives.json file
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -a alternatives.json
# Validate a permissions.json file
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -r permissions.json
# Validate a parcel directory
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -d CDH-5.0.0-0.cdh5b2.p0.283/
# Validate a parcel file
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -f CDH-5.0.0-0.cdh5b2.p0.283-el6.parcel

Create the parcel package and .sha file

Create the parcel package

A parcel package consists of two parts: a meta directory and a project directory.

  • Meta directory

    The meta directory is the key to CM's management of the parcel; without it, the parcel is just an ordinary zip archive. Five files can be defined under meta: parcel.json, the parcel defines script, alternatives.json, permissions.json, and release-notes.txt. In most cases, parcel.json and the parcel defines script are all a custom parcel needs.

    • parcel.json

      The most important file in the parcel. Its name is fixed and it is the only file that must exist in the meta directory (the others are optional). It describes the parcel's metadata.

    • The Parcel Defines Script

      The parcel defines script typically exposes file locations inside the parcel to the CSD by exporting environment variables. Its file name can be customized; the scripts tag in parcel.json names it.

    • alternatives.json

      Github.com/cloudera/cm…

    • permissions.json

      Used to grant specified permissions to specified files after the parcel is unpacked (a minimal sketch appears after this list).

      Github.com/cloudera/cm…

    • release-notes.txt

      Github.com/cloudera/cm…

  • Project directory

    The project directory has no required format; structure it however the service needs.
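As a rough illustration of what meta/permissions.json looks like, here is a minimal sketch. The file path, user, group, and mode below are made-up values for this tutorial's layout; check the exact schema against the cm_ext wiki linked above.

{
  "lib/ailab/bin/task-runner": {
    "user": "root",
    "group": "ailab",
    "permissions": "4754"
  }
}

Each key is a path relative to the parcel root, and the value sets the owner, group, and mode applied after the parcel is unpacked.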

# Parcel package directory structure
# name: the parcel name; version: the parcel version
[name]-[version]/
[name]-[version]/meta/
[name]-[version]/meta/parcel.json
[name]-[version]/meta/gplextras_env.sh
[name]-[version]/lib/
[name]-[version]/lib/ailab/
[name]-[version]/lib/ailab/lib/
[name]-[version]/lib/ailab/bin/
[name]-[version]/lib/ailab/conf/
Create the parcel.json file
# Note: JSON does not allow comments. The comments below only explain each tag; remove them before using this file.
{
    # Must be 1
    "schema_version": 1,
    # Parcel name
    "name": "AILAB",
    # Parcel version
    "version": "dev",
    # When the parcel is activated, create a symlink for it
    "setActiveSymlink": true,
    # Parcels that conflict with this one; only one of a set of conflicting parcels can be active at a time
    "conflicts": "",
    # Tag values for the services this parcel provides. CSD and parcel are linked through this tag:
    # when the parcel tag in the CSD's service.sdl matches a value in provides, the CSD can read the
    # environment variables defined by the script named in the scripts tag when it executes start
    # and stop commands.
    "provides": ["ailab"],
    # The parcel defines script. Parcel and CSD are independent of each other, but the CSD controls
    # the parcel's startup and shutdown, so the parcel's location usually has to be exposed through
    # environment variables exported in this script. A parcel must provide this script even if it
    # does not need to define any environment variables (the script itself may be empty).
    "scripts": {
        "defines": "env.sh"
    },
    # The remaining tags are optional
    "packages": [{
        "name": "ailab",
        "version": "dev"
    }],
    "components": [{
        "name": "ailab",
        "version": "dev",
        "pkg_version": "dev"
    }],
    "users": {
        "datax": {
            "longname": "AILAB",
            "home": "/var/lib/ailab",
            "shell": "/bin/bash",
            "extra_groups": []
        }
    },
    "groups": ["ailab"]
}
Create the parcel defines script
#!/bin/bash
# $PARCEL_DIRNAME and $PARCELS_ROOT are provided by CM so the parcel can determine its own location.
# $PARCELS_ROOT: the directory on the file system under which all parcels reside (default: /opt/cloudera/parcels)
# $PARCEL_DIRNAME: the name of this parcel's directory under $PARCELS_ROOT
# The absolute path of the parcel directory is therefore $PARCELS_ROOT/$PARCEL_DIRNAME
AILAB_DIRNAME=${PARCEL_DIRNAME:-"AILAB"}
export AILAB_HOME=$PARCELS_ROOT/$AILAB_DIRNAME/lib/ailab
export DATAX_HOME=$AILAB_HOME/../../../DATAX/lib/datax

This script, combined with the provides and scripts tags of the parcel.json file above, provides the CSD scripts with two environment variables, DATAX_HOME and AILAB_HOME.

Which parcel a CSD binds to is specified by the parcel.requiredTags value in its service.sdl file.
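As a minimal sketch of that linkage, the following service.sdl fragment uses this tutorial's ailab tag, which matches the provides value declared in parcel.json above:

"parcel": {
	"requiredTags": ["ailab"]
}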

Package as a .parcel file and validate it
# The parcel file name format is fixed:
# the parcel directory is named [name]-[version]; the parcel file is named [name]-[version]-[distro suffix].parcel
# distro suffix: a distribution suffix indicating the Linux version the parcel will be deployed on
# Valid distro suffixes: https://github.com/cloudera/cm_ext/wiki/Parcel-distro-suffixes
$ tar -zcvf AILAB-V1.2.1-el7.parcel AILAB-V1.2.1
# The officially downloadable cm_ext tool validates all the files needed to build a parcel.
# Most tutorials on the web use a cm_ext directory; this download unpacked to cm_ext-master. Adjust the path to match your own download.
$ java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -f AILAB-V1.2.1-el7.parcel

Create the .sha file

After the .parcel file is created, a .sha file must be generated for it so that the server can verify the parcel's integrity. The .sha file takes its name from the parcel file.

# Create the .sha file and write the sha1sum hash of the parcel into it
$ sha1sum AILAB-V1.2.1-el7.parcel | awk '{print $1}' > AILAB-V1.2.1-el7.parcel.sha
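As an optional sanity check — a small sketch assuming the .sha file contains only the bare hash, as produced above — recompute the digest and compare:

# Prints OK when the stored hash matches the parcel's actual sha1sum
$ test "$(sha1sum AILAB-V1.2.1-el7.parcel | awk '{print $1}')" = "$(cat AILAB-V1.2.1-el7.parcel.sha)" && echo OK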

Create the CSD package

Official documentation: github.com/cloudera/cm…

Naming rule: [name]-[version].jar

Directory structure

  • descriptor/service.sdl — the CSD configuration file
  • descriptor/service.mdl
  • scripts/ — script files, typically used to run the programs inside the parcel
  • aux/ — auxiliary files may be placed in this directory; if present, they are sent to the agent together with the scripts directory. This is useful when the service needs a static configuration file (for example topology.py) that is not generated by the CSD.
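Putting the naming rule and directory structure together, a CSD jar for this tutorial might be laid out as follows (a sketch; control.sh and the images/ path are the names used in the examples below):

# Contents of the CSD jar
descriptor/service.sdl
scripts/control.sh
images/ailab.png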

service.sdl

Annotated example (as with parcel.json above, JSON does not allow comments, so remove them before use):

For more information, please refer to the official documentation: github.com/cloudera/cm…

{
	// Version number
	"version": "dev",
	// CSD name; together with the version tag it forms the CSD package name
	"name": "AILAB",
	// Service name, displayed on the CM console
	"label": "Ailab",
	// Description of the service shown when adding it in the CM console
	"description": "The ailab service",
	// Service icon, as a path relative to the CSD root; part of the service spec and normally required.
	// The image format is strict: it must be a 28x28 PNG.
	"icon": "images/ailab.png",
	"runAs": {
		"user": "root",
		"group": "root"
	},
	// Specifies which parcels the CSD fetches environment variables from
	"parcel": {
		// A parcel providing this tag must exist on the CM server
		"requiredTags": ["ailab"],
		// A parcel providing this tag may, but need not, exist on the CM server
		"optionalTags": ["ailab"]
	},
	// Service-level parameters; custom values can be entered on the CM console and are visible to the whole service.
	// Used to generate configuration files or to provide environment variables to role scripts.
	"parameters": [{
		// The actual variable name
		"name": "HADOOP_CONF_CORE_DIR",
		// Alias, displayed below configName in the CM console
		"label": "hadoop_conf_core_dir",
		// Description of the configuration variable
		"description": "core-site.xml's location",
		// The name shown in the CM console
		"configName": "HADOOP_CONF_CORE_DIR",
		// Whether a value is required (null not allowed)
		"required": true,
		// Whether the parameter is configurable in the add-service wizard
		"configurableInWizard": true,
		// Type of the configuration value
		"type": "string",
		// Default value
		"default": "/etc/hadoop/conf"
	}],
	// Roles. To the CM console a CSD is a service, and a service can have multiple roles;
	// each role has its own start, stop, restart, and other operations.
	"roles": [{
			// Role name; here name, label, and pluralLabel are kept the same
			"name": "AILAB_CONSOLE",
			"label": "AILAB_CONSOLE",
			"pluralLabel": "AILAB_CONSOLE",
			// Start configuration: what to do when the role's start command is executed on the CM console
			"startRunner": {
				// Script to run, as a path relative to the CSD root
				"program": "scripts/control.sh",
				// Arguments passed to the script
				"args": ["start", "console"],
				// Which parameters are passed to the script as environment variables
				"environmentVariables": {
					// environment variable name: ${parameter name}
					// Use ${} to read a parameter's value
					"HADOOP_CONF_HIVE_DIR": "${HADOOP_CONF_HIVE_DIR}",
					"HADOOP_CONF_HDFS_DIR": "${HADOOP_CONF_HDFS_DIR}",
					"HADOOP_CONF_CORE_DIR": "${HADOOP_CONF_CORE_DIR}",
					"HADOOP_CONF_YARN_DIR": "${HADOOP_CONF_YARN_DIR}"
				}
			},
			// Stop configuration: what to do when the role's stop command is executed on the CM console
			"stopRunner": {
				// If the stop script has not finished within the timeout (in milliseconds), the role is force-stopped
				"timeout": "30000",
				"runner": {
					"program": "scripts/control.sh",
					"args": ["stop", "console"]
				}
			},
			// Role-level parameters, visible only within the role
			"parameters": [{
					"name": "server.port", "label": "server.port", "description": "server port", "configName": "server.port", "required": "true", "type": "string", "default": "5555"
				},
				{
					"name": "log.home", "label": "log_home", "description": "log home", "configName": "log.home", "required": "true", "type": "string", "default": "logs/console"
				},
				{
					"name": "spring.datasource.url", "label": "spring.datasource.url", "description": "database dburl", "configName": "spring.datasource.url", "required": "true", "type": "string", "configurableInWizard": true, "default": "jdbc:mysql://xxx:3306/dataworks?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&allowMultiQueries=true&useSSL=false&"
				},
				{
					"name": "spring.datasource.username", "label": "spring.datasource.username", "description": "spring.datasource.username", "configName": "spring.datasource.username", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx"
				},
				{
					"name": "spring.datasource.password", "label": "spring.datasource.password", "description": "spring.datasource.password", "configName": "spring.datasource.password", "required": "true", "type": "password", "configurableInWizard": true, "default": "xxx"
				},
				{
					"name": "spring.datasource.driverClassName", "label": "spring.datasource.driverClassName", "description": "spring.datasource.driverClassName", "configName": "spring.datasource.driverClassName", "required": "true", "type": "string", "default": "com.mysql.jdbc.Driver"
				}
			],
			// Generated configuration files
			"configWriter": {
				"generators": [{
						// The generated file is named application-console.properties
						"filename": "application-console.properties",
						// Parameters to leave out; if neither included nor excluded params are given, all role-visible parameters are used
						"excludedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir", "HADOOP_CONF_CORE_DIR", "HADOOP_CONF_HDFS_DIR", "HADOOP_CONF_YARN_DIR", "HADOOP_CONF_YARN_DIR"],
						// Format of the generated file
						"configFormat": "properties"
					},
					{
						"filename": "hadoop-defaults.conf",
						// Parameters to include
						"includedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir"],
						"configFormat": "properties"
					}
				]
			},
			// Role logging
			"logging": {
				// Log directory of the role
				"dir": "/var/log/ailab/console",
				// Whether dir can be modified in the configuration UI
				"modifiable": true,
				// Log file name of the role
				"filename": "info.log",
				// Logging type
				"loggingType": "logback"
			}
		},
		// A service can have multiple roles
		{
			"name": "AILAB_ONLINE",
			"label": "AILAB_ONLINE",
			"pluralLabel": "AILAB_ONLINE",
			"startRunner": {
				"program": "scripts/control.sh",
				"args": ["start", "online"],
				"environmentVariables": {
					"HADOOP_CONF_HIVE_DIR": "${HADOOP_CONF_HIVE_DIR}",
					"HADOOP_CONF_HDFS_DIR": "${HADOOP_CONF_HDFS_DIR}",
					"HADOOP_CONF_CORE_DIR": "${HADOOP_CONF_CORE_DIR}",
					"HADOOP_CONF_YARN_DIR": "${HADOOP_CONF_YARN_DIR}"
				}
			},
			"stopRunner": {
				"timeout": "30000",
				"runner": {
					"program": "scripts/control.sh",
					"args": ["stop", "online"]
				}
			},
			"parameters": [{
				"name": "server.port", "label": "server.port", "description": "server port", "configName": "server.port", "required": "true", "type": "string", "default": "5556"
			}],
			"configWriter": {
				"generators": [{
						"filename": "application-online.properties",
						"configFormat": "properties",
						"excludedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir", "HADOOP_CONF_CORE_DIR", "HADOOP_CONF_HDFS_DIR", "HADOOP_CONF_YARN_DIR", "HADOOP_CONF_YARN_DIR"]
					},
					{
						"filename": "hadoop-defaults.conf",
						"includedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir"],
						"configFormat": "properties"
					}
				]
			},
			"logging": {
				"dir": "/var/log/ailab/online", "modifiable": true, "filename": "info.log", "loggingType": "logback"
			}
		}
	]
}
Full project example:
{
	"version": "dev",
	"name": "AILAB",
	"label": "Ailab",
	"description": "The ailab service",
	"icon": "images/ailab.png",
	"runAs": {
		"user": "root",
		"group": "root"
	},
	"parcel": {
		"requiredTags": ["ailab"],
		"optionalTags": ["ailab"]
	},
	"parameters": [{
			"name": "HADOOP_CONF_CORE_DIR", "label": "hadoop_conf_core_dir", "description": "core-site.xml's location", "configName": "HADOOP_CONF_CORE_DIR", "required": "true", "type": "string", "default": "/etc/hadoop/conf"
		},
		{
			"name": "HADOOP_CONF_HDFS_DIR", "label": "hadoop_conf_hdfs_dir", "description": "hdfs-site.xml's location", "configName": "HADOOP_CONF_HDFS_DIR", "required": "true", "type": "string", "default": "/etc/hadoop/conf"
		},
		{
			"name": "HADOOP_CONF_YARN_DIR", "label": "hadoop_conf_yarn_dir", "description": "yarn-site.xml's location", "configName": "HADOOP_CONF_YARN_DIR", "required": "true", "type": "string", "default": "/etc/hadoop/conf"
		},
		{
			"name": "HADOOP_CONF_HIVE_DIR", "label": "hadoop_conf_hive_dir", "description": "hive-site.xml's location", "configName": "HADOOP_CONF_HIVE_DIR", "required": "true", "type": "string", "default": "/etc/hive/conf"
		},
		{
			"name": "hadoop.kerberos.keytab", "label": "hadoop.kerberos.keytab", "description": "hadoop.kerberos.keytab", "configName": "hadoop.kerberos.keytab", "required": "true", "type": "string", "default": "/home/hiveall.keytab"
		},
		{
			"name": "hadoop.kerberos.principal", "label": "hadoop.kerberos.principal", "description": "hadoop.kerberos.principal", "configName": "hadoop.kerberos.principal", "required": "true", "type": "string", "default": "hive"
		},
		{
			"name": "hive.metastore.client.capability.check", "label": "hive.metastore.client.capability.check", "description": "hive.metastore.client.capability.check", "configName": "hive.metastore.client.capability.check", "required": "true", "type": "string", "default": "true"
		},
		{
			"name": "model.deploy.dir", "label": "model.deploy.dir", "description": "model.deploy.dir", "configName": "model.deploy.dir", "required": "true", "type": "string", "configurableInWizard": true, "default": "hdfs://xxx:8020/user/secsmart/deploy"
		}
	],
	"roles": [{
			"name": "AILAB_CONSOLE", "label": "AILAB_CONSOLE", "pluralLabel": "AILAB_CONSOLE",
			"startRunner": {
				"program": "scripts/control.sh",
				"args": ["start", "console"],
				"environmentVariables": {
					"HADOOP_CONF_HIVE_DIR": "${HADOOP_CONF_HIVE_DIR}",
					"HADOOP_CONF_HDFS_DIR": "${HADOOP_CONF_HDFS_DIR}",
					"HADOOP_CONF_CORE_DIR": "${HADOOP_CONF_CORE_DIR}",
					"HADOOP_CONF_YARN_DIR": "${HADOOP_CONF_YARN_DIR}"
				}
			},
			"stopRunner": {
				"timeout": "40000",
				"runner": {
					"program": "scripts/control.sh",
					"args": ["stop", "console"]
				}
			},
			"parameters": [{
					"name": "server.port", "label": "server.port", "description": "server port", "configName": "server.port", "required": "true", "type": "string", "default": "5555"
				},
				{
					"name": "log.home", "label": "log_home", "description": "log home", "configName": "log.home", "required": "true", "type": "string", "default": "logs/console"
				},
				{
					"name": "spring.http.multipart.max-file-size", "label": "spring.http.multipart.max-file-size", "description": "spring.http.multipart.max-file-size", "configName": "spring.http.multipart.max-file-size", "required": "true", "type": "string", "default": "100MB"
				},
				{
					"name": "spring.http.multipart.max-request-size", "label": "spring.http.multipart.max-request-size", "description": "spring.http.multipart.max-request-size", "configName": "spring.http.multipart.max-request-size", "required": "true", "type": "string", "default": "100MB"
				},
				{
					"name": "server.tomcat.max-http-post-size", "label": "server.tomcat.max-http-post-size", "description": "server.tomcat.max-http-post-size", "configName": "server.tomcat.max-http-post-size", "required": "true", "type": "string", "default": "104857600"
				},
				{
					"name": "server.tomcat.basedir", "label": "server.tomcat.basedir", "description": "server.tomcat.basedir", "configName": "server.tomcat.basedir", "required": "true", "type": "string", "default": "/var/tmp"
				},
				{
					"name": "spring.datasource.url", "label": "spring.datasource.url", "description": "database dburl", "configName": "spring.datasource.url", "required": "true", "type": "string", "configurableInWizard": true, "default": "jdbc:mysql://xxx:3306/dataworks?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&allowMultiQueries=true&useSSL=false&"
				},
				{
					"name": "spring.datasource.username", "label": "spring.datasource.username", "description": "spring.datasource.username", "configName": "spring.datasource.username", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx"
				},
				{
					"name": "spring.datasource.password", "label": "spring.datasource.password", "description": "spring.datasource.password", "configName": "spring.datasource.password", "required": "true", "type": "password", "configurableInWizard": true, "default": "xxx"
				},
				{
					"name": "spring.datasource.driverClassName", "label": "spring.datasource.driverClassName", "description": "spring.datasource.driverClassName", "configName": "spring.datasource.driverClassName", "required": "true", "type": "string", "default": "com.mysql.jdbc.Driver"
				},
				{
					"name": "spring.datasource.hive.url", "label": "spring.datasource.hive.url", "description": "spring.datasource.hive.url", "configName": "spring.datasource.hive.url", "required": "true", "type": "string", "configurableInWizard": true, "default": "jdbc:hive2://xxx:10000/dataworks;principal=hive/xxx"
				},
				{
					"name": "spring.datasource.hive.username", "label": "spring.datasource.hive.username", "description": "spring.datasource.hive.username", "configName": "spring.datasource.hive.username", "required": "true", "type": "string", "default": ""
				},
				{
					"name": "spring.datasource.hive.password", "label": "spring.datasource.hive.password", "description": "spring.datasource.hive.password", "configName": "spring.datasource.hive.password", "required": "true", "type": "password", "default": ""
				},
				{
					"name": "spring.datasource.hive.driver-class-name", "label": "spring.datasource.hive.driver-class-name", "description": "spring.datasource.hive.driver-class-name", "configName": "spring.datasource.hive.driver-class-name", "required": "true", "type": "string", "default": "org.apache.hive.jdbc.HiveDriver"
				},
				{
					"name": "spring.datasource.clickhouse.driver-class-name", "label": "spring.datasource.clickhouse.driver-class-name", "description": "spring.datasource.clickhouse.driver-class-name", "configName": "spring.datasource.clickhouse.driver-class-name", "required": "true", "type": "string", "default": "ru.yandex.clickhouse.ClickHouseDriver"
				},
				{
					"name": "service.cluster.zookeeper.address", "label": "service.cluster.zookeeper.address", "description": "service.cluster.zookeeper.address", "configName": "service.cluster.zookeeper.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx:2181,xxx:2181,xxx:2181"
				},
				{
					"name": "service.conf.zookeeper.address", "label": "service.conf.zookeeper.address", "description": "service.conf.zookeeper.address", "configName": "service.conf.zookeeper.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx:2181"
				},
				{
					"name": "kerberos.switch", "label": "kerberos.switch", "description": "kerberos.switch", "configName": "kerberos.switch", "required": "true", "type": "string", "configurableInWizard": true, "default": "true"
				},
				{
					"name": "hdfs.root.dir", "label": "hdfs.root.dir", "description": "hdfs.root.dir", "configName": "hdfs.root.dir", "required": "true", "type": "string", "default": "/user/secsmart/dataworks"
				},
				{
					"name": "dubbo.application.name", "label": "dubbo.application.name", "description": "dubbo.application.name", "configName": "dubbo.application.name", "required": "true", "type": "string", "default": "login-dubbo-consumer"
				},
				{
					"name": "dubbo.registry.address", "label": "dubbo.registry.address", "description": "dubbo.registry.address", "configName": "dubbo.registry.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "zookeeper://xxx:2181?backup=xxx:2181"
				},
				{
					"name": "dubbo.service.version", "label": "dubbo.service.version", "description": "dubbo.service.version", "configName": "dubbo.service.version", "required": "true", "type": "string", "default": "1.0.0"
				},
				{
					"name": "dubbo.consumer.timeout", "label": "dubbo.consumer.timeout", "description": "dubbo.consumer.timeout", "configName": "dubbo.consumer.timeout", "required": "true", "type": "string", "default": "3000"
				},
				{
					"name": "flink.jobmanager.address", "label": "flink.jobmanager.address", "description": "flink.jobmanager.address", "configName": "flink.jobmanager.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx:8081,xxx:8081"
				}
			],
			"configWriter": {
				"generators": [{
						"filename": "application-console.properties",
						"excludedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir", "HADOOP_CONF_CORE_DIR", "HADOOP_CONF_HDFS_DIR", "HADOOP_CONF_YARN_DIR", "HADOOP_CONF_YARN_DIR"],
						"configFormat": "properties"
					},
					{
						"filename": "hadoop-defaults.conf",
						"includedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir"],
						"configFormat": "properties"
					}
				]
			},
			"logging": {
				"dir": "/var/log/ailab/console", "modifiable": true, "filename": "info.log", "loggingType": "logback"
			}
		},
		{
			"name": "AILAB_ONLINE", "label": "AILAB_ONLINE", "pluralLabel": "AILAB_ONLINE",
			"startRunner": {
				"program": "scripts/control.sh",
				"args": ["start", "online"],
				"environmentVariables": {
					"HADOOP_CONF_HIVE_DIR": "${HADOOP_CONF_HIVE_DIR}",
					"HADOOP_CONF_HDFS_DIR": "${HADOOP_CONF_HDFS_DIR}",
					"HADOOP_CONF_CORE_DIR": "${HADOOP_CONF_CORE_DIR}",
					"HADOOP_CONF_YARN_DIR": "${HADOOP_CONF_YARN_DIR}"
				}
			},
			"stopRunner": {
				"timeout": "30000",
				"runner": {
					"program": "scripts/control.sh",
					"args": ["stop", "online"]
				}
			},
			"parameters": [{
					"name": "server.port", "label": "server.port", "description": "server port", "configName": "server.port", "required": "true", "type": "string", "default": "5556"
				},
				{
					"name": "log.home", "label": "log_home", "description": "log home", "configName": "log.home", "required": "true", "type": "string", "default": "logs/online"
				},
				{
					"name": "spring.datasource.url", "label": "spring.datasource.url", "description": "database dburl", "configName": "spring.datasource.url", "required": "true", "type": "string", "configurableInWizard": true, "default": "jdbc:mysql://xxx:3306/dataworks?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&allowMultiQueries=true&useSSL=false&"
				},
				{
					"name": "spring.datasource.username", "label": "spring.datasource.username", "description": "spring.datasource.username", "configName": "spring.datasource.username", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx"
				},
				{
					"name": "spring.datasource.password", "label": "spring.datasource.password", "description": "spring.datasource.password", "configName": "spring.datasource.password", "required": "true", "type": "password", "configurableInWizard": true, "default": "xxxx"
				},
				{
					"name": "spring.datasource.driverClassName", "label": "spring.datasource.driverClassName", "description": "spring.datasource.driverClassName", "configName": "spring.datasource.driverClassName", "required": "true", "type": "string", "default": "com.mysql.jdbc.Driver"
				},
				{
					"name": "service.cluster.zookeeper.address", "label": "service.cluster.zookeeper.address", "description": "service.cluster.zookeeper.address", "configName": "service.cluster.zookeeper.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx:2181,xxx:2181,xxx:2181"
				},
				{
					"name": "service.registryMode", "label": "service.registryMode", "description": "service.registryMode", "configName": "service.registryMode", "required": "true", "type": "string", "default": "ip"
				},
				{
					"name": "service.network-interface-name", "label": "service.network-interface-name", "description": "service.network-interface-name", "configName": "service.network-interface-name", "required": "true", "type": "string", "configurableInWizard": true, "default": "ens32"
				},
				{
					"name": "kerberos.switch", "label": "kerberos.switch", "description": "kerberos.switch", "configName": "kerberos.switch", "required": "true", "type": "string", "default": "true"
				},
				{
					"name": "service.console.address", "label": "service.console.address", "description": "service.console.address", "configName": "service.console.address", "required": "true", "type": "string", "configurableInWizard": true, "default": "xxx:5555"
				},
				{
					"name": "hdfs.root.dir", "label": "hdfs.root.dir", "description": "hdfs.root.dir", "configName": "hdfs.root.dir", "required": "true", "type": "string", "default": "/user/secsmart/dataworks"
				}
			],
			"configWriter": {
				"generators": [{
						"filename": "application-online.properties",
						"configFormat": "properties",
						"excludedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir", "HADOOP_CONF_CORE_DIR", "HADOOP_CONF_HDFS_DIR", "HADOOP_CONF_YARN_DIR", "HADOOP_CONF_YARN_DIR"]
					},
					{
						"filename": "hadoop-defaults.conf",
						"includedParams": ["hadoop.kerberos.keytab", "hadoop.kerberos.principal", "hive.metastore.client.capability.check", "model.deploy.dir"],
						"configFormat": "properties"
					}
				]
			},
			"logging": {
				"dir": "/var/log/ailab/online", "modifiable": true, "filename": "info.log", "loggingType": "logback"
			}
		}
	]
}

scripts/*.sh

The most important file through which the CM console controls the Parcel

#!/bin/sh
# Invoked by CM as: control.sh {start|stop} {console|online}
# CONF_DIR is provided by CM and points to the role's process directory,
# which contains the files generated by configWriter.
echo "======Show variables======"
echo "AILAB_HOME : $AILAB_HOME"
echo "DATAX_HOME : $DATAX_HOME"
echo "PARCELS_ROOT : $PARCELS_ROOT"
echo "PARCEL_DIRNAME : $PARCEL_DIRNAME"
echo "====Show variables over===="

start_console() {
    echo "Running AILAB"
    nohup $CONF_DIR/bin/startup-console.sh > /var/log/ailab/console/info.log &
    tailf /var/log/ailab/console/info.log
}

stop_console() {
    echo "stop console"
    cp -r "$AILAB_HOME/bin" "$CONF_DIR"
    source $CONF_DIR/bin/stop-console.sh
    echo "stop console End... exit 0"
}

start_online() {
    echo "Running ONLINE"
    nohup $CONF_DIR/bin/startup-online.sh > /var/log/ailab/online/info.log &
    tailf /var/log/ailab/online/info.log
    echo "Running ONLINE End..."
}

stop_online() {
    echo "stop ONLINE"
    cp -r "$AILAB_HOME/bin" "$CONF_DIR"
    source $CONF_DIR/bin/stop-online.sh
    echo "stop ONLINE End... dir"
}

init() {
    echo "hadoop conf init start"
    # Copy the parcel's payload into the process directory
    cp -r "$AILAB_HOME/conf" \
        "$AILAB_HOME/bin" \
        "$AILAB_HOME/data" \
        "$AILAB_HOME/python" \
        "$AILAB_HOME/workspace" \
        "$AILAB_HOME/krb5" \
        "$CONF_DIR"
    echo "hadoop conf path environment variables:"
    echo "HADOOP_CONF_HDFS_DIR:$HADOOP_CONF_HDFS_DIR"
    echo "HADOOP_CONF_YARN_DIR:$HADOOP_CONF_YARN_DIR"
    echo "HADOOP_CONF_HIVE_DIR:$HADOOP_CONF_HIVE_DIR"
    echo "HADOOP_CONF_CORE_DIR:$HADOOP_CONF_CORE_DIR"
    # Collect the Hadoop client configs pointed to by the environment variables
    cp "$HADOOP_CONF_CORE_DIR/core-site.xml" \
        "$HADOOP_CONF_HDFS_DIR/hdfs-site.xml" \
        "$HADOOP_CONF_YARN_DIR/yarn-site.xml" \
        "$HADOOP_CONF_HIVE_DIR/hive-site.xml" \
        "$CONF_DIR/conf/hadoop/"
    # Turn "key=value" lines into "key value" lines
    sed -i "s/=/ /" "$CONF_DIR/hadoop-defaults.conf"
    mv "$CONF_DIR/hadoop-defaults.conf" "$CONF_DIR/conf/hadoop/"
    echo "hadoop conf init end"
}

init_role() {
    init
    echo "$ROLE init"
    # Move the generated application-<role>.properties into place
    mv "$CONF_DIR/application-$ROLE.properties" "$CONF_DIR/conf/$ROLE/application.properties"
    echo "$ROLE init end...."
}

ROLE=$2
case "$1" in
start)
    init_role
    start_"$ROLE"
    ;;
stop)
    stop_"$ROLE"
    ;;
*)
    echo "Usage AILAB {start|stop} {console|online}"
    ;;
esac

Validate and package as a CSD jar

# Validate the .sdl file
java -jar /root/github/cloudera/cm_ext-master/validator/target/validator.jar -s descriptor/service.sdl
# Package as a CSD jar
$ jar -cvf AILABCSD-1.0.jar *
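Before uploading, it can be worth confirming that the jar layout matches the expected CSD structure (descriptor/, scripts/, and so on):

# List the jar contents to double-check the layout
$ jar -tf AILABCSD-1.0.jar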

Upload the packages

Run the scp command to send the .parcel and .sha files to the /opt/cloudera/parcel-repo directory on the CM server:

$ scp AILAB-V1.2.1-el7.parcel.sha nn1:/opt/cloudera/parcel-repo/
$ scp AILAB-V1.2.1-el7.parcel nn1:/opt/cloudera/parcel-repo/
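Depending on how CM was installed, files in parcel-repo may need to be owned by the cloudera-scm user before CM will pick them up. A hedged sketch (the user, group, and host are assumptions; adjust to your installation):

# Assumption: CM runs as cloudera-scm on host nn1
$ ssh nn1 'chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/AILAB-V1.2.1-el7.parcel*'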

Run the scp command to send the CSD package to the /opt/cloudera/csd directory on the CM server:

$ scp AILABCSD-1.0/AILABCSD-1.0.jar nn1:/opt/cloudera/csd

Distribute and activate the packages

  1. Open the CM console and click the Parcel button
  2. Click the "Check for New Parcels" button; CM scans the /opt/cloudera/parcel-repo path for parcel packages and refreshes the results onto the page.

An error in this step means that the parcel format or the .sha checksum failed verification; remake the .parcel and .sha files.

  3. Click the Distribute button to distribute the parcel to the /opt/cloudera/parcels directory on all CM agent servers and unpack it there. When the action completes, the Distribute button becomes the Activate button.

  4. Click the Activate button; a symlink for the parcel is created under the /opt/cloudera/parcels path. Official explanation: activating a parcel causes Cloudera Manager to link to the new components.

  5. Restart the cloudera-scm-server service

Restarting cloudera-scm-server is required to pick up new CSD packages. Parcels are detected dynamically, and the CM console notices uploaded files on its own, but CSDs require a restart.

# After the restart, the CM console is unreachable for about one minute; wait patiently
systemctl restart cloudera-scm-server

For convenience during development there is also a way to avoid the restart, but it is not recommended in production environments because unexpected problems may occur.

# Log in to the CM console, then call the following URL in your browser to refresh CSDs:
http://xxx:7180/cmf/csd/refresh
# If this is a new installation, call this interface:
http://xxx:7180/cmf/csd/install?csdName=AILAB-dev
# If you are reinstalling, call this interface:
http://xxx:7180/cmf/csd/reinstall?csdName=AILAB-dev
# Note: csdName is the name of the jar package (without the .jar suffix)
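If you prefer the command line, something like the following may work; this is a sketch that assumes these /cmf/csd endpoints accept HTTP basic auth with an admin account, which should be verified against your CM version:

# Hypothetical: trigger a CSD refresh from the shell (credentials and host are placeholders)
$ curl -u admin:admin "http://xxx:7180/cmf/csd/refresh"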
  1. On the CM console home page, click Add Service; the service type list now includes the custom parcel-backed service
  2. Select the service, click Continue, choose the hosts to deploy to, and confirm.

Run the package

Delete the packages and service

  1. Click the Parcel button in the CM console
  2. Click the Deactivate button. When this action completes, the button reverts to the distributed/activatable state

(Deactivating the parcel deletes the symlink that activation created.)

  3. Click the Remove from Hosts button. When this action completes, the parcel status changes back to Downloaded
  4. Click the Delete button to remove the parcel from /opt/cloudera/parcel-repo on the CM server.
  5. On the CM server, delete the CSD jar from /opt/cloudera/csd.
  6. On every CM agent server, delete the parcel's .torrent file under the /opt/cloudera/parcel-cache path (not verified whether leaving it causes problems; to be safe, it is best to delete it).
  7. Most importantly, on every CM agent server the /opt/cloudera/parcels path contains a hidden .flood folder holding a copy of the parcel and a .torrent file. Delete them; otherwise, if a parcel with the same name is uploaded later, it will not be distributed from the CM server's /opt/cloudera/parcel-repo but will be taken from the local .flood cache, and the old parcel will be used.
# Delete the cached and hidden files
$ rm -rf /opt/cloudera/parcels/.flood/NGINX-V1.19.6-el7.parcel
$ rm -rf /opt/cloudera/parcels/.flood/NGINX-V1.19.6-el7.parcel.torrent
$ rm -rf /opt/cloudera/parcel-cache/NGINX-V1.19.6-el7.parcel.torrent

Please leave a message if you have any questions

Issue: To be added