XXL-Job distributed scheduling framework

1. Overview of XXL-Job

1.1 What is XXL-Job?

  • xxl-job is a lightweight distributed task scheduling platform. Its core design goals are rapid development, ease of learning, light weight, and easy extensibility. It is open source, already runs on many companies' production lines, and works out of the box.
  • Generally speaking, xxl-job is a task scheduling framework: after introducing the xxl-job dependencies and writing code in the agreed format, a visual interface can be used to start, execute, and stop tasks, with logging, queries, and task status monitoring included.
  • If xxl-job were described in terms of people: each microservice that integrates xxl-job is an independent person (an executor), and the Handlers written in the agreed format are the dishes on the table. The visual interface decides which executor (person) eats which dish (scheduled task) and when to eat it (Cron expressions control whether a task executes, stops, or starts immediately).
  • Weaknesses of Quartz: as the leader among open source task schedulers, Quartz is the first choice for task scheduling. However, in a clustered environment Quartz manages tasks through its API, which brings the following problems:
    • Operating tasks by calling the API is unfriendly.
    • Persisting the business QuartzJobBean into the underlying data tables is quite invasive.
    • The scheduling logic and the QuartzJobBean are coupled in the same project. As the number of scheduled tasks grows and the task logic gets heavier, the performance of the scheduling system becomes severely limited by the business.

xxl-job addresses these shortcomings of Quartz.

  • RemoteHttpJobBean:
    • In normal Quartz development, task logic is usually maintained inside a QuartzJobBean, so the coupling is severe.

    • xxl-job completely decouples the "scheduling module" from the "task module". All scheduled tasks in the scheduling module use the same QuartzJobBean, namely RemoteHttpJobBean. Different scheduled tasks maintain their own scheduling parameters in extension table data; when RemoteHttpJobBean is triggered, it parses those scheduling parameters and initiates remote calls to the corresponding remote executor services.

    • This invocation model is similar to RPC: RemoteHttpJobBean provides the invocation-proxy capability, and the executor provides the remote-service capability.

  • Architecture design
    • xxl-job abstracts scheduling behavior into a common platform called the "scheduling center". The platform itself carries no business logic; the scheduling center simply initiates scheduling requests.

    • Tasks are abstracted into scattered JobHandlers that are managed by executors. The executor receives scheduling requests and executes the business logic of the corresponding JobHandler.

    • "Scheduling" and "task" can therefore be decoupled into a scheduling module and an execution module, improving the overall stability and extensibility of the business system:

    • Scheduling module (scheduling center): manages scheduling information and issues scheduling requests according to the scheduling configuration, carrying no business code. Decoupling the scheduler from the tasks improves system availability and stability, and scheduler performance is no longer limited by the task modules. It supports simple, dynamic, visual management of scheduling information, including task creation, update, deletion, GLUE development, and task alarms, all taking effect in real time; it also supports monitoring of scheduling results and execution logs as well as executor failover.

    • Execution module (executor): receives scheduling requests and executes task logic. The task module focuses on the execution of tasks, making development and maintenance simpler and more efficient; it receives execution requests, termination requests, and log requests from the scheduling center.

  • The system architecture of XXL-Job is shown as follows:

1.2 Features

  1. Simple: CRUD operations can be performed on web pages; the operation is simple and you can get started in one minute.
  2. Dynamic: task status can be modified dynamically, tasks can be started/stopped, and running tasks can be terminated, all with immediate effect.
  3. Scheduling center HA (centralized): the scheduling center implements its own scheduling component and supports cluster deployment, ensuring HA of the scheduling center.
  4. Executor HA (distributed): tasks are executed in distributed mode, and executors support cluster deployment, ensuring HA of task execution.
  5. Registry: executors register themselves automatically and periodically, and the scheduling center discovers registered executors automatically and triggers task execution accordingly; manually entering executor addresses is also supported.
  6. Elastic scaling: once a new executor goes online or offline, tasks are reassigned at the next scheduling.
  7. Routing policies: when an executor is deployed as a cluster, rich routing policies are provided, including first, last, round-robin, random, consistent hash, least frequently used, least recently used, failover, and busy-over.
  8. Failover: if the task routing policy is "failover" and a machine in the executor cluster fails, scheduling requests automatically switch to a healthy executor.
  9. Blocking handling strategy: the strategy applied when schedules arrive too densely for the executor to process; options include single-machine serial (default), discard later schedules, and overwrite earlier schedules.
  10. Task timeout control: a task timeout can be customized, and the task is interrupted when it runs over time.
  11. Retry on task failure: the retry count for task failures can be customized; when a task fails, it is retried automatically according to the preset count. Sharded tasks support retry at shard granularity.
  12. Task failure alarm: email failure alarms are provided by default, and extension interfaces are reserved so that SMS, DingTalk, and other alarm channels can be added easily.
  13. Sharding broadcast task: when an executor is deployed as a cluster and the task routing policy is set to "sharding broadcast", one scheduling trigger makes all executors in the cluster run the task, and sharded tasks can be developed based on the shard parameters.
  14. Dynamic sharding: sharding broadcast tasks are sharded by executor, and the executor cluster can be scaled dynamically to increase the number of shards cooperating on the work; this can significantly improve throughput and speed when processing large volumes of data.
  15. Event triggering: besides Cron triggering and task-dependency triggering, event-based triggering is supported; the scheduling center provides an API to trigger a single task execution, which can be driven flexibly by business events.
  16. Task progress monitoring: real-time monitoring of task progress is supported.
  17. Rolling real-time logs: scheduling results can be viewed online, and the complete execution log output by the executor can be viewed in real time in a rolling fashion.
  18. GLUE: a Web IDE is provided that supports developing task code online, with dynamic release and real-time compilation taking effect immediately, skipping the deployment step; backtracking across 30 historical versions is supported.
  19. Script tasks: script tasks can be developed and run in GLUE mode, including Shell, Python, NodeJS, PHP, and PowerShell scripts.
  20. Command-line tasks: a generic command-line task handler is provided natively (BEAN task "CommandJobHandler"); the business side only needs to supply the command line.
  21. Task dependency: subtask dependencies can be configured; when the parent task finishes successfully, its subtasks are triggered. Multiple subtasks are separated by commas (,).
  22. Consistency: the scheduling center uses DB locks to ensure the consistency of distributed scheduling in the cluster; one scheduling trigger leads to exactly one execution.
  23. Custom task parameters: task parameters can be entered online for scheduling and take effect immediately.
  24. Scheduling thread pool: scheduling is triggered by multiple threads in the scheduling system, ensuring that scheduling runs accurately without blocking.
  25. Data encryption: communication between the scheduling center and the executors is encrypted, improving the security of scheduling information.
  26. Email alarm: email alarms on task failure are supported, and multiple email addresses can be configured to receive alarm emails.
  27. Pushed to Maven Central: the latest stable version is pushed to the Maven central repository for easy access and use.
  28. Running report: running data such as the number of tasks, scheduling counts, and the number of executors can be viewed in real time, along with scheduling reports such as the scheduling date distribution chart and the scheduling success distribution chart.
  29. Fully asynchronous: the task scheduling flow is designed and implemented fully asynchronously (asynchronous scheduling, asynchronous execution, asynchronous callbacks), which effectively shaves traffic peaks under dense scheduling and in theory supports tasks of any duration.
  30. Cross-platform: a generic HTTP task handler is provided natively (BEAN task "HttpJobHandler"); the business side only needs to supply an HTTP link, regardless of language or platform.
  31. Internationalization: the scheduling center supports internationalization settings; Chinese and English are available by default.
  32. Containerization: an official Docker image is provided and pushed to DockerHub in real time, making the product even more out of the box.
  33. Thread pool isolation: the scheduling thread pool is split for isolation; slow tasks are automatically demoted to a "slow" thread pool to avoid exhausting the scheduling threads, improving system stability.
  34. User management: online management of system users is supported, with administrator and regular user roles.
  35. Permission control: permissions are controlled at executor granularity; administrators have full permissions, while regular users can operate only on executors for which they have been granted permission.

1.3 Official Information:

  • Document address: www.xuxueli.com/xxl-job/#/
  • Source code address:
    • GitHub: github.com/xuxueli/xxl…
    • Gitee: gitee.com/xuxueli0323…
  • Environment:
    • Maven 3+
    • JDK 1.7+
    • MySQL 5.6+
  • Quick start:
    • For details about how to quickly build the XXL-Job framework, see the official documents:

www.xuxueli.com/xxl-job/#/?…

2. Use of XXL-Job

2.1 Preparations – Configure the scheduling center

  1. Download the official source code.
  2. Import the tables_xxl_job.sql script in the project's /xxl-job/doc/db/ directory into the database.

2.2 Configuring the Executor

  1. Import the dependency:

```xml
<!-- xxl-job-core -->
<dependency>
    <groupId>com.cdmtc</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.0.2</version>
</dependency>
```
  2. Configuration items:

The console port was changed to 8888 in this walkthrough, so note: the xxl.job.admin.addresses configuration (the visual interface address) must also use port 8888. The executor block holds the executor configuration information.

  • The meanings of the xxl-job executor configuration items are as follows:
    • xxl.job.admin.addresses: the deployment address(es) of the scheduling center. If the scheduling center is deployed as a cluster, separate multiple addresses with commas. The executor uses this address for heartbeat registration and task result callbacks.

    • xxl.job.executor.appname: the application name of the executor, used as the grouping key for executor heartbeat registration.

    • xxl.job.executor.ip: the IP address of the executor, used by the scheduling center to request and trigger tasks and for executor registration. Empty by default, meaning the IP is obtained automatically; with multiple NICs the IP can be set manually, in which case the Host is bound to that IP.

    • xxl.job.executor.port: the port of the executor, 9999 by default. When deploying multiple executors on one machine, configure a different port for each.

    • xxl.job.accessToken: the communication token between the scheduling center and the executor, enabled when non-empty.

    • xxl.job.executor.logpath: the path where the executor stores its log files; the process must have read and write permission on this path.

    • xxl.job.executor.logretentiondays: the number of days the executor's periodic log-cleanup keeps log files; expired files are deleted automatically. The value must be at least 3, otherwise the cleanup does not take effect.

Note that the configuration file of xxl-job executor can also be hosted by Disconf.
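For reference, a minimal executor configuration sketch matching the property names read by the XxlJobConfig class below (the record. prefix and all values come from this walkthrough's setup and are assumptions, not official defaults):

```properties
# scheduling center address (the console in this walkthrough runs on port 8888)
record.xxl.job.admin.addresses=http://127.0.0.1:8888/xxl-job-admin
# registration group of this executor
record.xxl.job.executor.appname=xxl-job-executor-cdmtc-record
# empty means the IP is detected automatically
record.xxl.job.executor.ip=
record.xxl.job.executor.port=9999
# empty means token authentication is disabled
record.xxl.job.accessToken=
record.xxl.job.executor.logpath=/data/applogs/xxl-job/jobhandler
record.xxl.job.executor.logretentiondays=30
```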

  3. Create XxlJobConfig.java:

```java
package com.cdmtc.config;

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * xxl-job config
 *
 * @author xuxueli 2017-04-28
 */
@Configuration
public class XxlJobConfig {
    private Logger logger = LoggerFactory.getLogger(XxlJobConfig.class);

    @Value("${record.xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${record.xxl.job.executor.appname}")
    private String appName;

    @Value("${record.xxl.job.executor.ip}")
    private String ip;

    @Value("${record.xxl.job.executor.port}")
    private int port;

    @Value("${record.xxl.job.accessToken}")
    private String accessToken;

    @Value("${record.xxl.job.executor.logpath}")
    private String logPath;

    @Value("${record.xxl.job.executor.logretentiondays}")
    private int logRetentionDays;


    @Bean(initMethod = "start", destroyMethod = "destroy")
    public XxlJobSpringExecutor xxlJobExecutor() {
        logger.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppName(appName);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);

        return xxlJobSpringExecutor;
    }

    /**
     * The "InetUtils" component provided by "spring-cloud-commons" can be used to flexibly
     * customize the registration IP for multi-NIC and in-container deployments.
     *
     * 1. Introduce the dependency:
     *     <dependency>
     *         <groupId>org.springframework.cloud</groupId>
     *         <artifactId>spring-cloud-commons</artifactId>
     *         <version>${version}</version>
     *     </dependency>
     *
     * 2. In the configuration file, or via container startup variables:
     *     spring.cloud.inetutils.preferred-networks: 'xxx.xxx.xxx.'
     *
     * 3. Obtain the IP:
     *     String ip_ = inetUtils.findFirstNonLoopbackHostInfo().getIpAddress();
     */


}
```

This is the executor configuration class; it reads the executor configuration information.

  • Two things about the XxlJobConfig configuration class are worth noting:

  • Component scanning

    • The official sample uses the @ComponentScan annotation to scan the com.example.demo.jobhandler package, loading the task handlers into the Spring container.
  • Obtaining the executor instance

    • The xxlJobExecutor() method instantiates the xxl-job executor object, whose start() method is called when the executor initializes and whose destroy() method is called when the executor is destroyed.
  4. Add a scheduled task:

2.3 Configuring the Visual Interface

  1. Configure the database path and other information

  2. Set the login account and password

These can be changed to a different account and password.
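A sketch of the relevant entries in the scheduling center's properties file (property names follow xxl-job 2.0.x and may differ in other versions; all values are placeholders):

```properties
# database holding the tables created by tables_xxl_job.sql
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=root
spring.datasource.password=root

# console login account
xxl.job.login.username=admin
xxl.job.login.password=123456

# console port (changed to 8888 in this walkthrough)
server.port=8888
```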

  3. Start the project: XxlJobAdminApplication.java.
  4. Open the visual interface at http://10.4.7.214:8080/xxl-job-admin/jobinfo (executors can be configured after logging in; the port can be modified from the default 8080, and in this walkthrough it was changed to 8888).

For other operations such as cluster configuration, see the official documents

2.4 Developing the First Task: Hello, World

  1. First create the scheduled task code in the microservice where the executor is configured, for example:
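A minimal sketch of such a handler, following the BEAN-mode conventions used later in this article (the handler name demoJobHandler matches the one referenced in section 3.3; the package and class names are illustrative):

```java
package com.cdmtc.jobhandler;

import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.annotation.JobHandler;
import com.xxl.job.core.log.XxlJobLogger;
import org.springframework.stereotype.Service;

/**
 * The @JobHandler value must equal the JobHandler field of the task
 * created in the scheduling center.
 */
@JobHandler(value = "demoJobHandler")
@Service
public class DemoJobHandler extends IJobHandler {

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        XxlJobLogger.log("Hello, World! param = {0}", param);
        return SUCCESS;    // inherited from IJobHandler
    }
}
```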

2. Start the microservice projects (Eureka, Config, etc.).
3. Start the scheduling center: XxlJobAdminApplication.java.

4. Log in to the scheduling center with the account and password, and configure the executor.
5. Go to the task management page.
6. Click "Add task" and set the related parameters.
7. Return to the task management page.
8. Clicking "Execute" runs the task once only.
9. Clicking "Log" opens the scheduling log page.
10. Clicking "Start" starts the scheduled task (it then runs according to its Cron expression), and the start button becomes a stop button.

If the stop button is never clicked, the task remains in the started state; clicking "Stop" stops the scheduled task and its Cron expression no longer takes effect.

11. Clicking "Edit" opens the scheduled task update page.
12. Clicking "Delete" deletes the scheduled task configuration directly.
13. Clicking "Executor" lists the scheduled tasks of that executor.
14. The task description and JobHandler are the search criteria for scheduled task configurations.

2.5 Running Reports

  • This page visually displays the running status of scheduled tasks
  • The illustration is as follows:

3. Detailed description of the functions of the visual interface

3.1 The following information must be filled in when adding an executor:

  • AppName: the application name that uniquely identifies each executor cluster. Executors register themselves periodically using AppName as the key, and tasks can then be scheduled against the executors discovered through this registration.
  • Name: the display name of the executor. Because AppName is limited to alphanumeric characters, its readability is poor; the name improves it.
  • Sort: the sort order of executors. Wherever an executor list is needed in the system, such as when creating a task, the list of available executors is read in this order.
  • Registration mode: the scheduling center can obtain executor addresses in either of two ways:
    • Automatic registration: the executor registers itself automatically, and the scheduling center dynamically discovers its address through the underlying registry.
    • Manual entry: executor addresses are entered manually for the scheduling center, separated by commas.
  • Machine address: editable only when the registration mode is "Manual entry"; the executor address information can be maintained manually.

Note that the value of AppName must match the xxl.job.executor.appname field in the sample project's application.properties file, and automatic registration should be selected. After the executor is added, it appears in the list. In this walkthrough, application.properties was replaced with bootstrap.yml, but the contents stayed the same.

3.2 The following information needs to be filled in when you add a task:

  • Executor: the executor bound to the task. When the task triggers scheduling, registered executors are discovered automatically, enabling automatic task discovery; it also makes grouping tasks easy. Each task must be bound to an executor, which can be set on the executor management page.
  • Task description: Provides task description for task management.
  • Routing policies: when the executor is deployed as a cluster, multiple routing policies are provided, including:
    • FIRST: always select the first machine.
    • LAST: always select the last machine.
    • ROUND: select each machine in turn.
    • RANDOM: randomly select an online machine.
    • CONSISTENT_HASH: each task selects a fixed machine via a hash algorithm, and all tasks are hashed uniformly across the machines.
    • LEAST_FREQUENTLY_USED: the least frequently used machine is selected first.
    • LEAST_RECENTLY_USED: the machine unused for the longest time is selected first.
    • FAILOVER: heartbeat checks run in sequence; the first machine that passes is selected as the target executor and scheduling is initiated.
    • BUSYOVER: idle checks run in sequence; the first machine that reports idle is selected as the target executor and scheduling is initiated.
    • SHARDING_BROADCAST (sharding broadcast): broadcast triggers all machines in the cluster to run the task, passing shard parameters; sharded tasks can be developed based on those parameters.
  • Cron: the Cron expression that triggers task execution (see the common expressions listed after this section).
  • Operation mode:
    • BEAN mode: tasks are maintained on the executor side as JobHandlers; a task is matched to a handler in the executor through the "JobHandler" property.
    • GLUE mode (Java): the task is maintained as source code in the scheduling center. It is actually a Java class inheriting from IJobHandler, maintained as "Groovy" source; it runs inside the executor project and can inject other services from the executor via @Resource/@Autowired.
    • GLUE mode (Shell): the task is maintained as source code in the scheduling center and is actually a shell script.
    • GLUE mode (Python): the task is maintained as source code in the scheduling center and is actually a Python script.
    • GLUE mode (NodeJS): the task is maintained as source code in the scheduling center and is actually a NodeJS script.
  • JobHandler:
    • This property takes effect only in BEAN mode; it corresponds to the custom value of the @JobHandler annotation on the JobHandler class developed in the executor.
  • Subtasks:
    • Every task has a unique task ID (available from the task list). When a task completes successfully, an active schedule of the tasks corresponding to its subtask IDs is triggered.
  • Blocking handling strategy: the strategy applied when schedules arrive too densely for the executor to process; options are single-machine serial (default), discard later schedules, and overwrite earlier schedules.
  • Failure handling strategy:
    • Failure alarm (default): when scheduling or execution fails, a failure alarm is triggered, by default an alarm email.
    • Failure retry: when scheduling fails, the system automatically retries in addition to raising the failure alarm. Note that an execution failure does not retry directly; it is determined by the callback return value.
  • Task parameters: the parameters the task needs at execution time; multiple parameters separated by commas are converted into an array when the task runs.
  • Alarm email: the email address(es) notified when task scheduling fails; multiple addresses are separated by commas (,).
  • Person in charge: the owner of the task.

Note that a similar window pops up when editing a task; its input items are the same as those of the new-task window described above.
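For reference, a few common Quartz-style Cron expressions of the kind used in the Cron field (fields: seconds, minutes, hours, day-of-month, month, day-of-week):

```
0/15 * * * * ?    every 15 seconds
0 0/15 * * * ?    every 15 minutes
0 0 2 * * ?       every day at 02:00
0 0 0 1 * ?       at midnight on the first day of every month
```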

3.3 BEAN-Mode Tasks

  • BEAN mode:
    • The task logic lives in the "executor" project as a JobHandler, as demonstrated in the Hello, World starter case above.
  • Three things are worth noting about this code:
    • The xxl-job @JobHandler annotation must be used to name the JobHandler, e.g. demoJobHandler; the JobHandler field of the task created in the scheduling center must have the same value.
    • The class must inherit the IJobHandler abstract class and implement its execute() method, which carries the task logic.
    • The IJobHandler abstract class also has init() and destroy() methods. They are empty methods called when a task instance is initialized and destroyed, and the task implementation class may optionally override them, as sketched below.
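A sketch of a handler that overrides the optional lifecycle hooks (signatures as in the xxl-job 2.0.x IJobHandler; the handler name is illustrative):

```java
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.annotation.JobHandler;
import org.springframework.stereotype.Service;

@JobHandler(value = "lifecycleJobHandler")
@Service
public class LifecycleJobHandler extends IJobHandler {

    @Override
    public void init() {
        // called when the task instance is initialized, e.g. open a connection
    }

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        // the task logic itself
        return SUCCESS;
    }

    @Override
    public void destroy() {
        // called when the task instance is destroyed, e.g. release resources
    }
}
```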

3.4 GLUE (Java) Mode Tasks

  • The task is maintained as source code in the scheduling center and updated online via the Web IDE, compiled and taking effect in real time, so there is no need to specify a JobHandler. The development process is as follows:
  • Step 1: create a scheduling task
    • Configure the new task's parameters as described under "task scheduling properties" above and select "GLUE mode (Java)", as shown below:

The scheduling center will trigger this task every 15 minutes.

  • Step 2: develop the task code

  • Select the GLUE (Java) task in the task list and click the "GLUE" button to the right of the task to enter its Web IDE, where the task code can be developed (you can also develop the code in your own IDE and then copy and paste it into the editor).

  • Version backtracking: in the Web IDE of the GLUE task, select "Version backtracking" in the upper right corner to list the update history of the GLUE code (30 versions are kept). Selecting a version displays its code, and saving that code rolls the task back to that version. The GLUE task code and the Web IDE are shown below:
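A sketch of what the GLUE (Java) source edited in the Web IDE can look like (the class still extends IJobHandler; the commented-out injection shows where executor services could be wired in, and the class name is illustrative):

```java
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.log.XxlJobLogger;

public class DemoGlueJobHandler extends IJobHandler {

    // Services from the executor project can be injected via @Resource/@Autowired,
    // e.g. (someService is hypothetical):
    // @Resource
    // private SomeService someService;

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        XxlJobLogger.log("GLUE(Java) task running, param = {0}", param);
        return ReturnT.SUCCESS;
    }
}
```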

3.5 Sharding Broadcast Tasks

  • When the task routing policy is set to "sharding broadcast", one scheduling trigger makes all executors in the cluster run the task, passing shard parameters; sharded tasks can be developed based on those parameters.

  • Sharding broadcast shards by executor. Dynamically scaling the executor cluster dynamically increases the number of shards cooperating on the work, which can significantly improve throughput and speed when processing large volumes of data.

  • Developing a sharding broadcast task is the same as developing an ordinary task; the difference is that you obtain the shard parameters and use them to process your shard of the work. The process is as follows:

  • Step 1: develop the JobHandler code

    • In the sample project's com.example.demo.jobhandler package, create a ShardingJobHandler task; the key code is as follows:
```java
@JobHandler(value = "shardingJobHandler")
@Service
public class ShardingJobHandler extends IJobHandler {

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        // Shard parameters
        ShardingUtil.ShardingVO shardingVO = ShardingUtil.getShardingVo();
        XxlJobLogger.log("Shard parameters: current shard index = {0}, total shards = {1}",
                shardingVO.getIndex(), shardingVO.getTotal());

        // Business logic
        for (int i = 0; i < shardingVO.getTotal(); i++) {
            if (i == shardingVO.getIndex()) {
                XxlJobLogger.log("shard {0}: hit, start processing", i);
            } else {
                XxlJobLogger.log("shard {0}: ignore", i);
            }
        }
        return SUCCESS;
    }
}
```
  • ShardingUtil.getShardingVo() returns the shard parameters, which expose two properties:

    • shardingVO.getIndex(): the index of the current executor in the executor cluster list (starting from 0).
    • shardingVO.getTotal(): the total number of shards, i.e. the number of machines in the executor cluster.
  • Step 2: create a scheduling task

    • Configure the new task's parameters as described under "task scheduling properties" above: select "BEAN mode" as the operation mode and "sharding broadcast" as the routing policy, and fill the JobHandler property with the value defined in the task's @JobHandler annotation, as shown in the following figure:

The scheduling center broadcasts the shardingJobHandler task every 15 minutes (because the Cron expression is set to run every 15 minutes).

  • The sharding broadcast routing policy applies not only to BEAN mode but also to GLUE (Java) mode. It suits business scenarios such as the following (see the sketch after this list):
  • Sharded task scenario
    • With a cluster of 10 executors processing 100,000 records, each machine only needs to process 10,000 records, cutting the elapsed time by a factor of 10.
  • Broadcast task scenario
    • Run a shell script on every executor machine, broadcast a cache update to every cluster node, and so on.
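A sketch of the 100,000-record scenario (the commented-out DAO call is a hypothetical illustration, not part of xxl-job): each executor claims only the rows whose id maps to its shard.

```java
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.annotation.JobHandler;
import com.xxl.job.core.log.XxlJobLogger;
import com.xxl.job.core.util.ShardingUtil;
import org.springframework.stereotype.Service;

@JobHandler(value = "shardedDataJobHandler")
@Service
public class ShardedDataJobHandler extends IJobHandler {

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        ShardingUtil.ShardingVO shard = ShardingUtil.getShardingVo();
        int index = shard.getIndex();   // this executor's shard number, 0-based
        int total = shard.getTotal();   // number of executors in the cluster

        // Each executor processes roughly 1/total of the data, e.g. the DAO could run:
        // SELECT * FROM orders WHERE MOD(id, #{total}) = #{index}
        // List<Order> myShare = orderDao.findByShard(index, total);

        XxlJobLogger.log("processing shard {0} of {1}", index, total);
        return SUCCESS;
    }
}
```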

3.6 Task Management

  • The task list shows each task's ID, description, operation mode, Cron, owner, and status. The following operations can be performed on a task:
    • Execute: manually trigger one run of the task without affecting the original scheduling rules.
    • Pause/Resume: pause or resume a task. Note that this only affects the task's subsequent scheduling; scheduling that has already been triggered is unaffected.
    • Log: view the task's historical scheduling logs. On the log page you can see the scheduling result and execution result of each run, and click "Execution log" to view the complete executor log.
    • Edit: opens the task editing window, where task attributes can be updated and saved.
    • GLUE: applies only to GLUE tasks; opens the task's Web IDE, where the task code can be developed.
    • Delete: deletes the task.

3.7 Task Scheduling Logs

  • In the xxl-job scheduling center, click "Scheduling log" to open the scheduling log page.
  1. Viewing Scheduling Logs
    • You can view the scheduling result and execution result of each task on the Scheduling Log page, as shown in the following figure:

The following information can be obtained from the scheduling logs:
  • Scheduling time: when the scheduling center triggered this schedule and sent the task execution signal to the executor.
  • Scheduling result: the result of the trigger in the scheduling center; 200 means success, 500 or other values mean failure.
  • Scheduling remark: log information from the scheduling center about this trigger.
  • Execution time: the callback time after the task finished in the executor.
  • Execution result: the task's execution result in the executor; 200 means success, 500 or other values mean failure.
  • Execution remark: log information from the executor about this task.
  • In the sample project, scheduling logs are stored in /data/applogs/xxl-job/xxl-job-demo.log and can be configured in the logback.xml file.

  2. Viewing execution logs
    • Click the "Execution log" button on the right of a log row to jump to the execution log page and view the complete log printed by the business code, as shown below:

  3. Terminating a running task
  • This feature applies only to tasks that are executing. On the task log page, click the "Terminate task" button on the right; a termination request is sent to the executor running the task, which terminates the task and empties the entire execution queue for it, as shown below:

  • Task termination is implemented by interrupting the executing thread, which raises an InterruptedException in it. If the JobHandler catches and swallows that exception internally, the termination function will not work.

  • Therefore, if termination does not work as described, the InterruptedException must be handled specially in the JobHandler and rethrown upward, as sketched below. Likewise, when the JobHandler starts child threads, the child threads must not catch and swallow InterruptedException; they should actively rethrow it.
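A sketch of a handler that cooperates with termination by rethrowing the interruption (a minimal illustration, not from the official sample):

```java
import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.handler.IJobHandler;
import com.xxl.job.core.handler.annotation.JobHandler;
import org.springframework.stereotype.Service;

@JobHandler(value = "interruptibleJobHandler")
@Service
public class InterruptibleJobHandler extends IJobHandler {

    @Override
    public ReturnT<String> execute(String param) throws Exception {
        try {
            Thread.sleep(60000);    // a long-running step
        } catch (InterruptedException e) {
            // do minimal cleanup here, then rethrow so xxl-job can terminate the task
            throw e;
        }
        return SUCCESS;
    }
}
```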

  4. Deleting execution logs

On the task log page, select the executor and the task, then click the "Clear" button on the right. A "Log clearing" dialog appears in which different log-clearing policies can be selected; click "OK" to perform the cleanup, as shown below:

3.8 Execution Failure Alarm

  • Overview: when a scheduled task fails to execute, the failure is automatically logged, and if an alarm email address is configured in the CDMTC project's application.properties file, an alarm email is sent.

  • Effect demonstration:

  • Every execution failure can trigger a notification

  • To enable the email notification function, you need to obtain an authorization code from your mailbox provider; how to obtain it differs by provider. For QQ Mail: open the settings from the account button, select the POP3/SMTP service, click enable, and obtain the authorization code as prompted.

  • Obtain the authorization code icon:

The authorization code serves as the password in the configuration file.

4. Remotely invoke xxl-job

4.1 Remote Call API Description

  • The examples below use RestTemplate; other approaches are possible (clustering would need to be considered later), and RestTemplate is used here only to demonstrate the remote call interface.

4.2 Environment Preparation

  • pom.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.2.RELEASE</version>
    </parent>

    <groupId>com.demo</groupId>
    <artifactId>springboot_quick</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <xdocreport.version>1.0.5</xdocreport.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
```

4.3 The startup class

  • MySpringBootApplication.java

```java
package com.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

/**
 * create by: zhanglei
 */
//@EnableFeignClients
@SpringBootApplication
public class MySpringBootApplication {
    public static void main(String[] args) {

        SpringApplication.run(MySpringBootApplication.class);
    }
    @Bean
    RestTemplate restTemplate(RestTemplateBuilder restTemplateBuilder) {
        return restTemplateBuilder.build();
    }
}
```

4.4 The test class

  • TestDemo.java

```java
import com.demo.MySpringBootApplication;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;
import java.util.ArrayList;
import java.util.List;

/**
 * @create_by: zhanglei
 * @create_time 2019/7/2
 */
@SpringBootTest(classes = MySpringBootApplication.class)
@RunWith(SpringRunner.class)
public class TestDemo {

    @Test
    public void test() {
        System.out.println("hello,world");
    }

    @Autowired
    private RestTemplate restTemplate;

    /* The Cookie is generated from the user name and password and basically does not change;
       it can be stored in the database or Redis and read back, avoiding repeated logins. */
    /* If the Cookie will expire at some later time, refresh it periodically or log in again and save the new Cookie. */

    /**
     * Simulate login and obtain the Cookie
     */
    @Test
    @Test
    public void login() {
        HttpHeaders headers = new HttpHeaders();
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("userName", "admin");
        map.add("password", "123456");
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/login", request, String.class);
        // e.g. XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly
        System.out.println(response.getHeaders().get("Set-Cookie").get(0));
    }


    /* Group operations --> operations on executors */

    /**
     * Save a group (add an executor)
     */
    @Test
    public void saveGroup() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("appName", "xxl-job-executor-cdmtc-record");       // application name
        map.add("title", "Test executor");                         // executor name
        map.add("order", "1");                                     // sort order
        map.add("addressType", "1");                               // registration mode: 0 = automatic, 1 = manual
        map.add("addressList", "10.4.7.214:9999,10.4.7.214:9999"); // separate multiple addresses with commas
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobgroup/save", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null} means the record was added to the database
    }

    /**
     * Modify a group (update an executor)
     */
    @Test
    public void updateGroup() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "4");                                  // the id must not be empty
        map.add("appName", "xxl-job-executor-cdmtc-record"); // application name
        map.add("title", "Test executor 323223");            // executor name
        map.add("order", "1");                               // sort order
        map.add("addressType", "1");                         // registration mode: 0 = automatic, 1 = manual
        map.add("addressList", "10.4.7.214:9999");           // separate multiple addresses with commas
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobgroup/update", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }

    /**
     * Delete a group (remove an executor)
     */
    @Test
    public void removeGroup() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "4");    // the id must not be empty when deleting
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobgroup/remove", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }

    /* Scheduled task operations: query, add, edit, start, stop, delete, etc. */

    /**
     * Get the list of tasks under the specified executor
     */
    @Test
    public void pageList() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("jobGroup", "2");
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/pageList", request, String.class);
        // Prints a JSON page of the task records, e.g.:
        // {"recordsFiltered":4,"data":[{"id":13,"jobGroup":2,"jobCron":"0/1 * * * * ?",
        //  "jobDesc":"test the HelloWorld","author":"zhanglei","executorRouteStrategy":"FIRST",
        //  "executorHandler":"firstJobHandler","executorBlockStrategy":"SERIAL_EXECUTION",
        //  "glueType":"BEAN","jobStatus":"NONE"}, ...],"recordsTotal":4}
        System.out.println(response.getBody());
    }

    /**
     * Add a scheduled task configuration
     */
    @Test
    public void addInfo() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("jobGroup", "1");                               // primary key id of the executor
        map.add("jobCron", "0/1 * * * * ?");                    // Cron expression
        map.add("jobDesc", "Test task. I'm the latest test task. Ahhhhh!");  // task description
        map.add("author", "zhanglei");                          // owner
        map.add("alarmEmail", "[email protected]");        // alarm email
        map.add("executorRouteStrategy", "FIRST");              // routing policy
        map.add("executorHandler", "Test JobHandler");          // task handler name
        map.add("executorParam", "121454");                     // task parameters
        map.add("executorBlockStrategy", "SERIAL_EXECUTION");   // blocking strategy
        map.add("executorTimeout", "101");                      // task timeout, in seconds
        map.add("executorFailRetryCount", "1");                 // failure retry count
        map.add("glueType", "BEAN");                            // GLUE type, see com.xxl.job.core.glue.GlueTypeEnum
        map.add("glueSource", "");                              // GLUE source code
        map.add("glueRemark", "GLUE code initialization");      // GLUE remark
        map.add("childJobId", "");                              // subtask ids, separated by commas
        // map.add("jobStatus", "");                            // based on quartz
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/add", request, String.class);
        System.out.println(response.getBody());                 // {"code":200,"msg":null,"content":"15"}
    }

    /**
     * Modify a scheduled task configuration
     */
    @Test
    public void updateInfo() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "14");                                    // note: updates must include the primary key
        map.add("jobGroup", "1");                               // primary key id of the executor
        map.add("jobCron", "0/1 * * * * ?");                    // Cron expression
        map.add("jobDesc", "Test task. I'm the latest test task. Ahhhhh!");  // task description
        map.add("author", "zhanglei");                          // owner
        map.add("alarmEmail", "[email protected]");        // alarm email
        map.add("executorRouteStrategy", "FIRST");              // routing policy
        map.add("executorHandler", "Test JobHandler");          // task handler name
        map.add("executorParam", "121454");                     // task parameters
        map.add("executorBlockStrategy", "SERIAL_EXECUTION");   // blocking strategy
        map.add("executorTimeout", "101");                      // task timeout, in seconds
        map.add("executorFailRetryCount", "1");                 // failure retry count
        map.add("glueType", "BEAN");                            // GLUE type, see com.xxl.job.core.glue.GlueTypeEnum
        map.add("glueSource", "");                              // GLUE source code
        map.add("glueRemark", "GLUE code initialization");      // GLUE remark
        map.add("childJobId", "");                              // subtask ids, separated by commas
        // map.add("jobStatus", "");                            // based on quartz
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/update", request, String.class);
        System.out.println(response.getBody());                 // {"code":200,"msg":null,"content":null}
    }

    /**
     * Delete a scheduled task configuration
     */
    @Test
    public void removeInfo() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "15");    // note: deletes must include the primary key
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/remove", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }

    /**
     * Start a scheduled task
     */
    @Test
    public void startInfo() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "13");    // id of the task to start
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/start", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }


    /**
     * Stop a scheduled task
     */
    @Test
    public void stopInfo() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "13");    // id of the task to stop
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/stop", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }

    /**
     * Trigger a scheduled task to run once
     */
    @Test
    public void startOne() {
        HttpHeaders headers = new HttpHeaders();
        List<String> cookies = new ArrayList<>();
        /* Cookie obtained from the login call */
        cookies.add("XXL_JOB_LOGIN_IDENTITY=6333303830376536353837616465323835626137616465396638383162336437; Path=/; HttpOnly");
        headers.put(HttpHeaders.COOKIE, cookies);
        MultiValueMap<String, String> map = new LinkedMultiValueMap<String, String>();
        map.add("id", "13");                 // id of the task to trigger
        map.add("executorParam", "13");      // task parameters for this trigger
        HttpEntity<MultiValueMap<String, String>> request = new HttpEntity<MultiValueMap<String, String>>(map, headers);
        ResponseEntity<String> response = restTemplate.postForEntity("http://localhost:8888/xxl-job-admin/jobinfo/trigger", request, String.class);
        System.out.println(response.getBody());    // {"code":200,"msg":null,"content":null}
    }
}
```

4.5 Conclusion

  • Besides the common functions described in this article, xxl-job also has advanced features such as task dependency, event triggering, access tokens, and failover.
  • The examples show that xxl-job provides powerful unified task scheduling and can serve most business scenarios that require task scheduling. The whole system has only one user; the user name and password are stored in the xxl-job-admin.properties file of the scheduling center (server side), as shown below:

The author has put task permission management on the agenda in the TODO list of the official xxl-job documentation; future versions will keep improving!