Laravel queues provide a unified API across a variety of queue backends, such as Beanstalkd, Amazon SQS, Redis, and even relational databases. Queues let you defer time-consuming tasks, such as sending mail, which drastically reduces web request response times.

The queue configuration file is stored in config/queue.php. It contains connection settings for each queue driver, including the database, Beanstalkd, Amazon SQS, Redis, and synchronous (for local use) drivers. It also includes a null queue driver that simply discards queued jobs.

Why use queues?

Queues are generally used for two things:

Asynchronous processing, and retrying work that may fail.

You may have other reasons for using queues, but these two are the most basic.

When are queues used?

Given the reasons above, several kinds of tasks are good candidates for queues:

Time-consuming tasks, such as converting the format of an uploaded file.

Tasks whose delivery must be guaranteed, such as sending SMS messages: calls to a third-party API can always fail, so retrying is necessary to ensure delivery.

When using queues, it is important to confirm that the task really can run asynchronously; if asynchrony would cause problems, you should not use a queue.

Required settings for the driver

database.php

The redis section is configured in config/database.php. By default it contains a connection named default.

Edit the .env configuration file and fill in REDIS_HOST, REDIS_PASSWORD, and REDIS_PORT with the values for the Redis instance on your server, matching the configuration items this default connection expects.
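
As a reference, a minimal sketch of what that default redis connection typically looks like in config/database.php for a Laravel 5.x project (your version may differ slightly):

// config/database.php (excerpt) -- the env() calls read REDIS_HOST, REDIS_PASSWORD
// and REDIS_PORT from .env; the values shown here are the framework defaults
'redis' => [

    'client' => 'predis',

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
    ],

],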

queue.php

In .env, first set QUEUE_DRIVER; since we are going to use Redis, set it to redis.

The connection value of the redis queue connection refers to the default redis connection in config/database.php.
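
Put together, the relevant parts of config/queue.php look roughly like this (a sketch based on the Laravel 5.x defaults):

// config/queue.php (excerpt) -- QUEUE_DRIVER selects one of the connections below;
// 'connection' => 'default' refers to the default redis connection in config/database.php
'default' => env('QUEUE_DRIVER', 'sync'),

'connections' => [

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'retry_after' => 90,
    ],

],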

The database driver

To use the database queue driver, you need a database table to hold the jobs. Generate a migration for it with the queue:table Artisan command, and then run php artisan migrate to create it:

php artisan queue:table

Handle failed tasks

Sometimes a queued job will fail. Don’t worry, things aren’t always smooth sailing.

Laravel has a convenient built-in way to specify the maximum number of times a job may be attempted. When a job exceeds this number of attempts, it is inserted into the failed_jobs table. To create the migration for the failed_jobs table, use the queue:failed-table command, and then run the migrate Artisan command to create the table:

php artisan queue:failed-table

Once the migration has been created, run the migrate command to create the table:

php artisan migrate
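
Besides landing in the failed_jobs table, a job class (such as the Demo job created later in this article) can also define a failed() method, which Laravel calls once the final attempt has failed. A minimal sketch; the logging here is purely illustrative:

// app/Jobs/Demo.php (excerpt) -- optional hook invoked after the last failed attempt
public function failed(\Exception $exception)
{
    // React to the permanent failure, e.g. log it or notify someone.
    \Illuminate\Support\Facades\Log::error('Demo job finally failed: '.$exception->getMessage());
}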

Running the queue worker

php artisan queue:work --daemon --quiet --queue=default --delay=3 --sleep=3 --tries=3
--daemon

The queue:work Artisan command includes a --daemon option that forces the queue worker to keep processing jobs without ever rebooting the framework. This results in a significant reduction in CPU usage compared to the queue:listen command.

In practice, this option is usually added to the command run under Supervisor to reduce CPU usage.

--quiet

Suppresses all output from the worker.

--delay=3

How long, in seconds, to delay a job before retrying it after it fails. I recommend not setting this too short: if a job failed for a transient reason (for example, a network issue), retrying too quickly may just lead to consecutive failures.

--sleep=3

How long, in seconds, the worker sleeps when it polls Redis and finds no jobs waiting. Choose this based on how urgent your jobs are; for very urgent jobs, don’t make the worker sleep too long.

--tries=3

Defines the maximum number of times a failed job will be attempted. Set it according to how important the job is; three is usually enough.
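
As a side note, the maximum number of attempts can also be declared on the job class itself (job classes are created in the next section) via a $tries property, which takes precedence over the --tries value passed on the command line. A minimal sketch:

// Inside a job class such as app/Jobs/Demo.php
public $tries = 3; // takes precedence over the --tries option of queue:work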

Create a task

Generate task class

In your application, queued jobs live in the app/Jobs directory by default. If the directory does not exist, it is created automatically when you run the make:job Artisan command. You can generate a new job with the following Artisan command:

php artisan make:job Demo

The generated class implements the Illuminate\Contracts\Queue\ShouldQueue interface, which tells Laravel that the job should be pushed onto a queue rather than executed synchronously.

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Log;

class Demo implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $param;
    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($param = '')
    {
        $this->param = $param;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        Log::info('Hello, '.$this->param);
    }
}


Controller code

// Requires "use App\Jobs\Demo;" at the top of the controller
public function queue_demo()
{
    $num = rand(1, 999999999);

    // This job will be dispatched to the default queue...
    Demo::dispatch($num);
}

Start the queue worker

php artisan queue:work --queue=default

Since this is a local environment, the worker above has to be running; when the endpoint is accessed, the job is pushed onto the queue and picked up by the worker.
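
For completeness, the endpoint mentioned above is just an ordinary route pointing at the controller method; the controller name used here (QueueController) is an assumption:

// routes/web.php -- hypothetical route for triggering the dispatch
Route::get('/queue-demo', 'QueueController@queue_demo');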

In production, Supervisor is required to keep the queue worker running.

Supervisor configuration

Installing Supervisor

Supervisor is a process monitor for Linux that automatically restarts your queue:listen or queue:work processes if they fail. To install Supervisor on Ubuntu, use the following command:

sudo apt-get install supervisor

If manually configuring Supervisor sounds a bit overwhelming, consider using Laravel Forge, which automatically installs and configures Supervisor for your Laravel project.

Configuring Supervisor

Supervisor configuration files are stored in the /etc/supervisor/conf.d directory. In this directory you can create any number of configuration files that tell Supervisor how to monitor your processes. For example, let’s create a laravel-worker.conf that starts and monitors a queue:work process:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log

The numprocs directive in this example tells Supervisor to run and monitor eight queue:work processes and restart them if they fail. Of course, you should change the queue:work sqs portion of the command directive to reflect the queue driver you are using (for this article, queue:work redis).

Once the configuration file has been created, update the Supervisor configuration and start the processes with the following commands:

sudo supervisorctl reread

sudo supervisorctl update

sudo supervisorctl start laravel-worker:*

Before using supervisorctl, the supervisord service itself must be running (for example, supervisord -c /etc/supervisord.conf); otherwise the commands above will report an error.

For details on setting up and using Supervisor, see the official Supervisor documentation.

Q&A

  1. unix:///var/run/supervisor.sock no such file

    This error means Supervisor is installed but the supervisord service has not been started.

    Solution: start the supervisord service first (for example, supervisord -c /etc/supervisord.conf).

  2. The process specified in command starts, but Supervisor keeps restarting it

    Problem description: the command starts the process in the background, for example $path/bin/elasticsearch -d.

    Workaround: Supervisor cannot detect the PID of a process that puts itself in the background, and Supervisor itself already acts as the background daemon, so there is no need to daemonize the process yourself; run it in the foreground.

  3. Multiple supervisord instances are running, so processes cannot be shut down properly

    Problem description: supervisord -c /etc/supervisord.d/xx.conf was run directly before supervisord -c /etc/supervisord.conf, so some processes ended up being managed by more than one supervisord instance and could not be shut down normally.

    Solution: use ps -ef | grep supervisord to find all running supervisord instances and kill the extra processes.

For the full example code, see the GitHub repository.