This article is reprinted from Liu Yue's technology blog: v3u.cn/a_id_115
As we all know, Celery is a simple, flexible, and reliable distributed system for processing large volumes of messages. In a previous article, Python3.7 + Tornado5.1.1 + Celery3.1 + Rabbitmq3.7.16, we ended up having to patch the source code of a third-party library because of a conflict with the async keyword. In fact, we can package the Celery service into a Docker image, so that whenever we need Celery, or any system that depends on it, we simply run the image as a container service, with no tedious configuration or installation.
Start with a new celery_with_docker folder and enter it: cd celery_with_docker
Then create a Dockerfile:
FROM python
LABEL author="liuyue"
LABEL purpose=''
RUN apt update
RUN pip3 install setuptools
ENV PYTHONIOENCODING=utf-8
# Build the working folder
RUN mkdir -p /deploy/app
WORKDIR /deploy/app
# Only copy requirements.txt; everything else will be mounted with -v
#COPY app/requirements.txt /deploy/app/requirements.txt
#RUN pip3 install -r /deploy/app/requirements.txt
RUN pip3 install celery
# Processes are started via docker-compose.yml
#CMD ["/usr/bin/supervisord"]
CMD ["/bin/bash"]
In other words, the base image is the official Python image, on top of which we install Celery.
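If you want to build the image by hand rather than letting docker-compose do it for you, a standalone build would look like this (the tag matches the image name used in docker-compose.yml below):

docker build -t celery-with-docker-compose:latest .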
Then create a new docker-compose.yml:
version: '3.4'
services:
  myrabbit:
    #restart: always
    #build: rabbitmq/
    image: rabbitmq:3-management
    # hostname: rabbit-taiga
    environment:
      RABBITMQ_ERLANG_COOKIE: SWQOKODSQALRPCLNMEQG
      # RABBITMQ_DEFAULT_USER: "guest"
      # RABBITMQ_DEFAULT_PASS: "guest"
      # RABBITMQ_DEFAULT_VHOST: "/"
      # RABBITMQ_NODENAME: taiga
      RABBITMQ_DEFAULT_USER: liuyue
      RABBITMQ_DEFAULT_PASS: liuyue
    ports:
      - "15672:15672"
      # - "5672:5672"
  api:
    #restart: always
    stdin_open: true
    tty: true
    build: ./
    image: celery-with-docker-compose:latest
    volumes:
      - ./app:/deploy/app
    ports:
      - "80:80"
    command: ["/bin/bash"]
  celeryworker:
    image: celery-with-docker-compose:latest
    volumes:
      - ./app:/deploy/app
    command: ['celery', '-A', 'tasks', 'worker', '-c', '4', '--loglevel', 'info']
    depends_on:
      - myrabbit
This configuration file pulls the rabbitmq image and starts the RabbitMQ service with the username and password liuyue / liuyue. It then builds the Celery image from our Dockerfile, mounts the host's app directory to /deploy/app inside the containers, and starts the Celery worker against that code.
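Since the worker is a stateless service reading from the same broker, you could also run several worker containers with docker-compose's --scale flag (an optional tweak, not part of the original setup):

docker-compose up -d --scale celeryworker=2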
Finally, we just need to create an app folder on the host and put a few task scripts in it.
Create tasks.py:
from celery import Celery

# The broker hostname is the rabbitmq service name from docker-compose.yml
SERVICE_NAME = 'myrabbit'

app = Celery(backend='rpc://', broker='amqp://liuyue:liuyue@{0}:5672/'.format(SERVICE_NAME))

@app.task
def add(x, y):
    print(123123)
    return x + y
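As a side note, if a task can fail transiently (for example, when it talks to an external service), Celery supports automatic retries. A minimal sketch using the same app object as above (the bind/self.retry pattern is standard Celery API, but this particular task is an illustration, not part of the original project):

@app.task(bind=True, max_retries=3, default_retry_delay=5)
def unreliable_add(self, x, y):
    # Pretend the addition depends on something that can fail transiently
    try:
        return x + y
    except Exception as exc:
        # Re-queue the task, up to max_retries times, 5 seconds apart
        raise self.retry(exc=exc)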
Then create test.py, which calls the task:
import time
from tasks import add
# celery -A tasks worker -c 4 --loglevel=info
t1 = time.time()
result = add.delay(1, 2)
print(result.get())
print(time.time() - t1)
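Note that result.get() blocks until the worker finishes. If you would rather not block, the AsyncResult handle returned by delay() can be polled instead; a small sketch using standard Celery API, added here for illustration:

result = add.delay(1, 2)
print(result.ready())          # False while the task is still queued or running
print(result.get(timeout=10))  # Raises a TimeoutError if the worker takes too long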
The final directory structure of the project looks like this:
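A sketch of the layout, reconstructed from the steps above (the original post shows it as a screenshot):

celery_with_docker/
├── Dockerfile
├── docker-compose.yml
└── app/
    ├── tasks.py
    └── test.py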
Then execute the command docker-compose up --force-recreate in the project root directory.
Celery and RabbitMQ services are now started
Open http://localhost:15672 in the browser and log in with liuyue / liuyue.
To get a shell inside the api container, run:

docker exec -i -t celery-with-docker-compose-master_api_1 /bin/bash
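The exact container name depends on the directory docker-compose was run from (here it was celery-with-docker-compose-master). An equivalent command that avoids guessing the generated name is to go through docker-compose itself:

docker-compose exec api /bin/bash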
Once inside, you can see that the container shares the host's app folder through the volume mount.
We then execute the asynchronous task: python3 test.py
You can see that the execution was successful: test.py prints the task result 3 and the elapsed time, while the worker's log shows the print(123123) output from the task body.
So on the host machine, nothing needs to be configured beyond Docker itself: both the build and the execution of the asynchronous task queue are isolated inside containers, and only the code and scripts live on the host, shared into the containers through volume mounts. In other words, developers just write code on the host and never have to worry about configuration and deployment issues.
Finally, attach the full code of the project: gitee.com/QiHanXiBei/…