The dream-seeker characters
The sample code for this article has also been updated in sync to the HelloGithub-Team repository.
The deployment steps covered earlier in this series were tedious and painful:
- There are n commands to execute on the server
- The local environment differs from the server environment: code that runs fine locally hangs on the server and won't start
- When that happens, n more commands are executed on the server to track the problem down
- The code gets updated locally, the next deployment goes live, and history repeats itself; it makes you want to die
So is there a way to align the local development environment with the online environment? Then we could validate locally before deploying, and as long as validation passes, we can be 99% sure it will work once deployed (the remaining 1% is reserved for programmer metaphysics).
The solution is to use Docker.
Docker is a container technology that provides us with an isolated runtime environment. To use Docker, we first write an image file, which describes what the isolated environment should look like, what dependencies it needs installed, what applications it runs, and so on. You can think of it as the construction blueprint for a cargo ship.
With an image, Docker can build a virtualized, isolated environment on the system, called a container, just as a factory builds ships from a blueprint. And the factory can build as many as it likes.
Once a container is built and started, the isolated environment is up and running. Because the image is written in advance, the environment inside the container is the same whether it runs locally or online, which guarantees consistency between the local and online environments and greatly reduces problems caused by environmental differences.
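To make the analogy concrete, here is a minimal sketch of the two docker commands involved (the image and container names are just placeholders):

# Build an image (the blueprint) from the Dockerfile in the current directory
$ docker build -t myimage .
# Run as many containers (ships) from that image as you like
$ docker run --name ship1 myimage
$ docker run --name ship2 myimage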
So, let's first write the Docker image files.
Just as the settings were split into local.py and production.py, we first create the following directory structure to store the image files for the development environment and the online environment respectively:
HelloDjango-blog-tutorial\
    blog\
    ...
    compose\
        local\
        production\
            django\
            nginx\
    ...
The local directory holds the Docker image file for the development environment. Under production\, the django folder holds the image file for the Django application, and the nginx folder holds the Nginx image file for the online environment.
The online environment
Image file
In the production\django directory, we create the image file for the blog project's online environment, named Dockerfile:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev
WORKDIR /app
RUN pip install pipenv -i https://pypi.douban.com/simple
COPY Pipfile /app/Pipfile
COPY Pipfile.lock /app/Pipfile.lock
RUN pipenv install --system --deploy --ignore-pipfile
COPY . /app
COPY ./compose/production/django/start.sh /start.sh
RUN sed -i 's/\r//' /start.sh
RUN chmod +x /start.sh
First, FROM python:3.6-alpine at the top declares that this image is built on the python:3.6-alpine base image. Alpine is a small, lightweight, and secure Linux distribution. We need a Python environment to run our program, so we use this small base image, which contains a complete Python environment, to build our application image.
ENV PYTHONUNBUFFERED 1 sets the environment variable PYTHONUNBUFFERED=1, so Python writes its output without buffering and log messages show up in the container output in real time.
The next RUN command installs the system libraries that the image-processing package Pillow depends on, since our Django project uses Pillow to process images.
Then WORKDIR /app sets the working directory; subsequent commands, including those executed in containers started from this image, take this directory as the current working directory.
Then RUN pip install pipenv installs Pipenv. The -i flag specifies the PyPI index; in China this is commonly set to the Douban mirror so that packages download faster.
Next we copy the dependency manifests Pipfile and Pipfile.lock into the container and run pipenv install to install the dependencies. With the --system flag, Pipenv does not create a virtual environment but installs the dependencies into the container's Python environment; since the container is itself an isolated environment, there is no need for one. --deploy makes the build fail if Pipfile.lock is out of date, and --ignore-pipfile tells Pipenv to install strictly from Pipfile.lock.
Then we copy the project files into the container's /app directory. (Some files are not needed at runtime, so in a moment we'll add a .dockerignore file; files listed there will not be copied into the container.)
Finally, we copy the start.sh script to the container's / directory, strip carriage returns (the file may have been edited on Windows, but the container runs Linux), and make it executable.
start.sh runs the database migrations, collects static files, and starts the Gunicorn service:
#!/bin/sh
python manage.py migrate
python manage.py collectstatic --noinput
gunicorn blogproject.wsgi:application -w 4 -k gthread -b 0.0.0.0:8000 --chdir=/app
We'll tell the container to execute this script on startup, which launches our Django application. --chdir=/app sets /app as the working directory so that Gunicorn can find blogproject.wsgi:application.
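For reference, the short flags in start.sh expand to the following long-form Gunicorn flags (an equivalent invocation, spelled out for readability):

# -w 4: four worker processes; -k gthread: threaded workers; -b: bind address
$ gunicorn blogproject.wsgi:application --workers 4 --worker-class gthread --bind 0.0.0.0:8000 --chdir /app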
Create a.dockerignore file in the project root directory and specify the files not to be copied to the container:
.*
_credentials.py
fabfile.py
*.sqlite3
The online environment uses Nginx, so we also write an image file for Nginx, placed in the compose\production\nginx directory:
FROM nginx:1.17.1
# replace with domestic source
RUN mv /etc/apt/sources.list /etc/apt/sources.list.bak
COPY ./compose/production/nginx/sources.list /etc/apt/
RUN apt-get update && apt-get install -y --allow-unauthenticated certbot python-certbot-nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY ./compose/production/nginx/HelloDjango-blog-tutorial.conf /etc/nginx/conf.d/HelloDjango-blog-tutorial.conf
This image is built on the nginx:1.17.1 base image. We then update the system package index and install Certbot, which will be used to configure the HTTPS certificate. Since nginx:1.17.1 is based on Debian and quite a few dependencies have to be installed, installation can be slow, so we first swap the apt sources for a domestic mirror to speed it up a little.
The last step copies the application's Nginx configuration into the conf.d directory of the nginx container. Its contents are the same as when configuring Nginx directly on the host:
upstream hellodjango_blog_tutorial {
    server hellodjango_blog_tutorial:8000;
}

server {
    server_name hellodjango-blog-tutorial-demo.zmrenwu.com;

    location /static {
        alias /apps/hellodjango_blog_tutorial/static;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://hellodjango_blog_tutorial;
    }

    listen 80;
}
One difference from configuring Nginx directly on the host is the upstream module, which essentially forwards requests. Nginx forwards all requests to the upstream hellodjango_blog_tutorial, and that upstream is in fact the container running the Django application, named hellodjango_blog_tutorial (we will start it shortly); inside the docker-compose network, the service name resolves to that container's address, which is what makes the server hellodjango_blog_tutorial:8000 line work.
With the images written, we can now build them and run containers from them. But wait: we have two images, one for the Django application and one for Nginx, which means building twice and starting twice. That's a bit of a hassle. Is there a way to build everything with one command and run everything with another? The answer is docker-compose.
docker-compose describes each container's image, along with the parameters used to build and run it, in a single YAML file. This lets us build all the containers with a single build command and start them all with a single up command.
We’ll create a production.yml file at the root of our project to orchestrate django and Nginx containers.
version: '3'

volumes:
  static:
  database:

services:
  hellodjango_blog_tutorial:
    build:
      context: .
      dockerfile: compose/production/django/Dockerfile
    image: hellodjango_blog_tutorial
    container_name: hellodjango_blog_tutorial
    working_dir: /app
    volumes:
      - database:/app/database
      - static:/app/static
    env_file:
      - .envs/.production
    ports:
      - "8000:8000"
    command: /start.sh

  nginx:
    build:
      context: .
      dockerfile: compose/production/nginx/Dockerfile
    image: hellodjango_blog_tutorial_nginx
    container_name: hellodjango_blog_tutorial_nginx
    volumes:
      - static:/apps/hellodjango_blog_tutorial/static
    ports:
      - "80:80"
      - "443:443"
version: '3' declares that the file uses version 3 of the docker-compose syntax.
volumes:
  static:
  database:
This declares two named data volumes, static and database. What are data volumes for? A Docker container is an isolated environment: once the container is deleted, the files inside it are deleted with it. Imagine we start the blog container and run it for a while, so the database inside accumulates data. When we update the code or change the image, we delete the old container, then rebuild and run a new one; the database in the old container is deleted along with it, and all the blog posts we worked so hard on go up in flames.
Therefore, we let Docker's data volumes manage the data that needs to be stored persistently. As long as the data is managed by a data volume, a new container can pick the data up from the volume when it starts, recovering the data of the deleted container.
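If you are curious where these volumes live, docker can list and inspect them. Note that docker-compose prefixes volume names with the project name, which by default is derived from the project directory, so the exact name below is an assumption:

# List all named volumes
$ docker volume ls
# Show details (including the storage path on the host) of the database volume
$ docker volume inspect hellodjango-blog-tutorial_database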
We have two kinds of data that need volume management: the database file and the application's static files. The database file is easy to understand, but why do static files need volume management as well? Couldn't we just start the new container and run python manage.py collectstatic to collect them again?
The answer is no. Data volumes not only persist data, they also share files across containers. Remember, containers are isolated not only from the host machine but also from each other. Nginx runs in a separate container, so where do the static files it serves come from? They are stored in the application container, which the Nginx container cannot access. So these files are also managed by a data volume, and the Nginx container takes the static files from the volume and maps them into its own filesystem.
Next, we define two services: the application service hellodjango_blog_tutorial and the nginx service.
build:
  context: .
  dockerfile: compose/production/django/Dockerfile
This tells docker-compose to build the image using the current directory (the directory containing the yml file) as the build context, with the image file at the path given by dockerfile.
image and container_name name the image to be built and the container, respectively.
working_dir specifies the working directory.
volumes:
  - database:/app/database
  - static:/app/static
Also note that data volumes can only map directories, not single files, so for our application's database we move the db.sqlite3 file into a database directory. We therefore need to change the database settings in Django's configuration file so that the database file is generated inside the database folder at the project root:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'database', 'db.sqlite3'),
    }
}
env_file:
  - .envs/.production
When the container starts, it reads the contents of the.envs/.production file and injects them into environment variables.
So let's create this file and put the SECRET_KEY in it:
DJANGO_SECRET_KEY=2pe8eih8oah2_2z1=7f84bzme7^bwuto7y&f(#@rgd9ux9mp-3
Be sure to add these files containing sensitive information to the version control tool’s ignore list to prevent them from being accidentally pushed to an open repository for public viewing.
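If you need a fresh key rather than reusing an old one, Django ships a helper for generating random secret keys, and one extra line keeps the env files out of Git (assuming Git is your version control tool):

# Generate a new random secret key using Django's built-in helper
$ python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
# Ignore the directory holding the env files
$ echo ".envs/" >> .gitignore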
ports:
  - "8000:8000"
This exposes port 8000 in the container and binds it to port 8000 on the host machine, so that we can reach the container through the host's port 8000.
command: /start.sh tells the container to execute start.sh on startup, which launches the Django application.
The nginx service is similar. Note that it takes the static files from the static data volume and maps them to /apps/hellodjango_blog_tutorial/static inside the nginx container, matching the alias in its configuration:
location /static {
    alias /apps/hellodjango_blog_tutorial/static;
}
This way Nginx can find and serve the static files correctly.
With everything in place, execute the following two commands locally to build and start the containers:
docker-compose -f production.yml build
docker-compose -f production.yml up
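If you prefer the containers to keep running in the background, docker-compose also supports detached mode:

$ docker-compose -f production.yml up -d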
Now you can access the application in the container via the domain name. Of course, since Nginx is running in a container on the local machine, you need to modify the local hosts file to resolve the domain name to the local IP address.
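For example, on Linux or macOS, appending a line like the following to /etc/hosts (using the demo domain from the Nginx configuration above) points the domain at the local machine:

$ echo "127.0.0.1 hellodjango-blog-tutorial-demo.zmrenwu.com" | sudo tee -a /etc/hosts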
If local access works, you can start the containers in the same way by executing the two commands above directly on the server, and your Django application is successfully deployed on the server.
The development environment
Since the online environment uses Docker, the development environment might as well use Docker too. The image and docker-compose files for the development environment are a little simpler than for the online environment, because Nginx is not used.
Here is the image file for the development environment, placed under compose\local:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev
WORKDIR /app
RUN pip install pipenv -i https://pypi.douban.com/simple
COPY Pipfile /app/Pipfile
COPY Pipfile.lock /app/Pipfile.lock
RUN pipenv install --system --deploy --ignore-pipfile
COPY ./compose/local/start.sh /start.sh
RUN sed -i 's/\r//' /start.sh
RUN chmod +x /start.sh
Note that unlike the online environment, we don't copy the whole codebase into the container. Code in the online environment is generally stable, whereas in development the code changes and gets debugged constantly; if we copied it into the container, changes made outside the container would not be noticed inside it, and the application running in the container could not pick up our edits. So in development we manage the code through a Docker data volume instead.
start.sh no longer starts Gunicorn; it starts the development server with runserver:
#!/bin/sh
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
Then create a docker-compose file named local.yml (in the project root, alongside production.yml) to manage the development container:
version: '3'

services:
  django_blog_tutorial_v2_local:
    build:
      context: .
      dockerfile: ./compose/local/Dockerfile
    image: django_blog_tutorial_v2_local
    container_name: django_blog_tutorial_v2_local
    working_dir: /app
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    command: /start.sh
Note that we mount the entire project root into the container's /app directory, so that code changes are reflected inside the container in real time.
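By analogy with the production commands, building and starting the development container looks like this:

$ docker-compose -f local.yml build
$ docker-compose -f local.yml up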
Online deployment
If the containers run locally, they will run in the online environment too, because the server builds exactly the same environment as the containers we tested locally. So as long as the server has Docker, our application is almost certain to run successfully.
First, install Docker on the server. The installation method varies from system to system but is simple in every case; we take CentOS 7 as an example.
First install the necessary dependencies:
$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Then add the repository source:
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Install Docker:
$ sudo yum install docker-ce docker-ce-cli containerd.io
Start Docker:
$ sudo systemctl start docker
Configure a Docker registry mirror (here the one provided by DaoCloud) to accelerate image pulls; otherwise pulling images will be very slow:
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
Run hello-world with Docker to confirm that Docker was installed successfully:
$ sudo docker run hello-world
Next, install docker-compose. It is a Python package, so we can install it directly with pip:
$ pip install docker-compose
In order to avoid possible permissions problems when running some docker commands, we add the current system user to the docker group:
$ sudo usermod -aG docker ${USER}
After adding the group, restart the shell (if connected over SSH, disconnect and reconnect).
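You can then verify that docker commands work without sudo:

$ docker ps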
Everything is ready; all we need now is the east wind!
Now let's get our application running inside Docker containers. Since we previously deployed the application directly on the host, first stop the related services:
# Stop nginx because we will be running nginx in the container
$ sudo systemctl stop nginx
# Stop the blog application
$ supervisorctl stop hellodjango-blog-tutorial -c ~/etc/supervisord.conf
Next, pull the latest code to the server, go to the root directory of the project, and create the environment variable files required by the online environment:
$ mkdir .envs
$ cd .envs
$ vi .production
Write the online environment's secret key into the .production environment variable file:
DJANGO_SECRET_KEY=2pe8eih8oah2_2z1=7f84bzme7^bwuto7y&f(#@rgd9ux9mp-3
Save and exit.
Go back to the project root directory and run the build command to build the image:
$ docker-compose -f production.yml build
Now we could start the containers from the built images, but for convenience we still let Supervisor manage the Docker process, so we modify the blog application's Supervisor configuration to start the Docker containers instead.
Open ~/etc/supervisor/conf.d/hellodjango-blog-tutorial.ini and change it to the following:
[program:hellodjango-blog-tutorial]
command=docker-compose -f production.yml up --build
directory=/home/yangxg/apps/HelloDjango-blog-tutorial
autostart=true
autorestart=unexpected
user=yangxg
stdout_logfile=/home/yangxg/etc/supervisor/var/log/hellodjango-blog-tutorial-stdout.log
stderr_logfile=/home/yangxg/etc/supervisor/var/log/hellodjango-blog-tutorial-stderr.log
Essentially, instead of starting the service with Gunicorn, we now start Docker.
After modifying the ini configuration, remember to reread and update so that the new configuration takes effect:
$ supervisorctl -c ~/etc/supervisord.conf
> reread
> update
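To confirm the program came up, you can ask Supervisor for its status, or list the containers from the project root:

# Inside the supervisorctl shell
> status hellodjango-blog-tutorial

# Or check the containers directly
$ docker-compose -f production.yml ps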
The Docker containers start smoothly; visit our blog site to check. Aside from writing the image and orchestration files once, we deployed the blog application with one command to build and one command to start the containers. If we ever switch servers, we just run the build and start commands again and the service is back up. That's the benefit of Docker.
PyCharm, the most popular IDE for Django development, also integrates with Docker. I have fully embraced Docker in my own development work; the experience, convenience, and stability are unprecedented.
HTTPS
Finally, since Nginx now runs in a new container, we need to apply for and configure the HTTPS certificate again. This is the same process as before, when Nginx was on the host, except that this time we run certbot inside the container. Certbot was already installed when we wrote the nginx image, so we just execute the command inside the running container.
Use the docker ps command to list the running containers and note the name of the nginx container, then use the docker exec -it <container name> <command> form to execute a command in the specified container. So we execute:
$ docker exec -it hellodjango_blog_tutorial_nginx certbot --nginx
Enter the information as prompted; the process is exactly the same as in the earlier section on deploying on the host.
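One thing to keep in mind: Let's Encrypt certificates expire after 90 days, so renewal has to run periodically. A hypothetical cron entry on the host (container name as in production.yml) might look like this:

# Attempt renewal twice a day; certbot only renews certificates close to expiry
0 0,12 * * * docker exec hellodjango_blog_tutorial_nginx certbot renew --quiet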
Automated deployment
The Fabric automated deployment script we wrote earlier needs no modification, since Supervisor now starts the Docker containers for us. Try it from your local machine:
pipenv run fab -H server_ip --prompt-for-login-password -p deploy
Perfect! Our blog is now running stably online, and more and more people will visit it, so let's keep improving its features!
“Explain Open Source Project series” — so that people interested in open source projects are no longer afraid, and initiators of open source projects are no longer alone. Follow along to discover the joys of programming and of using and contributing to open source projects. Leave us a message to get in touch and join us; let's help more people fall in love with open source and contribute to it ~