I recently worked on a small web project based on Django 2.x and wanted to deploy it to a cloud server. I had usually used Ubuntu as the server OS for deployments, but in China CentOS is often preferred for its lower update frequency and greater stability, so I switched my server to CentOS 8.0 to try it out. I chose Docker so I could experiment in a sandboxed environment without worrying about breaking the host.
This post records the deployment process, both for my own later review and as a reference for others.
Enough talk, let's get to work!
1. Upload project files
Many people who develop a project locally don't know how to upload it to the server, so here is one approach (experts can skip ahead). My development tool is PyCharm, which can be configured with a remote server and then upload the project files directly.
1.1 Configuring the server address
Go to Tools -> Deployment -> Configuration on the menu bar.
After opening it, click the plus sign in the upper left corner, select SFTP, enter the configuration name in the popup window, and create a new server configuration, as shown in the following figure.
Enter the IP address of your remote server in the Host field, the system user in the User name field, and that user's login password in the Password field; leave the other options at their defaults.
After the configuration is complete, click the Test Connection button to verify that the remote server is reachable.
Connecting to the remote server is not enough; you also need to map the local project path to the destination path on the server. Still in the Deployment window, switch to the Mappings tab: for Local Path, select the root directory of the project; for Deployment Path, select the target path on the remote server (if the connection test succeeded, clicking the folder icon next to the input box will browse the remote server's directories directly).
1.2 Uploading project files
After the path mapping is configured, you can upload the project files. In the project tree on the left of PyCharm, right-click the project root directory, find Deployment, and select Upload to 'server name', where 'server name' is the name you gave the server configuration earlier.
After a short wait you should see "Upload to 'server name' completed: XXX files transferred" (since I had uploaded everything before, I only uploaded one file for the demonstration).
2. Installing Docker
I won't repeat Docker's advantages here; if you're not familiar with them, a quick search will cover it. For the installation method, readers comfortable with English can also refer to the official documentation: docs.docker.com/engine/inst…
2.1 Installing yum-utils
yum-utils is a collection of utilities for managing repositories and extension packages (mainly repositories). We install it here so that we can add the Docker repo source in the next step.
yum install -y yum-utils
2.2 Adding the docker-ce.repo source
Let me explain why we add this source (readers familiar with Linux can skip ahead). Manually adding a repo lets the system look for packages in that repo in addition to the official CentOS sources, and vendors often publish updates to their own repos faster than those updates reach the official repositories.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Seeing this, some readers may wonder: we haven't installed Docker yet, so what is this docker-ce thing? I'll keep it in suspense and explain what docker-ce means later.
2.3 Installing containerd.io
containerd.io provides the container runtime that Docker depends on; on CentOS 8.0 we install it explicitly before Docker itself.
dnf install https://download.docker.com/linux/centos/8/x86_64/stable/Packages/containerd.io-1.3.7-3.1.el8.x86_64.rpm
CentOS 8 adopts DNF as the system package management tool. DNF overcomes several bottlenecks of the YUM package manager, with improvements in user experience, memory footprint, dependency resolution, runtime performance, and more.
2.4 Installing Docker
yum install -y docker-ce docker-ce-cli
Here docker-ce appears again. What does it stand for? In March 2017, Docker split the original product into two branches: Docker CE and Docker EE. Docker CE is the free Community Edition, while Docker EE is the paid Enterprise Edition. Since this is a small personal project (mainly due to poverty), installing the free Docker CE is enough.
2.5 Starting Docker
Docker does not start its service automatically after installation, so we start it manually (and enable it at boot):
systemctl enable docker
systemctl start docker
You can run docker -v or docker version to view the version information and confirm that the Docker service started successfully.
2.6 Docker acceleration
When we download an image, by default it comes straight from the official registry overseas, which often takes a long time. We can improve download speed by configuring a registry mirror (accelerator) for Docker.
Here I chose Alibaba Cloud's image accelerator: cr.console.aliyun.com/cn-hangzhou…
Open /etc/docker/daemon.json and add the accelerator address obtained above to the file, as follows:
{
  "registry-mirrors": ["<your own mirror accelerator address>"]
}
After editing, restart the service for the configuration to take effect:
systemctl daemon-reload
systemctl restart docker
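As a side note, daemon.json is plain JSON, and a malformed edit will stop the Docker daemon from starting. A small sketch like the one below can generate a guaranteed-valid file; the mirror URL here is a placeholder of my own, not a real endpoint, so substitute the address from your own console:

```python
import json

# Placeholder accelerator address -- replace with the one from your console.
mirror = "https://example.mirror.aliyuncs.com"

# Build the config as a Python dict, then serialize it to valid JSON.
config = {"registry-mirrors": [mirror]}
text = json.dumps(config, indent=2)
print(text)

# Writing it out requires root, e.g.:
#   pathlib.Path("/etc/docker/daemon.json").write_text(text)
```

Once the daemon restarts, docker info should list the configured mirror under its registry mirrors section.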
3. Deploy the database
The database I use here is MySQL; since Docker is already installed, the database goes into a container as well.
3.1 Downloading the MySQL Image
docker pull mysql
By default this pulls the latest version of MySQL. The pulled version can be checked with docker images.
3.2 Creating a Bridge Network
MySQL runs in its own container, and containers are isolated from each other by default. For the container running the back-end service to communicate with the MySQL container, both must be attached to the same bridge network when they are created and started; containers on the same bridge network can reach one another. So we first create a bridge network:
docker network create mynet    # mynet is the name of the bridge network; we will use it later when creating containers
Other bridge network operations can be explored with the docker network command.
3.3 Starting MySQL
The general form is docker run -<parameter> <value> image:tag; if the tag is omitted, it defaults to latest.
docker run -itd --network mynet --name mydb -p 3307:3306 -v /opt/mysql/sql:/opt/sql -e MYSQL_ROOT_PASSWORD=123456 mysql
Parameter Description:
-itd is shorthand for -i -t -d: run the container in interactive mode, allocate a pseudo-terminal, and keep it running in the background. Once the container is running, you can enter it with docker exec -it mydb bash.
--network specifies which bridge network the container joins; its value here is the mynet network we created earlier.
--name is the easiest to understand: it sets the container's name.
-p specifies the port mapping in host:container form; here port 3307 on the host is mapped to port 3306 in the container.
-v mounts a directory, either as -v container-dir (an anonymous volume) or -v local-dir:container-dir (a bind mount). This keeps the SQL resources in the container in sync with the local directory: modify the local SQL files and restart the container to update them.
-e sets an environment variable; MYSQL_ROOT_PASSWORD sets the password of MySQL's root user, which I set to 123456. Note that the official mysql image refuses to initialize without a root-password option; if you really want no password, set MYSQL_ALLOW_EMPTY_PASSWORD=yes instead.
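One practical consequence of the mynet network: the Django settings should point at the container name mydb and the container port 3306, not the host's 3307, because Docker's embedded DNS resolves container names inside the bridge network. A minimal sketch of the DATABASES setting, assuming Django's MySQL backend and a hypothetical database name of my own choosing:

```python
# settings.py sketch -- assumes the back-end container joins the same
# "mynet" bridge network as the MySQL container started above.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "myproj",       # hypothetical database name
        "USER": "root",
        "PASSWORD": "123456",   # matches MYSQL_ROOT_PASSWORD above
        "HOST": "mydb",         # the MySQL container's name, resolved by Docker DNS
        "PORT": "3306",         # the container port, not the host's 3307
    }
}
```

The host's 3307 mapping is only needed for clients connecting from outside Docker, such as a desktop MySQL GUI.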
3.4 Database Initialization
Before starting the project, we need to create the project's database in advance and run the initial SQL script, which is included in the project files. There are two ways to run the initialization script: in the project's Dockerfile, or manually on the server side (we chose the manual route).
First copy the SQL script to /opt/mysql/sql on the host; thanks to the -v mount, it is automatically synchronized to the container's /opt/sql directory:
cp /projects/myproj/init.sql /opt/mysql/sql/    # adjust the source path to wherever your project lives
Then you need to go into the container and execute the script.
docker exec -it mydb bash       # enter the container
mysql -uroot -p123456           # log in to MySQL with the root password set earlier
source /opt/sql/init.sql        # run the initialization script from the mysql prompt
4. Build a project image
If we want to run the project in a container, the container needs a system that can support the project. Here we download an Ubuntu image and then build the project image from our project files on top of it, so that the application can run inside a container.
4.1 Downloading an Ubuntu Image
Note that besides official images, the Docker image registry also hosts custom images uploaded by others. You can search the registry for Ubuntu images with the docker search ubuntu command.
The penultimate column, OFFICIAL ([OK]), marks official images, and the STARS count gives a rough sense of an image's quality. Here we pull the official Ubuntu image, the first result.
docker pull ubuntu
4.2 Writing the Dockerfile
To customize an image, a Dockerfile is essential. It is a text file used to build the image, containing the instructions and arguments the build requires.
FROM ubuntu
MAINTAINER HOU [email protected]
ADD . /usr/src/
WORKDIR /usr/src
RUN ./init.sh
CMD /usr/src/run.sh
Here is what each line means:
FROM ubuntu is easy to understand: FROM declares the base image, and our custom image is built on top of the Ubuntu image pulled above.
MAINTAINER HOU [email protected] specifies the image author and contact information (newer Dockerfiles use LABEL maintainer=... instead).
ADD . /usr/src/ is a copy instruction: it copies the project files from the current directory into /usr/src/ of the image being built.
WORKDIR /usr/src sets the working directory to the directory we copied the project files into.
RUN ./init.sh executes the initialization script from the project files; it is typically used to install the components the project needs on the base system.
CMD /usr/src/run.sh sets the command or script to run when the container starts, so the back-end service starts automatically with the container.
4.3 Writing the init.sh file
As mentioned earlier, the main purpose of init.sh is to install and configure the components the project needs on the image's system. The version written here targets the Ubuntu 18.04 image and installs the Python 3 environment and the other dependencies the project requires.
#!/bin/bash
cat sources.list > /etc/apt/sources.list
apt update
apt install python3 python3-pip -y
apt install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev
apt install -y unixodbc unixodbc-dev libmysqlclient-dev
pip3 install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple
pip3 install gunicorn -i https://mirrors.aliyun.com/pypi/simple
If you are familiar with Linux, you'll know that by default Ubuntu downloads packages from its official sources, which is very slow from here, so we usually switch to a domestic mirror manually. The first command in init.sh replaces the system's sources with Alibaba Cloud's. The sources.list it reads sits alongside the project files (in the working directory), with the following content:
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
4.4 Writing the run.sh file
CMD /usr/src/run.sh, the last line of the Dockerfile, executes our startup script. The script's name doesn't matter, as long as it matches what the Dockerfile references. The script is as simple as a single command:
#!/bin/bash
gunicorn EQMS.wsgi:application -w 2 -b 0.0.0.0:8000
You may wonder: isn't this a one-line command? Why are there two lines? Notice that when we wrote init.sh earlier, its first line was also #!/bin/bash. What does #!/bin/bash mean? It is the shebang line, which tells the system which interpreter should run the script; both of the scripts we wrote here are bash scripts. Linux supports many other shells, such as sh, csh, ksh, and so on; I won't go into them here, and interested readers can look them up.
The second line, gunicorn EQMS.wsgi:application -w 2 -b 0.0.0.0:8000, is the command that starts the service. Gunicorn (Green Unicorn) is a high-performance Python WSGI HTTP server widely used on Unix. It is compatible with most web frameworks, simple to use, performant, and light on system resources. Here -w sets the number of worker processes; since Gunicorn relies on the operating system for load balancing, we set it to 2. -b binds the server to the specified socket, in plain terms the access address; here the service is reachable on port 8000 from any IP address.
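As an aside, -w is fixed at 2 here, but Gunicorn's documentation suggests (2 × CPU cores) + 1 workers as a starting point. A quick sketch of that rule of thumb; recommended_workers is my own helper name, not part of Gunicorn:

```python
import multiprocessing

def recommended_workers(cores=None):
    """Gunicorn's suggested starting point: (2 x cores) + 1 workers."""
    if cores is None:
        # Fall back to the number of CPUs visible to this machine.
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1

print(recommended_workers(1))  # a single-core cloud VM -> 3
```

On a tiny single-core VM this gives 3, so the author's choice of 2 is in the same ballpark; the right value ultimately depends on workload and memory.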
4.5 Building project Mirrors
With the preparation done, upload the newly added files to the server using the method described in section 1, then connect to the server via SSH or another tool. cd into the project directory containing the Dockerfile and run the command to build the project image:
docker build -t myproj:latest .
-t (long form --tag) sets the image name and tag, in the format name:tag; the tag can be thought of as the image version.
[Note] Do not omit the trailing . in the command: it specifies which directory to use as the build context. Since we have already changed into the directory containing the Dockerfile, . refers to the current directory.
If some warnings appear during the build, don't worry; they won't affect the overall result. Once the image builds successfully, you can see it in the image list with the docker images command.
4.6 Running containers
Now comes the most exciting moment, the image has been created and you can now run the container.
docker run -itd --name myserver -p 8000:8000 --network mynet -v /projects/myproj:/usr/src/ myproj
/projects/myproj is the path where my project is stored on the server.
Conclusion
At this point the whole project is successfully deployed; go ahead and try accessing it in your browser. If you want to get a bit more advanced, you can also set up Nginx for reverse proxying and load balancing; interested readers can explore that on their own.
If gunicorn failed to start, or the project cannot be reached after starting, run docker logs <container name> to check for error messages.