When I was an intern, company policy required using Windows for daily development, but the project's test and production environments ran on Linux. So I could only write a feature locally, send the code to a server through a jump host for testing, fix any problems directly on the server, and then copy the changes back to my machine before moving on to the next task. Since I don't like editing code directly on the server in Vim, and I couldn't stand shuttling code through the jump host every single time, and since life is suffering anyway, I set off on this little journey of self-torment.

To keep coding, chatting, and playing happily on Windows while still being able to deploy and test code on Linux with one click, this article uses Docker for Windows to set up, inside Windows, a Linux test environment consistent with the one on the server, without copying our code to a remote machine. With a small footprint, we can test our code locally, with one click, in the same runtime environment the project will actually run in.

Results demonstration

  1. One-click run of the project:

  2. One-click redeployment and testing after modifying the project:


1. Installing and Configuring Docker for Windows

Download Docker Desktop

Prerequisites: 64-bit Windows 10 Pro and Microsoft Hyper-V

1.1 Installation Procedure

  1. Enable Hyper-V:

    • Control Panel -> Programs and Features -> Turn Windows features on or off -> check “Hyper-V” -> restart
  2. Install Docker.

  3. Set the address of a domestic registry mirror:

    • After starting Docker, right-click the Docker tray icon and choose “Settings” to open the settings panel;

    • Select “Daemon” on the left of the settings panel and fill in “Registry mirrors” with the address of a domestic registry mirror (Aliyun is recommended; it was faster in my tests, though you may need to register an account);

    • Click “Apply”;

  4. Set up a shared directory:

    • Go to the Docker Settings panel and select the “Shared Drives” option.

    • Select the drive letter you want to share and click “Apply”. You may need to enter the password of your Windows account.
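With the installation finished, a quick sanity check confirms that the client can reach the Docker daemon (an optional, hedged example; both are standard Docker CLI commands):

docker version          # should print both the client and the server (daemon) versions
docker run hello-world  # pulls a tiny test image and runs it once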

1.2 Possible Problems:

During Docker installation and setup, the most likely place to run into trouble is the “shared directory” step: after selecting a drive letter and clicking “Apply”, the checkbox may simply uncheck itself. If the Windows drive fails to mount, any container started with the “-v” option will hang. The Docker log file (C:\ProgramData\DockerDesktop\service.txt) shows:

[Error] Unable to mount C drive: C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/usr/bin/nsenter1\": stat /usr/bin/nsenter1: no such file or directory": unknown.

The solutions to this problem found online are roughly as follows:

  • Modify the local security policy
  • Create a new user
  • Install another version of Docker, or reinstall it

I tried the first and third methods: the first did nothing for my Docker, while reinstalling (the third) fixed it, which is rather strange.


2. Getting Started

2.1 Creating a Working Directory

First, set the directory shared between Windows and the Docker Linux test environment to D:\pc_share\. The directory tree under it looks like this:

.
|-- apps					# Place our project
|   |-- test_demo1				# Example Project 1
|   |   |-- app.log				
|   |   `-- app.py				
|   `-- test_demo2				# Example Project 2
|       |-- app.log
|       `-- app.py				
`-- etc						# Place our configuration file
    |-- DockerFile			        # Image configuration file, used to build the docker images
    |-- requirements.txt		        # Records the third-party libraries the Python projects depend on
    `-- supervisor				# Place the run configuration files for the sample projects
        |-- test_demo1.conf	        	# Startup information for example project 1, started and monitored by Supervisor
        `-- test_demo2.conf			# Startup information for example project 2, started and monitored by Supervisor

The above is the directory tree of all the files used in this experiment. The projects live under “D:\pc_share\apps”. For ease of demonstration, two Tornado-based web projects, test_demo1 and test_demo2, are created here, each containing only a single startup file app.py:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, from test_demo_1")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),				# routing
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)					# Example project 2 listens on a different port (8889), otherwise one of the two would fail to start because the port is already in use
    tornado.ioloop.IOLoop.current().start()		# Start the web service

The above is a very simple web program: it listens on port “8888”, and visiting “http://localhost:8888” returns the reply “Hello, from test_demo_1”.

2.2 Creating the Runtime Environment with Docker

As Section 2.1 shows, at least three things are required to run these projects: an operating system, Python, and the Tornado framework. To stay consistent with the environment the project goes live in, we assume CentOS 7, Python 2.7, and Tornado 5.1.1. We therefore use a Dockerfile to configure the corresponding runtime environment; readers not familiar with Dockerfiles can browse the Dockerfile tutorial.

To make the runtime environment easier to modify, the Dockerfile is split into two parts: configuring the lower-layer runtime environment and configuring the upper-layer runtime environment.

2.2.1 Configuring the Lower-Layer Runtime Environment

The diagram above shows the configuration process of the lower-layer runtime environment; the corresponding part of the Dockerfile is:

FROM centos:7 as centos_python2				# Base image "centos:7"; the build stage is named centos_python2
MAINTAINER mileskong <xiangqian_kong@126.com>		# Author information

# Environment variables; the backslash continues the line
ENV PYPI_REPO=https://mirrors.aliyun.com/pypi/simple/ \
	PYTHONIOENCODING=UTF-8 \
	SHARE_PATH=/mnt/share				# Mount point inside the container, corresponding to "D:\pc_share\" on Windows

# Install necessary software for CentOS 7. Only three commonly used packages are installed here; adjust as needed
RUN set -ex \
	&& yum -y install epel-release wget net-tools \
	&& yum clean all				# Clear the yum cache to keep the image small

# Install pip (installing Python 2.7 itself is unnecessary, because CentOS 7 ships with Python 2.7.5)
ENV PIP_VERSION 19.1.1
RUN set -ex; \
	\
	wget -O get-pip.py 'https://bootstrap.pypa.io/get-pip.py'; \
	\
	python get-pip.py \
		--disable-pip-version-check \
		--no-cache-dir \
		"pip==$PIP_VERSION"\; \ pip --version; \ \ find /usr/local -depth \
		\( \
			\( -type d -a \( -name test -o -name tests \) \) \
			-o \
			\( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \
		\) -exec rm -rf '{}' +; \
	rm -f get-pip.py

The Dockerfile above completes the configuration of an image containing CentOS 7, Python 2.7, and pip.

Save the above as D:\pc_share\etc\DockerFile, then run the following commands in a Windows command prompt to build the image centos_python2:1.0 from the DockerFile (centos_python2 is the image name, 1.0 is the tag, i.e. the version number):

cd D:\pc_share\etc						# Go to the directory containing the configuration files

# Docker image build command:
#   -t [image name:tag]   : sets the image name and tag
#   -f [file name]        : specifies the Dockerfile to use
#   --target=[stage name] : selects a stage in a multi-stage build
docker build -t centos_python2:1.0 -f DockerFile --target=centos_python2 .

Using the commands above, you can build centos_python2:1.0, an image containing CentOS 7, Python 2.7, and pip. Run the docker images command in CMD to see the result:
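The listing should contain lines roughly like the following (a sketch reconstructed from the text, since the original screenshot is not reproduced here; the image id and creation time will differ on your machine):

REPOSITORY          TAG       IMAGE ID        CREATED              SIZE
centos_python2      1.0       <image id>      a few minutes ago    237MB
centos              7         <image id>      ...                  ~200MB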

The resulting centos_python2:1.0 image is only about 237 MB; using alpine instead of CentOS 7 as the base could keep the image under 30 MB.

2.2.2 Configuring the Upper-Layer Runtime Environment

The lower-layer runtime environment provides a fairly general CentOS-based Python platform. It is the common foundation for all the Python projects and rarely needs to change during later development and testing. Different Python projects, however, raise different issues, which roughly fall into the following categories:

  • They depend on different third-party libraries;
  • Different projects have different startup parameters and startup methods;
  • The configuration files of the same project may change frequently;

Because a Docker image is read-only and building one takes time, while project files, configuration files, and dependencies change often during development, we cannot afford to rebuild the image for every change. To solve this problem, we move the frequently changing parts into a second image: the upper-layer runtime environment.

To solve the first problem above, Python projects conventionally use requirements.txt to record and control the third-party libraries they depend on. The libraries recorded in requirements.txt are installed with the following command:

pip install -r ${path}/requirements.txt

The third-party library list for this article's sample projects is placed at D:\pc_share\etc\requirements.txt:

tornado==5.1.1			# The web framework the projects depend on, version 5.1.1
supervisor==4.0.2		# Supervisor, used to start and monitor the Python projects

When the third-party libraries a project depends on change, you can simply edit requirements.txt and rebuild the upper-layer runtime image on top of the lower-layer one, avoiding unnecessary work.
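For example, after editing requirements.txt, only the upper-layer image needs to be rebuilt; the lower-layer image is reused as-is (this is the same build command that appears again in section 2.3):

docker build -t centos_python2_supervisor:1.0 -f DockerFile --target=centos_python2_supervisor .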

To solve the second and third problems, we use Supervisor to manage the Python projects (for a detailed tutorial, see Danshui's blog: Supervisor Tutorial). By writing a startup configuration file for each Python project in Supervisor's format and placing it in the folder Supervisor watches, Supervisor can start and monitor the projects according to our configuration files. For example, the startup configuration file for project 1 is placed at D:\pc_share\etc\supervisor\test_demo1.conf:

[program:test_demo1]							# project name, unique
directory=/mnt/share/apps/test_demo1					# Project directory; /mnt/share/ is the mount point of D:\pc_share\ inside the container
command=python app.py							# The project startup command
stdout_logfile=/mnt/share/apps/test_demo1/app.log		        # Project log file
priority=1								# Priority
numprocs=1								# Number of processes
autostart=true								# Start automatically
autorestart=true							# Restart automatically
startretries=5								# Number of startup retries

Above is the startup configuration file for example project 1. We point Supervisor's watch path at /mnt/share/etc/supervisor/, because Docker mounts the Windows shared directory D:\pc_share\ at /mnt/share/. When Supervisor starts, it therefore searches /mnt/share/etc/supervisor/ (that is, D:\pc_share\etc\supervisor on Windows) for project configuration files and starts the corresponding projects according to them. The part of the Dockerfile that configures the upper-layer runtime environment is:

FROM centos_python2:1.0 as centos_python2_supervisor		# Based on the lower-layer image "centos_python2:1.0"

ENV SUPERVISOR_PATH=$SHARE_PATH/etc/supervisor		        # The directory from which Supervisor will read the project configuration files

# Install the Python third-party libraries with pip, following the dependency list recorded in requirements.txt
COPY requirements.txt /tmp/					# Copy requirements.txt from D:\pc_share\etc\ into /tmp/ in the image
RUN set -ex \
	&& pip install -r /tmp/requirements.txt -i $PYPI_REPO \
	&& rm -rf ~/.cache/pip/*

# Generate the Supervisor configuration file "/etc/supervisord.conf" and point it at our project configuration files
RUN	set -ex \
	&& echo_supervisord_conf > /etc/supervisord.conf \
	&& mkdir /etc/supervisord.d/ \
	&& echo "[include]" >> /etc/supervisord.conf \
	&& echo "files = $SUPERVISOR_PATH/*.conf" >> /etc/supervisord.conf \	# supervises the project launch configuration file in the "$SUPERVISOR_PATH" directory
	&& sed -i '/nodaemon/s/false/true/' /etc/supervisord.conf		Is it important to change the nodaemon variable in the supervisord.conf container from false to true, and also change the supervisor to foreground

EXPOSE 8888 8889											Declare exposed ports

CMD ["supervisord"."-c"."/etc/supervisord.conf"]		The command executed when the container (not the image) is started

The image configuration file for the upper-layer runtime environment is shown above. The image is based on the lower-layer image just built, centos_python2:1.0. On top of it, pip installs the third-party Python libraries the projects need, according to the dependencies recorded in requirements.txt. Then the Supervisor configuration file /etc/supervisord.conf is generated and its include directory is set to $SUPERVISOR_PATH/, where $SUPERVISOR_PATH is the environment variable defined in the previous section: it records, inside the Docker image, the mount point of the projects' startup configuration files. The configuration files Supervisor reads under $SUPERVISOR_PATH are therefore the very same files sitting in D:\pc_share\etc\supervisor\ on Windows.

Finally, the part after CMD is the command executed when the container is started, which is usually what launches the project. Since we use Supervisor to monitor the projects, the CMD simply starts the Supervisor service.

The Supervisor setup contains the line sed -i '/nodaemon/s/false/true/' /etc/supervisord.conf, which is a pit I stepped in before I understood how Docker works. Running an image with the run command produces a container. You can roughly treat the container as a process, and a process exits once its instruction has finished executing. Supervisor, however, daemonizes itself and runs in the background by default. So when the container's startup command is simply "start Supervisor", that command returns as soon as Supervisor forks into the background, Docker considers the container's work done, and the container stops immediately every time it is started instead of staying up the way we want. To fix this, we make Supervisor run in the foreground by changing "nodaemon" in /etc/supervisord.conf to "true". Now, when the container starts, Supervisor keeps occupying the foreground, the container can never "finish" its instruction, and it keeps running continuously.
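As a concrete illustration of what that sed line does (a hedged sketch; the exact comment text generated by echo_supervisord_conf may differ slightly from what is shown here):

# Before:  nodaemon=false        ; start in foreground if true; default false
# After:   nodaemon=true         ; start in foreground if true; default false
# You can verify the change from inside a running container with:
grep nodaemon /etc/supervisord.conf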

2.3 Building an Image and Running it

In Section 2.2 we finished writing the Dockerfile for the Docker image. To make modifications easier and to speed up builds, it is split into two parts, corresponding to the lower-layer and upper-layer runtime environment images. To tell them apart, following Docker's image-naming convention, the former is named centos_python2:1.0 and the latter centos_python2_supervisor:1.0.

(The complete D:\pc_share\etc\DockerFile)

FROM centos:7 as centos_python2
MAINTAINER mileskong <xiangqian_kong@126.com>

ENV PYPI_REPO=https://mirrors.aliyun.com/pypi/simple/ \
	PYTHONIOENCODING=UTF-8 \
	SHARE_PATH=/mnt/share

# install software
RUN set -ex \
	&& yum -y install epel-release wget net-tools \
	&& yum clean all

# install pip
ENV PIP_VERSION 19.1.1
RUN set -ex; \
	\
	wget -O get-pip.py 'https://bootstrap.pypa.io/get-pip.py'; \
	\
	python get-pip.py \
		--disable-pip-version-check \
		--no-cache-dir \
		"pip==$PIP_VERSION"\; \ pip --version; \ \ find /usr/local -depth \
		\( \
			\( -type d -a \( -name test -o -name tests \) \) \
			-o \
			\( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \
		\) -exec rm -rf '{}' +; \
	rm -f get-pip.py


FROM centos_python2:1.0 as centos_python2_supervisor

ENV SUPERVISOR_PATH=$SHARE_PATH/etc/supervisor

# install python libs by pip
COPY requirements.txt /tmp/
RUN set -ex \
	&& pip install -r /tmp/requirements.txt -i $PYPI_REPO \
	&& rm -rf ~/.cache/pip/*

# build supervisord.conf 
RUN	set -ex \
	&& echo_supervisord_conf > /etc/supervisord.conf \
	&& mkdir /etc/supervisord.d/ \
	&& echo "[include]" >> /etc/supervisord.conf \
	&& echo "files = $SUPERVISOR_PATH/*.conf" >> /etc/supervisord.conf \
	&& sed -i '/nodaemon/s/false/true/' /etc/supervisord.conf

EXPOSE 8888 8889

CMD ["supervisord"."-c"."/etc/supervisord.conf"]

Here are the steps to build an image from Dockerfile and run it:

docker build -t centos_python2:1.0 -f DockerFile --target=centos_python2 .				# Build the lower-layer runtime image "centos_python2:1.0"
docker build -t centos_python2_supervisor:1.0 -f DockerFile --target=centos_python2_supervisor .	# Build the upper-layer runtime image "centos_python2_supervisor:1.0" on top of "centos_python2:1.0"
docker run -itd -v d:/pc_share/:/mnt/share -p 8888:8888 -p 8889:8889 ${image_id}			# Run the image "centos_python2_supervisor:1.0" (${image_id} is its image id)

The commands above first build the lower-layer image centos_python2:1.0, then the upper-layer image centos_python2_supervisor:1.0, and finally start a container from the image with docker run. The parameters of the run command matter most and are described in detail below:

docker run [options] [image_id]
# options:
#   -v [host shared directory]:[mount point in the container] : mount the host's shared directory at a mount point inside the container; if the mount point does not exist, it is created automatically when the container starts
#   -p [host port]:[container port]                            : map a host port to a container port

With these run parameters, starting the image mounts the Windows directory d:/pc_share/ at /mnt/share inside the container, maps host port 8888 to container port 8888, and maps host port 8889 to container port 8889.
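For convenience in the later sections, the same command can also reference the image by name and give the container an explicit name (a hedged variant; the container name test_env is my own choice, and --name is optional):

docker run -itd --name test_env -v d:/pc_share/:/mnt/share -p 8888:8888 -p 8889:8889 centos_python2_supervisor:1.0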

After centos_python2_supervisor:1.0 is started, you can run the docker ps -a command to view the container that was started:

To access the sample projects running in Docker, open “localhost:8888” and “localhost:8889” in a local browser.
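If you prefer the command line, the same check works with curl (assuming curl is available, as it is on recent Windows 10 builds):

curl http://localhost:8888/    # expect "Hello, from test_demo_1"
curl http://localhost:8889/    # expect the reply from example project 2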

3. One-click startup

Section 2 described in detail how to use Docker to customize the project's runtime environment. The whole process breaks down into the following steps:

  1. Write Dockerfile, customize the running environment and startup configuration of the project;
  2. Build docker image through Dockerfile;
  3. Start a container from the image with the docker run command.

During development, the first two steps are one-time environment setup, while the third step is repeated constantly: whenever we modify project code or a configuration file on Windows, we have to restart the Docker container to verify the change, and with the run command's long parameter list, restarting containers by hand quickly becomes annoying.

Since I usually prefer VS Code as my development IDE, I was glad to discover that it also has a Docker extension, which is very convenient. The following briefly introduces how to achieve "one-click startup" with the VS Code Docker extension.

3.1 Using the Docker Extension

  1. Search for “docker” in the VS Code extension marketplace and install the extension. After restarting VS Code, an extra Docker icon appears in the sidebar:

As shown in the figure above, the extension directly displays the local images, containers, and registries; the figure shows all the images we created earlier.

  2. Select our image centos_python2_supervisor:1.0, right-click, and choose Run from the menu; a new running container is added to the container list:

  3. If we change the code and need to retest it, select the container, right-click, and choose Restart Container to restart the container and, with it, our project:

So once the Dockerfile is written and the image is built, the basic runtime environment is in place. After that, whenever you need to test modified code, just select the corresponding container in the Docker extension and restart it. If you want more detail about how the project is running, right-click and choose Attach Shell to enter the container and inspect the logs from a familiar terminal:
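These extension actions map onto ordinary Docker commands, so the same one-click workflow can also be scripted from a terminal if you prefer (a hedged sketch; test_env is the hypothetical container name from the earlier run example, use docker ps to find yours):

docker restart test_env                # roughly what "Restart Container" does: restart the container and redeploy the project
docker exec -it test_env /bin/bash     # roughly what "Attach Shell" does: open a shell inside the container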

Thanks for reading!