When I first encountered Docker, I assumed it was only for operations and maintenance. Once I actually used it, I realized Docker is a magic tool: it handles just about any development scenario with ease. You can spin up whatever environment you want on demand, and it is more flexible and lightweight than a VM, which fits the microservices philosophy perfectly.
What is Docker
Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license. Traditional virtual machine technology creates a full set of virtual hardware, boots a complete operating system on top of it, and only then runs the required application processes. A containerized application process, by contrast, runs directly on the host kernel: the container has no kernel of its own and no hardware virtualization, so it uses fewer resources while doing more.
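One quick way to see the shared-kernel point for yourself (the container comparison in the comment assumes Docker is installed on a Linux host):

```shell
# A container shares the host kernel instead of booting its own, so
# `uname -r` prints the same kernel release on the host and inside any
# container (compare with: docker run --rm alpine uname -r).
uname -r
```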
Comparison with traditional VMs
Feature | Container | Virtual machine |
---|---|---|
Startup time | Seconds | Minutes |
Disk footprint | Usually MBs | Usually GBs |
Performance | Near native | Weaker than native |
Capacity | Thousands of containers on a single machine | Usually dozens |
Installing Docker
Installation is quite simple. I use a Mac and simply downloaded the installer from Docker's official website; the whole process was painless.
Docker concepts

- Image (`image`): A Docker image is a special file system. Besides the programs, libraries, resources, and configuration files the container needs at runtime, it also contains configuration parameters prepared for runtime (anonymous volumes, environment variables, users, and so on). An image contains no dynamic data, and its contents do not change after it is built. (Put simply: think of it as a system installation package.)
- Container (`container`): The relationship between an image and a container is like that between a class and an instance in object-oriented programming. An image is a static definition; a container is a running instance of that image. Containers can be created, started, stopped, deleted, paused, and so on. (Put simply: an installed system.)
Using Docker images
First, download the image
Now that we know the basic concepts of Docker, let's try pulling a Flask image to use. Images can be found by searching hub.docker.com/ or from the command line.
docker search flask
# pull the image
docker pull tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8
# list local images to confirm the download
docker images
Run the Flask image
After downloading the image, let's run it and get a feel for how lightweight and fast Docker is. Create the Flask file at /docker/flask/app/main.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World from Flask!"

if __name__ == "__main__":
    # Enable debug mode only in a test environment
    app.run(host='0.0.0.0', debug=True, port=80)
File structure now:
flask
└── app
    └── main.py
Run the command
docker run -it --name test -p 8080:80 -v /docker/flask/app:/app -w /app tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8 python main.py
The command parameters mean the following:

- -it: combines -i and -t to attach an interactive terminal to the container.
- --name: gives the container a name.
- -p: maps port 8080 on the host to port 80 in the container.
- -v: mounts the host directory /docker/flask/app to /app in the container; the directory is created automatically if it does not exist in the container.
- -w: uses /app as the working directory; subsequent commands run under this path by default.
- tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8: the image name and tag.
- python main.py: runs main.py with Python in the working directory.

Running results:
Custom images
Someone else's image is never a perfect fit, and in your own project you can't re-pull and reconfigure it every time. Take the image above: I don't like names that long, and I get a headache every time I have to type tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8.
Write a Dockerfile

Go to the /docker/flask directory and create a file named Dockerfile in its root.
# Base image
FROM tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8
# I'm used to having Vim around, so install it with Alpine's package manager
RUN apk add vim
# Install the redis package with pip
RUN pip3 install redis
# Add our app directory to the custom image
COPY ./app /app
Now our file structure is:
flask
├── app
│   └── main.py
└── Dockerfile
All that's left is to build it. Be sure to execute the build command in the same directory as the Dockerfile.
docker build -t myflask .
Sending build context to Docker daemon 4.608kB
Step 1/4 : FROM tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8
 ---> c69984ff0683
Step 2/4 : RUN apk add vim
 ---> Using cache
 ---> ebe2947fcf89
Step 3/4 : RUN pip3 install redis
 ---> Running in aa774ba9030e
Collecting redis
Downloading https://files.pythonhosted.org/packages/f5/00/5253aff5e747faf10d8ceb35fb5569b848cde2fdc13685d42fcf63118bbc/redis-3.0.1-py2.py3-none-any.whl (61kB)
Installing collected packages: redis
Successfully installed redis-3.0.1
Removing intermediate container aa774ba9030e
---> 47a0f1ce8ea2
Step 4/4 : COPY ./app /app
---> 50908f081641
Successfully built 50908f081641
Successfully tagged myflask:latest
-t specifies the name (and optionally the tag) of the image to build. The trailing dot is the build context — here, the directory containing the Dockerfile; an absolute path can be specified instead. The resulting myflask image can be started with python main.py just as before, now with vim and the redis package built in.
Docker Compose
Combining multiple containers into a whole
Each of our containers is responsible for a single service, and starting several containers by hand quickly becomes impractical. Docker Compose lets us associate the containers with each other to form a complete project.

The Compose project is written in Python and works by calling the API exposed by the Docker daemon to manage containers.
# install docker-compose
sudo pip3 install docker-compose
Implement a web page that records visit counts
Here we start the Flask and Redis containers from a docker-compose.yml file and associate the two containers with each other. First, create a docker-compose.yml file in the /docker/flask directory with the following content:
version: '3'
services:
  flask:
    image: myflask
    container_name: myflask
    ports:
      - 8080:80
    volumes:
      - /docker/flask/app:/app
    working_dir: /app
    # command executed after the container starts
    command: python main.py
  redis:
    # if the image is not present locally, it will be downloaded automatically
    image: "redis:latest"
    container_name: myredis
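Since the Flask service talks to Redis, one optional addition is to make the dependency explicit. A sketch of the extra lines (merged into the services section above; note that `depends_on` only controls start order, not whether Redis is actually ready):

```yaml
services:
  flask:
    # start the redis service before this one
    depends_on:
      - redis
```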
Then we modify the main.py code above to connect to the Redis database and record the number of visits to the site. The modified main.py looks like this:
from flask import Flask
from redis import Redis

app = Flask(__name__)
# 'redis' is the Compose service name, which resolves to the redis container
redis = Redis(host='redis', port=6379)

@app.route("/")
def hello():
    count = redis.incr('visit')
    return f"Hello World from Flask! This page has been visited {count} times."

if __name__ == "__main__":
    # Only for debugging while developing
    app.run(host='0.0.0.0', debug=True, port=80)
The current file structure is:
flask
├── app
│   └── main.py
├── Dockerfile
└── docker-compose.yml
The parameters in this file map directly to Docker concepts, so they should be easy to follow. Nothing else is needed — just run:
docker-compose up
That simple! Now visit http://localhost:8080/ in a browser to see the result; the counter increases automatically with every visit.
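The counter works because Redis INCR atomically creates-and-increments the key. A toy sketch of that semantics (FakeRedis is a made-up in-memory stand-in for illustration, not part of the redis library; the real point of using Redis is that the count lives outside the Flask container and survives its restarts):

```python
class FakeRedis:
    """Minimal in-memory imitation of the one Redis call main.py uses."""

    def __init__(self):
        self.store = {}

    def incr(self, key):
        # INCR treats a missing key as 0, then adds 1 and returns the new value
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]


redis = FakeRedis()
print(redis.incr('visit'))  # 1 -- first visit
print(redis.incr('visit'))  # 2 -- second visit
```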
docker ps
CONTAINER ID   IMAGE          COMMAND                   CREATED          STATUS          PORTS                           NAMES
66133318452d   redis:latest   "docker-entrypoint.s…"    13 seconds ago   Up 12 seconds   6379/tcp                        myredis
0956529c3c9c   myflask        "/entrypoint.sh pyth…"    13 seconds ago   Up 11 seconds   443/tcp, 0.0.0.0:8080->80/tcp   myflask
That's the complete Docker Compose workflow. Development gets a lot more pleasant with this setup: each container only has to maintain its own service environment.
Daily Docker operations
Common image operations
# download an image
docker pull name
# list local images
docker images
# create and run a container from an image
docker run name:tag
# delete an image
docker rmi id/name
Common container operations
You can start, stop, and restart a container using its ID or alias.
# View the running container
docker ps
# view all containers, including stopped ones
docker ps -a
# start container
docker start container
# stop container
docker stop container
# restart container
docker restart container
# Remove containers that are not needed (containers must be stopped before removal)
docker rm container
# Enter the container running in the background
docker exec -it container /bin/sh
# Print the internal information of the container (the -f parameter can be used to view the internal information in real time)
docker logs -f container
If you entered the container with -i -t, press Ctrl+P followed by Ctrl+Q to detach from the interactive session without stopping the container.
Common docker-compose operations
# build images, (re)create services, start services, and attach the
# associated containers, all in one command
docker-compose up
# stop the containers started by up and remove the networks it created
docker-compose down
# start existing service containers
docker-compose start
# stop running containers without deleting them; start brings them back
docker-compose stop
# restart the project's services
docker-compose restart
By default, docker-compose up runs all containers in the foreground and interleaves their output in the console, which is handy for debugging. Pressing Ctrl+C stops the command, and all the containers stop with it.
Conclusion
Although I haven't been using Docker for long, the finely divided microservice architecture really amazes me. I used to play with VM virtual machines, but the cost of using them was too high and they weren't flexible enough, so after a while I gave up and went back to dutifully maintaining my local environment. With Docker, any environment you want to test is just a few lines away — a liberating feeling. The commands above cover daily use and should meet basic needs; if you really want to go deeper, I suggest finding more detailed documentation. I won't belabor things further — I hope everyone gives Docker a try.