Docker introduction

Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license.

Docker allows developers to package their applications and dependencies into a lightweight, portable container that can be distributed to any popular Linux machine; it can also be used for virtualization.

Containers are completely sandboxed, have no interfaces with each other (like iPhone apps), and most importantly, have very low performance overhead.

Docker application scenarios

  1. Automated packaging and distribution of Web applications.
  2. Automated testing, continuous integration, and release.
  3. Deploy and adjust databases or other backend applications in a service environment.
  4. Build from scratch or extend existing OpenShift or Cloud Foundry platforms to build your own PaaS environment.

Advantages of Docker

Deliver your applications quickly and consistently

Docker simplifies the development lifecycle by allowing developers to work in standardized environments, using local containers that provide your applications and services.

Containers are well suited for continuous integration and continuous delivery (CI/CD) workflows. Consider the following example scenario:

  1. Your developers write code locally and use Docker containers to share their work with colleagues.
  2. They use Docker to push their applications into a test environment and perform automated or manual testing.
  3. When developers find bugs, they can fix them in the development environment and then redeploy them to the test environment for testing and verification.
  4. When the test is complete, the patch is pushed to production as easily as an updated image.

Responsive deployment and scaling

Docker is a container-based platform that allows for highly portable workloads. Docker containers can run on a developer’s machine, on a physical or virtual machine in a data center, on a cloud service, or in a hybrid environment.

Docker's portability and lightweight nature also make it easy to manage workloads dynamically, scaling applications and services up or down in near real time as business requirements dictate.

Running more workloads on the same hardware

Docker is light and fast. It provides a viable, economical, and efficient alternative to hypervisor-based virtual machines. Docker is ideal for high-density environments and small to medium sized deployments, where you can do more with less.

Docker architecture

Docker consists of three basic concepts:

  • Image: a Docker image is a template used to create Docker containers; it is essentially a root filesystem. For example, the official ubuntu:16.04 image contains a complete root filesystem of a minimal Ubuntu 16.04 system.

  • Container: a container is a running instance of an image: one application, or a group of applications, running in isolation. The relationship between an image and a container is similar to that between a class and an instance in object-oriented programming: an image is a static definition, and a container is a running entity created from it. Containers can be created, started, stopped, deleted, paused, and so on.

  • Repository: a repository is a centralized place that stores images, much like a repository in version control.
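
As a quick sketch of the class/instance analogy using standard CLI commands (the container names here are illustrative, not from the original text):

docker pull ubuntu:16.04                             # fetch the image (the "class")
docker run -d --name demo1 ubuntu:16.04 sleep 300    # create one container (an "instance")
docker run -d --name demo2 ubuntu:16.04 sleep 300    # a second, independent instance of the same image
docker ps                                            # both containers appear, created from one image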

Docker uses a client-server (C/S) architecture pattern, using a remote API to manage and create Docker containers.

The key components are:

Docker Host: a physical or virtual machine that runs the Docker daemon and containers.

Docker Registry: a registry stores images and can be understood as a repository server in version control. The official Docker Hub (hub.docker.com) offers a huge collection of images for use.

Docker Machine: a command-line tool that simplifies installing Docker, letting you install Docker on the target platform with a single command.

Docker installation

This section uses installing Docker on CentOS as an example; other systems install Docker in a similar way.

Automatic installation using the official installation script

The installation command is as follows:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Start Docker

sudo systemctl start docker

Uninstall Docker

Delete the installation package:

yum remove docker-ce

Delete images, containers, and configuration files.

rm -rf /var/lib/docker

Docker image acceleration

It can be difficult to pull images from Docker Hub in China, in which case you can configure an image accelerator.

Docker official and many domestic cloud service providers provide domestic accelerator services, such as:

  • USTC: https://docker.mirrors.ustc.edu.cn/
  • NetEase: https://hub-mirror.c.163.com/
  • Aliyun: https://<your ID>.mirror.aliyuncs.com
  • Qiniu Cloud accelerator: https://reg-mirror.qiniu.com

For systems using systemd, write the following contents to /etc/docker/daemon.json (if the file does not exist, create it):

{"registry-mirrors":["https://reg-mirror.qiniu.com/"]}

Then restart the service:

sudo systemctl daemon-reload
sudo systemctl restart docker

Check whether the accelerator is working

Run the docker info command. If the following information is displayed, the configuration is successful.

$ docker info
Registry Mirrors:
    https://reg-mirror.qiniu.com

Using Docker containers

Getting an image

If we do not have a local image, such as an Ubuntu image, we can use the docker pull command to load the Ubuntu image:

docker pull ubuntu

Start the container

Start a container with an Ubuntu image and enter the container in command line mode:

docker run -it ubuntu /bin/bash

Parameter Description:

  • -i: interactive operation, allowing you to interact with standard input (STDIN) inside the container.
  • -t: allocate a pseudo-terminal (TTY).
  • ubuntu: the Ubuntu image to run.
  • /bin/bash: the command, placed after the image name. Since we want an interactive shell, we use /bin/bash.

Now that we are in the Ubuntu container, let's run the ls command to see the list of files in the current directory:

root@0123ce188bd8:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

We can exit the container by running the exit command or using CTRL+D.

Start the container (background mode)

In most scenarios, we want the Docker service to run in the background. We can run the container in detached mode with -d.

runoob@runoob:~$ docker run -d ubuntu:15.10 /bin/sh -c "while true; do echo hello world; sleep 1; done"
2b1b7a428627c51ab8810d541d759f072b4fc75487eed05812646b8534a2fe63

This long string (2b1b7a428627…) is the container ID; we can use it to inspect what happens in the corresponding container.

First, we need to confirm that the container is running, which can be checked with docker ps:

runoob@runoob:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                   ...
5917eac21c36        ubuntu:15.10        "/bin/sh -c 'while t..."  ...

Output details:

  • CONTAINER ID: indicates the ID of a CONTAINER.

  • IMAGE: IMAGE used.

  • COMMAND: The COMMAND that is run when the container is started.

  • CREATED: time when the container was CREATED.

  • STATUS: indicates the STATUS of the container.

There are seven possible states:

  • Created

  • Restarting

  • Running (shown as Up)

  • Removing

  • Paused

  • Exited

  • Dead

  • PORTS: the container's port information and the connection type used (tcp/udp).

  • NAMES: indicates the container name.

Use the docker logs command on the host to view the standard output of the container:

runoob@runoob:~$ docker logs 2b1b7a428627
hello world
hello world
hello world

Stop the container

We use the docker stop command to stop the container:

docker stop 2b1b7a428627

Restart the container

docker restart 2b1b7a428627

Entering a container

With the -d argument, the container goes into the background after it starts. To enter the container, use the following command:

  • docker attach

  • docker exec: the recommended command, because exiting from it does not cause the container to stop.

docker attach 1e560fca3906

docker exec -it 243c32535da7 /bin/bash

Export container

docker export 1e560fca3906 > ubuntu.tar

Importing a Container Snapshot

You can use docker import to import the snapshot file ubuntu.tar into the image test/ubuntu:v1:

cat docker/ubuntu.tar | docker import - test/ubuntu:v1

Use docker images to view the list of images:

docker images

Alternatively, it can be imported by specifying a URL or directory, for example:

docker import http://example.com/exampleimage.tgz example/imagerepo

Remove the container

docker rm 1e560fca3906

When deleting a container, the container must be stopped; otherwise, an error will be reported.
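
If you really do want to remove a running container in one step, docker rm also accepts a force flag (shown here with the container ID from the example above):

docker rm -f 1e560fca3906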

The following command cleans up all containers in the terminated state.

docker container prune

Run a Web application

Let’s try to build a Web application using Docker.

We will use a ready-made image:

runoob@runoob:~# docker pull training/webapp    # load the image
runoob@runoob:~# docker run -d -P training/webapp python app.py

Parameter Description:

  • -d: Allows the container to run in the background.

  • -P: randomly maps the network ports used inside the container to high ports on the host.

Use docker ps to see the container we are running:

runoob@runoob:~# docker ps
CONTAINER ID        IMAGE               COMMAND            ...    PORTS
d3d5e39ed9d3        training/webapp     "python app.py"    ...    0.0.0.0:32769->5000/tcp

This time there is extra PORTS information:

Docker mapped the container's internal port 5000 (the default Python Flask port) to host port 32769.

Now we can access the WEB application through the browser!
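
For example, from the host you could check it with curl, using the randomly assigned port from the output above:

curl http://localhost:32769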

We can also set different ports with the -p parameter:

runoob@runoob:~$ docker run -d -p 5000:5000 training/webapp python app.py

Network port shortcut

The docker ps command shows a container's port mappings. Docker also provides a shortcut, docker port: given a container ID or name, docker port shows which host port a given container port is mapped to.

runoob@runoob:~$ docker port bf08b7f2cd89
5000/tcp -> 0.0.0.0:5000

View WEB application logs

docker logs [ID or name] shows the standard output of a container.

runoob@runoob:~$ docker logs -f bf08b7f2cd89
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
192.168.239.1 - - [09/May/2016 16:30:37] "GET / HTTP/1.1" 200 -
192.168.239.1 - - [09/May/2016 16:30:37] "GET / HTTP/1.1" ...

-f: makes docker logs follow the container's standard output, like tail -f.

View the processes of the WEB application container

We can also use docker top to see the processes running inside a container:

runoob@runoob:~$ docker top wizardly_chandrasekhar
UID       PID       PPID      ...    TIME       CMD
root      23245     23228     ...    00:00:00   python app.py

Check the WEB application

Use docker inspect to view low-level information about a container. It returns a JSON document recording the configuration and state of the Docker container.

runoob@runoob:~$ docker inspect wizardly_chandrasekhar
[
    {
        "Id": "bf08b7f2cd897b5964943134aa6d373e355c286db9b9885b1f60b6e8f82b2b85",
        "Created": "2018-09-17T01:...174228707Z",
        "Path": "python",
        "Args": [
            "app.py"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 23245,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-09-17T01:...494185806Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
...

Using Docker images

When running a container, docker will automatically download the image from the Docker repository if the image does not exist locally. By default, it is downloaded from the Docker Hub public image source.

Listing images

We can use docker images to list the images on the local host.

runoob@runoob:~$ docker images
REPOSITORY          TAG        IMAGE ID         CREATED        SIZE
ubuntu              14.04      90d5884b1ee0     5 days ago     188 MB
php                 5.6        f40e9e0f10c8     9 days ago     444.8 MB

Description of each option:

  • REPOSITORY: the repository the image comes from

  • TAG: the image's tag

  • IMAGE ID: the image's ID

  • CREATED: when the image was created

  • SIZE: the image's size

The same repository can hold multiple tags representing different versions. For example, the ubuntu repository has 15.10, 14.04, and other versions. We use REPOSITORY:TAG to refer to a specific image. If you do not specify a tag, for example if you only use ubuntu, Docker defaults to the ubuntu:latest image.
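
A short sketch of the difference:

docker pull ubuntu:14.04    # pulls exactly this tagged version
docker pull ubuntu          # no tag given, so Docker pulls ubuntu:latest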

Get a new image

When we use a nonexistent image on the localhost, Docker automatically downloads the image. If we want to pre-download the image, we can use the Docker pull command to download it.

runoob@runoob:~$ docker pull ubuntu:13.10

Finding images

We can search for images from the Docker Hub website: hub.docker.com/

We can also use the Docker search command to search for images.
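
For example, to search for httpd-related images:

docker search httpd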

Removing images

Delete an image with the docker rmi command; for example, to delete the hello-world image:

$ docker rmi hello-world

Creating images

When the image we download from the Docker image repository does not meet our needs, we can produce our own image in two ways:

  • Update a container created from an image, and commit the result as a new image.
  • Write a Dockerfile and build a new image with docker build.

Update image

Before updating the image, we need to create a container using the image.

runoob@runoob:~$ docker run -t -i ubuntu:15.10 /bin/bash
root@e218edb10161:/# 

Use apt-get update in the running container.

After completing the operation, type the exit command to exit the container.

At this point, the container with ID e218edb10161 is a container modified to our needs. We can commit a copy of this container as a new image using the docker commit command.

runoob@runoob:~$ docker commit -m="has update" -a="runoob" e218edb10161 runoob/ubuntu:v2
sha256:70bf1840fd7c0d2d8ef0a42a817eb29f854c1af8f7c59fc03ac7bdee9545aff8

Parameter description:

  • -m: the commit message

  • -a: the image author

  • e218edb10161: the container ID

  • runoob/ubuntu:v2: the name of the target image to create

Building an image

We use the command docker build to create a new image from scratch. To do this, we need to create a Dockerfile file that contains a set of instructions that tell Docker how to build our image. We’ll talk about that later.

Setting an image tag

We can use the docker tag command to add a new tag to the image.

runoob@runoob:~$ docker tag 860c279d2fec runoob/centos:dev

In docker tag, the image ID (860c279d2fec here) is followed by the new name in the form username/repository:new_tag.

Docker container connection

Network applications can run in containers. To make these applications accessible externally, you can specify port mappings with the -P or -p parameter.

The differences between the two methods are:

  • -P: maps container ports to random high ports on the host.
  • -p: binds a specific container port to a specified host port.
docker run -d -p 5000:5000 training/webapp python app.py

Alternatively, we can specify the network address for the container binding, such as binding 127.0.0.1.

runoob@runoob:~$ docker run -d -p 127.0.0.1:5001:5000 training/webapp python app.py
95c6ceef88ca3e71eaf303c2833fd6701d8d1b2572b5613b5a932dfdfe8a857c
runoob@runoob:~$ docker ps
CONTAINER ID        IMAGE               COMMAND            ...    PORTS                       NAMES
95c6ceef88ca        training/webapp     "python app.py"    ...    127.0.0.1:5001->5000/tcp    ...
...                 training/webapp     "python app.py"    ...    ...                         berserk_bartik
fce072cc88ce        training/webapp     "python app.py"    ...    0.0.0.0:32768->5000/tcp     grave_hopper

To bind a UDP port, add /udp after the port.

docker run -d -p 127.0.0.1:5000:5000/udp training/webapp python app.py

Docker container interconnection

Port mapping is not the only way to connect one Docker container to another.

Docker has a connection system that allows multiple containers to connect together and share connection information.

A Docker connection creates a parent-child relationship where the parent container can see the child’s information.
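
The classic way to create such a link is the legacy --link flag; here is a minimal sketch, borrowing the training/webapp image from earlier and the training/postgres image from the old Docker linking examples (note that --link is a legacy mechanism, largely superseded by the user-defined networks described next):

docker run -d --name db training/postgres
docker run -d -P --name web --link db:db training/webapp python app.py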

Create a new network

Start by creating a new Docker network.

docker network create -d bridge test-net

Parameter Description:

  • -d: Specifies the Docker network type, including bridge and overlay.

  • The overlay network type is used in Swarm mode, which you can ignore in this section.

Connect the container

Run a container and connect to the new test-net network:

$ docker run -itd --name test1 --network test-net ubuntu /bin/bash

Open a new terminal, run another container and join the test-net network:

$ docker run -itd --name test2 --network test-net ubuntu /bin/bash

Next, ping between test1 and test2 to verify that they are connected.

If the ping command is not available in the test1 or test2 containers, install it inside them first:

apt-get update
apt install iputils-ping

Then enter the test1 container and ping test2:

docker exec -it test1 /bin/bash

ping test2

If the ping succeeds, the test1 and test2 containers are connected.

Docker Compose is recommended if you have multiple containers that need to connect to each other, as described below.

Configure DNS

We can add the following contents to the host's /etc/docker/daemon.json file to set the DNS for all containers:

{
  "dns": [
    "114.114.114.114",
    "8.8.8.8"
  ]
}

With this setting, newly started containers automatically use 114.114.114.114 and 8.8.8.8 as their DNS servers.

You need to restart Docker for the configuration to take effect.

To check whether the DNS of the container is in effect, run the following command:

$ docker run -it --rm  ubuntu  cat /etc/resolv.conf

Manually specifying DNS for a container

If you only want to set up DNS in the specified container, you can use the following command:

$ docker run -it --rm -h host_ubuntu  --dns=114.114.114.114 --dns-search=test.com ubuntu

Docker warehouse management

A repository is a centralized place for storing images. This section looks at Docker Hub; other remote providers differ, but the operations are the same.

Docker Hub

Docker officially maintains a public registry, Docker Hub.

Most of the requirements can be met by downloading the image directly from the Docker Hub.

Register

Sign up for a free Docker account at hub.docker.com.

Login and Logout

Logging in requires a username and password. After a successful login, we can pull the images under our own account from Docker Hub.
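
The login command itself takes no required arguments; Docker prompts for the credentials:

docker login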

Logout

To log out of Docker Hub, use the following command:

docker logout

Pulling images

You can use the docker search command to find images in the official repository and the docker pull command to download them locally.

Pushing images

After logging in, users can push their own images to Docker Hub with the docker push command.

Please replace username in the following command with your Docker account username.

$ docker tag ubuntu:18.04 username/ubuntu:18.04
$ docker image ls
REPOSITORY          TAG        IMAGE ID         CREATED        ...
ubuntu              18.04      275d79972a86     6 days ago     ...
username/ubuntu     18.04      275d79972a86     6 days ago     ...
$ docker push username/ubuntu:18.04
$ docker search username/ubuntu
NAME                 DESCRIPTION      STARS      OFFICIAL     AUTOMATED
username/ubuntu

Docker Dockerfile

What is a Dockerfile?

A Dockerfile is a text file used to build an image; its contents are the instructions and explanations needed to build the image.

Use Dockerfile to customize the image

As an example of using a Dockerfile to customize an image, here we customize an nginx image (the built image will contain a /usr/share/nginx/html/index.html file).

In an empty directory, create a new file named Dockerfile and add the following contents to the file:

FROM nginx
RUN echo 'This is a locally built nginx image' > /usr/share/nginx/html/index.html

The function of the FROM and RUN directives

FROM: a custom image is built on top of a base image; here nginx is the base image required for the customization. All subsequent operations are based on nginx.

RUN: Used to execute the command line commands that follow. There are two formats:

Shell format:

RUN <command line command>
# The <command line command> is the same shell command you would run in a terminal.

Exec format:

RUN ["<executable>", "<param1>", "<param2>", ...]
# For example: RUN ["./test.php", "dev", "offline"] is equivalent to RUN ./test.php dev offline

Note: each Dockerfile instruction executed creates a new layer in the image, so too many meaningless layers bloat the image. For example:

FROM centos
RUN yum install wget
RUN wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz"
RUN tar -xvf redis.tar.gz

The commands above create a three-layer image. They can be simplified to:

FROM centos
RUN yum install wget \
    && wget -O redis.tar.gz "http://download.redis.io/releases/redis-5.0.3.tar.gz" \
    && tar -xvf redis.tar.gz

Start building the image

Build an nginx:v3 image (image name:image tag) from the Dockerfile in the current directory:

$ docker build -t nginx:v3 .

The trailing . is the context path. So what is a context path?

During an image build, Docker sometimes needs to use local files (for example with COPY). Once the docker build command knows the context path, it packages everything under that path and sends it to the Docker engine.

Docker runs in C/S mode: our machine is the client, and the Docker engine is the server. The actual build happens in the Docker engine, which cannot access our local files directly; the files under the specified directory therefore have to be packaged together and handed to the Docker engine.

Note: Do not put useless files in the context path, as they will be packaged together and sent to the Docker engine.
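
A standard way to keep the context small is a .dockerignore file in the context directory; paths listed there are excluded from the package sent to the Docker engine. A minimal illustrative example (entries are assumptions, not from the original text):

.git
node_modules
*.log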

Dockerfile instructions in detail

COPY

The copy directive: copies a file or directory from the context directory to a specified path in the container.

Format:

COPY [--chown=<user>:<group>] <source path 1>... <target path>
COPY [--chown=<user>:<group>] ["<source path 1>",... "<target path>"]

For example:

COPY hom* /mydir/
COPY hom?.txt /mydir/

ADD

The ADD command has the same format as COPY (COPY is officially recommended when it suffices). Their functions are similar, with the following difference:

If <source> is a tar archive in gzip, bzip2, or xz format, ADD automatically copies and unpacks it to <destination>.
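
A small sketch of the difference, reusing the redis tarball name from the earlier example (the two lines show alternatives, not one file):

ADD redis-5.0.3.tar.gz /usr/src/redis/     # the archive is unpacked into the directory automatically
COPY redis-5.0.3.tar.gz /usr/src/redis/    # the archive is copied as-is; you would need a separate RUN tar step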

CMD

Similar to the RUN directive: it runs a program, but at a different point in time:

CMD runs at docker run time; RUN runs at docker build time.

What it does: specifies the default program to run when the container starts; the container ends when that program ends. The program specified by CMD can be overridden by a program given on the docker run command line.

Note: If there are multiple CMD directives in a Dockerfile, only the last one takes effect.

Format:

CMD <shell command>
CMD ["<executable or command>", "<param1>", "<param2>", ...]
CMD ["<param1>", "<param2>", ...]
# The third form provides default parameters for the program specified by the ENTRYPOINT directive.

The second format is recommended because the execution process is clearer. The first format is automatically converted into the second as it runs, with sh as the default executable.
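
A sketch of how the two formats differ (the command itself is arbitrary):

CMD echo "hello world"               # shell format; actually runs: /bin/sh -c 'echo "hello world"'
CMD ["/bin/echo", "hello world"]     # exec format; runs the program directly, without a shell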

ENTRYPOINT

Similar to the CMD directive, but not overridden by arguments on the docker run command line; instead, those command-line arguments are passed as parameters to the program specified by the ENTRYPOINT directive.

However, if docker run is invoked with the --entrypoint option, that option overrides the program specified by the ENTRYPOINT directive.

Advantage: when executing docker run, you can specify the parameters that the ENTRYPOINT program needs at run time.

Note: If there are multiple ENTRYPOINT directives in a Dockerfile, only the last one takes effect.

Format:

ENTRYPOINT ["<executeable>","<param1>","<param2>",...]

It can be combined with CMD: CMD then usually supplies the variable parameters, which are passed to ENTRYPOINT, as in the following example.

Example:

Assume the nginx:test image has been built from the following Dockerfile:

FROM nginx

ENTRYPOINT ["nginx", "-c"]         # fixed parameter
CMD ["/etc/nginx/nginx.conf"]      # variable parameter

1. Run without passing parameters

$ docker run  nginx:test

By default, the container runs the following command to start the main process.

nginx -c /etc/nginx/nginx.conf

2. Run with a parameter

$ docker run  nginx:test -c /etc/nginx/new.conf

By default, the container will run the following command to start the main process (assuming /etc/nginx/new.conf already exists in the container).

nginx -c /etc/nginx/new.conf

ENV

Sets an environment variable, which can then be used by subsequent instructions.

Format:

ENV <key> <value>
ENV <key1>=<value1> <key2>=<value2>...

The following example sets NODE_VERSION = 7.2.0, which can be referenced by $NODE_VERSION in subsequent directives:

ENV NODE_VERSION 7.2.0

RUN curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
  && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc"

ARG

Build-time parameters, similar in effect to ENV but with a different scope: an ARG variable is only valid inside the Dockerfile, i.e. only during docker build; the built image does not contain the environment variable.

The docker build command can override it with --build-arg <parameter name>=<value>.

Format:

ARG <parameter name>[=<default value>]
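
A small sketch (the parameter name APP_VERSION and the image name myapp are hypothetical):

ARG APP_VERSION=1.0
RUN echo "Building version $APP_VERSION"

It can then be overridden at build time with: docker build --build-arg APP_VERSION=2.0 -t myapp .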

VOLUME

Defines anonymous data volumes. If you forget to mount a data volume when starting the container, Docker automatically mounts an anonymous volume.

Function:

  • Avoids losing important data (which can be fatal) when the container restarts.
  • Prevents the container from growing ever larger.

Format:

VOLUME ["<path 1>", "<path 2>", ...]
VOLUME <path>

When starting a container with docker run, we can override the mount point with the -v argument.
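
For example (the host path, container path, and image name are all illustrative):

docker run -d -v /srv/mydata:/data some-image

Here the host directory /srv/mydata is mounted at /data instead of an anonymous volume.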

EXPOSE

Merely declares a port; it does not publish it.

Function:

  • Helps image users understand which port the image's service listens on, making port mappings easier to configure.
  • When random port mapping is used at run time, i.e. docker run -P, the ports named in EXPOSE are mapped automatically.

Format:

EXPOSE <port 1> [<port 2>...]

WORKDIR

Specifies the working directory. The working directory set with WORKDIR exists in every subsequent layer of the image build; if the directory does not exist, WORKDIR creates it.

Each RUN command in docker build creates a new layer, so a cd inside one RUN does not carry over; only the working directory set with WORKDIR persists across instructions.

Format:

WORKDIR <working directory path>
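
A sketch of how WORKDIR persists across instructions (directory names are arbitrary):

WORKDIR /app
WORKDIR src     # relative paths are resolved against the previous WORKDIR
RUN pwd         # prints /app/src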

USER

Specifies the user and group that execute all subsequent commands. The user and group must already exist.

Format:

USER <username>[:<user group>]

HEALTHCHECK

Specifies a program or directive that monitors the running status of the Docker container service.

Format:

HEALTHCHECK [options] CMD <command>
# Sets the command used to check container health; the command after CMD follows the usual CMD syntax.
HEALTHCHECK NONE
# If the base image has a health check instruction, this line masks it.
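
A typical sketch, assuming the image contains curl and serves HTTP on port 80:

HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost/ || exit 1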

ONBUILD

Used to delay the execution of build instructions. ONBUILD instructions in a Dockerfile are not executed when that image itself is built (say the image is test-build); only when a new Dockerfile uses FROM test-build to build a new image are the ONBUILD instructions from test-build's Dockerfile executed.

Format:

ONBUILD <other instruction>
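
A minimal sketch using the test-build example from above (the COPY destination is illustrative):

# Dockerfile of the base image (built as test-build); nothing runs at its own build time
ONBUILD COPY . /app

# Dockerfile of a downstream image; the ONBUILD COPY above executes during this build
FROM test-build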

Docker Compose

Compose overview

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YML file to configure all the services your application needs; then, with a single command, you create and start all the services from that configuration.

Using Compose takes three steps:

  • Use Dockerfile to define your application’s environment.

  • Use docker-compose.yml to define the services that make up the application so that they can run together in an isolated environment.

  • Finally, execute the docker-compose up command to get the entire application up and running.

Installing Compose

Run the following command on Linux to download a stable version of Docker Compose:

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Apply executable permissions to the binary:

$ sudo chmod +x /usr/local/bin/docker-compose

Create a symbolic link:

$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Test whether the installation is successful:

$ docker-compose --version
docker-compose version 1.24.1, build 4667896b

Usage

1. Preparation

Create a test directory composetest, and in it create a file named app.py with the following contents:

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

In this example, redis is the hostname of the Redis container on the application's network, using the default Redis port, 6379.

Create another file named requirements.txt in the composetest directory with the following contents:

flask
redis

2. Create Dockerfile

In the composetest directory, create a file named Dockerfile with the following contents:

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

Dockerfile contents

  • FROM python:3.7-alpine: build the image from the Python 3.7 Alpine image.
  • WORKDIR /code: set the working directory to /code.
  • ENV: set the environment variables used by the flask command.
  • RUN apk add --no-cache gcc musl-dev linux-headers: install gcc and headers so that Python packages such as MarkupSafe and SQLAlchemy can compile their speedups.
  • COPY requirements.txt requirements.txt and RUN pip install -r requirements.txt: copy requirements.txt and install the Python dependencies.
  • COPY . .: copy the project's current directory into the image's working directory.
  • CMD ["flask", "run"]: set the container's default command to flask run.

3. Create docker-compose.yml

Create a file called docker-compose.yml in your test directory and paste in the following:

docker-compose.yml configuration file:

# yaml configuration
version: '3'
services:
  web:
    build: .
    ports:
     - "5000:5000"
  redis:
    image: "redis:alpine"

The Compose file defines two services: web and redis.

web: the web service uses an image built from the Dockerfile in the current directory. It then maps the container's exposed port 5000 to port 5000 on the host. This sample service uses the Flask web server's default port, 5000.

redis: the redis service uses a public Redis image pulled from Docker Hub.

4. Build and run your application using the Compose command

In the test directory, run the following command to start the application:

docker-compose up

If you want to run the service in the background, add the -d parameter:

docker-compose up -d
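
A few other day-to-day Compose commands (standard subcommands, listed here for convenience):

docker-compose ps       # list the service containers and their state
docker-compose logs     # view the aggregated logs of all services
docker-compose down     # stop and remove the containers and the default network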

Actual deployment: deploying a Node.js + MySQL application

I previously put together a virtual machine setup tutorial (an Alibaba Cloud server setup guide) covering Node.js, MySQL, Redis, and nginx deployment for a basic back-end application.

However, that kind of deployment takes a lot of time and effort every time you migrate machines, so I decided to learn Docker and use it instead.

Docker installation and startup

My machine is an Alibaba Cloud instance running CentOS.

So I ran the following command to install Docker:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Verify the installation:

[root@iZ8vb55rs42xic2s53uc3yZ nodejs]# docker -v
Docker version 20.10.5, build 55c4c88

Then start docker

systemctl start docker

Deploy Node.js applications using Docker

Deploying a Node.js application requires the Node.js environment, installing dependencies, executing startup commands, and so on.

Docker supports all of these; let's look directly at the final Dockerfile.

A Dockerfile is a text file that is used to build the image and contains the instructions required to build the image.

# Based on the node:12.22.1-alpine3.10 base image. Available images are listed at https://hub.docker.com/_/node, where you can choose the Node.js version you want.
FROM node:12.22.1-alpine3.10
# The ADD command adds all the project's files to the image. The build runs in the Docker engine (C/S architecture), which cannot access the project's files directly, so they must be added to the image.
ADD . /nodejs
# The WORKDIR command sets the working directory, similar to cd-ing into the project root.
WORKDIR /nodejs
# RUN executes a command; here it installs the project dependencies.
RUN npm --registry=https://registry.npm.taobao.org \
--cache=$HOME/.npm/.cache/cnpm \
--disturl=https://npm.taobao.org/dist \
--userconfig=$HOME/.cnpmrc install
# EXPOSE declares the application's port.
EXPOSE 3000
# pm2 provides a process daemon for the application.
CMD ./node_modules/.bin/pm2 start pm2.json --no-daemon --env production

Build the image

With the Dockerfile in place, we can build the image by executing the following command:

docker build -t nodejs .

Nodejs is the image name, but you can change it to something else.

The end of the output is:

...
Successfully built 621c07eeba87
Successfully tagged nodejs:latest

The image is successfully constructed.

Start the container

Run the following command to start the container

docker run --name nodejs -it -p 3000:3000 nodejs

The first nodejs is the container name and the second is the image name. What this command means is to run a container with the nodejs image (the image built earlier) on port 3000.

The Node.js service is now started and you can access the application using IP + port number (3000).
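
For example, from the server itself (replace 127.0.0.1 with the public IP when testing from outside):

curl http://127.0.0.1:3000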

Using Docker to deploy MySQL

Deploying MySQL is similar.

Get the image

MySQL has a ready-made image, so it does not need to be built from a Dockerfile.

Run the following command to pull the image:

docker pull mysql/mysql-server:5.7

Run the container

Execute the command

docker run --name mysql -d -e MYSQL_ROOT_PASSWORD=password -p 3306:3306 mysql/mysql-server:5.7

Now MySQL is up and running.

Add external access to the database

By default, MySQL only accepts connections from localhost (127.0.0.1) and cannot be reached from the outside. So we need to make it possible for the container running Node.js to access the container running the MySQL service.

First, enter the MySQL container

docker exec -it mysql bash

Log in to the MySQL server:

mysql -uroot -ppassword

(password is the MYSQL_ROOT_PASSWORD value you used when running the container.)

Finally, add external access

-- Switch to the mysql database
USE mysql;
-- Grant the root account access from any IP
GRANT ALL PRIVILEGES ON *.* TO "root"@"%" IDENTIFIED BY "password";
-- Flush the privilege settings
FLUSH PRIVILEGES;

This way, your Node.js service can connect to the MySQL database with a username and password.
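
As a quick sanity check, assuming a MySQL client is installed on the host, you could test the external connection like this:

mysql -h 127.0.0.1 -P 3306 -uroot -ppassword -e "SELECT 1;"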

Docker Compose

We are actually running two services (Node.js and MySQL), so is there a way to start them all at once? The answer is Docker Compose, which uses a YML file to configure all the services your application needs and can launch them in one step.

Here is the final docker-compose.yml:

# Docker Compose file format version
version: "3"

services:
  nodejs:
    # build the image using the Dockerfile described earlier
    build:
      context: .
      dockerfile: Dockerfile
    # image name
    image: nodejs
    # container name
    container_name: nodejs
    restart: unless-stopped
    ports:
      - "3000:3000"
    # start only after the mysql service is up
    depends_on:
      - "mysql"
    networks:
      - app-network
  mysql:
    network_mode: "host"
    environment:
      MYSQL_ROOT_PASSWORD: "password"
    image: "docker.io/mysql:5.7"
    container_name: mysql
    restart: always
    volumes:
      - "./mysql/conf/my.cnf:/etc/my.cnf"
      - "./mysql/init:/docker-entrypoint-initdb.d/"
    # fix missing permissions on /var/log/mysql before starting
    entrypoint: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld"
    ports:
      - "3306:3306"

networks:
  app-network:
    driver: bridge

You can see that it defines two services: nodejs and mysql.

Two lines in it need some explanation:

- "./mysql/conf/my.cnf:/etc/my.cnf"
- "./mysql/init:/docker-entrypoint-initdb.d/"

These let you supply a custom MySQL my.cnf and run some initialization SQL (or scripts).

The corresponding directory structure in the project is as follows, matching the paths in those two lines:

mysql
  - conf
    - my.cnf
  - init
    - init.sql

Let’s look at the contents of init.sql

USE mysql;
GRANT ALL PRIVILEGES ON *.* TO "root"@"%" IDENTIFIED BY "password";
FLUSH PRIVILEGES;

This way, the earlier commands that add external access to the database are executed automatically.
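
The article does not show the actual my.cnf; purely as an assumption, a minimal one might look like this:

[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci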

One-click deployment of the Node.js + MySQL service

At this point our directory structure looks like this; the contents of each file have been introduced above:

your-nodejs-project
  - mysql
    - conf
      - my.cnf
    - init
      - init.sql
  - docker-compose.yml
  - Dockerfile

To deploy the Node.js + MySQL service, run the following command from the root directory of the project:

docker-compose up -d

If the images do not exist yet (I removed the Node.js image I had built earlier before running docker-compose), it builds the Node.js image and pulls the MySQL image:

Building nodejs
Step 1/6 : FROM node:12.22.1-alpine3.10
...
Successfully built 90227eea977f
Successfully tagged nodejs:latest
Creating mysql  ... done
Creating nodejs ... done

You can see that both services are created.

And both are running:

[root@iZ8vb55rs42xic2s53uc3yZ blog-server]# docker ps
CONTAINER ID   IMAGE       COMMAND                    CREATED             STATUS             PORTS                    NAMES
f9dbd102678f   nodejs      "docker-entrypoint.s..."   About an hour ago   Up About an hour   0.0.0.0:3000->3000/tcp   nodejs
f91b47be9d02   mysql:5.7   "bash -c 'chown -R m..."   About an hour ago   Up About an hour                            mysql

If you run the same docker-compose up -d command again, nothing has changed and the images already exist, so it returns very quickly:

[root@iZ8vb55rs42xic2s53uc3yZ blog-server]# docker-compose up -d
mysql is up-to-date
nodejs is up-to-date

At this point, we have successfully deployed the Node.js + MySQL application using Docker. Migrating the service to other virtual machines in the future will be much easier, since it can be deployed with a single command!

Likewise, nginx and Redis have mature images of their own, and deploying them is similar, so I will not cover them here.